MATTHEW MESSMER:
Welcome everyone, thank you for coming. There are more people here than I expected, so that's great to see. My name's Matt Messmer. I'm going to be talking today about migrations in Drupal 9 and Drupal 10. A little bit about myself: I'm a Drupal architect at Clarity Partners here in Chicago. We work on a lot of websites for county and other government agencies around the area. I've been working in Drupal for about 12 years now and I've probably done around 15 to 20 migrations in Drupal 8 and 9 so far. Part of what I'm going to be showing you today, the later half of the session, is going to be a demo, and I put together a demo repo that I mentioned a little bit before. So if anybody wants to go grab the code, it should be a fully functioning Drupal 9 site using Lando as the local development environment. I'll be showing off a little bit of the special things I put together in the repo later, but it's out there publicly. If anybody wants to go grab it, it might be helpful, I hope.
There's an overview of what we're going to be covering today. We're going to be looking at the use cases for migrations and then quickly going over some contrib modules and drush commands you can use while doing migrations. And then I'll be doing a deeper dive into writing custom source and process plugins for migrations. So just to get a feel of the room, how many people here have used the Migrate API in Drupal 8 or 9 or 10? So about half of you or a little bit more. How many have written their own plugins? OK, great. And how many have used the migration API to do something other than migrate Drupal 7 or WordPress into Drupal 8 or 9? OK, so only two of you. So this is great because that's what I'm going to be talking about today. So when I look online or see the sessions at DrupalCon or Drupal camps about the Migrate API, it's almost all exclusively about that use case. You have a Drupal 7 site or maybe a WordPress site and you need to migrate that site to Drupal 8, Drupal 9, Drupal 10 and I think that's what most people think of when they think of the Migrate API in Drupal, but it can be used for a lot more than that.
If any of you remember back to Drupal 7, there was the Feeds module, and it could consume XML or maybe at that time it could consume JSON, I don't remember, but you could take this data that's outside Drupal and get it into Drupal. And more often than not, that's how I find myself using the Migrate API in Drupal 8 and Drupal 9. You can use it for the other type of migration, from Drupal 7, and it's great at that, but that's a topic that's really covered ad nauseam in other places. So I'm not really going to be talking about that specifically, although some of the things I'm talking about here today you could use in those types of migrations. So our use case for this session, in a broad sense, is that we have data that's not in Drupal, and we want to get that data into Drupal so we can use it, and we want to reduce the time spent on content entry. Because of course you can just go in and manually enter all your content, or you can use other ways to load in content with MySQL.
But the goal for me is to minimize the time that we spend on content entry and try to find the most efficient way possible to pull it in. In the past I've used the Migrate API to pull in data from various sources. We had GTFS data, which is transit data for things like buses and trains, to import schedules on regular intervals or import the station information for the trains. I've used it on projects where we had an external product information system, or PIM, and we needed to fetch the updated product information nightly; it was being managed by a completely different department of this large company that we were working with. So we had to fetch the updated pricing and all this other product information and get it into Drupal so it could be utilized by the Drupal website. And another use case that I find is really common is when we're building the website and we need to load the site with content. Instead of doing manual content entry, we can use the Migrate API to pre-load the site with data from CSV files or spreadsheets.
And there's a workflow that I'm going to show that will hopefully reduce some of the time spent on the manual content entry. It also makes for a nice workflow where we don't have to be exchanging databases back and forth, because the data is all saved in spreadsheets and can be managed in a central location, or even managed by non-technical people who are working with our clients in other departments to get the content ready for the Drupal site without them having to know Drupal itself. So far I've been talking a lot about using the Migrate API to get this sort of data into Drupal. But why use the Migrate API if there's plenty of other approaches we can use? There's the Default Content module in Drupal that you can use to replace that CSV thing I was talking about, where it'll load default content into the website. We could manually import the content. What is the migrate module really offering us? And these are the reasons that I think make a compelling argument for using the Migrate API.
The data we have in Drupal is usually interconnected. We have nodes referencing taxonomy, nodes referencing other nodes, nodes referencing files. The Migrate API in Drupal handles all this for us. It's already built in: as long as we have consistent IDs that we can map between the different sources, it'll handle all these references for us. We don't have to be tracking it ourselves. In addition, number two, usually when I'm working on data like this, it's not just one section of the site that needs data loaded in, it's a bunch of different types of content. And I like to keep things as consistent as possible and use one approach for many different solutions where possible, to make it easy for me and other developers, so that we know there's this one way that we're doing it and we don't have to be juggling a lot of different approaches. Number three, the Migrate API comes with a built-in system for tracking what content it's already migrated. There's a migrate map table where it tracks all this.
So it can keep track of which content's new in the source, which content needs to be updated. And that's just another thing that we don't have to code. There's also a built-in system for deleting the content, for rolling back. Often when doing development, I'm constantly importing the content, seeing how it worked, did it have any errors, rolling back, deleting everything, going again. It handles all the content deletion for me. Even when doing things like the file migrations, it'll delete the files that it imported, that it put into the sites/default/files directory. So it does all that for us. And it's also a system that, if you're not using it to run migrations from an API or an external source repeatedly, if it's something you're just using to load content in at the site launch, you can use it and then remove these modules, uninstall them. And I found that it's never caused issues. It's real clean to get rid of if you don't need it later. And the last one that I think is really cool is that I've often been able to use the migrate map table.
That's where it tracks the source ID and the destination ID, the destination ID usually being the node ID or taxonomy term ID. It provides this mapping table where, and I'll show this in more detail later, if we have a CSV with a string ID for whatever terms we're importing, we don't have to keep memorizing all the node IDs or taxonomy term IDs once it gets migrated in. We can use a lookup against the database table to search for our source ID and get back the Drupal node ID or term ID. That can be dynamic, and it can make our code a lot more workable when we're dealing with multiple environments where sometimes the node IDs don't match. Without a solution like this, we sometimes have to trade databases back and forth, and that can cause its own problems. But conversely, there's sometimes when migrations might not be the best fit for your solution. I have a joke here about how I named the session, Migration for All Seasons.
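To make that lookup concrete: a row's destination ID can be fetched from the map table programmatically through core's migrate.lookup service (available since Drupal 8.8). This is a minimal sketch; the migration ID 'ratings_csv' and the source ID value are hypothetical stand-ins for whatever your migration actually uses.

```php
<?php

// Look up the destination ID that a source row was migrated to, using
// core's migrate.lookup service. Both the migration ID and the source
// ID here are hypothetical examples.
$destination_ids = \Drupal::service('migrate.lookup')->lookup(
  'ratings_csv',   // the migration whose map table we're searching
  ['pg13']         // the string ID from our CSV source
);
// Returns an array of destination ID arrays, e.g. [['tid' => ...]] —
// whatever term ID that row happens to have on this environment.
```

Because this resolves the ID at runtime, the same code works across environments even when the term or node IDs differ between them.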
So here I am giving you reasons why you might not want to use it; there's some irony in that. I'm not very good at making jokes, but there are points that I think we need to consider when choosing our approach. One of the points is the level of effort, because writing these migrations can be a lot of effort. And if you don't have a very big data set, or the data set's really simple, it can sometimes be a lot more effort to write the migration than to just manually create your five taxonomy terms and call it a day. The other thing to consider is: does the data have a fixed pattern? This is really important. If the data you're trying to import doesn't have a set structure and a reliable pattern to it, it can be more difficult to map your source data to the fields in Drupal, because the migration is this one-to-one mapping of your source data field to the destination data, your field in Drupal. So if it's something where it doesn't have that fixed pattern and you'd have to be making more of a decision each time about how it works, then you're probably going to have to end up doing it manually.
And also, do you need to have real-time data updates? I mentioned before, we did a migration for the GTFS data or the PIM data. This was something that needed to be updated, but it wasn't something that was constantly changing. The PIM data could be updated once a night. It's not that we had to update the price on the website from the PIM immediately, like two minutes after it got updated. It could just be updated once a day. If you have data that needs to be updated in real time as soon as possible, the migration may not be a good fit, because it needs to run on a fixed schedule and there's going to be some delay between when the data is updated in the source and when it's going to be reflected on the Drupal website. Some other potential pitfalls that I've encountered: WYSIWYG fields. This isn't so much a problem if you're going from the same WYSIWYG to the same WYSIWYG. It's when, in your old system, you're using one type of WYSIWYG, I'm blanking on the name of the one we used in Drupal 7, before CKEditor, but the WYSIWYGs have their own codes that they put into the text to tell it how to work, and I have never been able to successfully migrate that programmatically from one system to the other.
The other issue is that when you have your spreadsheets or CSVs for importing data, and I'll give an example of this later, if you have content that has really complex relationships, especially paragraphs, where you have one type of paragraph for layout and then it's nested with text and image, it's very hard to manage that through CSVs, and trying to manage that data through the spreadsheets or CSVs can often be more complicated than if you would just manually enter the content. And the other pitfall is you have to write code. So if you don't like to write code, if you're scared of code, then, well, this probably isn't going to be fun for you. So let's take a real quick look at some of the contrib modules. I think most of this is going to be familiar to the people here, since almost everybody said they'd done migrations before, but just to get everybody acquainted: Drupal core comes with the Migrate module. That's what we're using. It provides the base functionality for everything. There's also the Migrate Plus contrib module.
This is what allows us to export our migration config into YAML, which is how we can save it into GitHub and move it around easily. It also provides some different source plugins that we can use for making migrations from XML and JSON. The core migrate module provides the database connection source; that's what you'd use if you have a D7 site you're trying to migrate from. There's also Migrate Tools, which is another contrib module. This used to be more required because it provided the drush integration, but as of drush 10.1, a lot of the migrate integration for drush is now part of drush core. Having said that, Migrate Tools does still provide a few additional options which are useful. It provides a sync flag when running the migrations, and this is something that you can use to delete any content in Drupal that isn't still in the source. Depending on your use case, that might be something you want to do. We have a lot of cases where that's not what we want; we don't want it to delete all the old nodes that aren't in the API call anymore.
Migrate Tools also provides a migrate executable class which can be used to run migrations in code, specifically if you have a cron hook and you're trying to run migrations from the cron hook; I'll provide an example of that later. And then lastly there's the Migrate Source CSV module that I use a lot. This provides a CSV source plugin, which we'll be using a lot. Yeah.
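As a taste of that cron pattern: core itself ships a MigrateExecutable class (Migrate Tools extends it), and running a migration from a cron hook might look roughly like this. This is a sketch under assumptions; the module name mymodule and the migration ID 'pim_products' are hypothetical.

```php
<?php

use Drupal\migrate\MigrateExecutable;
use Drupal\migrate\MigrateMessage;

/**
 * Implements hook_cron().
 *
 * A sketch of running a migration on cron, in the spirit of the nightly
 * PIM import described above. Module and migration names are hypothetical.
 */
function mymodule_cron() {
  $migration = \Drupal::service('plugin.manager.migration')
    ->createInstance('pim_products');
  if ($migration) {
    // Mark previously imported rows for re-import if the source changed.
    $migration->getIdMap()->prepareUpdate();
    $executable = new MigrateExecutable($migration, new MigrateMessage());
    $executable->import();
  }
}
```

In a real site you'd likely also guard against overlapping runs and check the migration's status before importing.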
SPEAKER:
There's also a migrate debug module which is also very useful.
MATTHEW MESSMER:
Migrate debug, OK. I haven't used that, but as part of the demo I'm going to be showing how you can debug your migrations using Xdebug. And then again, some of these drush commands are probably familiar to most people here, but let's just take a look real quick so that when I run them as part of the demo, maybe we can remember what they are without me having to explain it every time. We have drush ms, which is the migrate status; this will list all your migrations and show how many items have already been imported. There's mim, which is the migrate import. You provide the migration name, or you can send it a group; I'll show the groups later when I show the migration YAML files. The useful flags here that I wanted to point out, beyond group: there's also limit, which is, if you have a big migration that has like 5,000 entries and you don't want to run all 5,000 of them at once, you can pass a limit flag to limit it to however many you want to run. With drush 10 this works a little bit weird in that, let's say you already imported ten: if you send a limit of ten, it will count the ten you already imported as part of that ten and then won't import anything new, unless you pass a higher limit or the update flag to update the already existing nodes.
It didn't used to work this way. It used to be that if you had ten imported and you ran it with limit ten, it would import ten new ones. But now you have to pass limit 20 so that it will count the ten that are already there and then import ten new ones. There's also the idlist flag. You can pass a specific source ID; let's say there's one particular piece of content you're using for testing, but it's not the first one in the list, so you don't normally get it as the first thing migrated when you run the import. You can send the specific ID so that you're always testing with that particular one. I found that's useful sometimes when a particular item had an error and I tried to fix the migration to fix that error. Then, instead of running through the 200 items that are before it, I can just run the migration again for that particular one that had the error and see if it got fixed. The update flag I already mentioned; that will update your nodes that have already been imported through the migration.
So if the source changed, you can update them. And there's the execute-dependencies flag. With the migrations, you can set dependencies on other migrations. So you've got your nodes that reference taxonomy; you can say the taxonomy migration is a dependency, so it imports the taxonomy first. And with this flag, instead of the migration giving you an error and saying your taxonomy hasn't been imported yet, it'll run the taxonomy migration as well as the other migration that you were trying to run. The next command is mr; this is the migrate rollback command. This will delete all the content that was previously imported through the migration. This one you can also send the group flag, so if you want to delete the content of multiple migrations at once, you can do that. There's mrs, which is the reset status. This one's useful when you had an error when importing and it gets stuck and says it's still importing, or it says it had a problem; you reset the status to idle and that'll let you run the migration again.
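The commands covered so far, roughly as you'd run them on the command line; the migration name parks_json and the group name staff are hypothetical placeholders.

```shell
drush ms                                  # migrate:status — list migrations and row counts
drush mim parks_json                      # migrate:import — run one migration
drush mim --group=staff                   # run every migration in a group
drush mim parks_json --limit=10           # stop after 10 rows are counted
drush mim parks_json --idlist=12345       # import only this source ID
drush mim parks_json --update             # re-import rows that already exist
drush mim parks_json --execute-dependencies
drush mr parks_json                       # migrate:rollback — delete imported content
drush mr --group=staff                    # roll back a whole group
drush mrs parks_json                      # migrate:reset-status — set back to idle
```

This is a reference sketch, not a script; exact flag behavior varies a bit between drush versions, as noted above.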
And then there's mmsg, which is migrate messages. You can run this to show the error messages that were logged when your migration failed. So let's briefly talk about how we create migrations and the different components of them, and then we'll get into the demo. This is an image that I got out of the drupal.org Migrate API documentation. It shows the extract-transform-load concept, where we want to extract data, transform the data, and load the data, and how this translates into Drupal and the Migrate API: we have a source plugin, a process plugin and a destination plugin. So we extract the data with the source plugin. This is where our data is coming from. So the CSV file or the other database or the API, whatever we're pulling data from, that's the source, and the source plugin controls how we connect to that source. Then we can transform data using process plugins. So this is the real meat of it. Often the data that's in the source doesn't match how Drupal wants its data.
Most notably, it's always, like, dates. The Drupal database wants the date in a certain format, and your API is giving you the date in a different format. Or the API has a value of zero for a certain field, and this needs to get transposed into a different Drupal field value that you want to store, or something even more wacky and complex, and I'll show you some examples. And then we have our destination, which is something in Drupal, because we're migrating into Drupal. So usually this is a node or a taxonomy, but there's other destination plugins. You can just save your data into a database table, and the Migrate API will handle all the SQL commands that are needed to save the data. I find that I've never really needed to write a custom destination plugin. Maybe you need to do that if you're using a custom entity or something, but usually we're just using nodes, so we usually just use the default ones that come with the core migrate module. But we've written a lot of custom source and process plugins, and we'll be getting into that.
One thing that I want to note here is that even though this is the sort of ideal situation, because this is Drupal, it doesn't really adhere to it. Often the easiest way to transform your data isn't to do it in the process plugins. You can use a thing I'll show, the prepare row method in the source plugin, and in prepare row you can just do whatever you want with your data, add whatever data and manipulate it. And it's way easier than writing process plugins. But we'll see how both approaches have their merits. I think it's funny that they say it works this way, but it doesn't really work like this. So just to cover this real quickly: the core migrate module and the Migrate Plus module provide a lot of different process plugins that are ready-made for us, and these can be really useful if we have pretty simple manipulations that we want to do. These are some of the ones that I find myself using frequently, and there's links to the documentation where you can see there's a ton more.
There's the default_value plugin. This allows you, just in your YAML, to say that this field has a default value. It's not going to think about it; every time it runs the migration, this field gets this value. So often you use this to say: every node that this migration creates, just assign it a specific user ID, don't think about it, just give it this user ID. There's format_date. As I mentioned before, dates are often a stumbling block of migrations, because the Drupal fields expect the date to be in a certain format, or the created time on the node expects a Unix timestamp, so you need to convert it. format_date allows you to specify your source time format, in the PHP format, and then the destination format, and it'll convert it for you. There's static_map, where you can, in the YAML, define a map of source value to destination value, like if zero should be converted to no and one should be converted to yes.
There's migration_lookup, which you'll use all the time when you have migrations that are referencing other migrations. So you get your node's entity reference field connected to your taxonomy migration. And there's skip_on_empty, which is nice if you have a certain value in the migration source where, if this value is empty, we want to skip it and not import it. And then Migrate Plus provides skip_on_value, which is similar to skip_on_empty except, instead of it being empty, you can say if the source has this particular value in the field, we want to skip it. And there's entity_lookup, which is similar to migration_lookup, but instead of mapping based on the source IDs, you can run an entity query on certain values to look up which entity you want to reference. These can all be chained together, so you can call one and then call another and call another. I find that once you start chaining many of them together, it can get kind of confusing what you're doing. And I like writing code, so I often will be transforming really complex things like that in the prepare row, like I mentioned, or writing my own process plugin.
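The ready-made plugins just listed can be sketched together in one process section. This is illustrative only; the field names, source property names, and the migration ID categories_csv are hypothetical.

```yaml
# A sketch chaining the ready-made process plugins mentioned above.
process:
  uid:
    plugin: default_value
    default_value: 1          # every node gets this author
  created:
    plugin: format_date
    from_format: 'm/d/Y'      # the format the source provides
    to_format: 'U'            # Unix timestamp, what the node expects
    source: published_date
  field_open:
    plugin: static_map
    source: is_open
    map:
      0: 'No'
      1: 'Yes'
  field_category:
    - plugin: skip_on_empty   # skip this field when the source is empty
      method: process
      source: category_id
    - plugin: migration_lookup
      migration: categories_csv
```

Note how field_category chains two plugins: the output of skip_on_empty feeds into migration_lookup, which is the chaining behavior described above.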
So that's what we're going to be getting into. So for the demo, we have four migrations that we're going to be looking at, although really it's three because the last two are kind of part of the same thing. I've put together this data that I'm migrating, it's a mock up of Chicago Parks, which was one of our clients, and this is a simplified version of part of what we did for them. All the data is going to be coming from an open data API that the city of Chicago kindly provides. Although for the purposes of this demo, I've saved the data into the repo so that it's not relying on calling the external API which might fail during the demo and then I'd look like an idiot. So we have the parks migration. This is going to pull data from a JSON API that I just talked about and create nodes for parks that are across Chicago. There's like 500 of them or something and it has a custom source plugin that I'll be showing. We have a staff migration which is mimicking, you have staff that work at the parks and these are nodes and they're coming from a CSV.
So let's say when we create the site, we want to show all the staff, and the employees of the Chicago Park District are preparing a CSV for us that contains the staff bios. And then we have a ratings migration and a movies migration, because you may not know this, but in the summer the Chicago parks will have movies showing in the parks, and you can go to the park and watch a movie on a projector. So these are events that are happening at the park. We have these event nodes, the movie nodes, and the movies have a rating. We want to show on the website what the rating of the movie is: PG, PG-13, G. And these are a taxonomy that we're migrating in. So without further ado, let's try to show the demo and hope that this works. We've got our Drupal 9 website with no content in it. And the first thing we're going to do is look at the parks migration. Each migration that you create will have one of these YAML files. I find the easiest way to do these is just to create them directly in the config directory. You don't enter the UUID, and you create it by copying it from somewhere; Migrate Plus has some examples.
And if you import and then export the config, it'll automatically generate your UUID, so you don't have to have it in the config/install folder in your custom module, because then you're tracking two versions of the YAML file: you have one in the module and one that's being exported, and they get out of sync. So our YAML file here is basically broken down into four parts. We have our part up here at the top where we have our ID, which will be referenced throughout the migration, and then it's broken down into three other parts: source, process and destination. So that maps up with the thing I was showing before, where we have our three types of plugins. And for this migration, our plugin is the park source. This is a custom source plugin that I created that's extending the URL source plugin that's part of Migrate Plus; this is what allows us to pull the source data from the URL.
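The rough shape of a migration YAML like the one on screen might look as follows. The IDs, URL, and selectors here are hypothetical stand-ins for the demo's real ones, and the item_selector depends on the shape of the JSON being consumed.

```yaml
# A sketch of a migration config: id at the top, then the three
# plugin sections — source, process, destination.
id: parks_json
label: 'Parks from the open data API'
migration_group: parks
source:
  plugin: park_source        # custom plugin extending migrate_plus's url plugin
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: 'https://example.com/api/parks.json'
  item_selector: ''
  ids:
    park_number:
      type: string
  fields:
    - name: park_number
      selector: park_no
    - name: title
      selector: park_name
process:
  title: title
destination:
  plugin: 'entity:node'
  default_bundle: park
```

The ids key is what ends up in the migrate map table, and each entry under fields declares a source property that the process section can then reference by name.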
SPEAKER:
Could you zoom in a little bit?
MATTHEW MESSMER:
Is that better? So the source data for the migration needs to be mapped. We have an API call here where we can see all this different information that's returned about the parks, and we don't need most of it; there's just certain fields that we want to use. And so we need to tell the migration which ones we want to use. We have our park number. This is the value in the API call, and we tell it this is the source ID, and this is what we're using here to say that this is the source ID of the migration. This will be what's saved in the migrate map table. We specify a bunch of other values from the API that we're using. And then these get mapped in the process section of the YAML file, where we have our Drupal value on the left. And on the right side of the YAML is the value that we declared up in the source section of the API data. For more complex fields like the address field, where it has different elements, we have the different selectors for the address line one and the locality, just using that default value process plugin I mentioned.
And then the real magic thing that we're going to be looking at is here, for the geo field. This is using the geodata source, but geodata isn't up here in our fields that are coming from the API. So where is geodata coming from? It's coming from a separate API call that we are adding in the prepare row method of this custom source plugin that I created. As I mentioned, in prepare row we can add to or manipulate the source data that we're going to be migrating. And for every park, this is going to make another API call to pull in this geocoded polygon data, the shape of the park on the map. We need to do this because the initial API doesn't have this information. It has, like, two GPS coordinates, but that's just one point on the map, and we want to show the bounds of the park on the map. And there was a separate API that we can call that has a query parameter to restrict the call to just one park. So for every park we call this API to get the big long JSON of all the different polygon coordinates.
And this will dynamically call the API using the park ID. We get the source property, the source ID, that park number, the unique identifier in the API of that particular park. It's not going to be our node ID in Drupal, but it's mapped to that node ID, so we know which park is associated with that node ID. And it calls the API, pulls in that data, processes it, and sets the source property of geodata with our polygon data. So we can run this. And I didn't update Lando, because I thought if I update Lando and it doesn't work, then the demo is going to break. So just leave well enough alone. And we have 581 parks here, and we are migrating them in, and it goes pretty fast because it's running all the calls locally on the machine. And now if we go back to our site and we refresh this, we should see a bunch of different nodes that got created. We have all our parks. Yeah. And if we go to this view that I set up beforehand, we will see our map with a ton of parks on it. These are all the coordinates that we pulled in from that API.
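The per-row secondary fetch just described could be sketched as a source plugin like this. The plugin ID, namespace, endpoint URL, and property names are all hypothetical; the real demo repo's code will differ in the details.

```php
<?php

namespace Drupal\migration_demo\Plugin\migrate\source;

use Drupal\migrate\Row;
use Drupal\migrate_plus\Plugin\migrate\source\Url;

/**
 * A sketch of a source plugin that fetches extra data per row, in the
 * spirit of the park-boundary call described above.
 *
 * @MigrateSource(
 *   id = "park_source"
 * )
 */
class ParkSource extends Url {

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    // One extra request per park, filtered to this park's unique ID.
    $park_id = $row->getSourceProperty('park_number');
    $response = \Drupal::httpClient()->get('https://example.com/api/boundaries.json', [
      'query' => ['park_no' => $park_id],
    ]);
    $data = json_decode((string) $response->getBody(), TRUE);
    // Expose the polygon to the process section as 'geodata'.
    $row->setSourceProperty('geodata', $data[0]['the_geom'] ?? NULL);
    return parent::prepareRow($row);
  }

}
```

Anything set with setSourceProperty() here becomes available on the right-hand side of the process section, which is how the geodata mapping in the YAML resolves.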
It got saved into the node in Drupal, and through the magic of the view handler that comes with the geolocation module, it makes this map. So that's magic trick number one: calling a subsequent API call for every single row to get additional information loaded in that isn't part of the initial API call we're using for the migration. The second trick is in our staff migration. And that trick is that we have a CSV migration that's going to create these nodes, and it's going to not just create the node, but also create a media entity and the file entity that's referenced by the media entity, all from one CSV source file. It has to be three separate migrations, because each entity that's being created has to be its own migration. But we have one CSV file which will then create the three different entities, and they'll all reference each other. The node references the media entity, the media entity references the file entity, and it will also pull the files into the Drupal file system and put them in the correct directory.
So for that, we have our three migrations here. We have our staff CSV migration. The source here is the CSV source plugin. This is the plugin that's provided by that Migrate Source CSV contrib module that I mentioned. This isn't a custom plugin that's extending it; it's just the one that comes with the module. We have our path to our CSV file, which is here in our custom module. There's a CSV file here for staff, and this is just a standard CSV file. It's kind of hard to see, but we've got our stuff up here. One thing I'll mention: make sure you don't have a space after your comma, because then it won't work. Make sure there's no spaces after the commas. I ran into that when hand-typing out the CSV. So we have all our different properties here. I think it, yeah, it'll show it here in a table. We have our ID, and this is just something that you can, in this case, give whatever you want. It can be a sequential number identifier; it just has to be something unique. And then there's two fields here, first name and last name.
These are concatenated using one of the ready-made process plugins, which concatenates the first name and last name properties to create the title of the node. And then we have all our other mappings here. And down here we have our field image. We have the migration lookup plugin, so it's going to look up to the media migration, the staff CSV media image migration, using that CSV ID as the value to look up which media entity to reference. And because we only have one image for each staff member, we can just reuse the same ID across all three migrations. So here for the media image, we're also using the CSV as the source. We are mapping our different media fields, including the target ID, which is using the file image migration to map it. And this is where we've set our process here for where we want to save the file to. So we can run all of these together using the group, because they're all part of the staff group, and it'll run all three. Then it creates two items for each one.
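How the node migration ties into the other two might be sketched like this; the migration IDs, field names, and CSV column names are hypothetical stand-ins for the demo's real ones.

```yaml
# A sketch of the node migration's process section in the
# file -> media -> node chain described above.
process:
  title:
    plugin: concat
    source:
      - first_name
      - last_name
    delimiter: ' '
  field_image/target_id:
    plugin: migration_lookup
    migration: staff_csv_media_image   # the media migration
    source: id                         # the shared CSV ID column
# The media migration does the same thing one level down, using the
# same CSV ID to look up the file migration and populate its own
# image field's target_id.
```

Because all three migrations key off the same CSV ID column, one row in the spreadsheet fans out into a file, a media entity, and a node that all reference each other.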
SPEAKER:
The group automatically orders them, right?
MATTHEW MESSMER:
It seems to, because I've set the dependencies. I've set the dependency here, so each one depends on the one before it, and it seems to run them in the correct order. So we have our node, and it's got me with my image. This image was saved originally in the module, where it's referenced from, but then it gets moved into sites/default/files, into the staff image directory, because that's where we told it to put it here. And if I do a rollback, it'll delete not only the nodes; those images disappeared as well.
SPEAKER:
I'm sorry. Remember the source images? I lost track there.
MATTHEW MESSMER:
The source images for this demonstration were in a custom module, but they could be anywhere. You would just need to tell it where they're coming from. It's here: the source base path points into the custom demo module's images directory. But that could be anything. It could even be a remote image; you would put the full URL there of where you're pulling the image from. It can be anywhere that's accessible, anywhere you can have a path. And then here we're using the public files directory as our destination.
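A file migration along these lines typically builds the full source path from a constant base path plus the filename, then copies it to the public files destination with the core `file_copy` process plugin. Again a hedged sketch; the paths, constant names, and column names are assumptions, not the demo's exact config:

```yaml
# Hypothetical file migration fragment (paths are illustrative).
source:
  plugin: csv
  path: modules/custom/migration_demo/assets/staff.csv
  ids:
    - id
  constants:
    source_base_path: modules/custom/migration_demo/images
process:
  source_path:
    # Join the base path and the per-row filename column.
    plugin: concat
    delimiter: /
    source:
      - constants/source_base_path
      - filename
  uri:
    # Copy the file to the public files directory; for remote images the
    # base path could instead be a full URL.
    plugin: file_copy
    source:
      - '@source_path'
      - destination_uri
destination:
  plugin: 'entity:file'
```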
SPEAKER:
If you were going to migrate images along with them, would you just write a separate migration (INAUDIBLE)?
MATTHEW MESSMER:
They are separate migrations; it's just that the CSV source file is the same. So you could use this independently, and I could run migrate-import with the ID of the image migration and it'll run just that one. I'm running out of time here, but for the last trick, we have the JSON migration of the movies, and this has some special filtering of which items we're going to import, based on settings that can be set in a config form in Drupal. So first, to quickly show the settings form: we have a standard Drupal settings form which makes a page here where we can set information. And there's this promoted parks field, which is an entity reference lookup, the entity autocomplete field. So this will autocomplete the parks that we migrated, so we can specify a park that we want to set as the promoted park. And I set it to Wicker Park because I like Wicker Park; I used to live around there. And then in our movies migration, we have a thing here for promoting the node.
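A settings form like the one described might be built on `ConfigFormBase` with an `entity_autocomplete` element. This is a minimal sketch under assumed names: the module name `migration_demo`, config name, and `promoted_park` key are all invented for illustration.

```php
<?php

namespace Drupal\migration_demo\Form;

use Drupal\Core\Form\ConfigFormBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\node\Entity\Node;

/**
 * Hypothetical settings form holding the "promoted park" node ID.
 */
class MigrationSettingsForm extends ConfigFormBase {

  public function getFormId() {
    return 'migration_demo_settings';
  }

  protected function getEditableConfigNames() {
    return ['migration_demo.settings'];
  }

  public function buildForm(array $form, FormStateInterface $form_state) {
    $promoted = $this->config('migration_demo.settings')->get('promoted_park');
    $form['promoted_park'] = [
      '#type' => 'entity_autocomplete',
      '#target_type' => 'node',
      '#selection_settings' => ['target_bundles' => ['park']],
      '#title' => $this->t('Promoted park'),
      // entity_autocomplete expects an entity object as default value.
      '#default_value' => $promoted ? Node::load($promoted) : NULL,
    ];
    return parent::buildForm($form, $form_state);
  }

  public function submitForm(array &$form, FormStateInterface $form_state) {
    $this->config('migration_demo.settings')
      ->set('promoted_park', $form_state->getValue('promoted_park'))
      ->save();
    parent::submitForm($form, $form_state);
  }

}
```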
The feature in Drupal core that you probably never use, to promote items to the front page. This will set that, and it's just for the purposes of this demo; you probably wouldn't actually want to do the promote thing, but I needed something to set dynamically. So the promoted property is set to custom_promoted. Where does custom_promoted come from? It comes from the prepare row. But in this case, we're not using a custom source, we're just using the URL plugin that comes from the module. So how do we do our prepare row if we don't have a custom source plugin? You can do it in this hook, hook_migrate_prepare_row, which is in the process of being deprecated, but it hasn't been yet, so I'm using it. We switch based on our migration ID, movies_json, that's the ID of our migration, and it's going to set the promoted status based on the park. We want to get the park ID, which is the ID in the source that we're using for mapping to the node ID, and it's going to get our promoted park from that settings form we just made.
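The hook version of prepare row described here could look roughly like this. The signature of `hook_migrate_prepare_row()` is the real Migrate API hook; the migration ID `movies_json`, the config name, the `custom_promoted` property, and the map-lookup helper are assumptions standing in for the demo's own code (the helper corresponds to the map table lookup the talk covers next).

```php
<?php

use Drupal\migrate\Plugin\MigrateSourceInterface;
use Drupal\migrate\Plugin\MigrationInterface;
use Drupal\migrate\Row;

/**
 * Implements hook_migrate_prepare_row().
 */
function migration_demo_migrate_prepare_row(Row $row, MigrateSourceInterface $source, MigrationInterface $migration) {
  switch ($migration->id()) {
    case 'movies_json':
      // Node ID of the park chosen on the settings form.
      $promoted_park = \Drupal::config('migration_demo.settings')
        ->get('promoted_park');
      // Translate this row's source park ID into the node ID it was
      // migrated to, via the parks migration's map table (hypothetical
      // helper, sketched separately).
      $park_nid = migration_demo_map_lookup('parks_json', $row->getSourceProperty('park_id'));
      // Expose a synthetic source property the process pipeline can map
      // to the node's "promoted" flag.
      $row->setSourceProperty('custom_promoted', (int) ($park_nid && $park_nid == $promoted_park));
      break;
  }
}
```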
But the promoted park is going to be a node ID. So how do we remedy that, when we have a node ID but the migration source doesn't know what our node IDs are? We can use that migrate map table I was talking about. We have a helper function here for the map lookup, and it does a DB select query on the migrate map table, which we can see here. We have our source ID and our destination ID: the source ID is that park number, and the destination ID is the node ID. So we look up the source ID in the migration map and get our node ID, to see if it matches the node ID we saved in the form, which was Wicker Park. If we then run this with the movies JSON, it'll import. OK, great, it failed. I ran this this morning. And there we go. I don't know why I had to run it with the update flag, but now we have our movies, and the ones that are for Wicker Park will be shown here on the front page, in the default Drupal front page view that shows the promoted content.
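The map lookup helper might be sketched like this. The `migrate_map_{migration_id}` table name and the `sourceid1`/`destid1` columns follow the Migrate API's actual conventions; the function name and module are hypothetical.

```php
<?php

/**
 * Looks up the destination (node) ID for a migration source ID.
 *
 * Queries the migrate_map_* table that the Migrate API maintains for
 * each migration, mapping source IDs to destination entity IDs.
 */
function migration_demo_map_lookup(string $migration_id, $source_id) {
  $table = 'migrate_map_' . $migration_id;
  $database = \Drupal::database();
  // The table only exists once the migration has run at least once.
  if (!$database->schema()->tableExists($table)) {
    return NULL;
  }
  return $database->select($table, 'm')
    ->fields('m', ['destid1'])
    ->condition('sourceid1', $source_id)
    ->execute()
    ->fetchField();
}
```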
And we can go to the park page itself, where we see the movies in the sidebar, in the view that I made. So these are the movies for Wicker Park. If we look at a different park, this one whose name I'm not going to try to pronounce, it's got one movie, Avengers: Infinity War, and it didn't get promoted to the front page because it's not associated with the correct park. For this simple example, you wouldn't really need to do this; you could just have your front page always show the Wicker Park ones instead of relying on the promoted value. But maybe you can imagine how, in a more complex situation where you have users who want to tweak what's being migrated dynamically without having to update the code of the migration, this could be useful. We've used it in more complex situations where the parks have events grouped into what they call seasons, and some seasons they don't want automatically published on the site, so they can enter the name of the season and the nodes will get imported but unpublished.
And this is something they can manage on the site without having to do a code deploy. Because otherwise that's what you would have to do: you'd have to tweak the migration YAML to set it to Wicker Park using one of those other process plugins provided by the core migrate module or Migrate Plus. So I think we're about out of time for my demo. Does anybody have questions? No, it's all clear.
SPEAKER:
Have you done any implementations when you're using the CSV source plugin or kind of creating a dynamic form to where users import their own CSVs (INAUDIBLE)?
MATTHEW MESSMER:
No, but you could do that. You'd have to create your own custom source plugin extending the CSV plugin, and then have something in that plugin, say in the constructor or prepare row, fetching the location of that CSV and parsing through it.
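One way to do that, sketched under assumed names: extend the contrib module's `CSV` source class and swap in a user-configured file path before the parent parses it. The plugin ID, class name, and config key here are invented; only the `CSV` base class and the `@MigrateSource` annotation are the real Migrate Source CSV / Migrate API conventions.

```php
<?php

namespace Drupal\migration_demo\Plugin\migrate\source;

use Drupal\migrate\Plugin\MigrationInterface;
use Drupal\migrate_source_csv\Plugin\migrate\source\CSV;

/**
 * CSV source whose file path comes from site configuration.
 *
 * @MigrateSource(
 *   id = "uploaded_csv"
 * )
 */
class UploadedCsv extends CSV {

  public function __construct(array $configuration, $plugin_id, array $plugin_definition, MigrationInterface $migration) {
    // Let a user-supplied upload location (saved by a form) override the
    // path hard-coded in the migration YAML.
    $uploaded = \Drupal::config('migration_demo.settings')->get('csv_path');
    if ($uploaded) {
      $configuration['path'] = $uploaded;
    }
    parent::__construct($configuration, $plugin_id, $plugin_definition, $migration);
  }

}
```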
SPEAKER:
Ian could actually do something similar (INAUDIBLE)
MATTHEW MESSMER:
Why didn't he ask me about it?
SPEAKER:
(INAUDIBLE)
MATTHEW MESSMER:
He should have been giving this presentation.
SPEAKER:
Is there any reporting tool (INAUDIBLE)
MATTHEW MESSMER:
So the question was, is there any tool that would give a pretty report of what's been migrated, and no, nothing like that is built in. We have built things where, when it migrates, it will send an email every night, which was basically just capturing this drush command output and putting it in an email, because that was all the prettiness that was needed. But you would have to build something like that.
SPEAKER:
That said, obviously you're using drush to run the migrations, but the migration UI itself, I mean, that's pretty nice looking. When I do migrations you'll have, like, a group for nodes, and it's nice because it does give you...
MATTHEW MESSMER:
I actually forgot that even existed. There is a UI, one that I never use, that will list your migrations. So it does exist, I just forgot about it, because I never do it through the UI. It might be better now, but it used to be that you could execute migrations from it and it would always break, so I got out of the habit of even checking that it exists. Since we're almost at the end, one thing I wanted to show, which I thought we'd run out of time for, is the code for the cron that I mentioned: you can run your migrations through cron. This will just run the migration every time cron runs. We always use the Ultimate Cron module to track the timings of when the cron jobs get run; if you're not using that, you would have to build some logic in here that checks the timestamp of when it was last run. This is going to load your migration, reset the status to idle just in case so that it won't break, and, if you want, you can set the update flag so that previously imported items will get imported again and their values updated.
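The cron pattern just described can be sketched as a `hook_cron()` implementation. The `MigrateExecutable`, status reset, and `prepareUpdate()` calls are the standard Migrate API mechanisms; the migration ID and module name are assumptions. As noted in the talk, without Ultimate Cron you would add your own last-run timestamp check here.

```php
<?php

use Drupal\migrate\MigrateExecutable;
use Drupal\migrate\MigrateMessage;
use Drupal\migrate\Plugin\MigrationInterface;

/**
 * Implements hook_cron().
 */
function migration_demo_cron() {
  $migration = \Drupal::service('plugin.manager.migration')
    ->createInstance('movies_json');
  if (!$migration) {
    return;
  }
  // Reset a stuck migration so a previous failure doesn't block this run.
  if ($migration->getStatus() !== MigrationInterface::STATUS_IDLE) {
    $migration->setStatus(MigrationInterface::STATUS_IDLE);
  }
  // Optional: mark already-imported rows for re-import so their values
  // get updated (the equivalent of drush's --update flag).
  $migration->getIdMap()->prepareUpdate();
  $executable = new MigrateExecutable($migration, new MigrateMessage());
  $executable->import();
  // This is where you could assemble and send an email report.
}
```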
And then here's where you would want to do something to send emails or some sort of report. Well, if anybody wants to talk to me about migrations... I don't know where my slides are anymore, but I had a slide with my email address on it. Here we go. So, yeah, please provide feedback, and if you have any questions, feel free to contact me. Thank you.