Description
Key Learnings
- Learn about how automated model checking is being used in the industry
- Discover how building owners with modeling requirements are benefiting from working with their consultants
- Discover how large consulting firms are benefiting from working with building owners and their modeling requirements
- Learn from the perspective of a small/medium consulting firm and discover how it is benefiting from automated model checking
Speakers
- Brendan Dillon: Brendan Dillon is the Manager of the Digital Facilities & Infrastructure Program for Denver International Airport. DEN is the sixth-busiest airport in the United States and has developed a comprehensive BIM and Asset Management plan unsurpassed by any airport in the country. DEN's DFI program manages over 120 projects at a time with a net value in excess of $2 billion. Prior to joining DEN, he managed over $1B in BIM projects, including serving as the BIM standards coordinator for the design team on Denver International Airport's South Terminal Redevelopment Project. Brendan is also the founder of the annual Airport Information Integration and Innovation (AI3) forum and the founder of Red5ive Consulting, specializing in BIM deployment and integration for airports. Along with managing DEN's Digital Facilities & Infrastructure program, Brendan still enjoys getting into the weeds with Revit, writing scripts in Dynamo, and generally getting his hands dirty.
- Chuck Mies: Chuck Mies, LEED AP, Assoc. AIA, is a Senior Manager in Worldwide Field Operations at Autodesk, Inc. Chuck is a member of Autodesk Worldwide Field Operations, focused on the application of technology to the entire ecosystem of a project, extending from preliminary design through operations and maintenance. In this role Chuck works on a global scale with owners in industrial and other segments as a resource, helping these clients, and the firms that work for them, understand the value of Building Information Modeling and the ways to maximize the value of BIM. Chuck's background includes 35 years of professional experience spread across the practices of design and construction, facilities management, and technology consulting.
- Jason Kunkel: Jason has worked across the design and technology spectrum of the AEC industry for over 25 years. After graduating from the University of Virginia School of Architecture, he began his career as an architectural designer for a major mid-Atlantic architecture firm specializing in large, public-sector projects. Discovering a passion and knack for technology, he migrated to the IT support world, spending over a decade as a Director of Information Technology, where he applied that passion to help architects and engineers leverage technology in new and exciting ways, and save time in the process. Working at CADD Microsystems, Jason has been able to apply his knowledge and experience to help a wider range of customers achieve the same goals. He is one of the founders of RevitRVA, a Revit user group in the central Virginia area, and has a wide array of knowledge and experience with both software and hardware to help companies improve their processes and work more effectively.
- Eddy Krygiel: As a Major Projects Development Executive within the AEC Solutions team at Autodesk, Eddy focuses on BIM and technology workflows for architectural, engineering, and construction clients. He works with large project teams and owners to help leverage end-to-end technology solutions to optimize design, construction, and facility management outcomes. Eddy has almost twenty years of experience in architectural offices and on a range of projects from single-family residential to office, federal, civic, and aviation clients.
- T.J. Meehan: After receiving his architectural degree, T.J. Meehan began his career by working in several architectural firms across the United States, gaining experience on both commercial and residential projects. Transitioning from the design side of the industry, he has now become a recognized expert on the technology side, working for one of the nation's most successful Autodesk, Inc., partners as their vice president of professional services. He is an Autodesk Implementation Certified Expert and has presented at many industry events, including Autodesk University. A registered architect and LEED-accredited professional, Meehan capitalizes on his skills to help companies successfully train and implement architectural, MEP (mechanical, electrical, and plumbing), construction, and operations and maintenance software.
EDDY KRYGIEL: All right. Hello, everybody. Sorry for the slightly rough beginning; I think we're all learning this new Swapcard tool. But welcome to The Benefits of Automated Model Checking in the Cloud. So if you're in the right AU room, this is BLD 5021. What we're going to be talking about today is model checking on BIM 360 in a cloud-based environment. Here are the session details, so hopefully this looks like what you signed up for. This is a panel discussion, so we're going to talk a little bit about the tools and how you can check models in the cloud, review some possible new offerings around Autodesk strategies for cloud-based model checking, and talk to Denver International and Jacobs, as well as our development team, CADD Microsystems.
So here we're going to learn about how automated model checking is being used across industries, discover how building owners with modeling requirements are benefiting from working with their consultants in a data-driven environment, and discover how large consulting firms are benefiting from working with building owners and modeling requirements. So hopefully you'll learn something from this class. When you do have questions, we do ask that you please use the question pane and not the chat pane. I don't know why there are two, but the question pane allows you to vote on questions, and the most-voted questions filter to the top of what we see.
And we'll try to answer those questions first. We'll get to everybody's questions, hopefully, eventually. But that does help us kind of sort and keep track of things. One more logistics slide is our safe-harbor statement. So what you will be seeing today are some technology previews. We're going to talk about some products that are not quite available on the market yet but hopefully will be soon, and so I'm sure you guys have all seen this slide before. And if you haven't, you'll probably see it several more times during AU.
So I'm your panel host, Eddy Krygiel. It's good to meet everybody here today. I miss having everybody in person, but the walk to the room is much shorter. We also are being joined by our panelists today. I'm going to do a brief introduction for each of them. Starting on the left, Brendan Dillon is the program manager for the digital facilities and infrastructure program at the Denver International Airport. DEN is the third-busiest airport in the United States and has developed a comprehensive, leading-edge BIM program for aviation. DEN's BIM program is engaged in over 200 active projects with a net value of over $2 billion. Brendan is also the founder of the annual Airport Information Integration and Innovation forum, AI3, an airport-only council to share emerging technologies.
Chris Pittman is a project architect and BIM manager with 16 years of experience, ranging from mining to healthcare to education to aviation. Currently he's dedicated to multiple expansion and renovation projects at the Denver International Airport. He leads the BIM management efforts to ensure all project models meet client standards, and pushes for innovative uses of technology with those [AUDIO OUT] projects to streamline and automate workflows.
Chuck Mies is a member of the Autodesk global business development team, focused on looking at the applications of technology across the entire ecosystem of a project, so extending from preliminary design through operations and maintenance. In this role, Chuck works on a global scale as a resource to assist owners and their consultants to maximize the value they get out of BIM.
And last but not least, Jason is part of the development team for some of the tools that you'll see and is a technical and team lead for the consultants at CADD Microsystems. [AUDIO OUT] helps coordinate and align team skills with industry needs; oversees staffing, scheduling, and quality control. He's also responsible for software design and development, and for designing and documenting workflows around custom software solutions. Finally, he performs process and standards assessments for AE firms and delivers education, outreach, training, and presentations. In reading all of that, I did learn that he does not do windows or babysit.
So with that, we will step into the program. So what we're going to do is actually walk you through some of the solutions we've developed and actually have a panel discussion around some of the values and some of the pitfalls that these, your panelists, have seen in trying to get to know how to use the tools and technologies [AUDIO OUT] model and geometry deliverables for large-scale projects. So with this, I'll turn it over to Jason.
JASON KUNKEL: Awesome. Thanks, Eddy. It's great being here, everybody. Thanks for joining our session today. So we wanted to take a quick look back and talk about the foundation of the model checker on Forge. And that foundation really comes from the model checker in the desktop application. It is part of a suite of tools called the BIM Interoperability Tools. You may have heard of it. You might use it. The Model Checker is one of them. The Classification Manager is in there. The COBie Extension is in there. The Spatial Data Tool and the Equipment Data Tool are in there as well. But in terms of the model checker, its desktop version, at its core, what it wants to do is look inside of Revit models, look at the elements inside of Revit models, and look at the parameters inside of [AUDIO OUT] rules.
There are some checking rules that it can apply. It can read that data out. It can compare that data based on regular expressions, based on a series of inputs you can provide. It can check to see, simply, does that parameter exist, or is it empty? So really, at the fundamental level, it is a data checker. There are some prebuilt checks in the model checker as well that go a little beyond the data specifically, into model validation, like file size and file name, et cetera, that try to make a holistic approach and view for the model checker to check your Revit models.
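To make the kinds of rules Jason describes concrete, here is a minimal sketch of exist/nonempty/regex parameter checks. The element and rule dictionaries, field names, and the "DEN-" tag pattern are all hypothetical illustrations, not the tool's actual schema.

```python
import re

def check_parameter(element: dict, rule: dict) -> bool:
    """Evaluate one hypothetical check rule against a Revit-style element.

    Rule kinds mirror the checks described above:
    - "exists":   the parameter is present on the element
    - "nonempty": the parameter has a non-blank value
    - "regex":    the parameter value fully matches a pattern
    """
    value = element.get(rule["parameter"])
    kind = rule["kind"]
    if kind == "exists":
        return rule["parameter"] in element
    if kind == "nonempty":
        return value is not None and str(value).strip() != ""
    if kind == "regex":
        return value is not None and re.fullmatch(rule["pattern"], str(value)) is not None
    raise ValueError(f"unknown rule kind: {kind}")

# Example: validate a (hypothetical) asset-tag format like "DEN-AHU-0042"
element = {"Mark": "DEN-AHU-0042", "Comments": ""}
rules = [
    {"parameter": "Mark", "kind": "regex", "pattern": r"DEN-[A-Z]{3}-\d{4}"},
    {"parameter": "Comments", "kind": "nonempty"},
]
results = [check_parameter(element, r) for r in rules]
print(results)  # → [True, False]
```

The real tool's Configurator expresses these rules declaratively in a check set; the sketch just shows the three comparison styles mentioned above.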
Once you check, you can export the reports. They're saved inside the model, and you can share those reports. You can use a cousin application called the Configurator to create your own check sets based on your own standards or your own rules. And you can also use a series of public library check sets that we've created and shared that align with a lot of the more popular and widely used BIM standards out in the world.
Next slide, please. So as the desktop tool has been out there in use for years and years, we've come across some limitations. And for those of you who have used the model checker before, you're probably familiar with these; really, most desktop software shares them. It runs inside of Revit, so you have to have Revit, and you have to have Revit running on your desktop. It takes time and resources to run. We introduced a feature to the desktop version that's supposed to assist with automating the workflows and the checking, but frankly, it can be a little challenging to automate those steps and to understand how that works.
And certainly, of course, scaling can be an issue. You've got one workstation with one model, and when that's done, it has to go to another model. If you have two dozen or 50 models that you want to check, that's several hours, if not days, worth of processing on your machine that you have to sit and monitor. And then there was the issue of maintaining consistency across check sets: being able to point to a single check set without having an individual user go in and make modifications and tweaks between checks.
Next slide, please. So that led us to work on the model checker on Forge. Of course, it's built on the same code as the model checker itself, so all the check sets that are built, all the checks that have been built-- the understanding of how it works is universal from the desktop tool to the cloud-based tool as well. We tried to create it in such a way that it's a pretty straightforward interface. If you can read and understand a website, and you are familiar with Revit-- and the model checker especially-- then typically, looking at the website is all you need to get in there and start setting things up.
It runs specifically on published Revit models in ACC or BIM 360 projects. Crucially, you don't need direct access to Revit, because it uses the Forge APIs; it uses Design Automation directly. You can create tasks, so it allows you to point to a model or a folder and say: every hour-- which, frankly, is a little too much-- maybe every day, maybe once a week, use this check set, compare it to these two models, and send me the report. And on top of that, it maintains a history, so you can always go back to the web page, take a look at your old reports, and see what the status is as well.
So going through each section, next slide, please. The top panel is our tasks panel. This is simply the overview of the different tasks we have created which are going to check the models. It tells you what the check set is, tells me what the schedule is. It also maintains the old tasks that I might have only created to run once or I've paused to go from there. And from here, I can edit. I can delete. I can run. I can copy. But most importantly, I can hit that new task button.
Next slide, please. And that gets me to the second page of the entire user interface, which is creating a new task for model checker on Forge. Simply broken down: you assign a name, and you tell it what year of Revit your model is that you're running your check against. I can now come in here and select multiple Revit models, or I can select a folder and have it monitor all the models in the subfolders. I identify which checks I want to use, and this gives me access to the public library checks that have already been created; it also lets me upload my own checks, store them in the model checker on Forge interface, and run against those.
I then tell it when to run. Do I want it to run now? Do I want it to run every day for the next two months? Do I want it to run once a week indefinitely? A new feature which we're rolling out and working on now allows it to run on publish. So if I make modifications in my Revit model, and I publish that to my [AUDIO OUT] and that will tell that model check to run. I tell it what to do with my [AUDIO OUT] reports. Do I want an Excel file? Do I want an HTML file? Do I want it emailed to me? And that is the criteria I use to create the task.
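The task form Jason walks through (name, Revit year, target models or folder, check set, schedule, report delivery) can be sketched as a simple data structure. All field names and values here are illustrative stand-ins, not the service's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class CheckTask:
    """Illustrative stand-in for a model checker on Forge task definition."""
    name: str
    revit_version: int   # year of the models being checked
    targets: list        # model paths/URNs, or a folder to monitor
    checkset: str        # a public-library check set or an uploaded one
    schedule: str        # e.g. "once", "daily", "weekly", "on_publish"
    report_formats: list = field(default_factory=lambda: ["xlsx"])
    email_report: bool = False

# Hypothetical weekly task watching a project folder
task = CheckTask(
    name="Weekly asset check",
    revit_version=2021,
    targets=["Project Files/Concourse C"],  # folder: checks all models under it
    checkset="DEN Asset Check",
    schedule="weekly",
    report_formats=["xlsx", "html"],
    email_report=True,
)
print(task.schedule, task.report_formats)
```

The point of the sketch is simply that a task is declarative: once defined, the cloud service owns the execution, which is what removes the desktop babysitting described earlier.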
Once my tasks are done and running-- next slide-- I can go in and monitor the reports. So this is back to that first page; this is the second panel down. The reports panel is really the history of the individual model runs that I have done. And I can go in here, select the model that was run, and it's going to give me a preview of what that looks like. And we'll go to the preview in a second. From here, I can export my Excel or HTML as well, so it doesn't have to be in my task.
But again, this is the history. I select which report I want, and then, next slide, please. If I scroll down further, it gives me, right there on the web page, the summary and all the details of my run-- what is typically found, if you're used to it, inside of my Revit model. So again, from here, I can export to Excel, I can export to HTML, and I can save those settings there. So again, a very straightforward user interface: I create tasks, I monitor my reports, and then I export that data. This allows me to automate my checks and really takes a load off me having to do this all very, very manually. That's it for the overview.
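The on-page summary Jason mentions rolls individual check outcomes up into totals. A minimal sketch of that roll-up, with a hypothetical data shape (the real report schema is richer), might look like:

```python
from collections import Counter

def summarize(results):
    """Roll per-check outcomes into a report-style summary.

    `results` is a hypothetical list of (check_name, outcome) pairs,
    where outcome is "pass", "fail", or "list" (list checks enumerate
    content rather than pass or fail).
    """
    counts = Counter(outcome for _, outcome in results)
    graded = counts["pass"] + counts["fail"]
    pass_rate = counts["pass"] / graded if graded else None
    return {"counts": dict(counts), "pass_rate": pass_rate}

report = [
    ("File size under limit", "pass"),
    ("No imported CAD", "fail"),
    ("Mark parameter populated", "pass"),
    ("Loaded families", "list"),  # a list check, excluded from the pass rate
]
print(summarize(report))
```

Note the list checks are excluded from the pass rate; that distinction matters later when DEN quotes pass/fail percentages for its asset models.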
EDDY KRYGIEL: Now passing the conversation along to Denver International Airport.
BRENDAN DILLON: Thanks, Eddy. Go ahead and next slide. So DEN has been collaborating with Autodesk on the model checker process-- the model checker tool in general-- for a couple of years. Our asset check is part of the current default library of configurations, and we expect our new version to become part of that library as well. So we've been using it on a number of projects and in a number of applications internally-- for our asset models, our facility models-- for several years now.
Next slide. The one downside, as Jason had mentioned, to the existing model checker is that it does take a good amount of time because you do have to have an active Revit model or active session of Revit open. And we have 300-plus active projects on BIM 360. Some of those are in our define phase, which is pre-design. A lot of them are in design or construction. And we have projects all the way through warranty management. Plus, we have our common data environment, our asset information models, posted up in a dedicated project folder on BIM 360. And that's a lot of content to check, especially if you're going to be trying to check it on your desktop or even a row of desktops. It took a lot of time to just stay on top of all of that. So a lot of the time, we weren't staying on top of it the way that we would have liked to.
Next slide. With the new model checker, that has changed substantially. Now we have reconfigured our internal model checker. Instead of having-- I believe it was four items that we were checking with the asset check-- now we're doing 102 items that we're checking. 24 of those are pass/fail, and 78 of those are list checks that list out all of the content that's relevant to that check. And if you've ever done any work on the model checker in the past, if you've gone looking at lists of families, that particular check can take a pretty substantial amount of time. We have some models that when we run the check, our full check, it takes just under an hour to do the entire check. And the first two minutes are all of the checks, except for all of the families. So being able to run that all up in the cloud allows us to actually get all of the checks done that we need to get done.
Next slide, Eddy. So now that we've got this new and improved tool, we've started deploying it in slightly different ways. First, there's our AIM asset data verification: we have set up an integration through Forge between our common data environment up on BIM 360 and our Maximo deployment. That does our regular check of new assets-- with the correct pieces of information identifying them as assets-- and populates Maximo, which is our maintenance system, with all of the data for those particular assets. We run the model checker on basically a weekly basis to double-check that data, so that we have some verification that what we're going to push over to Forge is right and correct and clean before that happens.
The next one is our asset information model checking. I've already touched on that a bit. Our models were not as clean as we would have liked them to be. I think our average percentage on the pass/fail was 75%, which, for our internal models, is not great. We've managed to start pulling that up now that we've got the model checker running, so I think we're closer to 80% at this point. But we have enough asset information models that it's going to take some time to get that all cleaned up. Lastly, there's the project information model checking, which Chris will touch on a little bit more. But with all of those active projects, we could never run regular checks on them otherwise. We don't have 30 people running checks. Being able to run this in the cloud and have it done in under an hour has been a huge time-saver for us.
Next slide. Is that mine?
CHRIS PITTMAN: It's mine.
BRENDAN DILLON: Sorry, there's no intro slide for you.
CHRIS PITTMAN: I didn't get an intro.
BRENDAN DILLON: Take it away, Chris.
CHRIS PITTMAN: So let me talk a little bit about how we're using it in the Jacobs Denver office. The key thing was that we were using this tool when it was still just the desktop version, which, as Brendan and Jason mentioned, is labor-intensive. On the large projects, it was key just because we had to keep an eye on the trends of that project. So if we have a project that starts trending in a negative direction, then we want to start making some corrections to it before we start losing work or losing productivity time. And when it moved to the cloud-- I'll brag that I would set up checks from my iPad while sitting on my couch, which I couldn't do previously. So work from home is great. And we can configure it to the needs of every single project. On a big project, like the one I'll speak about next, we can configure it to do more intensive checks to make sure that we are spotting these trends and are able to troubleshoot, versus maybe the smaller projects, where we're not running it as often and maybe not diving as deep into it.
So next slide, please. I'm going to highlight one particular project just to give an idea of how this tool was used. These are two large expansion projects that are currently in construction at the Denver Airport. Concourse C, on the top left up there: we're adding 16 gates; I think it's roughly half a million square feet. And then Concourse B: we're adding the extension on the east side with 10 new gates-- or, sorry, 12 new gates-- and 10 hold rooms. I can't remember the square footage; I want to say it's about 200,000 square feet. But a nice perspective is, if you took Concourse C and stood it up on its side like it's a skyscraper, it's as tall as the Empire State Building. Of course, we'd lay it back down for ease of use.
But at its peak, we had about 100 users in these files on a daily basis, and keeping tabs on 100 users and what 100 users might be doing to your file is very tough, unless you have 100 people looking after them at the same time. So we were setting up these model checks to run on a weekly basis. And then, when it moved to the cloud, we automated it by letting the cloud take care of it. But I'm happy to report-- and I'm going to brag a little bit-- that through the length of this project-- and knock on wood, because we're not done yet-- it hasn't cost us any productivity time, which was key on this schedule [AUDIO OUT] And so the construction could start and DEN could start using these gates.
Next slide, please. So how do we use this data? What we were doing was running these checks and exporting to the Excel format. And then we would import that Excel file into the templates that Autodesk actually provided. We would import that data into these dashboards, and then these dashboards would be sent to the discipline leads and the discipline BIM leads on the project. So they could get an idea of how their file [AUDIO OUT] trends where someone imported a bunch of CAD, and now the file has exploded. Did someone put a bunch of very large, intensive families into the project and elevate the file size to where it starts slowing down other disciplines? That was just key for the length of that project, to keep accurate tabs on things so that we could deliver a good product to DEN in the end. Which we're not at the end yet, but we're getting there. But the next slide, please. Yeah.
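The trend-watching Chris describes (spotting a sudden file-size jump when someone imports and explodes CAD) is essentially a week-over-week comparison on the exported report data. A minimal sketch, with hypothetical row shapes and a hypothetical 25% alert threshold:

```python
def file_size_trend(runs, threshold=0.25):
    """Flag week-over-week file-size jumps in a series of check runs.

    `runs` is a hypothetical list of (week_label, file_size_mb) pairs,
    as might be read from the weekly Excel exports. Any jump larger
    than `threshold` (fraction of the previous size) is flagged.
    """
    alerts = []
    for (prev_week, prev_mb), (week, mb) in zip(runs, runs[1:]):
        if prev_mb and (mb - prev_mb) / prev_mb > threshold:
            alerts.append((week, prev_mb, mb))
    return alerts

# W03 jumps from 318 MB to 455 MB (about +43%), e.g. an exploded CAD import
weekly = [("W01", 310), ("W02", 318), ("W03", 455), ("W04", 460)]
print(file_size_trend(weekly))  # → [('W03', 318, 455)]
```

In practice this kind of logic lives in the Power BI dashboard rather than in code, but the sketch shows why keeping a run history matters: a single snapshot can't tell you which week the problem appeared.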
EDDY KRYGIEL: Transitioning over to Chuck. Chuck, do you want to talk about why Autodesk created this model checker in the cloud?
CHUCK MIES: Absolutely. I appreciate the question. So the primary reason we created the tool was that we really wanted to explore the opportunities presented to us by the Design Automation for Revit API. I've seen a couple of questions come up in the chat panel, so I'll talk a little bit about Forge. Forge is an underlying set of APIs that our cloud platforms are built upon. And one of the APIs that we've been utilizing is called the Revit Design Automation API. It allows you to take code that has been written for a desktop application, like our desktop model checker, and port that to our Forge platform so it can interact with our cloud services-- both of the Docs platforms, BIM 360 Docs and Autodesk Docs. So it was a great opportunity for us to flex this Design Automation for Revit service and really start to understand it.
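For context on what "porting to Forge" looks like from a developer's side: the Design Automation API accepts work items over REST that pair a registered activity with signed input and output URLs. The sketch below only assembles an illustrative payload; the activity id, argument names, and URLs are hypothetical, and authentication plus the actual POST to the Forge endpoint are omitted entirely.

```python
import json

def build_workitem(activity_id: str, model_url: str,
                   checkset_url: str, report_url: str) -> dict:
    """Assemble a Design Automation-style work item payload (sketch only).

    The argument names ("rvtFile", "checkSet", "result") are placeholders
    that would have to match the parameters of the registered activity.
    """
    return {
        "activityId": activity_id,
        "arguments": {
            "rvtFile": {"url": model_url},                  # input model
            "checkSet": {"url": checkset_url},              # input check set
            "result": {"verb": "put", "url": report_url},   # output report
        },
    }

payload = build_workitem(
    "MyNickname.ModelCheckerActivity+prod",       # hypothetical activity
    "https://example.com/signed/model.rvt",
    "https://example.com/signed/checkset.xml",
    "https://example.com/signed/report.xlsx",
)
print(json.dumps(payload)[:60])
```

The design point Chuck makes stands out here: the checking code itself is the same as the desktop add-in; only this thin orchestration layer is cloud-specific.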
The key driver, though, was BIM deliverables-- we've been talking about BIM deliverables for how long now? But what we're seeing is a tipping point: people are now starting to really care about the information in a model. As Brendan mentioned, they're directly connecting the information in these models to their asset management system. So in this particular case, making sure that that information is correct, formatted correctly, filled out, and validated has become a significant benefit. And that was one of the key drivers. It's also about ensuring model health.
Chris talked quite a bit about their use of the dashboard, and I'm sure we'll talk more about this, but as these models take on additional importance, it's really key to make sure that we're tracking the health of these models over time, so that if there's a problem with a model, we can get ahead of it and take care of it. But I think the key driver, from an Autodesk perspective, was to take this concept of validation and move it from a select group of stakeholders-- someone capable in Revit, who had a copy of Revit, who could then run a model check, export some results, and report it out-- to anyone who may have access to a project on the Autodesk Construction Cloud and has a particular type of information that's important in that model. They now have the ability to validate that.
So it was really about those key tenets: flexing the Design Automation API and really understanding what it could do, which has incredibly impressed us; ensuring that we're getting good-quality information into our models-- the "I" in BIM; making sure the models are healthy and performing correctly; and enabling all of our stakeholders to have access to these rich capabilities. And I think, Jason, what are your thoughts on that?
JASON KUNKEL: Yeah, so I'll hop in and share a couple here as well. As you may have noted from my extremely long bio-- my apologies-- application development is not the only thing I'm doing here. I work a lot, boots on the ground, with companies and organizations, with their standards and their BIM requirements. And while this presentation is certainly focused on the owner's perspective, we see this trend all the way up, from designers to engineers to contractors to owners as well.
And doubling down on what Chuck was saying, we're kind of seeing that promise of BIM starting to come through. We've been talking about it for 15, 20-ish years at this point. Owners and companies are starting to reuse models, so they want to make sure there's good integrity there with the quality of the information and the data. We're seeing the models getting used for data almost as much as for the drawings. And with the need for that data comes a need for reliability and consistency within that data as well.
And certainly, back in the day-- I come from working in an architecture firm; I come from doing BIM management and IT work-- early on, we all would generate our own gut feeling and understanding of what made a good model. Why does this need to be there, and this doesn't need to be here? This is allowing us to get past that gut feeling and start validating those suspicions we always had, build up reports, build up evidence, and create rules, training, and new standards around that as well.
CHUCK MIES: Thank you for that. I think, just speaking as an architect who's been using-- I don't know, I started using Revit back in 2003, when Autodesk first acquired it-- that "I" in BIM has always been a real hard thing to get, and I think, for the longest time, it's been very subjective. Right? To Jason's point, you have a gut feeling about what good is. I think where this tool is hoping to get us is making that more objective, where we can actually measure the value of the data or the quality of the geometry within the model. But I want to ask: what kind of data are you getting, and how are you using it? Just because we have data doesn't mean it's valuable.
BRENDAN DILLON: I think that's for me, right?
CHUCK MIES: It is for you.
BRENDAN DILLON: OK. So the data is as useful as-- what data are we getting? Oh, wait a minute. This is Chris's.
CHRIS PITTMAN: You can keep going, and then I'll just tag on with you.
BRENDAN DILLON: OK. Did I have a slide here? Oops.
CHRIS PITTMAN: I don't think so.
BRENDAN DILLON: Go ahead, Chris, because you get the video here.
CHRIS PITTMAN: OK, so thanks. What we are using it for-- the data we're getting-- is, as Eddy alluded [AUDIO OUT] away from "I know this is wrong because I've been doing it for five years," toward an actual, objective data report that kicks out every possible thing we'd want to look for in a model and really compares that against known industry standards. Versus my gut feeling of "I don't like CAD imports," or "I don't like exploded CAD"-- which no one on this call should like. But just that ability to do that, and then to take that data and use it for troubleshooting. So if a discipline calls you up and says, "Hey, the model's misbehaving," having this data, and this historical data from the start of the project to 90% construction documents, helps you to better troubleshoot and help that team to succeed, versus just jumping in blind and hoping you can find the obvious thing.
So we find things pop up on a weekly report, where a discipline imported a CAD file and exploded it, where we may not have caught that until it got to deliverable. And then it was in Brendan's hands, and I'm embarrassed because I've turned that file in. So that's the most helpful part of the data for us.
EDDY KRYGIEL: Brendan, do you have comments you want to add around that?
BRENDAN DILLON: Not beyond what Chris said, really. It's all been good data. We had no idea just how bad our asset information models were until we were able to run this. And while they're still not where they need to be, they're getting better.
EDDY KRYGIEL: Well, that's good, right? Process is about improvement. I want to ask, what are other customers doing with their data? Jason?
JASON KUNKEL: Yeah, let me. So we're seeing some customers make some dashboards. We're seeing some other customers make some dashboards. And thirdly, we're seeing some customers make dashboards. But backing up from that: we're in a visual industry, we're trained to communicate visually, and these dashboards have exploded. As we've mentioned, there is a template right on the BIM Interoperability Tools page that's provided as a Power BI template. That is intended as a jumping-off point. We expect everybody to make their own standards and their own check sets, and then build dashboards off of that for the information they need to see. But this idea that we can get a consistent, repeatable, easy-to-look-at way to start ingesting this information at a high level is very attractive and very, very useful to a lot of individuals and firms using the tool at this point.
I kind of mentioned this before, but developing standards based on what can be reliably checked--there's a philosophical discussion we have about, is something actually a standard if nobody is following it? At that point, it's just a sentence written down. But you can hone standards, and you can design them, and then easily check them with the tool, and modify and update the standards--and what is expected in the Revit model deliverable--based on what can be quickly checked inside of the model checker. And then, like Chris said, I think he spends too much time on the couch now, just finding what to do with all the free time he's getting from running the model checker on Forge.
CHRIS PITTMAN: Don't tell my boss.
EDDY KRYGIEL: Netflix.
CHUCK MIES: So Eddy, if you could go to the next slide, I'd like to add some thoughts here. For us, it's really about enabling owners, like Brendan at Denver International Airport, to create a process for accuracy and quality that becomes collaborative and not punitive. Right? The current processes around model quality, in my experience--and I'm like Eddy, I'm a recovering architect and started using Revit when Autodesk first acquired it--are all very punitive in nature. In other words, you submit a model, and somebody comes back and tells you that your model doesn't meet their standards, or is insufficient in some way.
With a tool such as this, where you can create checks that run at an interval, you have the opportunity to get ahead of the problem. You can start to look at model quality proactively instead of reactively, so you can anticipate problems. There are several workflows I'm going to mention in this answer that have come out of the technology demonstrator we've been working with. One I thought was really interesting was, why not create a deliverable check set for the information you would expect in the model at each of your deliverables? So at a 30, 60, 90.
At 30, you expect this much information. At 60, you expect this much information, et cetera. As for architects, engineers, and contractors, we're seeing them build this directly into their process, again using a workflow that came from one of our customers out of the beta program. One of the new things we've added inside our technology demonstrator beta program is called on-publish. So if you think about a cloud-sharing workflow in Revit--let's say I'm the architect, and I'm working with a mechanical engineer who has just said, I'm submitting a new model--why not have your model check run against that model on publish, before you actually incorporate it into your environment and accept it and bring it in? So some interesting workflows were happening there.
For all stakeholders, start thinking about exploring alternate workflows. What you see on the screen here is a workflow we actually have a YouTube video on: embedding a simple dashboard directly into your BIM 360 home page. Let's say I'm a project manager. I may or may not understand Revit, but what I do want to understand is the health of my models. Well, if we've integrated this into the project home page, every time I log on to BIM 360 or Autodesk Construction Cloud, I'm going to have the opportunity to see and understand what the health of those models is. So it really starts to make this a much more interactive process.
And my last thing for all stakeholders--and I actually saw this pop up in the question pane, so somebody is getting an early bonus answer here--is to start thinking about ISO 19650. Right? ISO 19650-1 and -2 are particularly about file naming and the workflows for managing and accepting the documentation. When -3 comes along and brings the idea of asset information requirements and an asset information model, think about creating a model check against those asset information requirements. In a 19650-3 workflow--and you can do this with the desktop tool, also; this is an existing workflow--make sure you're understanding your completeness against the asset information requirements that are going to be part of 19650-3. So, a lot of thoughts on that, Eddy.
EDDY KRYGIEL: Thank you, Chuck. That was really, really good feedback. Much appreciated. And we are seeing that drive to a more standardized kind of model environment, model engagement. Right? So it just helps level set the expectations for everybody on the project, and I do like your comment about carrot over stick. Right?
CHUCK MIES: Yeah.
EDDY KRYGIEL: How do we help people do the right thing in the right way without being punitive about it? I want to ask next, how could this change the way you work with your suppliers? Brendan, this one is for you. Could your expectations for deliverables change, and how would you see that currently evolving?
BRENDAN DILLON: So as far as working with our supply chain, I would expect that once we start rolling this out on a broader basis, one, our supply chain is actually going to look at the content that comes out of it, and, two, they're going to take action on it. Is this going to change our expectation for deliverables? Yes. Currently, for a consultant to get feedback on their model, they either need to run one of the model checkers themselves--which previously we hadn't configured for our standards quite the way we would have liked, so they would need to run something like the model health check, plus maybe our asset checker.
So they'd have to do that themselves, which takes time. And then assess the outputs and take the time to correct it. Or they need to wait until they do a deliverable to us. At which point, we could take up to two weeks for review, and it's a fairly in-depth review. And they would get the information then, but that means they were probably only going to get three, four, maybe five points of feedback during a typical project. Some of our projects are larger and get more than that. But with something like this, we can set it to run weekly or monthly, depending on how we end up deciding to roll it out, and they'll have a much more current and dynamic view of how their models are performing, at least in terms of a pretty significant subsection of our standards.
And we've set up our model checker not just to comply with, but to reflect, the organization of how we have our standards and our model review written out and grouped, so that it's pretty simple for them. Now running the model checker and getting a pass [AUDIO OUT] And we actually included that as language within our model [AUDIO OUT] check, but it should at least get them a good chunk of the way there.
EDDY KRYGIEL: How is this changing how you work with your internal teams? How has this changed how you've engaged with project management or contracting or any of your Denver resources?
BRENDAN DILLON: Well, in fairness, to this point, it hasn't changed much, because it's still in beta, and we haven't had much chance to roll it out internally. The folks we have engaged with, particularly the project managers, have been enthusiastic about the idea of being able to run a model check, say, a week or two before a deliverable. Then, when the deliverable actually comes along, it's going to be less likely to require a resubmission, which of course creates more work and delays, and nobody wants to have to resubmit anything. Outside of the project managers, we haven't had a lot of feedback yet, but we expect to have much better, much more consistent content to go out to our asset group and our GIS groups.
EDDY KRYGIEL: So it's emerging. Let me just say--
BRENDAN DILLON: Very much so.
EDDY KRYGIEL: Turn the tables on that question and ask Chris. How does this change how you work with your owners, especially ones with data requirements?
CHRIS PITTMAN: Well, yeah, with the BIM-savvy owners, like DEN, it helps us because we can work hand in hand with them to help develop these tools--because it benefits us. If we produce a better model and a better deliverable for Denver, maybe they'll hire us on the next job. But for less BIM-savvy owners that maybe don't have well-developed BIM standards, or don't have model delivery requirements at all, it helps us to at least--I hate to keep using the word--provide an objective data report that shows, hey, these are the 10 models we're going to deliver at the end of this project.
You might not have requirements for it, but we're at least going to show you that we did it to industry standards and that our models are healthy. And when you take this on to the next project as as-built data, it's in good shape for the next person. So whenever you do start to develop those BIM standards--and we're seeing more and more clients have their own internal BIM standards or refer to a known, published BIM standard--they're in better shape.
EDDY KRYGIEL: Thanks for that. I want to ask one more question as kind of a roll-up here. As we all begin to utilize newer technologies and newer workflows, sometimes you take a step back to take two steps forward, and you have those aha moments. I know, developing the tool, we've had several of those, and Brendan has even pointed one out, where we've had a number of users in our beta pool--Chuck mentioned this earlier--basically checking their own templates. So they can make sure that their templates and their current content are actually up to snuff before they share them out. It was a use case we quite honestly didn't expect. So I want to ask everybody here on the panel, before I turn it over to questions from the audience, have you had an aha moment? And if so, what has it been? Brendan, let's start with you.
BRENDAN DILLON: So my aha moment was the first time-- actually, no. It was when I finally went through and saw exactly how much time it would take me to run these checks on just our asset information models. We've got 80 models, and that's not all the models we need to have. It's just what we have currently. And the total runtime on that was 1,480 minutes, so over 24 hours. Now, that's not counting the amount of time, if a human were to do this, it would take for them to just notice that, oh, the run is done, and then to save the run or save the outputs. It doesn't account for anything failing. And then you need to either fix the run or, if you're not watching it like a hawk the entire time, it's going to sit there doing nothing for a while.
And it also doesn't account for the fact that if you start running that many model checks, you're going to do some reboots. Depending on the model, it might be after each model, or it might be after every five or 10 models, but you're going to have to reboot your computer. So you've got probably another six, eight hours on top of that in an actual, real-life situation. So the amount of time that this is enabling us to-- the amount of work this is enabling us to do, the amount of checking, is pretty fantastic for us. It simply wouldn't happen otherwise.
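Brendan's numbers above work out roughly as follows; note that the six-to-eight-hour reboot-and-babysitting overhead is his estimate, not a measured figure:

```python
# Back-of-the-envelope math from the panel: 80 asset information models,
# 1,480 minutes of serial desktop runtime, plus an assumed 6-8 hours of
# human overhead (noticing finished runs, saving outputs, reboots).
MODELS = 80
RUNTIME_MIN = 1_480

per_model = RUNTIME_MIN / MODELS          # average minutes per model
total_low = (RUNTIME_MIN + 6 * 60) / 60   # hours, with 6h of overhead
total_high = (RUNTIME_MIN + 8 * 60) / 60  # hours, with 8h of overhead

print(f"{per_model:.1f} min/model")               # 18.5 min/model
print(f"{total_low:.1f}-{total_high:.1f} hours")  # 30.7-32.7 hours
```

So a single full pass over just the current asset models is more than a full working week of machine-plus-human time if done by hand, which is why it "simply wouldn't happen otherwise."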
EDDY KRYGIEL: I think that's great. People get to actually do the jobs they were hired to do, and not just open models, run a check, and repeat that indefinitely. Chris, as an AE, how has this changed things, or what was your aha moment? Anything?
CHRIS PITTMAN: To piggyback on Brendan, yeah, the time savings when we could move to the cloud was just--I don't want to say immeasurable, because obviously Brendan did measure it--but it got the architects and engineers that might be on the BIM side of things back to doing architecture and engineering again, which is what most of them want to do anyway. But the other aha moment we had is, when we deliver models to DEN, we call them dead models. They're no longer our live models. We pulled them off the live project, and they're in their own deliverables folder. They're not cloud-enabled anymore, but they live within our folder structure on the BIM 360 hub. We can actually run these cloud checks against those dead models and then provide a report to DEN at the end--they already have a report in hand that we've run against their standards. So we try to save them time, and we can do this with other clients as well. Being able to do it on non-cloud-enabled files was a nice, happy moment.
EDDY KRYGIEL: It's always nice to have predictability in the outcomes and trust between owner and AE here. If you're running checks, and it's the same checks that Denver is expecting you to run, and you're delivering those models with the results, I can't imagine that doesn't make the entire process smoother and easier for everyone. Chuck, what about you? What do you see internally as an aha?
CHUCK MIES: Yeah, so I think the first aha, from an Autodesk perspective, was the robustness of the Design Automation for Revit API and the fact that we could move our desktop code over there and then start to look at optimizing the workflows. On feedback from the beta users: one of the new things we've added--and I mentioned this earlier in an answer--is the on-publish trigger. That came from a beta user. They were like, hey, if you could give us a tool that would run a check as soon as I publish a model, that would be great. So this input from our customers is just amazing.
You mentioned the workflow, Eddy, that we had that discussion with one of our customers about. I'm going to use the desktop Revit tool to create my templates. And I'm going to use the desktop model checker to build a check against those templates. Then I'm going to post the templates and the model check to the cloud and set up the cloud checker to run on my templates once a day to make sure nobody's messed up my templates. Because if something happens in a template file, that can quickly proliferate downstream. And I've had this tantalizing image on the screen the whole time. This is one of our customer's models and the progression of these models over the entire technology preview period.
The same models, running at the same time, and what you'll notice is, early in August, there's a real sharp drop-off in the amount of time it was taking to process. And that came from a beta user suggesting to us: I think the Design Automation service is actually opening the work sets and processing all of that information; maybe you could set it up not to do that. And when we did, we gained 40% in productivity by making a very simple change in the code. And Jason, if you want to elaborate on that, you can. But it was stunning, what happened.
JASON KUNKEL: Yeah, and honestly, frankly, it was shocking. I've been using Revit for 10, 15-ish years at this point, and my understanding of what data is accessible when work sets are not open changed overnight. Like Chuck said, we turned off work sets, which apparently just meant we couldn't see or touch them through the user interface. And when things are being done in the cloud, we don't need to touch them in the interface. So, like Chuck said, learning from the beta users as information has come in has been hugely exciting.
EDDY KRYGIEL: Right. Thank you, guys. Now we're going to turn it over to audience questions. But before I start reading off any of the questions from the question panel, I want you to know that you can find this group, along with resources on how to use the desktop tool, the check sets, and the dashboard, at Interoperability.Autodesk.com. All of that content is posted online and available for free. Just looking through the chat window and the questions that have popped up, one of the first and most requested is, Chuck, when is this going to be available?
CHUCK MIES: Yeah, so Eddy started the presentation with a safe harbor slide. So I'll remind you all of our safe harbor slide that we're now talking about futures. We are officially right now a technology preview/beta, and we're gathering customer input. And as we've mentioned multiple times in this presentation, we're being really surprised by some of the things that our customers are helping us understand and learn about this process.
So as far as what the next steps might be, we're going to see where this technology preview and beta takes us, what we learn from the customers, and make decisions then about, does this get released as a product? Does this get released as a service? But none of those decisions about what happens from here have been made yet. So for those of you who have been helping us with the testing--we're now 9,000 model runs in and over half a million individual checks--we've learned a ton, and we're going to continue to learn. And when we feel that we've learned enough, we'll make some decisions about going forward.
EDDY KRYGIEL: Thanks, Chuck. Jason, I've got a question here about Forge and what role Forge plays in this. We've talked about Revit. We've talked about desktop. We've talked about cloud. We've talked about BIM 360. Can you help weave a picture between those items?
JASON KUNKEL: Sure, yeah. Chuck started talking about this a little bit earlier on this question, so I'll just try to fill in some gaps and double down. Forge is Autodesk's collection of different APIs. APIs are kind of the back door that developers can use to access functionality of the software that's not typically exposed through the user interface. Most of the functionality within BIM 360 and ACC is directly exposed--if I want to copy a file, if I want to create a new folder, if I want to do any of the functionality on the ACC project side.
Now, the one Chuck also mentioned, which we're leveraging, is Design Automation for Revit. And that really is a version of Revit without a monitor, without a mouse, that sits running on a cloud server. So we've needed to kind of tweak and readjust the back end of the model checker application itself, so it can be injected, and you tell it to do things in different ways rather than with a mouse click. But Forge, like I said, is the collection of these different tools and ways we can build programs that access ACC projects and also access this headless version of Revit in the cloud itself. I hope that helps.
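To make that concrete, here is a minimal sketch of the shape of a work item request to Forge's Design Automation v3 API, which is how a program asks headless Revit in the cloud to do something. The endpoint is the published v3 work items route, but the activity ID, argument names, and URLs below are hypothetical placeholders for illustration, not the Model Checker's actual activity:

```python
import json

# Published Forge Design Automation v3 work items endpoint.
DA_ENDPOINT = "https://developer.api.autodesk.com/da/us-east/v3/workitems"

def build_workitem(model_url, checkset_url, report_url, token):
    """Assemble (but do not send) a work item request: which activity to
    run, where the inputs live, and where to upload the output."""
    payload = {
        "activityId": "MyNickname.RunModelCheck+prod",  # placeholder name
        "arguments": {
            "rvtFile": {"url": model_url},        # input: the Revit model
            "checkSet": {"url": checkset_url},    # input: XML check set
            "report": {"verb": "put", "url": report_url},  # output upload
        },
    }
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    return DA_ENDPOINT, headers, json.dumps(payload)

url, headers, body = build_workitem(
    "https://example.com/model.rvt",
    "https://example.com/checks.xml",
    "https://example.com/report.json",
    "TOKEN",
)
```

In a real integration you would POST that body with an OAuth token and poll the work item status; the point here is only that "telling it to do things in different ways rather than with a mouse click" means submitting structured requests like this.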
EDDY KRYGIEL: It helps me. So Chuck, next question for you. Does the cloud version have the ability to create custom checks? And can you talk about maybe some of the effort in creating checks and interoperability between the check sets?
CHUCK MIES: Yeah, absolutely. That's a great question. So first of all, our desktop tools that have been shipping inside of Revit are still a great place to start. I posted links in the chat panel a couple of times to the desktop site, Interoperability.Autodesk.com. You can go there and learn about the desktop tools. Included with them is the Model Checker Configurator for Revit. That is a standalone tool that can actually help you build those checks. And it's important to understand--I mentioned the benefits of using Design Automation for Revit--since we're essentially running the same checker on both the desktop and the cloud, the checks are fully compatible with each other. So any check you make for the desktop tool will also run on the cloud tool, as we start to make decisions about moving that forward.
As far as the level of effort goes, one of the things we've been really specific about is that our check sets had to be some form of open source, and they're XML files. For any of you who have been around as long as me and have ever written a LISP file, everybody learned LISP by opening up somebody else's LISP file and seeing how they did it. We ship an entire library of sample checks that you can open up in our configurator, or in an XML editor, and see how we did it--how did we make sure a particular parameter was valid, had data, and that the data was formatted correctly? Guys, beg, borrow, and steal. Open these checks up. Pull information out of them. Assemble your own checks. That's the best way to get there.
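In the spirit of that beg-borrow-and-steal advice, a short Python sketch of cracking open a check-set-style XML file programmatically; the element and attribute names here are illustrative stand-ins, not the actual check set schema--open one of the shipped samples to see the real structure:

```python
import xml.etree.ElementTree as ET

# A made-up check set fragment in the general shape of an XML check file.
# "CheckSet", "Check", "name", and "severity" are illustrative names only.
SAMPLE_CHECKSET = """
<CheckSet name="Owner Deliverable - 60 Percent">
  <Check name="No CAD Imports" severity="Error"/>
  <Check name="Asset ID Parameter Filled" severity="Warning"/>
</CheckSet>
"""

def list_checks(xml_text):
    """Return (name, severity) for every check in the file -- a quick way
    to inventory a borrowed check set before adapting it."""
    root = ET.fromstring(xml_text)
    return [(c.get("name"), c.get("severity")) for c in root.iter("Check")]

print(list_checks(SAMPLE_CHECKSET))
# [('No CAD Imports', 'Error'), ('Asset ID Parameter Filled', 'Warning')]
```

Because the check sets are plain XML, this kind of scripted inventory, diffing, or bulk editing is possible alongside the configurator.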
EDDY KRYGIEL: Brendan, I want to keep rolling on the same question here. We've heard from a developer. We've heard from an Autodesker about how easy it is to create checks and how flexible they are. As somebody who's made them as an end user, what was your experience?
BRENDAN DILLON: A lot easier than writing it in Dynamo, which is what we were doing originally. There are still some things we're going to continue checking in Dynamo, because I don't know how to write checks in the checker to cover those particular details. But we're looking at moving further and further away from that, because sending a Dynamo script to our consultants and telling them to run it to check their models is intimidating for a lot of them. And this model checker is simple, whether they run it on the desktop, which they can still do, or up on the cloud, which they will hopefully very soon have the option to do as well.
And yeah, like Chuck said, beg, borrow, steal. I did not write most of the checks we're using; I stole them from other checks. There are a few exceptions, but I'm not a coder. I'm not someone who knows a coding language. Not that you really need to know much, but some of the more detailed stuff we needed to get into--things hyper-specific to our standards--required a little of that. That was a lot of trial and error. Mostly error, but we eventually got there. And we're looking forward to fleshing it out even further.
EDDY KRYGIEL: Thanks. Chuck, we've gotten some really interesting conversations in this chat window. One of, I think, the more interesting ones--and I would say it's really hyper-focused, but I actually don't believe the answer will be--a big setup here for this question, but what kind of advantage or comparison do you have to Assemble?
CHUCK MIES: Ah, interesting. Yeah, Eddy set this up because there's been a recent internal conversation. We were actually showing both the desktop and the cloud tool internally to one of our people who had an interesting background with Assemble Systems. They actually came over to Autodesk from the Assemble acquisition, and they looked at our Excel export and realized that it had the asset or unique IDs in it, and that, by using the Assemble Connect module, they could--from Revit--publish the model to Assemble, then run the model check, and then link the Excel spreadsheet through Assemble Connect directly back into Assemble.
So now you have a full visualization engine for the checks--what's failing, what you may have issues with--directly in Assemble. So it's been a really, really interesting conversation, and it has happened over the last seven to 10 days. This is what I meant when I said earlier, we want to learn as much as we can from this technology preview before we make decisions about what the future looks like for the product.
EDDY KRYGIEL: Thanks for that, Chuck. I think it would be really powerful not only to visualize your outputs in a dashboard, but to visualize them in an integrated model environment. I want to ask one more question here--I think we have time for hopefully at least one more--about metricizing stakeholder engagement. So Chris, question to you. When these reports are created, and you're doing the model analysis, and you've built a dashboard and publicized it and sent it around, how are people engaging? Do you see a high level of engagement? Do you see changes in behavior or workflow because of those dashboards and reports you're putting out there? Can you speak a little to how that might have improved the project process?
CHRIS PITTMAN: Yeah, so we had mixed results. When we first started doing the model dashboards, we would send them out as either a PDF or screenshots to the design team. And emails on a project this size are easy to ignore when you're getting 200 a day, so it was kind of understandable that some disciplines would not react to those emails, or didn't really know what they were looking at. What really hammered it home was when we started adding it to the opening screen of each discipline's model.
So we would take screenshots, and the second they opened that model up, they had no choice but to see these reports. And they could see the trends over time. I think we got better engagement when we just threw it right in their face. And that's why I think we started to see improvements in the model. We started to see errors go down. Assets were getting tagged like they were supposed to be. People stopped making--I don't want to be rude, but--the dumb mistakes of CAD imports or overly detailed families and model groups. They had the red flag there, and they would just self-correct, which was fantastic news to me.
BRENDAN DILLON: DEN hasn't dashboarded it yet, but when we did start doing reviews, initially we did about six months of reviews where we weren't telling people we were tracking them. We were getting about 65% compliance on our requirements. After those six months, we started showing people, here are the results. And, by the way, we're going to report this information to the project managers, and, when we're doing contracting in the future, this will be considered. And we're out of time.
EDDY KRYGIEL: Thank you, everybody, for attending. Thank you to all the panelists for your time and your effort here going over these new tools, tips, features, and workflows. And I hope you all enjoy the rest of your AU. I think--