Josh Vogel

OR-Tools – Optimize Your Company’s Workflows

If you read my summary of PyCon Israel day 2, you know that my favorite lecture of the two days was one that used OR-Tools to solve some scheduling issues the presenter was facing at her company. Aside from it being a great lecture, lightbulbs immediately went off in my head: could this package solve issues faced by companies around the world?

What is OR-Tools?

The OR-Tools package solves several different types of combinatorial optimization problems. While I knew these types of problems existed, I never really spent much time thinking about them, much less imagined there was software that would make solving them on a grand scale as easy as writing a few lines of code. The OR-Tools package has code to solve the following types of problems:

  • Constraint programming (finding assignments that satisfy a set of rules)
  • Linear and mixed-integer optimization
  • Vehicle routing (think traveling salesman and delivery fleets)
  • Bin packing and knapsack problems
  • Scheduling and assignment problems
  • Network flows

Now that we know some of the things that OR-Tools can help with, let’s see what I think are its major strengths compared to traditional machine learning.

Why do I find OR-Tools so fascinating?

Ever since I read “Cheaper by the Dozen” in elementary school, I have been kind of obsessed with finding the optimal solution to the tasks I face in my daily life. (No, I don’t button my shirts from the bottom up.) I’m the kind of person who organizes his shopping lists by how items are laid out in the store. I also try to group my errands together and not pass by the same place twice. When I was in the dating period of my life, I joked with my friends about an ACPD (average cost per date) metric, which I assumed was somehow correlated to the prospects of actually marrying the young woman (study in progress). So yeah, while I may not have known it, optimization has had a special place in my heart for a while (I lost track of the ACPD for optimization).

OR-Tools – ML without Mounds of Data

Since learning of OR-Tools two weeks ago, I’ve crawled down various rabbit holes of the combinatorial optimization world. I have watched a few graduate-level classes from a course uploaded to YouTube during Corona, as well as another more basic college course (which seemed to have been edited by AI to remove all the dead air, making it very choppy). I also stumbled upon a YouTube video from Barry Stahl called Building AI Solutions with Google OR-Tools. He is clear and gives useful examples. It is the best video I have seen on the subject so far and is definitely worth a watch.

But what did he say that was so revelatory for me? Among all the other great things he says, he mentions something along these lines: “Many machine learning models require mounds of data to give you the best result. But optimization problems aim to be solved given only the data that is presented.” What he means is that if there is a way to turn your problem into a formula before analyzing your mounds of data, you may want to try that first, especially if you have nothing else to go on. You can optimize your profits against your inventory knowing only your profit margins, regardless of historical performance (though it helps). Or you can get your teachers and students assigned to their classes each semester without hours of shuffling notecards.
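
To make that concrete, here is a minimal sketch of the inventory idea using OR-Tools’ CP-SAT solver; the products, margins, and shelf-space numbers are all invented for illustration:

from ortools.sat.python import cp_model

margins = {"widget": 4, "gadget": 7, "gizmo": 5}  # profit per unit
space = {"widget": 2, "gadget": 5, "gizmo": 3}    # shelf units per unit
capacity = 40                                     # total shelf units

model = cp_model.CpModel()
qty = {p: model.NewIntVar(0, capacity, p) for p in margins}

# Constraint: total shelf space used cannot exceed capacity.
model.Add(sum(space[p] * qty[p] for p in margins) <= capacity)

# Objective: maximize total profit -- no historical data required.
model.Maximize(sum(margins[p] * qty[p] for p in margins))

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    for p in margins:
        print(p, solver.Value(qty[p]))
    print("profit:", solver.ObjectiveValue())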

As a side note, the lecture also explained why I have ALWAYS had an issue solving Sudokus. On the surface, this kind of puzzle seems like something that would be perfect for my interests. After all, it is a complex problem that needs solving. But, as Barry explained, it is effectively IMPOSSIBLE to check all the possible combinations. Instead, if you approach it as a “constraints” problem, solving it becomes that much easier. If I ever get a few hours, I may even try to program a Sudoku solver; then I can claim that I have mastered the puzzle.
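
If I do find those few hours, the skeleton would probably look something like this (my own sketch with OR-Tools’ CP-SAT solver, not code from the lecture):

from ortools.sat.python import cp_model

model = cp_model.CpModel()
# One integer variable (1-9) per cell of the 9x9 grid.
grid = [[model.NewIntVar(1, 9, f"r{r}c{c}") for c in range(9)] for r in range(9)]

for i in range(9):
    model.AddAllDifferent(grid[i])                         # each row
    model.AddAllDifferent([grid[r][i] for r in range(9)])  # each column
for br in range(0, 9, 3):                                  # each 3x3 box
    for bc in range(0, 9, 3):
        model.AddAllDifferent(
            [grid[br + r][bc + c] for r in range(3) for c in range(3)])

# Pin whatever clues the puzzle gives you, e.g. a 5 in the top-left corner:
model.Add(grid[0][0] == 5)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for row in grid:
        print([solver.Value(v) for v in row])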

Conclusion

I still have a lot to learn about OR-Tools and this area of math. The great thing is that OR-Tools and other similar packages have a robust support community around them. As I plan to start a few projects that will leverage OR-Tools, I look forward to both learning from them and helping you, my readers, on your journey through ML.

TWIL – This Week I Learned – SQL Optimization, Versioning and HTML Nuances

Week of July 10, 2022

A (hopefully) weekly column where I keep you updated on some of the topics I learned about during the week.

SQL Optimization

Grr

Last week, a query that I was given and then modified on my own went into production. Then everything started going haywire: CPU usage went up to 100% and systems slowed down. Despite testing, the query needed to be optimized, and big time. The issue was that those who were skilled at optimizing didn’t know the table schema the query was based on, and the person (me, but not really) who knew the schema didn’t really know optimization. Time for the autodidact to didactitate (?)! I hit the books and, with guidance from the professional optimizer at my company, focused on something called Common Table Expressions, which according to the top hit on Google is:

A Common Table Expression (CTE) is the result set of a query which exists temporarily and for use only within the context of a larger query. Much like a derived table, the result of a CTE is not stored and exists only for the duration of the query.

chartio.com
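
Since a CTE is easier to see than to describe, here is a toy example run through Python’s built-in sqlite3 module (the table and numbers are invented):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 120.0), ("bob", 80.0), ("alice", 45.0)])

query = """
WITH customer_totals AS (          -- the CTE: a named, temporary result set
    SELECT customer, SUM(total) AS lifetime_total
    FROM orders
    GROUP BY customer
)
SELECT customer
FROM customer_totals               -- used like a table in the outer query
WHERE lifetime_total > 100;
"""
print(conn.execute(query).fetchall())  # [('alice',)]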

Many professions (medicine, for example) are known to be part science, part art. Coding borrows from both areas too. SQL optimization, though, definitely falls more into the art category. As those experienced in SQL know, there are a million ways to write a query, and every time you estimate an execution plan there are just as many ways it can be executed. Finding a fine balance between readability, execution time, load, and a million other factors quickly takes composing SQL out of the realm of science and into the world of art. Maybe that’s why we “compose” SQL: it is like conducting a symphony!


Versioning

This one is quick. I lost a bit of time this week working on projects because I was starting again on something old and all the versions had changed. It might be time to consider something like Docker or even just a venv to make this go quicker. Note to self: look into that.
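
Note to self, part two: the standard-library venv module can even create the isolated environment from Python itself (the usual route is running python -m venv .venv from the shell):

import venv

# Create an isolated per-project environment (with pip) so old projects
# keep the package versions they were built against.
venv.create(".venv", with_pip=True)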

HTML Nuances

I am running a small computer camp for my son and his friends in a few weeks, so I have been working hard on putting together the material for it. The goal (aside from teaching them that the monitor is NOT the computer) is to teach them proper coding techniques before they can learn bad habits. So while looking into how to write HTML, I was curious whether values for attributes officially need single or double quotes. The answer (a rare one in the world of coding) is that you can use both. Go figure.

Beyond!

  • Signed on a new client who needs help with a site written in ASP.NET. It’s my first time working with this codebase, but basically it works the same as everything else: it needs its own set of packages, IDE, etc. Look here for more updates on how this progresses. Also, it is silly easy to get IIS running on your local machine. If Windows weren’t so bulky (and costly), it could have had something going 20 years ago. Oh well.
  • Bought a mopping robot this week; I have it run every night. Haven’t touched a broom all week!
  • Famous YouTubers I had never heard of: Mr. Beast, Ace Family. This is an ecosystem that is way larger than anyone can imagine (note for blog post); it is so large there are even YouTube channels (with millions of subscribers of their own) that track gossip about YouTube personalities.

What is Proof-of-Concept Testing? – Part 3

I just wanted to take a moment or two to wrap up my recent discussion of the importance and value of proof-of-concept testing. I mentioned in parts 1 and 2 that proof-of-concept testing comes in two forms: first, a race to MVP, or minimum viable product; second, pushing already existing technology to its limits to find out what it is capable of. But what is the value proposition for doing a proof-of-concept?

Learning

While this may not be an aspect that brings monetary returns to the client, doing a quick proof-of-concept can be a learning experience on multiple levels. First, you will learn more about your idea and/or your product. You will learn definitively what it can or cannot do. Second, you will learn whether your contractor (hopefully me) will be a good match for your vision and goals (I pride myself on my customer service). Either of these things alone can save you hassle at a later point. You don’t want to create a product that doesn’t work or that doesn’t create value, and switching contractors after you’ve decided to “Go Big” can be costly in terms of both time and money.

Monetary

I touched upon this a little in the previous paragraph, but performing a proof-of-concept test first can save money in multiple areas: you now know what direction you want to take the project in, and you are confident that you can work together with the contractor. But perhaps even more valuable is that with an MVP in hand, you can shop your idea around to other potential funders of your project, knowing that it works and that it will wow those facing whatever problem you are trying to solve.

Conclusion

Proof-of-concept testing is definitely niche, as most people like to just dive in and make everything work. Often this is the wrong approach and sets one up for failure before they have even begun. Testing lets you make the right choices without losing a lot of time. It also brings you to market faster in a world where a million other people have the same exact idea as you and have possibly even iterated on it. So now that you know the value of proof-of-concept testing, go make your ideas a reality; you could be the next unicorn!

What is Proof-of-Concept Testing? – Part 2

In my previous post on this topic, I discussed how I can quickly build out an app that brings your idea to life. In this post, I’ll discuss another approach to “proof-of-concept” testing.

“Can what I have also do this?”

Oftentimes a client of mine has some bits and pieces of an app or project, but they want to push what they have to its limits. Or the client can’t quite grasp how a recent software acquisition works. So that’s when they call me up and say, “Josh, take a month and figure out everything this can do. When you finish, report back on what you think and what might need to change in order to maximize its potential.”

My Approach

First, I’ll take some time to fully understand the subject area. Are you looking to integrate IoT devices into an informative network? Then I’ll research IoT devices. Do you need to aggregate international phone call protocols across the world? Then I’ll spend some time researching how the world’s phone network is built. I firmly believe that in order to be the best at what you do, you need to understand how that thing works. For example, I’m running a computer camp for my son’s friends this summer, so instead of diving right into code, we are spending one full day learning about computers and how they work. Once I have a good grasp of the topic area, I’ll start learning the mechanics of the program: How does it load data? What platform does it use? From there, I complete the project, documenting the issues and where the application excels.

The Report

In this scenario, as the project wraps up, I will submit full documentation about the work I performed. It can be a written report, a presentation, or both, and I am happy to come and present in person to the stakeholders in the project.

Summary

  1. Learn the subject area
  2. Understand the mechanics of the program
  3. Complete the project
  4. Submit a report with positives and negatives along with suggestions for future tracks of development

I’ll put together one more post about what benefits proof-of-concept testing brings to your projects.

Protobufs – Protocol Buffers

As you can see from my blog, last week I attended the PyCon Israel 2022 conference (Day 1 summary, Day 2 summary). As one would expect of a Python conference in 2022, the hot topic was Machine Learning. Along with Machine Learning comes something called “Big Data,” which is still an ambiguous term, but basically it means your app or your company is sitting on large amounts of data. These companies often want to leverage their data or, more importantly, process it using a Machine Learning algorithm or model. So how do Protobufs help solve this problem, and what even are they? (No, they are not some sort of Neanderthal swamp monster wielding clubs, which is the image that appears in my head when I hear the word.)

The Problem

If your data is of a certain scale, then you run into a few issues:

  1. Storage can become very expensive
  2. Individual records may become too large for the model to process efficiently

If your application has millions of users and is complex (think electronic medical records or financial documentation), then storing your data across multiple tables and running tons of joins and unions to get results will be slow. This has led to a movement toward something called “NoSQL,” in which all of the data is stored in a single object, or model; ALL the data is retrieved when requested, and the application itself finds the value it needs to perform the task. Many NoSQL solutions are based on JSON or XML. Storing text as JSON or XML means that as your data model grows, so does the size of the object. This increases the amount of storage you need, and if your model is large enough, consuming and traversing that data can slow down the execution of your program.

Sample JSON
{
    "glossary": {
        "title": "example glossary"
    }
}

The Solution – Protocol Buffers

Protobufs solve this problem by reducing the entire model and its data to a compact string of bytes, embedding the encoding and decoding logic in the program itself.

Model for Protobuf:
message Glossary {
  optional string title = 1;
}

In this example, the “1” is the field number assigned to “title”; the encoded message stores field numbers and values rather than field names. The same message model is loaded into your code for both encoding and decoding your data. Doing so reduces the amount of data that is transmitted over the network. And since machines, not humans, are processing this data, there is no need to include readability of the actual data as an aspect of data transmission.
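
Here is a sketch of how that message would be used from Python, assuming the .proto file above has been compiled with protoc into a module I’ll call glossary_pb2 (the name is hypothetical):

import glossary_pb2  # hypothetical module generated by protoc

g = glossary_pb2.Glossary()
g.title = "example glossary"

data = g.SerializeToString()    # compact bytes; field names never hit the wire
print(len(data))                # noticeably smaller than the equivalent JSON

decoded = glossary_pb2.Glossary()
decoded.ParseFromString(data)   # the same message model decodes the bytes
print(decoded.title)            # "example glossary"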

Protobufs: are they overkill?

As with anything, the answer depends on what you are trying to achieve. There are many other serialization tools on the market, including ones with better human readability than Protobufs, such as Pydantic. So if your goal is fast transmission of data between points, Protobuf should be your choice. But if you want to maintain readability even outside of your code, then perhaps serializing your data another way is advisable.

Watch a video that taught me the basics about Protobufs. It also has a really cool example at the end.

What is Proof-of-Concept Testing? – Part 1

If you take a look at my homepage, you’ll see that one of the things I like to focus on in my work is “Proof-of-Concept” testing. I know this is a vague term, which is why I’m writing a blog post about it! The way I see it, proof-of-concept testing can take two forms:

  1. Designing an MVP (minimum viable product) that fulfills the client’s core requirements.
  2. Testing an already existing product or app to see if there are any bugs or features that are not working fully.

So what do I do in each of these scenarios?

Designing an MVP

Basically, this would be the same as any intake process. I spend time with my clients to understand the goal of their project. Often clients have an idea, or say, “I think it would be great if the app could do this,” when that feature is the most minor of things. Creating a web app is like writing an essay or a story: you need your goals and points neatly laid out so that you can get to market as quickly as possible and not get bogged down in extraneous details or features that hamper achieving the final goal. Building a plane while you’re flying it is fine (we used to say this all the time when I worked for the Massachusetts Department of Public Health), as long as you know what kind of airplane you want; there is a big difference between a Cessna and an F-16.

Then we need to make sure your product is sufficiently differentiated from what is already on the market. I am a big believer that “there are no new ideas” (Ecclesiastes 1:9), but that’s not to say ideas cannot be iterated on, or that there isn’t potential when they are combined with something else. But nothing would be worse than building something and spending money only to find out that someone else is drinking your milkshake.


Assuming we’ve made it this far (the previous steps can take a while, and the better I understand your idea, the better I’ll be able to execute it), it comes time to figure out which framework and set of tools will get the task done. Sometimes it is just a Wix or Squarespace site. Sometimes we need to pull out the “big guns” like a full-on framework. You should also know there is a very robust “no-code” community these days, and perhaps your idea would be best served by one of those platforms (or a combination thereof).

Once everything is set, I’ll get to work. As this is supposed to be a proof of concept, it shouldn’t take very long to stand something up; think 2-4 weeks before you can start showing things to other people (hopefully people with money, haha). Obviously, development is continuous, but it is also important to set milestones so that you can tell whether your product is meeting its goals, and before you have sunk so much time or effort into it that it becomes a losing proposition.

There is a lot more to say about this topic, so look forward to additional insights as the week continues. (Also see part 3 in the series)

PyCon Israel 2022 – Day 2

Here is what I learned and need to research more after attending half of day 2 at PyCon Israel 2022.

Keynote: Yam Peleg

This morning I caught the early bus to Ramat Gan and arrived in time not only for the keynote but also for a light breakfast. Yam Peleg’s claim to fame is that he reached the top 5 in several different categories on Kaggle, though he got there not by doing his own work but by training some ML to post and solve challenges for him. Though he didn’t say it explicitly, this approach got him banned from the platform. He used the rest of his time to explain some of the powerful things language-based ML can accomplish, including writing full academic-style papers and impersonating an Elon Musk who is cool to Dogecoin. He also had a well-illustrated presentation; I’ll try to link to it once all the lectures are posted to YouTube.

Concepts/models to research: GPT-x, Beam Search, RoBERTa, TOPP

Why Does “Don’t Stop Me Now” by Queen Make Us Happy? Feature Analysis

To be honest, I chose this talk because it was the least bad of the three offerings in this time slot, and it was in the “Data” track; being primarily a data scientist, I figured I should go to the data track offering. Luckily, I made the right choice. This was probably among the better presentations I attended over the two days of the conference. The presenter made it fun, and she had a great comparison song for a sad song (Elton John’s “Sorry Seems to Be the Hardest Word“). I learned what those squiggly patterns on graphical representations of sound files show (it is called a frequency graph) and came out with three libraries to look into.

Libraries: Librosa, the Hilbert transform in SciPy, MonkeyLearn

What are we busy about? (That was the title)

The best talk at the conference was on how to use Python to solve scheduling problems

While Layla may have been the only (I can’t say for sure) Muslim/Arab speaker at the conference, there was decent representation from the Muslim dev community among the attendees, and I’m hoping there will be more at future conferences, both in terms of attendees and speakers. Layla spoke about how her company overcame complex scheduling issues using OR-Tools to make sure they always had doctors online to help their customers. Her presentation was clear and practical, and it will possibly also help me in some of my work as I try to triage calls to the correct individual in our call centers. The funniest moment of the speech may have been when she unequivocally stated that “Healthcare in the US [United States] is very bad.”

Packages: OR-Tools

Data Class Serialization The Right Way

Until yesterday, I had never really heard of serialization, or at least never heard it called that. Then came yesterday’s presentation about Protobuf, and then this presentation today. Basically, serialization tries to solve the problem of how to save state, either in your data or in the program. It turns out that base Python is actually terrible at this, and you need some additional packages to really up its game. The one they suggested was Pydantic, which, when combined with type hints, really changes a lot about what Python is able to do to save state. For those wondering, pickle is not a good choice because it stores the logic and data in a single object, whereas Pydantic separates them, which pays dividends as your project grows.
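
Here is a minimal sketch of the idea as I understood it, pairing type hints with Pydantic (v1-era API; the class is my own toy example, not the presenters’ code):

from pydantic import BaseModel

class Talk(BaseModel):          # type hints define (and validate) the data
    title: str
    track: str
    minutes: int = 30

t = Talk(title="Data Class Serialization The Right Way", track="Data")
payload = t.json()              # serialize the data, not the logic
print(Talk.parse_raw(payload))  # rebuild the object from the JSON string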

Wrap Up

I had to leave early to attend end-of-the-year ceremonies for my children, so I wasn’t able to make it to the presentation about JSON (sometimes after a conference you look at the titles and think, “this sounds boring”). Overall, I had a great time. It was definitely worth what I paid (NIS 350, or a little more than $100). I hope next year the organizers take into account that the last week of June is not the best week for developers with families; it was like playing Frogger, squeezing the conference in between all of the different events my kids were having this week while making sure they were properly supervised at all times. Also, as I mentioned yesterday, it seems like in some areas we are outgrowing the space. It was very hard to interact with the vendors because the lobby was very noisy, and when food was being served you just couldn’t move around.

I almost forgot about the swag! What is a conference without swag? As attendees, we got a reusable shopping bag, a t-shirt, a mousepad and mouse, and some stickers. Obviously, there was much more to be had at the vendor tables but:

  • My aforementioned issue with the noise in the lobby made it difficult to learn about the products, especially while trying to understand and speak Hebrew, which is not my first language
  • I am also concerned about “gneivat da’at” issues. Is it ethical for me to sign up for marketing or make them give me their whole pitch when I’m not looking to buy their product?
  • I get enough junk mail that not even two separate raffles for an Oculus would get me to sign up for your marketing

So with that being said, I left a lot of swag on the table. Maybe next year if there is a proper vendor floor I would be more willing to discuss with the product ambassadors.

My swag

All in all, it was an enjoyable change of scenery for a few days from my typical desk and computer chair (workstation review coming soon). See you next year, PyCon!

PyCon Israel 2022 – Day 1

Months ago, I was among the first people to buy tickets to the first post-Corona Python conference here in Israel. I use Python extensively in my data science work, as well as in some of my other projects, so I thought this would be a great place to learn more about one of the tools I use every day. The good news is that it was. I learned about things at all levels: some neat coding tricks, how to do some higher-level machine learning that is directly applicable to my job, and new packages and SaaS offerings that can or will play a role in my future work. So, what were some of the highlights?

Keynote Panel

The day started off with a keynote panel (instead of a keynote speaker) discussing some of the pros and cons of varying levels of strictness when it comes to code review. It was interesting, but it isn’t really part of my world right now, plus nothing beats this keynote speech:

Supercharging Pipeline Efficiency with ML Performance Prediction

This was actually pretty cool, mostly because it pertained directly to the work I am/will be doing in the world of Machine Learning Operations. The company the presenters work at processes varying amounts of data for its clients, depending on each client’s level of spending on online advertisements. The smaller clients’ jobs would sometimes get cancelled because the bigger clients’ jobs took too long. So they applied machine learning (what else?) to predict which clients’ jobs would end up being large or small on any given day.

Apps to check out: Celery

Concepts to apply/research: serializing models, using AWS Athena to query S3 buckets

Detecting Anomalous Sequences

An employee from PayPal demonstrated how to use various word vector models to weed out incongruous/suspect text in an effort to increase security on the PayPal platform. The presentation was rushed due to technical difficulties.

Concepts to apply/research: Word2Vec (this is a common topic at Python conferences), BERT, Autoencoders

Zero To Hero: Few Shot Learning + Multi-armed Bandit

Two trainers live-coded some ML models, which at first were used to predict Boolean (True/False) answers to questions from a dataset, and then (I think) were applied to help predict which of a set of theoretical slot machines was most prone to returning a sizeable jackpot. I also learned a neat trick: you can increment a variable in a Python script by putting a conditional expression after the “+=” (see the snippet below). I think that saves a line of code but may cost in terms of readability.
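
The trick, as I understood it (toy data, not the trainers’ code):

correct = 0
for prediction, label in [(True, True), (False, True), (True, True)]:
    correct += 1 if prediction == label else 0  # conditional after "+="
print(correct)  # 2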

Concepts to apply: I work with True/False data every day, is there something I can use from this to predict my outcomes?

Monorepo – One Repo To Rule Them All

This presentation was fully scripted, which was a little weird to listen to, but overall interesting, as it showed how one can easily combine multiple repos into a single one and integrate that into one’s CI/CD processes (until two weeks ago I had never heard of CI/CD; now it seems to be everywhere). The presenter was clear about the pros and cons of this approach.

Packages mentioned: ToMono

Effective Protobuf: Everything You Wanted To Know, But Never Dared To Ask

Honestly, I have no idea what this talk was about. I need to research this more on my own; as it was presented, it was way over my head.

Concepts to research: Protobuf

There is always another way: Sharpen your NumPy skills with the 8 Queens puzzle

This was a cool presentation. “8 queens” is apparently a famous puzzle, and the presenter found five different ways to solve it with NumPy (apparently pronounced Num-Pie, even though English would tell you it is Num-Pee; I have discussed this on my Facebook page). I learned a ton of new functionality in NumPy that I may never use, but it is cool to know that you can so easily manipulate arrays and the like using this tool. The presenter said he read the documentation searching for functions that could help solve the problem (obviously reading the documentation is important, but I’m not sure I would read it like a book! Good for him!). In the end, he was able to solve the problem with code that took less than 1 second to execute. Very cool.
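
For fun, here is my own brute-force baseline for 8 queens with NumPy; it is certainly not one of the presenter’s five solutions, but it shows the shape of the puzzle:

import numpy as np
from itertools import permutations

rows = np.arange(8)
solutions = 0
for perm in permutations(range(8)):  # one queen per row and per column
    cols = np.array(perm)
    # Two queens share a diagonal when row+col or row-col values collide.
    if len(np.unique(rows + cols)) == 8 and len(np.unique(rows - cols)) == 8:
        solutions += 1
print(solutions)  # 92, the well-known answer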

Presenter’s GitHub

Meet the Best Feature in Python 3.10: match-case

So Python 3.10 has this new feature called match-case, which is basically a derivative of the switch-case you may be familiar with if you ever did PHP programming. This one, though, is specialized for detecting conditions involving objects. The presenter used match-case to create a custom linter script that would highlight when he made unintended comparisons between Boolean values and tuples. There is definitely a lot to look into here, and it was cool to see how what he wrote actually triggered an error while he was writing code in VS Code.
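
A tiny taste of the syntax (my own toy example, not the presenter’s linter):

def describe(point):
    match point:
        case (0, 0):
            return "origin"
        case (x, 0):
            return f"on the x-axis at {x}"
        case (0, y):
            return f"on the y-axis at {y}"
        case (x, y):
            return f"at ({x}, {y})"
        case _:
            return "not a point"

print(describe((0, 5)))  # on the y-axis at 5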

Research: customizing linters, ast, Flake8, how to build a CLI

Exploring the Cheese Shop – What’s in the Python Package Index?

This was a high-level overview of how PyPI works and what pitfalls can befall you if you fail to be vigilant when using or updating various packages. Apparently, it is very simple to upload your own package to PyPI with just a single upload command (hence Twine in the list below); who knew.

Research: Poetry, Twine

Overall Impressions

I learned A TON, and hopefully I’ll have an opportunity to write some further blog posts on the follow-up learning I did from the things I learned today. The presenters were knowledgeable, and I even had a good conversation with one later in the day who also happened to be a New England native, but that was about it in terms of the networking I was able to do today. There were no “semi-facilitated” networking sessions, and from what I could tell, about 80% of the participants were attending with their team from their employer, as opposed to individual contractors. The space is juuuust the right size right now, but they may want to consider a larger venue come next year. More to come tomorrow!

What did we do before CodeAcademy?

As a product of the 90s/early 2000s, getting educated in computers was rough. While we had at least one computer per classroom in elementary school (I went to a private school), we only had one semester of computer programming in high school, and it was in BASIC, an already outdated language (though if you know BASIC and can post videos to YouTube, you might have a chance at being successful there; I should have paid attention more, ha). Then in college, I’m pretty sure there was no computer science major, and the one class I remember taking was in Flash animation (good thing I didn’t go down that road). So, having been born in this awkward age, where I was consigned to learning outdated or terrible platforms and was too early to benefit from the plethora of democratized learning platforms available today, how did I learn to code? Hold on to your seats; the answer may shock you:


That’s right, books. And the book I used may shock you even more:

Yup, it was a Dummies book. And you know what, even with the Dummies book, it took me a couple of hours of reading and re-reading the code snippets to fully understand how an if statement worked and what a loop did. But on some level it was really good, because I was able to stare at the code and run the lines in my head, and there were no “points” or feedback or anything else you might get on a coding site today. It wasn’t judgmental or pressuring. It was me, on my own time, just studying, because I enjoyed it and I wanted to learn it. And boy, when it finally all clicked, was that a great feeling. It still is a great feeling to write 1, 100, or 1,000 lines of code and see it all working perfectly (OK, no one writes perfect code the first time). While my copy has met the recycling heap (there isn’t room for everything in a 40-foot lift across the ocean), this book is still available for purchase, and I hope it will be for a long time to come. This book alone has been responsible for some of my career choices, as well as supplemental income probably totaling close to $100,000 by now. I guess that makes it one of the greatest investments of my life. Good luck to everyone on their coding journey!

Pull Request Update

As you may know, I am the lead developer on a group of sites run by JewishLanguages.org. The site, not by my own choosing, runs on CakePHP 4. A few years ago, we had the choice to switch, since it was on an old version of CakePHP, but I figured that since I was already familiar with the framework, it would be best to stick with it instead of building everything anew in a new framework. Perhaps when CakePHP 4 reaches the end of its lifetime we will switch to something else, but that is for another time.

My time with CakePHP has gotten me very familiar with its documentation, and as the sites I work on grow in complexity (the site now supports three languages, with more planned for the future), so does the number of things I need to learn how to do in the framework (please hold your “I hate PHP” comments; there isn’t anything I can do about it here, or in my specific situation). Anyway, after Googling how to do something and mashing together a few solutions from StackOverflow, I decided I was going to take a crack at improving the documentation for the one piece I was unable to figure out from the documentation alone.

So I went to GitHub, found the part of the docs I thought could use some clarification, added my explanation (as well as an example using the same example they use throughout the documentation), and submitted a pull request. I had no idea what to expect next, but what followed utterly impressed me. A group of individuals who had never met me and had no knowledge of my skills actually took my contribution seriously. There was some back and forth for a day or two, but then it mostly went quiet. I checked on it once a week or so, but nothing. I was unsure of the etiquette, so I just left it. Today, though, I saw that another person who seems to work on the documentation was tagged to make the call, and from his response a little while ago, it seems like my change will go in! There is a bit left to go, but I am very excited, and I have a small amount of renewed hope in humanity, no matter how it turns out when it is all over. Stay tuned!