Commit

works with claude too
alnutile committed Apr 19, 2024
1 parent 18996b0 commit bd1e901
Showing 1 changed file with 1 addition and 5 deletions.
6 changes: 1 addition & 5 deletions tests/fixtures/orchestration_message_array_after_claude.json
@@ -1,10 +1,6 @@
[
{
"content": "Can you help me make a list of slide ideas from this content",
"content": "This is data from the search results when entering the users prompt which is ### START PROMPT ### what do the documents say about deploying code ### END PROMPT ### please use this with the following context and only this, summarize it for the user and return as markdown so I can render it and strip out and formatting like extra spaces, tabs, periods etc: 49\nOne approach is to require every change be \napproved by someone else on the team as part \nof code review, either prior to commit to version \ncontrol (as part of pair programming) or prior \nto merge into master. \nThis can be combined with automated thresholds \nthat bound changes. For example, you may \nimplement checks to not allow developers to push \na change (even with peer review) that will increase \ncompute or storage costs over a certain threshold. \nThis lightweight, straightforward-to-implement \nprocess presents a clear opportunity for \npractitioners to improve change management. \nSome proponents, supported by IT service \nmanagement (ITSM) frameworks, claim that formal \nchange approval processes lead to more stability, \nand point out that these have been the norm in \nindustry for decades. \n \nUsing code review to implement segregation of duties \nrequires that all changes to production systems should \nbe recorded in a change management system that lists \nthe change along with the person or people who \nauthored it, and logs authenticated approval events. \n If youre using version control as the source of truth \nfor all changes to your systems (a technique from the \nparadigm known as infrastructure as code or gitops), \nyou simply need to be able to record who approved each \nchange. This has the added benefit of making the review \nand submit process highly auditable. Our previous \nresearch also shows that comprehensive use of version \ncontrol, including for system configuration and scripting, \ndrives performance.\nIMPLEMENTING \nSEGREGATION \nOF DUTIES 44\nupdating a transitive dependency, creating \nincompatible dependencies (for example, \nthe diamond dependency issue), and breaking \nfunctionality. Tooling that can help avoid \nthese errors or illuminate the consequences \nof code changes can improve design decisions \nand code quality for all engineers. \nDebates about tools or code organization \nare easy to fall into. Its important to focus on \noutcomes: Are we enabling or preventing \nsoftware performance and productivity? \nAdvanced users such as developers, testers, and sysadmins were \npreviously neglected when considering the usability of their tooling. \nSometimes management assumed thatas relative technology \nexpertsthe technologists could figure out any tool they were given. \nThis isnt an uncommon mindset. In World War II, pilots were \nselected and trained based on their ability to operate overly \ncomplex cockpits. Then usability experts realized that complex work \nlike piloting an aircraft was difficult enough. It was better to design a \ncockpit to be easy-to-use and understandable, and let pilots spend \ntheir attention safely piloting the aircraft.\nOther times, usability needs are ignored because management \nassumes that technologists needs are like those of regular end \nusers. Today, we know that power users (such as engineers) often \nhave special use cases, with unique design needs. Technologists \nalso include broader skill sets and backgroundssuch as UX, infosec, \nand database engineersas well as diverse abilities. 
Making tools \nthat are accessible and easy-to-use is an important consideration for \ntool vendors.\nWith this in mind, in this year's research we studied the usability \nof the tools used to deploy software because technical practices that \nsupport software development and deployment are important to \nspeed and stability. \nThe usefulness and ease-of-use of this deployment tooling is highly \ncorrelated with CI and CD. This makes sense, because the better \nour tools are suited to our work, the better we are able to do it. \nWe also find this drives productivity, which you can read about \non page 55.\nUSEFUL, EASY-TO-USE \nTOOLS FOR \nDEPLOYING SOFTWARE 40Accelerate: State of DevOps 2019 | How Do We Improve SDO & Organizational Performance? \nSimilarly, deployment automation at the team \nlevel will have little impact if the teams code \ncan only be deployed together with that of other \nteams. This points to an architectural obstacle \nthat must be resolved at the organizational level \n(which, in turn, is likely to require work from \nindividual teams).\nWe will look at these capabilities in more detail, \ninvestigating them through the lens of team-level \nand organization-level capabilities.\nRemember that our goal is improving our ability \nto deliver software, which we accomplish through \ntechnical practices in delivery and deployment \nwe call continuous delivery (CD). CD reduces \nthe risk and cost of performing releases. \nContinuous delivery for the sake of continuous \ndelivery is not enough if you want your organization \nto succeed, however. It must be done with an eye to \norganizational goals such as profitability, \nproductivity, and customer satisfaction.\nTeams can deploy on-demand to production \nor to end users throughout the software \ndelivery lifecycle.\nFast feedback on the quality and \ndeployability of the system is available \nto everyone on the team and acting on this \nfeedback is team members highest priority.\nHOW WE MEASURED CONTINUOUS DELIVERY 43Accelerate: State of DevOps 2019 | How Do We Improve SDO & Organizational Performance? \nArchitectural approaches that enable this strategy \ninclude the use of bounded contexts and APIs \nas a way to decouple large domains, resulting \nin smaller, more loosely coupled units. Service-\noriented architectures are supposed to enable \nthese outcomes, as should any true microservices \narchitecture.\n18\n Architecture designs that permit \ntesting and deploying services independently \nhelp teams achieve higher performance.\nIt takes a lot of code to run our systems: Facebook \nruns on 62 million lines of code (excluding backend \ncode), the Android operating system runs on 12 to \n15 million lines of code, and a typical iPhone app \nhas 50,000 lines of code.\n19\n With the huge amount \nof code it takes to run our organizations, we wanted \nto investigate which code-related practices really \nhelp drive performance (or if they do at all), \na construct we called code maintainability. \nOur analysis found that code maintainability \npositively contributes to successful CD. Teams \nthat manage code maintainability well have \nsystems and tools that make it easy for \ndevelopers to change code maintained by \nother teams, find examples in the codebase, \nreuse other peoples code, as well as add, \nupgrade, and migrate to new versions of \ndependencies without breaking their code. 
\nHaving these systems and tools in place not \nonly contributes to CD, but also helps decrease \ntechnical debt, which in turn improves productivity, \nsomething youll see in a later section.\nOrganizations that elevate code maintainability \nprovide real advantages to their engineers. \nFor example, managing dependencies is hard. \nUpdating a dependency could open a rabbit \nhole to issues such as breaking API changes, \n18 \u0007It's important to avoid premature decomposition of new systems into services, or overly fine-grained services, both of which can inhibit delivery performance. \nMartin Fowler covers this in MonolithFirst \thttps:\/\/martinfowler.com\/bliki\/MonolithFirst.html.\n19 \u0007 https:\/\/www.businessinsider.com\/how-many-lines-of-code-it-takes-to-run-different-software-2017-2 57Surprises\n03 \nDocumentation practices negatively \nimpacted software delivery performance. \nThis is at odds with previous reports. One \nhypothesis is that documentation is becoming an \nincreasingly automated practice, especially among \nhigh-performing teams. Until we collect additional \ndata, we have little evidence to either support or \nrefute this belief. \n04 \nSome tech capabilities (i.e., trunk based \ndevelopment, loosely-coupled architecture, \nCI, CD) seemed to predict burnout. As \nmentioned above, many respondents in this sample \nwere notably earlier in their careers than participants \nin previous years samples. Hence, we might have been \ntalking to people responsible for implementing the \ncapability as opposed to those responsible for creating \nor overseeing the initiative. The implementation \nprocess may be notably more challenging than its \noversight. We want to do further research to better \nunderstand this finding.\n05 \nReliability engineering practices had a \nnegative impact on software delivery \nperformance. One explanation is that these \nare not necessarily causally intertwined. This year \nwe noticed in a new clustering analysis (see How do \nyou compare) that a subset of clusters seemed to \nfocus on reliability while ignoring software delivery \nperformance. We believe that these are decoupled, in \nthe sense that you can do one without doing the other, \nbut ultimately, to make software delivery performance \ncount in terms of organizational performance, \nreliability needs to be in place. \n06 \nWe added SLSA-related practices to \nunderstand whether teams are adopting \nthese approaches to maintaining a secure \nsoftware supply chain. While we expected there to be \nsome association between implementation of security \npractices and performance (e.g., use of technical \ncapabilities, better software delivery performance, and \nbetter organizational performance), we were surprised to \nsee that security practices were actually the mechanism \nthrough which technical capabilities impacted software \ndelivery performance and organizational performance. \nAccelerate State of DevOps 2022 22Accelerate: State of DevOps 2019 | How Do We Compare? \n8 \u0007In 2017: https:\/\/www.informationweek.com\/devops\/capital-one-devops-at-its-core\/d\/d-id\/1330515\nTHROUGHPUT\nDeployment frequency\nThe elite group reported that it routinely deploys on-demand and \nperforms multiple deployments per day, consistent with the last \nseveral years. By comparison, low performers reported deploying \nbetween once per month (12 per year) and once per six months \n(two per year), which is a decrease in performance from last year. 
\nThe normalized annual deployment numbers range from 1,460 \ndeploys per year (calculated as four deploys per day x 365 days) for \nthe highest performers to seven deploys per year for low performers \n(average of 12 deploys and two deploys). Extending this analysis \nshows that elite performers deploy code 208 times more frequently \nthan low performers. It's worth noting that four deploys per day \nis a conservative estimate when comparing against companies \nsuch as CapitalOne that report deploying up to 50 times per day \nfor a product,\n8\n or companies such as Amazon, Google, and Netflix \nthat deploy thousands of times per day (aggregated over the \nhundreds of services that comprise their production environments). 18Accelerate: State of DevOps 2019 | How Do We Compare? \nAspect of Software Delivery Performance*\tElite High\tMedium Low\nDeployment frequency\nFor the primary application or service you work on, how \noften does your organization deploy code to production \nor release it to end users?\n \nOn-demand \n(multiple \ndeploys per day)\n \nBetween once \nper day and \nonce per week\n \nBetween once \nper week and \nonce per month\n \nBetween once \nper month and \nonce every six \nmonths\nLead time for changes\nFor the primary application or service you work on, what is your \nlead time for changes (i.e., how long does it take to go from code \ncommitted to code successfully running in production)?\nLess than \none day\nBetween one \nday and \none week\nBetween one \nweek and \none month\nBetween one \nmonth and \nsix months\nTime to restore service\nFor the primary application or service you work on, how long \ndoes it generally take to restore service when a service incident \nor a defect that impacts users occurs (e.g., unplanned outage or \nservice impairment)?\nLess than \none hour\n \nLess than \none day\na\n \nLess than \none day\na\nBetween one \nweek and \none month\nChange failure rate\nFor the primary application or service you work on, what percentage \nof changes to production or released to users result in degraded \nservice (e.g., lead to service impairment or service outage) and \nsubsequently require remediation (e.g., require a hotfix, rollback, \nfix forward, patch)?\n0-15%\nb,c\n \n0-15%\nb,d\n \n0-15%\nc,d\n \n46-60%\nMedians reported because distributions are not normal.\nAll differences are significantly different based on Tukeys post hoc analysis except where otherwise noted. \na,b,c Means are significantly different based on Tukeys post hoc analysis; medians do not exhibit differences because of underlying distributions. \nd Means are not significantly different based on Tukeys post hoc analysis.\n*For a visual presentation of the Four Metrics, please see Appendix A. \n 32How do you improve?\nLast year, we examined the technical DevOps \ncapabilities that predict the likelihood that a team \npractices CD, and discovered that factors such as \nloosely-coupled architecture and continuous testing \/ \nintegration were among the strongest predictors. This \nyear, in addition to examining the factors driving the \nuse of CD, we analyzed and identified the effects of \nCD alone as well as its interactions with other DevOps \ncapabilities on development outcomes.\nCD drives software delivery \nperformance\nSimilar to previous years findings, the use of CD is a \npredictor of higher software delivery performance, both \nalone and in combination with other DevOps capabilities. 
\nTeams that rated higher on CD are more likely to have \nhigher frequency of deploying code to production, and \nshorter lead time for changes and service restoration. \nContinuous Delivery \nContinuous delivery (CD) is a software \ndevelopment practice that:\n \n1. Enables the team to deploy software to \nproduction or end users at any time\n2. Ensures the software is in a deployable \nstate throughout its lifecycle, including \nwhen working on new features\n3. Establishes a fast feedback loop that enables \nthe team to check the quality and deployability \nof the system, and prioritizes fixing issues \nblocking deployment.\nNote that continuous delivery does not necessarily \nimply continuous deployment, the practice in which \nevery software build is automatically deployed. \nContinuous delivery requires only that a software \nbuild can be deployed at any time.\nAccelerate State of DevOps 2022\nFrank Xu 66Who took the survey? \nDeployment target \n2021 was the first year we decided to look at where \nrespondents deployed the primary service or application \nthey work on. To our surprise, the number one deployment \ntarget was containers. This year was no different, although \nat a lower proportion (54%) than last year (64%). \nWe also added more options in the hope of giving \nrespondents more ways to reflect where they deploy. \nAccelerate State of DevOps 2022 16How do you compare?Accelerate State of DevOps 2022\nThese clusters differ notably in their practices and \ntechnical capabilities. Given the high levels of \nsoftware delivery and operational performance \ndemonstrated by the Flowing cluster, we decided \nto look into how they differ from the other clusters \nin their practices and technical capabilities. We found \nthat relative to other clusters, the Flowing cluster \nfocuses more on:\n Loosely-coupled architectures: the extent \nteams can make large-scale changes to the \ndesign of their system without depending on \nother teams to make changes in their systems\n Providing flexibility: how flexible a company is with \nregard to employee work arrangements\n Version control: how changes to application \ncode, system configuration, application \nconfiguration, etc. are managed\n Continuous integration (CI): how frequently \nbranches are integrated into the trunk\n Continuous delivery (CD): capabilities focused \non getting changes into production safely, \nsustainably and efficiently\nCuriously, the Flowing cluster tends to focus less on \ndocumentation. Last year, we found that documentation \npractices are central to both delivery performance and \noperational performance (SDO). How does the Flowing \ncluster have strong SDO performance without a heavy \nfocus on documentation? For one, there are many routes \nto strong SDO performance outside of documentation. \nFurther, perhaps the Flowing \tcluster is continuously \nrefactoring its code to create a more self-documenting \nprocess and, thereby, have less of a need for documents \nas we describe them in the survey. \nThe Slowing cluster, which makes up the highest \nproportion of our respondents, tends to be made up of \nrespondents from larger organizations that tend to be less \ncloud-native than other clusters. Picture a very mature \ncompany with some calcified processes that, at the end \nof the day, still provides end-users with stable and \nreliable (and valuable) experiences. 
This cluster exhibits \na performance oriented, generative culture.2 One of the \nSlowing clusters most interesting characteristics is that \nit has low throughput and high positive-work-culture (a \ngenerative work culture, according to Westrum); this \nis an uncommon combination. Its more common to see \nproportional throughput and work culture (high\/high or \nlow\/low). We hope to conduct future research to better \nunderstand the relationship between throughput and culture. \n2 (Westrum) ",
"role": "user"
},
{
"content": "This is data from the search results when entering the users prompt which is ### START PROMPT ### Can you help me make a list of slide ideas from this content ### END PROMPT ### please use this with the following context and only this and return as markdown so I can render it: brain dump context talk, middl I list bullet point lead into. good sens context I turn slide surfac Technolog parts. I think I like interact Q I possibl audienc like that? I ask build specif applic I contract Pfizer requir lot nuances. product I build thing present. (final list end) One, I go start concept technolog doe alway matters\/ I want focus technolog abl achiev product owner goal understand goal key. build saa person compani 100 just know goal use right tech goal. 13) use technolog team work skill did current team 14) Consist Train document (team standards, ci\/cd etc) ideal stack amaz document community. 15) Lint help team standard lead right tech I go make right tech lone dev team just tri figur right tech let share I went figur tih Pfizer 12 year ago I think work today.",
"role": "assistant"
}
]
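For readers skimming the fixture: it is a Claude-style message array, where each entry pairs a "role" ("user" or "assistant") with a free-form "content" string, and the user message embeds vector-search results wrapped in "### START PROMPT ### ... ### END PROMPT ###" markers. Below is a minimal sketch of how the fixture's shape could be sanity-checked. It is illustrative only: the use of Python, the function name, and the assertions are assumptions made here for clarity, not the project's actual test code, and the only detail taken from the commit itself is the fixture path.

    # Illustrative sketch only -- not the repository's actual test code.
    # The fixture path comes from the diff header above; everything else
    # (Python, the helper name, the assertions) is assumed for illustration.
    import json
    from pathlib import Path

    FIXTURE = Path("tests/fixtures/orchestration_message_array_after_claude.json")

    def load_message_array(path: Path) -> list[dict]:
        """Load a Claude-style message array and check its basic shape."""
        messages = json.loads(path.read_text())
        assert isinstance(messages, list), "fixture should be a JSON array"
        for message in messages:
            # Each entry pairs a role with a free-form content string.
            assert set(message.keys()) == {"content", "role"}
            assert message["role"] in {"user", "assistant"}
            assert isinstance(message["content"], str)
        return messages

    if __name__ == "__main__":
        messages = load_message_array(FIXTURE)
        print(len(messages), [m["role"] for m in messages])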
