This collaboration of various teams in a multi-program organization is also referred to as “teams of teams.” “Guilds” or “tribes” are other terms frequently used in the context of such organizations.
EXPERT TIP
Hand over responsibility to squads
The work in squads is most similar to that in create-ups. One company that falls into this category is Spotify, whose management consistently relies on tribes, squads, chapters, and guilds. Like most create-ups, Spotify has a powerful vision (“Having music moments everywhere”) that enables the individual tribes and squads to align their activities accordingly. Especially in technology-driven companies, a higher level of employee excellence can be achieved this way in a short time.
At Spotify, employees work in tribes. A tribe has up to 100 employees who are in charge of a shared portfolio of products or customer segments and are organized as simply as possible according to their dependencies. Within the individual tribes, so-called squads form, each of which takes care of one problem statement. Squads act autonomously and organize themselves. Experts from various disciplines are part of a squad and perform a variety of tasks. Every squad has a clear mission; at Spotify, this can refer to the improvement of the payment or search functions, or of features such as Radio. The squads establish their own stories and are responsible for the market launch. Each mission is part of the clearly defined vision. The individual chapters ensure exchange at the technical level: communities of people with the same skills form and are usually overseen by a line manager.
Guilds emerge on the basis of common interests. Interest groups form around a technology or market issue, for example, and then act transversally across the tribes. A guild might deal with blockchain technology and discuss its use in the music world of tomorrow.
The organization is a network with flat hierarchies. The squads work directly with one another, and boundaries between them are fluid. In such structures, we tend to collaborate radically in order to solve a problem; for instance, we meet ad hoc and dissolve the collaboration in the same way. This way, networked organizations emerge. For such approaches, it is advisable to bid farewell to traditional role descriptions and hierarchy levels.
Character of autonomous squads:
You feel like you are in a “mini start-up”
Self-organization
Cross-functional
Five to seven people
EXPERT TIP
Rethink measurability and key performance indicators
When implementing market opportunities in large organizations, we cannot avoid measurability. The dimensions of traditional balanced scorecard approaches and well-known key performance indicators are not expedient here and should be replaced with new questions. Especially if the company is in a transformation phase, they must be reconsidered or discarded. We recommend including the key elements of a future-oriented organization in the cause-and-effect chain. The ability to think in ecosystems and the passion of teams for the execution of the mission can be crucial elements of management.
EXPERT TIP
Continuous exchange of ideas between the design teams
Innovation projects and problem-solving projects are used in different areas of the company. This means that the time horizon and the definition of the future also differ for the individual design teams. In agile companies that are heavily based on technology, the time horizon for new services and products is usually no longer than one year. In the area of product groups, the cycles last between 12 and 24 months, depending on the industry focus. For weighty decisions regarding platforms with large investments, a time horizon of up to five years is standard, not least due to the payback period. Strategic foresight as a design element has a perspective of five to ten years. In addition to the desired market role, it reflects on which business models will generate revenue in the future and assesses how megatrends affect the company and its portfolio. We have found that a continuous, transversal exchange of ideas between the teams is a success factor if we ultimately want to be innovative in a targeted way, always being aware, of course, of the time period the design team has in mind. In the terminology of a modern organization, the chapters make such a transversal exchange possible. In addition, the respective departments and business units, or squads and tribes, need the planning insights of an overriding strategy so they can put their mission in the right context. Networking outward is a crucial success factor.
KEY LEARNINGS
Implement solutions successfully
Determine the relevant stakeholders in the company at an early stage and involve them in your design challenge.
Develop an implementation strategy with specific measures on the basis of a stakeholder map, before the implementation is initiated.
Establish agile and lean organizational structures that accelerate the go-to-market.
Lend the implementation projects additional drive through external cooperation projects with partners, start-ups, and customers.
Follow a step-by-step approach for the transformation into an agile organization. First, establish small and agile teams; then, scale the procedure with a clear strategy and guidance for the employees.
Accept that not all projects, industries, and tasks are suitable for being realized in an agile organizational form.
Always define a clear vision in agile organizations. Otherwise, tribes have a hard time specifying their tasks. Squads need the overriding vision to align their mission to it.
Establish an awareness that the design teams have different planning cycles.
Promote transversal collaboration, for example through guilds.
3.5 Why some design criteria will change in the digital paradigm
Peter is fascinated more and more by the possibilities of digitization. Step by step, robots will be deployed on various levels and they will autonomously interact with us. Bill Gates once said: “A robot in every home by 2025.” Peter believes that this development will take place even earlier. Cars drive autonomously on highways and private sites already, and new possibilities are continuously emerging in the area of cloud robotics and artificial intelligence. New technologies, such as blockchains, will allow us to carry out secure intelligent transactions in open and decentralized systems.
But what does that mean for the design criteria when we develop solutions for systems of tomorrow?
In the future, intelligent, autonomous objects will also be users and customers!
In a nondigitized world, the relationship to people is the primary basis for an improved experience. When we look at the development of digitization with its various priorities, the design criteria are extended over time. For the next big ideas in robotics and digitization, new criteria become relevant, because the systems interact with each other and both sides (robots and human beings) gain experience and learn from each other. A relationship is created between the robot and the human being. They act as a team.
Therefore, among other things, trust and ethics become important design criteria in the human–machine team relationship. So-called cognitive computing aims at developing self-learning and self-acting robots with human features. Nowadays, many projects and design challenges, depending on the industry, are still in the transition phase from e-business to digital business. Digitization is thus a primary focus for companies if they want to stay competitive and exploit hitherto unknown sources of income through new business models.
What do the design criteria of the future look like?
The design criteria begin to change when the machines act semi-autonomously. In this case, human beings collaborate with robots. Robots perform individual tasks, while centralized control is still in the hands of human beings.
Things become really exciting when human beings interact with robots as a team. Such teams have far-reaching possibilities and can
make faster decisions,
evaluate many decisions synchronously while doing so,
solve difficult tasks, and
perform complex tasks.
Relevant criteria, which are to be fulfilled by a human–robot team, are inferred from the specific structure of a task. Design thinking tries to realize the ideal match between the characteristics of the task and the characteristics of the team members. But if human beings and robots are to act together on a team in the future, the question arises whether it is more important for us humans to retain decision-making authority or to be part of an efficient team. In the end, good team performance is probably more important. Creating a functioning team is a complex affair, however, because three systems are relevant in the relationship between human beings and robots: the human being, the machine, and the social or cultural environment.
The great challenge is how the systems understand one another. Machines can simply process data and information. Human beings have the ability to recognize emotions and gear their activities accordingly, while both systems have difficulties in the area of knowledge. Knowing what the others know is pivotal! And then there is the element of the social systems: human behavior differs widely across cultures and social systems. Not to be forgotten, ethics: How is a robot to decide in a borderline situation? Let's assume a self-driving truck gets into a borderline situation in which it must decide whether to swerve to the right or the left. A retired couple is standing to the right; on the left, there's a young mother with a baby buggy. What are the ethical values upon which a decision is made? Is the life of a mother with a small child worth more than that of the retirees?
A human being makes an intuitive decision in such a borderline situation, which is based on his own ethics and the rules known to him. He can decide himself whether he wants to break a rule in a borderline situation, such as failing to brake at a stop sign. A robot follows the rules it has been fed in this respect.
Even a simple action such as serving coffee shows that trust, adaptability, and intention in the human–robot relationship become a challenge for the design of such an interaction.
The question is: How should the world of robots and autonomous objects be integrated in the development of new digital solutions?
Today, Peter’s design thinking reflections still focus on human beings. He builds solutions that improve the customer experience or automate existing processes. You might call it digitization 1.0. At higher maturity levels of digitization, things get far more challenging. With increasing maturity, robots also become more autonomous. Not only are individual functions or process chains automated, but robots interact with us on a situation-related basis. Thus they act multidimensionally. Trust, along with adaptability and intention, will be one of the most important design criteria. This means that good design will require all these design criteria in human–machine interaction in the future.
Peter has a new design challenge that he wants to solve in collaboration with a university in Switzerland. He's in contact with the university's teaching teams. Peter's design challenge comprises finding a solution for registering drones and determining their location. Today, autonomous drones are for the most part not yet out and about, but drones are becoming increasingly autonomous and will fly by themselves in the future. They will perform tasks in the areas of monitoring, repair, and delivery; render corresponding services; or simply be of use in the context of lifestyle applications.
Design challenge:
“How might we design the registration and tracking process of drones (above or below 30 kg / 66 lbs) on a central platform?”
The participants in the “design thinking camp” get down to work. A technical solution for registering the drones and identifying their location should be found quickly. Interviews with experts from flight monitoring corroborate the need for such solutions. An incident at a French airport when an airliner evaded a drone at the last minute during a landing only underscores this need.
Because all stakeholders should be involved in such a design challenge, the students go one step further and interview passers-by in the city. They soon realize the general population is not very enthusiastic about drones and only accepts them to a limited extent. The design thinking team has come up against a much more formidable problem than the technical solution: the relationship between human and machine. Especially in the cultural environment of Switzerland, where the design challenge takes place, it seems important to pay heed to general norms and standards, such as protecting personal freedom from encroachments by government or other actors. The participants see a complex problem statement here and reformulate their design challenge with the following question:
New design challenge:
“How might we design the experience of interaction between drones and humans?”
Based on this new design challenge, the question is illuminated from another side. The result is that the technical solution is put on the back burner, while the relationship between human and machine takes center stage as the critical design criterion. Expanding the design criteria serves as a basis for a solution in which everybody can identify drones and, at the same time, get expanded services from the interaction.
“I know who you are, and you seem to be friendly”
One prototype developed in this case consists of an app that is networked with a future cloud in which the traffic information on the drones flows together. Through the position data, the “Drone Radar App” detects the drone. The key feature is that the drone for which the information is retrieved greets the passer-by with a “friendly nod.” This feature was quite well received by the people interviewed and shows how humanlike behavior can minimize the fear of drones. Other prototypes also show that making contact in a friendly way, or offering an associated service, improves this relationship.
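The prototype can be thought of as a thin client on top of such a drone cloud. The following Python sketch illustrates the basic flow: look up nearby drones by position, identify the closest one, and ask it to acknowledge the passer-by. The cloud endpoint, the response fields, and the "greet" command are hypothetical assumptions made for illustration; the actual prototype is not specified at this level of detail.

import math
import requests

# Hypothetical cloud endpoint aggregating drone traffic data (assumption).
DRONE_CLOUD_API = "https://example-drone-cloud.test/api/v1"

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters between two coordinates."""
    # Equirectangular approximation, good enough for short ranges.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6_371_000

def nearby_drones(lat, lon, radius_m=500):
    """Fetch registered drones near the user's position, closest first."""
    resp = requests.get(f"{DRONE_CLOUD_API}/drones",
                        params={"lat": lat, "lon": lon, "radius": radius_m})
    resp.raise_for_status()
    drones = resp.json()  # assumed: list of {"id", "owner", "lat", "lon"}
    return sorted(drones,
                  key=lambda d: distance_m(lat, lon, d["lat"], d["lon"]))

def greet_passerby(drone_id):
    """Ask the identified drone to acknowledge the passer-by ("friendly nod")."""
    requests.post(f"{DRONE_CLOUD_API}/drones/{drone_id}/greet")

if __name__ == "__main__":
    user_lat, user_lon = 47.3769, 8.5417  # example position: Zurich
    for drone in nearby_drones(user_lat, user_lon):
        print(f"Drone {drone['id']} operated by {drone['owner']}")
        greet_passerby(drone["id"])  # drone responds with a friendly nod
        break  # greet only the closest drone

The point of the sketch is the interaction pattern, not the technology: identification (who is the drone, who operates it) and a small, friendly acknowledgment are what turn a purely technical registration platform into an experience.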
Because Peter has carried out the “drone project,” he wonders where else robots will interact with humans in the future. What use cases are there?
Which senses can be captured by robots?
EXPERT TIP
Coexistence of persona and robona
As the examples of autonomous vehicles and drones have shown, the future will be characterized by a coexistence between humans and machines. The relationship between humans and robots will be decisive for the experience. For initial considerations, creating a “robona” together with a persona has proven to be of use.
The creation of a robona arises from the human–robot team canvas (Lewrick and Leifer), with the core question being that of the relationship between the two. Interaction and experience between robona and persona are the crucial issues. For one, information is exchanged between the two; this exchange is relatively easy because certain actions are usually performed 1:1.
Things become more complex when emotions form an integral part of the interaction. Emotions must be interpreted and put in the right context. The exchange of knowledge requires learning systems. Only a sophisticated interplay between these components can properly assess intentions and meet expectations. That complex systems require complex solutions is especially applicable in this environment. The complexity is stepped up a notch in the human–robot relationship and its team goals.
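As a thought aid, the layered exchange between persona and robona can be sketched as a simple data model. The following Python sketch is illustrative only; the class and field names are assumptions and are not part of the human–robot team canvas itself.

from __future__ import annotations
from dataclasses import dataclass, field
from enum import Enum

class ExchangeLayer(Enum):
    """The three layers of exchange discussed above, in rising complexity."""
    INFORMATION = 1  # simple 1:1 actions and data
    EMOTION = 2      # must be interpreted and put into the right context
    KNOWLEDGE = 3    # requires learning systems on both sides

@dataclass
class Actor:
    name: str
    kind: str  # "persona" or "robona"

@dataclass
class Interaction:
    sender: Actor
    receiver: Actor
    layer: ExchangeLayer
    content: str

@dataclass
class HumanRobotTeam:
    """Minimal container relating a persona and a robona through interactions."""
    persona: Actor
    robona: Actor
    shared_mission: str
    interactions: list[Interaction] = field(default_factory=list)

    def exchange(self, sender: Actor, receiver: Actor,
                 layer: ExchangeLayer, content: str) -> None:
        self.interactions.append(Interaction(sender, receiver, layer, content))

Making the layers explicit highlights the argument above: each step up, from information to emotion to knowledge, demands more interpretation and more learning from both team members before intentions can be assessed and expectations met.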
EXPERT TIP
Design of “trust” with robots
Trust can be built up and developed in different forms. The simplest example is to give a robot a human appearance. Machines might come into being in the future that communicate with people and at the same time make a trustworthy impression on their human interlocutor. The projects of the “Human Centered Robotics Group,” which has created a robot head that reminds the human interlocutor of a manga girl, are good examples of this. The creation, based on a schema of childlike characteristics (big eyes), makes an innocent impression: it makes use of the key stimuli of small children and young animals that emanate from their proportions (large head, small body). The robot also creates trust because it recognizes who is speaking to it: it makes eye contact, thus radiating mindfulness. Not only the way a robot should act but also what it should look like often depends on the cultural context. In Asia, robots are modeled more on human beings, while in Europe they are mechanical objects. The first American robot was a big tin man; the first Japanese robot was a big, fat, laughing Buddha.
Once robots become more similar to human beings, they can be used more flexibly: They help both in nursing care for the elderly and on construction sites. Trust is created when the robot behaves in a manner expected by the human being and in particular when the human feels safe due to this behavior. Robots that do not hurt people in their work—that stop in emergency situations—are trusted. This is the only way they can interact on a team with people. Both learn, establish trust, and are able to reduce disruptions in the process. The theme of trust gets more complicated in terms of human–robot activities in different social systems or when activities are supported by cloud robotics. Then the interface is not represented by big, trust-inducing eyes but by autonomous helpers that direct and guide us and thus provide us with a basis for decisions.