But as users, we expect the information uttered by these devices to be true, distinct, and precise, not derived from statistical calculations over heaps of unstructured data.
To achieve that, user information has to be semantically enriched by knowledge graphs.
Applying iiRDS in this context is a huge step towards consistent and highly reliable statements.
1. Pre-session Assistance. Through the integration of both large and small models, characteristics of inbound calls from consumers and a summary of the human-computer conversation can be provided. This approach reduces the intent-seeking time for human agents, thus enhancing efficiency.
2. In-session Assistance. Through large models, fuzzy user questions can be rewritten and identified in order to find the standard solutions. This approach reduces the answer-searching time for human agents and shortens individual sessions, thus enhancing consumer satisfaction.
3. Post-session Assistance. Through both large and small models, the information in the session content can be automatically extracted and data entry can be fully automated. This approach reduces the effort and time agents spend on data entry, thus increasing production capacity.
2. From the reader's perspective, the combination of text and graphics is beneficial for understanding. Take product after-sales service as an example. When customers request part replacements, yet employ different part names from the manufacturers, the drawings can assist both customers and manufacturers in pinpointing the exact parts.
3. Production specifications and quality criteria for document drawings. Engineering or research drawings are transformed into document drawings, ensuring their accuracy while protecting their confidentiality. Drawings and text complement each other. Line graphs, being vector graphics, can be easily zoomed in and out on the web. Bitmaps can balance clarity and file size.
4. Pictorial documents. Documents primarily composed of images, with text as a secondary element. This sharing mainly introduces the 3D rendering of pictorial documents.
5. Dynamic documents (defined as videos). In the age of media, videos serve not only to entertain the public and promote products but also to demonstrate product disassembly and maintenance.
In this topic, we will use ODX files (Open Diagnostic Exchange, a standardized file format used across industries to exchange diagnostic data between diagnostic tools and software applications) as the input. Using a combination of tools, including R&D ODX, ChatGPT, a CMS, a quality tool, and appropriate Service & Diagnostics knowledge, we will discuss and demo a brand-specific approach and practice for significantly reducing the production time of error-code descriptions and for optimizing fault-tracing efficiency and the user experience from the perspective of aftermarket diagnostics technicians.
Your team applies terminology lists, which do not (entirely) comply with the terminology lists of other teams in your company. Your company’s terminology is inconsistent for team colleagues and customers alike. All this leads to unnecessary confusion and miscommunication.
Then let us take you with us on our terminology race!
In this presentation, we, Porsche project lead Ira Rotzler and Kerstin Berns, managing partner of berns language consulting, will share with you why and how we started the Porsche terminology race, where we stand right now, and how we plan to cross the finishing line 😊!
We will demonstrate how we unified a great number of very differently formatted terminology lists into one neat data set and how we cleaned up all that data. We will also share how we managed to lay the foundation for high-quality terminology management that is useful to the entire Porsche brand in all countries, worldwide! And of course, we will shed light on how we automated this process and which methodologies and tools were applied.
In the end, we will talk about how the project team successfully created a reliable and useful database that benefits Porsche in various ways, e.g., by improving recognition of vehicle functions and their understandability throughout different customer touchpoints.
Finally, we take a sneak peek into the international future of multilingual terminology work at Porsche as well as at the future usefulness of clean terminology for simply everything!
1. Changes brought by the emergence of large models to technical communication for database technology
The impact of technological development on technical communication workflows; iteration of the workflow
2. Exploring the introduction of large models into technical communication for database technology
An introduction to technical communication practices and workflow improvements, using the OceanBase database as an example
3. Comparative display of offline LLM results based on corpus optimization
Online large-model corpus testing
4. Future directions and focus areas of technical communication for database technology
This workshop explores the evolution of STE from an aerospace standard to its potential in revolutionising documentation practices in the software industry. By embracing STE, participants will gain insights into how this linguistic paradigm shift can help organisations across diverse sectors enhance clarity, efficiency, and comprehension in their software-related communication. The flexibility of Simplified Technical English allows for customisation to suit unique requirements. As the specification primarily involves additions to the dictionary, clients themselves often make customisations to cater to different projects within the organisation.
Even if some are still trying to move from PDF to HTML, we think that the next era will be direct human interaction with that content.
Ekrai is an example of a Personal Digital Expert: she is an expert, and you can ask her any question about any detail of your product, covering pre-sales, after-sales, and technical documentation needs.
We will explain how we built it using the latest generative AI technologies and how it could be considered a “best practice” to be followed in many companies.
First, define objectives. While maintaining the same level of delivery quality, consider the following:
- How can we reduce the frequency of mouse clicks?
- How can we minimize the manual operations and decision-making that are prone to errors?
- How can we shorten the time spent waiting for asynchronous information inputs?
- How can we implement clear division of labor and collaboration rules with minimal communication costs?
Considering the methods for automation to enhance efficiency, we have the following approaches:
- Keyboard shortcuts, graphical user interface (GUI) quick buttons, (command line) command aliases
- Regular expressions and functions in data tables
- Macros and scripts
- Workflow automation tools
Next, analyze the workflow and identify inefficient processes.
Finally, by integrating the automation techniques across various dimensions while considering constraints on manpower and budget, design a comprehensive solution to be tested, iterated upon, and implemented.
This presentation will use the example of my toolchain (AsciiDoc + Git/GitLab + VS Code + Feishu) to illustrate efficiency improvement methods across different dimensions, particularly the utilization of API scripts to connect multiple tools and streamline automation processes.
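As a small illustration of the last point, the sketch below shows the kind of glue script that can connect two tools in such a toolchain. All names here are assumptions for illustration: it parses a `git diff --name-status` listing with a regular expression to find changed AsciiDoc files, then builds a Feishu-style text-message payload; the message schema and the idea of posting it to a webhook are hypothetical, not a documented integration.

```python
import json
import re

def adoc_changes(diff_output: str) -> list[str]:
    """Return the .adoc paths marked Added or Modified in a
    `git diff --name-status` listing (one `<status>\\t<path>` per line)."""
    changed = []
    for line in diff_output.splitlines():
        m = re.match(r"^[AM]\t(.+\.adoc)$", line)
        if m:
            changed.append(m.group(1))
    return changed

def build_notification(files: list[str]) -> str:
    """Assemble an illustrative Feishu-like text payload as JSON.
    The real message schema of any chat tool may differ."""
    text = "Docs updated:\n" + "\n".join(f"- {f}" for f in files)
    return json.dumps({"msg_type": "text", "content": {"text": text}})

# Example run: two .adoc changes are picked up, the deleted image is ignored.
diff = "M\tguide/install.adoc\nA\tguide/upgrade.adoc\nD\timages/old.png"
print(build_notification(adoc_changes(diff)))
```

In a real pipeline, the JSON string would be POSTed to the team chat tool's webhook URL by the CI job, so that reviewers are notified without any manual clicks.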
The application of LLMs in technical writing competitions nationwide vividly demonstrates their potential in enhancing writing efficiency and content innovation. From planning and development to revision and delivery, LLMs span the entire document creation process, significantly improving the quality and efficiency of the entire writing workflow.
The shift from static manual content to conversational chatbots markedly enhances the user's interactive experience. The instant responses and personalized assistance of chatbots make it possible for users to obtain information based on specific needs, which not only enhances user satisfaction but also brings new opportunities to professionals in technical writing.
The use of LLMs in user support systems showcases their precision and efficiency in handling complex queries. Leveraging LLMs, we can provide a more interactive and customized user experience, whether the user prefers text, voice, or video interactions.
In summary, the development of LLMs represents a significant leap in user assistance, transitioning from paper manuals to intelligent chatbots. This not only greatly improves the user experience but also brings revolutionary changes to the technical writing industry.
For any documentation, authors and subject matter experts work together to achieve quality. I strongly believe that quality is a shared responsibility in service of customer satisfaction.
Our various quality initiatives for improving the process, as well as Root Cause Analysis (RCA) in Tech Comms, did not disappoint us: they helped improve the process and reduce the recurrence of defects. Our team has been trying out RCA on documentation defects since 2014. This has helped fix many of the root causes of documentation defects and at the same time strengthened the foundation of the process. Here we share our experience of how we used RCA and related activities, such as the Bugathon, to progressively enhance documentation quality along with the process.
In this speech, I will explain PLG, discuss how it affects product content demand, analyze methods of formulating corresponding content strategies, and explore the development trend of content teams based on their capabilities. I hope that this speech will provide a clearer understanding of the new industry directions and offer insights into the transformation of content teams.
✅ Are you looking for precise content marketing strategies to capture developers' interest, seize their attention, and unleash their potential?
✅ Are you also thinking about questions like what are the characteristics of AI large model developers compared to traditional ones? What kind of decision-making journey do they go through in product selection? What are the gains, pains, and key considerations at each stage?
✅ Have you noticed that although AI model developers are generally not interested in marketing tactics, they unconsciously buy into certain marketing strategies? So, what is the magic of these strategies?
✅ Are you curious about how top AI model vendors attract and empower users through technical content and marketing? How they build and deepen trust through community activities and support services? And how they establish and expand ecosystems through product iteration and publicity?
This talk takes the author's work on technical content, the open-source community, and developer operations at 01.AI, a large-model company, as an example. It will analyze practical dilemmas, sort out thinking processes, explore problem-solving ideas, and provide solutions, helping you create high-quality content, strengthen marketing strategies, and expand your network, so as to build a vibrant open-source AI developer community.
This is a major obstacle on the way to the generative AI world, because it is clear that enterprise knowledge must be structured to avoid AI hallucinations.
In this workshop, we will see how it is possible to move from unstructured, miscellaneous documents to a high-quality structured knowledge base.
We will learn concepts of knowledge architecture.
We will learn concepts of prompt engineering.
We will see how important a structured workflow is for arriving at a high-quality structured knowledge base.
At the end, we will see how we can use this structured knowledge to feed a RAG (retrieval-augmented generation) scenario.
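To make the RAG step concrete, here is a minimal sketch of how a structured knowledge base can feed a retrieval step. Everything in it is illustrative: the chunks are invented, and the word-overlap scoring stands in for the embedding search and vector store a real system would use; the point is only that retrieval selects grounded context before the prompt reaches the model.

```python
def tokenize(text: str) -> set[str]:
    """Lowercased word set of a text, used for a toy overlap score."""
    return set(text.lower().split())

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank knowledge-base chunks by word overlap with the question."""
    q = tokenize(question)
    ranked = sorted(chunks, key=lambda c: len(q & tokenize(c)), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the grounded prompt that would be sent to the LLM."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {question}"

# Invented chunks from a structured knowledge base.
chunks = [
    "The pump must be primed before first use",
    "Replace the filter every 6 months",
    "The warranty covers parts for 2 years",
]
question = "How often should I replace the filter"
print(build_prompt(question, retrieve(question, chunks)))
```

Because each chunk is self-contained, the model answers from retrieved facts rather than from its own statistics, which is exactly why structuring the knowledge first matters.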
I will also address the challenges and ethical considerations arising from the integration of AI in technical communication. As AI algorithms autonomously generate content, there is a need for scrutiny to ensure accuracy, clarity, and adherence to ethical standards. Moreover, the discussion will extend to the importance of maintaining equilibrium between automated and human elements in technical communication, emphasizing the collaborative potential embodied in AI-human partnerships, often referred to as the “human-in-the-loop” approach.
In summary, this presentation invites attendees to engage in insightful discussions on navigating the evolving rhetorical situations of technical communication in the era of generative AI. By exploring both the advancements and ethical dimensions, participants will gain valuable insights into strategies for effectively adapting to the transformative impact of AI on technical communication practices.
As a startup providing end-to-end solutions in the field of chip verification, XEPIC offers a comprehensive range of seven product series that cover the demands of digital chip verification and provide effective verification solutions. XEPIC therefore has a pressing and complex demand for technical documentation. Its documentation team has adopted the "Docs as Code" concept to establish a lightweight documentation development platform from the ground up, combining open-source and self-developed tools. This platform effectively meets the company's requirements for bilingual (Chinese and English) publishing in multiple formats (PDF and HTML) and styles. In this speech, the team will share the insights and experience gained from this practice, offering inspiration to professionals facing similar requirements.
Across various fields of technology, psychology is being integrated to achieve optimal results. Marketers employ psychological techniques such as the scarcity principle to encourage potential customers to click on links, sign up for trials, and convert to paying customers. UX designers use psychological principles like the principle of least effort to design user experiences that appeal to users and prompt them to take the desired actions. Why aren't more technical writers doing the same?
In this talk, I'll explore ten psychological principles that technical writers can apply to give users a better content experience with documentation. I'll cover principles like Fogg's Behavior Model, the Ambiguity Aversion Bias, the Law of Prägnanz, and much more. It's time to start writing documentation for people as they truly are, not as we wish they were or as we think they are.
This workshop will explore the convergence between traditional content and AI, and how to communicate traditional content using AI to optimize user experience. We will utilize the IBM AI Essentials Framework and tools to guide you through the process of developing strategies with AI as the medium. Together, we will work towards a new model of AI content co-design.
2. A concise overview of RISC-V, the next-generation computer instruction set architecture, and its domestic and international development today.
3. Introducing the current state of products and documentation development under RISC-V architecture.
4. Why is DITA the best partner for content development under the new instruction set architecture RISC-V?
a. DITA's topic-based authoring vs. the smooth transfer from IP to final devices in the lifecycle of IC products
b. DITA's customization vs. RISC-V's high level of customization and modularization
c. DITA's reuse and filtering features vs. the fragmented nature of RISC-V and its associated products
d. DITA's collaborative sharing vs. RISC-V's diverse ecology, characterized by vendors' leadership and the simultaneous operation of various communities
The premier gathering event for all decision-makers and specialists in the field of technical communication in China.