Real Exam Questions and Answers as experienced in Test Center

LRP-614 Braindumps with 100% Guaranteed Actual Questions | https://alphernet.com.au

LRP-614 Portal Developer approach | https://alphernet.com.au/

LRP-614 approach - Portal Developer Updated: 2024

Simply remember these LRP-614 braindump questions before you go for the test.
Exam Code: LRP-614 Portal Developer approach January 2024 by Killexams.com team

LRP-614 Portal Developer

Portal Development in Liferay

• Adding Custom Branding to the Liferay Platform

• Front-End Development Frameworks in Liferay

• Front-End Development Tools in Liferay

Branding the Platform with Themes

• Adding Custom Branding to Liferay with Themes

• Adding Custom Styling to the Platform

• Adding Custom JavaScript to a Theme

• Configuring the Theme

• Themelets

• Including Resources and Widgets

Widget Templates

• Controlling the User Experience in Liferay

• Customizing Widget Presentation

Experience Management

• Controlling the User Experience in Liferay

• Controlling Web Content Presentation

• Reducing Time to Market with Page Fragments

• Controlling Page Layouts with Layout Templates

Other Liferay exams

LRP-614 Portal Developer

Pass the LRP-614 exam on your first attempt with LRP-614 dumps questions and practice tests. Our team keeps searching for LRP-614 real exam questions from real tests and updates the LRP-614 cheatsheet in the download section accordingly. All you have to do is memorize the LRP-614 Braindumps and take the LRP-614 test. You will be surprised to see your marks.
Liferay
LRP-614
Portal Developer
https://killexams.com/pass4sure/exam-detail/LRP-614
Question: 125
The method to set the value of a custom field for a BlogsEntry object ("blog") is:
A. PortalUtil.setExpando(blog)
B. blog.getExpandoBridge().setAttribute()
C. ExpandoLocalServiceUtil.setAttribute(blog)
D. blog.setExpando()
Answer: B
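To illustrate the correct answer, here is a minimal Java sketch of setting a custom (Expando) field through the ExpandoBridge; package names assume Liferay 6.x, and the field name "reviewScore" is hypothetical and must already exist as a custom field for BlogsEntry:

```java
// Sketch only: Liferay 6.x package names; adjust imports for newer releases.
import com.liferay.portlet.blogs.model.BlogsEntry;
import com.liferay.portlet.expando.model.ExpandoBridge;

public class BlogCustomFieldExample {

    // Sets the value of a custom field on an already-loaded BlogsEntry.
    public void setCustomField(BlogsEntry blog) {
        ExpandoBridge expandoBridge = blog.getExpandoBridge();

        // The attribute name is hypothetical; the value must be Serializable.
        expandoBridge.setAttribute("reviewScore", Integer.valueOf(5));
    }
}
```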
Question: 126
The default variables in a theme are defined in:
A. portal_normal.vm
B. init.vm
C. main.vm
D. variables.vm
Answer: B
Question: 127
As a best practice, a portlet plugin imports classes from: (Please select all correct answers.)
A. portal-impl.jar
B. portal-service.jar
C. portlet.jar
D. ext-impl.jar
Answer: B, C
Question: 128
The recommended way to override the updateLastLogin() method and create a new method called updateLastImpersonation() for the User service is to:
A. Create a hook plugin and implement a service wrapper that overrides the updateLastLogin() method and creates the updateLastImpersonation() method in the User service
B. Create a portlet plugin and implement a service wrapper hook that overrides the updateLastLogin() method in the User service and build a new service in the plugin that references the User service and creates the updateLastImpersonation() method
C. Create an Ext plugin that modifies portal-spring.xml to replace the User service with a new service that overrides the updateLastLogin() method and creates the updateLastImpersonation() method
D. Create a hook plugin and implement a service wrapper that overrides the updateLastLogin() method and create an Ext plugin that builds a new service to implement the updateLastImpersonation() method for the User service.
Answer: B
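As a rough sketch of the service-wrapper part of the answer (Liferay 6.x class names assumed; the wrapper would be registered through the plugin's liferay-hook.xml, which is omitted here):

```java
// Sketch only: a service wrapper that overrides updateLastLogin() and
// delegates to the wrapped core service. Class and package names follow
// Liferay 6.x and may differ in later versions.
import com.liferay.portal.kernel.exception.PortalException;
import com.liferay.portal.kernel.exception.SystemException;
import com.liferay.portal.model.User;
import com.liferay.portal.service.UserLocalService;
import com.liferay.portal.service.UserLocalServiceWrapper;

public class CustomUserLocalService extends UserLocalServiceWrapper {

    public CustomUserLocalService(UserLocalService userLocalService) {
        super(userLocalService);
    }

    @Override
    public User updateLastLogin(long userId, String loginIP)
        throws PortalException, SystemException {

        // Custom logic (auditing, metrics, etc.) could run here.
        return super.updateLastLogin(userId, loginIP);
    }
}
```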
Question: 129
After adding new functionality to an Ext plugin, the recommended way to deploy in a
development environment is to:
A. Stop the server, redeploy the plugin and restart the server
B. Undeploy the original plugin and deploy the updated plugin
C. Undeploy the original plugin, clean the server and deploy the updated plugin
D. Undeploy all plugins and deploy the updated plugin prior to redeploying the other
plugins
Answer: A
Question: 130
When a hook overrides a core JSP named view.jsp:
A. The new view.jsp overwrites the original file and the original view.jsp is no longer
available
B. The original view.jsp is moved to a temporary folder
C. The original view.jsp is renamed to view.portal.jsp
D. The contents of the original view.jsp and the new view.jsp are merged automatically
Answer: C
Question: 131
Liferay's core local services: (Please select all correct answers.)
A. Contain the business logic of the service
B. Enforce permission checking
C. Are required if using remote services
D. Communicate to the database through the persistence layer
Answer: A, D
Question: 132
Beta-portlet.war requires services that are in alpha-portlet.war. To guarantee beta-portlet.war deploys after alpha-portlet.war:
A. Add the following to portlet.xml in beta-portlet.war:
<init-param>
required-deployment-contexts
alpha-portlet
</init-param>
B. Add the following to liferay-plugin-package.properties in beta-portlet.war: required-deployment-contexts=alpha-portlet
C. Add the following to liferay-portlet.xml in beta-portlet.war:
alpha-portlet
D. It is not possible to declare this type of dependency
Answer: B
Question: 133
Public render parameters are of the type:
A. List
B. RenderParameter
C. String
D. Object
E. RenderRequest
Answer: C
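For context, public render parameters are set and read as plain String values through the standard JSR-286 API; here is a minimal sketch (the parameter name "categoryId" is hypothetical and would also have to be declared as a supported public render parameter in portlet.xml):

```java
// Sketch only: standard JSR-286 portlet API, not Liferay-specific.
import java.io.IOException;
import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;

public class CategoryPublisherPortlet extends GenericPortlet {

    @Override
    public void processAction(ActionRequest request, ActionResponse response)
        throws PortletException, IOException {

        // A public render parameter is just a String; other portlets that
        // declare the same parameter can read it with getParameter().
        response.setRenderParameter("categoryId", "42");
    }
}
```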
Question: 134
Service Builder does not generate:
A. SQL statements to create tables
B. Hibernate and Spring configuration files
C. Axis web services
D. The view layer
Answer: D
Question: 135
The recommended way to modify the Wiki portlet configuration to recognize a new public render parameter defined in a portlet plugin:
A. Create a hook plugin and add the public render parameter definition to portlet-custom.xml
B. Create an Ext plugin and add the public render parameter definition to portlet-ext.xml
C. Create a portlet plugin and re-implement the logic of the Wiki portlet and define the new public render parameter in portlet.xml
D. Create an Ext plugin and add the public render parameter definition to portlet-custom.xml
Answer: B
Question: 136
The element that defines a database table in service.xml is:
A.
B.
C.
D.
Answer: A
Question: 137
If the expiration cache in portlet.xml is set to "-1":
A. The finder cache does not expire
B. Ehcache does not expire
C. The portlet cache does not expire
D. All of the above
Answer: C
Question: 138
To define a primary key named "bookId" in service.xml:
A.
B.
C.
D.
Answer: B
Question: 139
The method to retrieve a list of users that have been added directly to a site is:
A. SiteLocalServiceUtil.getSiteUsers()
B. UserLocalServiceUtil.getSiteUsers()
C. UserLocalServiceUtil.getGroupUsers()
D. SiteLocalServiceUtil.getGroupUsers()
Answer: C
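A small sketch of the answer (Liferay 6.x class names; a site is backed by a Group, so its directly assigned members come back from getGroupUsers()):

```java
// Sketch only: Liferay 6.x package names; error handling kept minimal.
import java.util.List;

import com.liferay.portal.kernel.exception.SystemException;
import com.liferay.portal.model.User;
import com.liferay.portal.service.UserLocalServiceUtil;

public class SiteMembersExample {

    // groupId is the group ID backing the site.
    public List<User> getDirectSiteMembers(long groupId) throws SystemException {
        return UserLocalServiceUtil.getGroupUsers(groupId);
    }
}
```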
Question: 140
Portlet application security roles are mapped to Liferay roles in:
A. portlet.xml
B. liferay-portlet.xml
C. liferay-role.xml
D. liferay-display.xml
Answer: B
Question: 141
The service() method in GenericPortlet handles all requests for a particular portlet and
dispatches to the appropriate method based on the portlet mode.
A. True
B. False
Answer: B
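The statement is false because GenericPortlet has no service() method (that belongs to servlets); its render() method calls doDispatch(), which routes to doView(), doEdit() or doHelp() based on the current portlet mode. A minimal sketch of those dispatch targets:

```java
// Sketch only: standard JSR-286 API; doDispatch() (invoked from render())
// selects the do* method that matches the portlet mode.
import java.io.IOException;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class GreeterPortlet extends GenericPortlet {

    @Override
    protected void doView(RenderRequest request, RenderResponse response)
        throws PortletException, IOException {

        response.setContentType("text/html");
        response.getWriter().println("<p>Rendered in VIEW mode</p>");
    }

    @Override
    protected void doEdit(RenderRequest request, RenderResponse response)
        throws PortletException, IOException {

        response.setContentType("text/html");
        response.getWriter().println("<p>Rendered in EDIT mode</p>");
    }
}
```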
Question: 142
The recommended way to escape text is:
A. StringUtil.escape()
B. HtmlUtil.escape()
C. DisplayUtil.escape()
D. JSPUtil.escape()
E. FormUtil.escape()
F. GetterUtil.escape()
Answer: B
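For illustration, the recommended call looks like this in Java (package name as used in Liferay 6.x/7.x; the method escapes HTML special characters so user input is displayed rather than interpreted):

```java
// Sketch only: HtmlUtil.escape() converts characters such as < and > into
// their HTML entities, which helps guard against cross-site scripting.
import com.liferay.portal.kernel.util.HtmlUtil;

public class EscapeExample {

    public String renderComment(String userInput) {
        // "<script>alert(1)</script>" becomes harmless escaped text.
        return HtmlUtil.escape(userInput);
    }
}
```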
Question: 143
The Classic theme is built using:
A. HTML 4
B. XHTML
C. HTML 5
D. HTML 6
E. WML
Answer: C
Question: 144
The descriptor liferay-portlet.xml defines:
A. Events
B. The portlet class
C. The CSS class wrapper
D. Resource bundles
Answer: C
For More exams visit https://killexams.com/vendors-exam-list
Kill your exam at First Attempt....Guaranteed!

Next generation of Liferay Developer Studio adds visual workflow designer

Liferay, Inc., provider of the world’s leading enterprise-class, open source portal, announced today the latest release of Liferay Developer Studio, an Eclipse-based development environment that helps developers create applications for the flagship Liferay Portal platform. The new release, Liferay Developer Studio 1.6.0, now incorporates a powerful visual design tool that helps Java developers build sophisticated workflows.
 
The new Kaleo Designer for Java provides programmers the means to quickly create and publish workflows in a visual editor using new workflow definitions or existing ones retrieved from a Liferay installation. Developers can create workflow scripts in a Java/Groovy editor that provides Liferay API access, code assistance, code validation, and syntax-as-you-type error checking within a familiar Eclipse-based environment. A notification editor also allows developers to create workflow notifications using Liferay Portal context variables in both Freemarker and Velocity. In these ways and more, the new Kaleo Designer increases the efficiency with which Liferay Portal developers automate processes.
 
“These new features further push Liferay’s developer tooling strengths and make it a significant part of the overall Liferay portfolio,” said Greg Amerson, creator of Liferay Developer Studio and Senior Software Engineer at Liferay.
 
Appropriately, the latest release of Liferay Developer Studio follows the recent release of Liferay Marketplace, the portal provider’s enterprise applications repository. Liferay’s community of over 65,000 developers has been called to create apps for download and eventually for purchase by Liferay platform users. Acting as a powerful tool for application creation, Liferay Developer Studio offers added ease and power to an international audience of Liferay developers.
 
Other tools in Liferay Developer Studio for Liferay Portal developers include a pre-installed version of Liferay Portal Enterprise Edition (EE), a bundled Liferay Plugins SDK with example projects wizard, and support for remote development through the built-in WebSphere server adapter or Remote IDE connector available through Liferay Marketplace.
 
To watch a video of Kaleo Designer for Java in Liferay Developer Studio, visit: http://vimeo.com/48315976.
 
Liferay Developer Studio is available as a free trial with any free 30-day trial of Liferay Portal EE. Please visit www.liferay.com/downloads/liferay-developer-studio to receive a trial license key and to access the download.

For more information about Liferay, visit www.liferay.com.

A developer first approach: What does this mean for API security?

Within the emerging practice of DevSecOps there is no term more ambiguous than ‘shift left,’ a term likely to mean something subtly different depending on whom you ask. A commonly accepted view is that ‘shift left’ for security fosters the adoption of security practices as early as possible in the development lifecycle. This includes activities such as threat modeling, capturing security requirements, architecture review, and most vitally the integration of security testing tooling within developers’ native environments. For developers, this requires developer-friendly security tooling, typically operating with low latency, low in false positives, and adding value to the developer workflow. For security teams, this ‘shift left’ approach has meant developing a new set of skills, namely becoming familiar with development tools (think Git, CI/CD pipelines, containers, etc.) and delegating the operation of the tools to developers. Ideally, the ‘shift left’ approach allows security teams to focus more on policy, compliance, risk reviews, and mitigations — where security sets the ‘guardrails’ for developers who then operate their process within these rails.

How does this impact API security?

Recently there has been an increased focus on API security within organizations driven by the increased adoption of APIs and the attendant high-profile breaches affecting APIs. Unfortunately, much of the existing security tooling (such as SAST and DAST) is not effective at discovering vulnerabilities within API implementations which requires a rethink in the approaches toward API security. Fortunately, through the wide-scale adoption of the OpenAPI Specification (OAS) as the single ‘source of truth’ within an organization, forward-thinking security teams can drive API security to the ‘far left’. Using an OAS contract it is possible to encapsulate both the data domain (via the endpoints and the data requests and responses) and the security domain (the authentication and authorization requirements and additional factors such as rate-limiting, token validation). 

What does this mean for developers?

From a developer perspective, this ‘contract first’ approach requires a shift in thinking away from coding first towards an upfront design. The benefits are numerous — a well-designed contract can allow for the automatic generation of backend coding stubs (via tools like Swagger Codegen) and the generation of automated tests (via tools like Newman), as well as the automatic generation of UI frontends for the API (such as Swagger UI), and automated documentation.

The most important benefit of this ‘contract first’ approach for developers is to allow them to gain visibility into the security of their API code as they are developing it. Firstly various plugins can be used to perform active validation of the API contract as it is developed (for example detecting where no security constraints are specified on endpoints). Secondly, developers can test the implementation of their API backends using scanning tools to verify that their implementation matches the specified contract. This validation and verification can be done in the immediacy of the developer IDEs minimizing friction in adopting proactive security methods.
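As a rough illustration of that kind of contract validation, the sketch below uses the open-source swagger-parser library to flag operations that declare no security requirements. Class and method names are from swagger-parser v2.x as best understood here, so treat this as an assumption-laden sketch rather than a drop-in tool, and note that the contract location "openapi.yaml" is hypothetical:

```java
// Sketch only: walks an OpenAPI contract and reports operations that have
// neither operation-level nor global security requirements.
import java.util.Map;

import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.Operation;
import io.swagger.v3.oas.models.PathItem;
import io.swagger.v3.parser.OpenAPIV3Parser;

public class ContractSecurityCheck {

    public static void main(String[] args) {
        // Hypothetical contract location.
        OpenAPI api = new OpenAPIV3Parser().read("openapi.yaml");

        if (api == null || api.getPaths() == null) {
            System.out.println("Could not parse contract or no paths defined.");
            return;
        }

        boolean hasGlobalSecurity =
            api.getSecurity() != null && !api.getSecurity().isEmpty();

        api.getPaths().forEach((path, item) -> {
            for (Map.Entry<PathItem.HttpMethod, Operation> entry
                    : item.readOperationsMap().entrySet()) {

                Operation op = entry.getValue();
                boolean hasOperationSecurity =
                    op.getSecurity() != null && !op.getSecurity().isEmpty();

                if (!hasGlobalSecurity && !hasOperationSecurity) {
                    System.out.printf("No security requirement: %s %s%n",
                        entry.getKey(), path);
                }
            }
        });
    }
}
```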

Security becomes everyone’s responsibility

Another core practice of a ‘shift left’ based development process is continuous integration and continuous delivery (CI/CD). Using an API contract it is possible to add gating controls to the pull-request (PR) process to ensure that proposed code changes adhere to the contract. Security teams can also implement gating controls in the delivery process to ensure that the deployable artifacts have the appropriate security controls. This is analogous to the approach used in Infrastructure-as-Code where Terraform deployments are validated before deployment to ensure that — for example — relevant network controls are implemented. 

As an extension of this approach, the security team can inject API security controls into an API backend as part of the deployment process. This ‘security as code’ approach allows segregation of duties between the development team and the security team — the security team can enforce perimeter controls (such as rate limiting and token validation) and free up the developers to focus on the data contract and input and output validation. Using an automated approach within the CI/CD process guarantees the controls are in place and removes the likelihood of human error, typically a developer racing to a deadline forgetting a vital control.

Conclusion

This developer-first approach to security is undoubtedly gaining momentum as it accelerates the overall delivery time for new APIs and applications and reduces cost overruns. By directly addressing the security bottlenecks that have evolved due to traditional negative security models, a positive security model that embraces shift-left combined and shield-right methodologies is the way forward for modern enterprises.

By taking this approach security teams need not feel that they are abdicating control to the dev teams. On the contrary, enabling “security-as-code” practices will free up the security team to focus on ensuring policies are being complied with and ensure the overall governance and risk management framework of the enterprise is optimized. 

This approach addresses the gap between your security and development teams and is the panacea of security being everyone’s responsibility — meaning the right security controls are included by the right team at the right time and all facilitated by the power of the OpenAPI Specification.

How to Enable Developer Mode on Chromebook [2024]

In this article, we’ll show you how to enable developer mode on Chromebook to gain access to the command line. With the command line in developer mode, you can make changes to the system and ...

Editorial Note: We earn a commission from partner links on Forbes Advisor. Commissions do not affect our editors' opinions or evaluations.

If you’re a technology professional pursuing a career in the gaming industry, you might consider becoming a video game developer. These gaming experts create the framework for building video games across various platforms including mobile, computer and gaming devices.

The following sections offer a closer look at how to become a video game developer.

Video Game Developer Job Outlook

Since the first video game came onto the scene, the industry has seen exponential growth. As such, video game developers can expect fast career growth in the coming years.

The U.S. Bureau of Labor Statistics (BLS) does not provide data for video game developers specifically, but the BLS projects a faster-than-average 26% growth for software developers, quality assurance analysts and testers. Video game developers fall in with these professionals.

What Is a Video Game Developer?

Video game developers play a crucial role in the success of any video game. They are responsible for bringing a video game from concept to reality. To do this, video game developers must code and program visual elements and other features. They also run tests to make sure the game performs well.

Video Game Development vs. Video Game Design

You may hear the titles “video game developer” and “video game designer” used interchangeably, but the two jobs are different. Video game designers focus on the creative aspects of video game creation. Developers, on the other hand, focus on coding and the other technical aspects of that process.

What are the Main Responsibilities of a Video Game Developer?

Video game developer roles vary depending on the place of employment. In smaller organizations, for example, these professionals may work on multiple projects—such as both coding and testing—at the same time throughout the game development process.

At larger video game companies, on the other hand, each developer may take on a more specific set of tasks.

Typical responsibilities for a video game developer include:

  • Coding visual elements
  • Game design ideation
  • Making sure the game plays well
  • Monitoring game performance
  • Reviewing and improving existing code
  • Working with producers, designers and other professionals to bring the game to life

Video Game Developer Salary

Factors like experience and location affect video game developers’ salaries. Developers in the entertainment or video game software industry make an average annual salary of around $91,000, according to Payscale data as of December 2023.

Steps to Becoming a Video Game Developer

Several educational paths can lead to a video game developer career. Some developers attend college. Others opt for immersive bootcamps, where they learn crucial skills such as coding and technical problem-solving. Below, we take a look at the most important steps you need to take to land a job as a video game developer.

Earn a Degree

When it comes to the hiring process, many video game companies look for developers with degrees. A degree is not an absolute requirement, but employers may prefer candidates who have completed undergraduate degrees in computer science or related fields.

Given the gaming industry’s growing popularity, several colleges now offer bachelor’s degrees in video game design and development.

Obtain a Certificate

Certificates offer another option for students who want to either forgo college or supplement their current degree. Earning a certificate in video game development allows students to hone their skills through intensive, project-based curricula.

Entities like the University of Washington, Harvard University and Arkansas State University offer professional certificate programs in game development. Since these certificate programs generally take less than a year to complete, they can offer a quicker path to a career in the gaming industry than traditional four-year degrees.

Certificates cannot fully replace professional experience, but they do offer several benefits. For example, video game development certificate-holders can:

  • Build a solid foundation in game development and design.
  • Connect with groups of fellow creatives.
  • Meet teachers and mentors who can help make introductions to industry professionals.

Gain Work Experience

Professional experience is just as important as education when it comes to building a solid foundation in video game development. Before gaining professional experience or earning a degree, you might find entry-level work as a game tester. Game testing positions rarely require specialized training or a degree, so this might be a good way to build experience while completing your studies.

Many game developers begin their careers with internships as well. Consider pursuing an internship at a gaming studio to start making professional connections and building hands-on experience. You might also apply for non-development roles at gaming studios to get your foot in the door and start learning the ropes.

Video Game Development Bootcamps

Bootcamps can offer a strong alternative to traditional degrees for prospective video game developers. Bootcamps are short-term, intensive programs that offer specialized training for specific jobs. Though many employers prefer candidates with full degrees, bootcamps can also provide you with a high-quality education.

Examples of game development and design bootcamps include:

  • General Assembly. This online game design bootcamp focuses on the mechanics of gamification and how to engage users.
  • Vertex School. This 30-week, fully online program trains prospective developers to work in the gaming industry.
  • Udemy. This 11-hour crash course claims to teach “everything you need to become a game developer from scratch.”

Frequently Asked Questions (FAQs) About Video Game Development

How do I get into video game development?

Start with education. You can pursue a degree in computer science or game development, or you can complete a coding or game development bootcamp. You might then pursue an internship or entry-level role at a gaming studio.

How long does it take to become a game developer?

If you go the traditional route, it takes at least four years to complete a bachelor’s degree and gain some professional experience before you can become a game developer.

What does a video game developer do?

Game developers bring video games from concept to reality. This work involves lots of coding, programming, testing and maintenance.

Three Ways To Improve Customer Experience With Business Developer User Experience Tools

Umesh Sachdev is the CEO & Co-Founder of Uniphore, a leader in Conversational Automation. He's based in Silicon Valley, California.

If you are a business leader, you are no doubt seeing firsthand how innovative technology is revolutionizing the way we interact with customers. Artificial intelligence and other advanced automation technologies have gone mainstream and allowed companies to meet rising customer expectations and boost their profits. The latest automation technologies deliver a seamless customer experience across digital and in-person touchpoints and streamline enterprise contact center operations. This can translate to happier customers, improved margins and competitiveness and steady growth for companies. In fact, PwC reported that more than 42% of consumers are willing to pay a premium for a good customer experience.

Sounds easy, but implementing these advanced automation technologies — including machine learning, business process automation and natural language processing — is not easy. And enterprises face new challenges with more remote contact center personnel and customers who prefer intuitive, self-service options. As a result, acquiring, deploying and configuring these tools with existing contact center infrastructure requires much time, budget and effort — all of which are in scarce supply.

While these advanced technologies can deliver massive value when properly set up and kept current, value is maximized when it can be leveraged without reliance on IT teams with specialized programming skills. This need is fueling the growth and utilization of low-code/no-code development tools, which are increasingly becoming known as business developer CX/UX tools. When built and deployed correctly, these require very little to no coding experience — hence their name — and enable businesses to quickly and easily transform CX.

Simplification To The Rescue

Low-code development has been around for a while, gaining traction across enterprises as a way to develop solutions quickly, efficiently and with minimal engineering resources. Business users can use low-code tools themselves to quickly expand a critical new application, build a capability from scratch or pull data from another source, all with little or no coding experience.

Usually, employees who are most familiar with an enterprise’s processes, applications and systems can be most effective at incorporating these new tools; they blend their business skills and experience with a simplified yet flexible set of tools to build a better CX/UX.

These business developer UX approaches have been used successfully in many customer-focused industries, including customer relationship management. For example, Salesforce built out an entire community on its online platform for sales applications. Business developer UX tools are additionally helpful for a range of industries, such as government or healthcare workers who often need to pull data from legacy mainframe systems into a web portal where it can be easily accessed. When done well, adopting these low-code/no-code business developer technologies makes organizations more efficient, productive and ultimately, more profitable.

Beware Of Challenges

Still, if you don’t use the developer tools correctly, you may end up introducing significant challenges to your business, such as: 

1. More Complexity: Adding a new tool to fix a complex situation may not save time or money.

2. Silos: Sometimes when teams are given powerful capabilities, they focus only on their needs and don’t connect with other groups to ensure everyone is in the know.

3. Security Vulnerabilities: If best practices aren’t followed and systems are not updated, bad actors could exploit these newer capabilities and access critical resources.

Even with these hurdles, there is still an excellent opportunity to empower your business users with low/no-code development platforms to automate fundamental interactions across self-service and agent-assisted customer engagements.

The Checklist

Here are three things to focus on to get the most out of implementing a business developer approach for best-in-class CX.

1. Start with some "experiments" using low-code tools.

Building applications from scratch is time-consuming and expensive, so organizations are often hesitant to move forward when the return on investment is a little fuzzy. Low-code applications remove this risk by streamlining the development process. Users can drag and drop application widgets or APIs and flows while experimenting with different features and capabilities. This freedom allows organizations to think outside the box regarding contact center capabilities, perhaps offering the opportunity to experiment with interactive voice response, voice bots, a chatbot or a simple decision tree that automates and streamlines processes for call center agents.

2. Measure the impact of KPIs in real time.

Given the ease of implementing changes, businesses can easily create call flows in contact centers seemingly at will. Once done, it’s important to measure their effectiveness against other metrics such as customer satisfaction scores, first-call resolutions and other contact center KPIs. In addition, it’s easier to set up and deploy tracking mechanisms such as A/B tests to measure your various UX metrics and KPIs in real-time. From there, it’s easier to identify and scale the features that perform the best.

3. Make changes using low-code tools; rinse and repeat.

Agile development requires iterative processes to make application changes in your production environment quickly, continuously and nondisruptively. Low-code provides this agility in a simple, easy-to-use development platform. With a faster way to create new applications, business users can easily modify applications as needed when success criteria aren’t met, rather than waiting for technical teams to make the necessary changes.

Contact Centers: Ripe For Low-Code Optimization

Low-code development can create efficiencies, improve productivity and enhance the organization’s bottom line. And these developer tools can now be used to provide contact centers with an accurate, 360-degree view of their customer, enhancing self-service and agent-assisted engagements to deliver exceptional CX. Business developer UX design uses low-code software and allows you to experiment with these new features and capabilities with minimal risk and greater reward through real-time measurement and iterations. As a result of automating and optimizing across their enterprise, businesses can expect to gain more agility quickly, improve their contact center operations and — the most critical piece — produce happy and loyal customers. And we all know what that does for the bottom line.




A developer’s guide to getting started with generative AI: A use case-specific approach

Presented by Intel


Generative AI promises to greatly enhance human productivity but only a handful of enterprises possess the skills and resources to develop and train from scratch the foundation models essential for its deployment. The challenges are two-fold. First, collecting the data to train the models was already challenging and has become even more so as content owners assert their intellectual property rights. Next, the resources needed for training can be prohibitively expensive. However, the societal value of unlocking the availability of and access to generative AI technologies remains high.

So, how can small enterprises or individual developers incorporate generative AI into their applications? By creating and deploying custom versions of the larger foundation models.

The large investment and effort to develop new generative AI models means that they will need to be general enough to address a wide range of uses — consider all the ways in which GPT-based models have been used already. However, a general-purpose model often can’t address the domain-specific needs of individual and enterprise use cases. Using a large general-purpose model for a narrow application also consumes excess computing resources, time and energy.

Therefore, most enterprises and developers can find the best fit for their requirements and budget by starting with a large generative AI model as a foundation to adapt to their own needs at a fraction of the development effort. This also provides infrastructure flexibility by using existing CPUs or AI accelerators instead of being limited by shortages of specific GPUs. The key is to focus on the specific use case and narrow the scope while maximizing project flexibility by using open, standards-based software and ubiquitous hardware.

Taking the use case approach for AI application development

In software development, a use case defines the characteristics of the target user, the problem to be solved and how the application will be used to solve it. This defines product requirements, dictates the software architecture and provides a roadmap for the product lifecycle. Most crucially, this scopes the project and defines what does not need to be included.

Similarly, in the case of a generative AI project, defining a use case can reduce the size, compute requirements and energy consumption of the AI model. At the same time, it can improve model accuracy by focusing on a specific data set. Along with these benefits come reduced development effort and costs.

The factors that define a use case for generative AI will vary by project, but some common helpful questions can guide the process:

Data requirements: What, and how much, training data is necessary and available? Is the data structured (data warehouse) or unstructured (data lake)? What are the regulations or restrictions with it? How will the application process the data — via batch or streaming? How often do you need to maintain or update the model? Training large language models (LLMs) from scratch takes so much time that they lack awareness of recent knowledge — so if being up-to-date is important to your application, then you would need a different approach. Or, if you are developing a healthcare application, privacy and security restrictions on patient data typically dictate unique approaches to training and inference.

Model requirements: Model size, model performance, openness and explainability of results are all important considerations when choosing the right model for you. Performant LLM models range in size from billions to trillions of parameters — Llama 2 from Meta offers versions ranging from 7 billion to 70 billion parameters, while GPT-4 from OpenAI reportedly has 1.76 trillion parameters. While larger model sizes are typically associated with higher performance, smaller models may better fit your overall requirements. Open models offer more choices for customization, whereas closed models work well off the shelf but are limited to API access. Control over customization allows you to ground the model in your data with traceable results, which would be important in an application such as generating summaries of financial statements for investors. On the other hand, allowing an off-the-shelf model to extrapolate beyond its trained parameters (“hallucinate”) may be perfectly fine for generating ideas for advertising copy.

Application requirements: What are the accuracy, latency, privacy and safety standards that must be met? How many simultaneous users does it need to handle? How will users interact with it? For example, your implementation decisions will depend on whether your model should run on a low-latency edge device owned by the end-user or in a high-capacity cloud environment where each inference call costs you money. 

Compute requirements: Once the above is understood, what compute resources are required to meet them? Do you need to parallelize your Pandas data processing using Modin*? Do your fine-tuning and inference requirements differ enough to require a hybrid cloud-edge compute environment? While you may have the talent and data to train a generative AI model from scratch, consider whether you have the budget to overhaul your compute infrastructure.

The above factors will help drive conversations to define and scope the project requirements. Economics also factor in — the budget for data engineering, up-front development costs and the ultimate business model that will provide a requirement for the inference costs dictate the data, training and deployment strategies.

How Intel generative AI technologies can help

Intel provides heterogenous AI hardware options for a wide variety of compute requirements. To get the most out of your hardware, Intel provides optimized versions of the data analysis and end-to-end AI tools most teams use today. More recently, Intel has begun providing optimized models, including the #1 ranked 7B parameter model on the Hugging Face open LLM leaderboard (as of November 2023). These tools and models, together with those provided by its AI developer ecosystem, can satisfy your application’s accuracy, latency and security considerations. First, you can start with the hundreds of pre-trained models on Hugging Face or GitHub that are optimized for Intel hardware.  Next, you can pre-process your data using Intel-optimized tools such as Modin, fine-tune foundation models using application-specific optimization tools such as Intel® Extension for Transformers* or Hugging Face* Optimum, and automate model tuning with SigOpt. All of this builds on the optimizations that Intel has already contributed to open source AI frameworks, including TensorFlow*, PyTorch* and DeepSpeed.

Let’s illustrate with some generative AI use case examples for customer service, retail, and healthcare applications.

Generative AI for customer service: Chatbot use case

Chatbots based on LLMs can improve customer service efficiency by providing instant answers to common questions, freeing customer service representatives to focus on more complex cases.

Foundation models are already trained to converse in multiple languages on a broad range of subjects but lack depth on the offerings of a given business. A general-purpose LLM may also hallucinate, confidently generating output even in the absence of trained knowledge.

Fine-tuning and retrieval are two of the more popular methods to customize a foundation model. Fine-tuning incrementally updates a foundation model with custom information. Retrieval-based methods, such as retrieval-augmented generation (RAG), fetch information from a database external to the model. This database is built using the offering-specific data and documents, vectorized for use by the AI model. Both methods deliver offering-specific results and can be updated using only CPUs (such as Intel® Xeon® Scalable processors), which are ubiquitous and more readily available than specific accelerators.

The use case helps determine which method best fits the application’s requirements. Fine-tuning offers latency advantages since the knowledge is built into the generative AI model. Retrieval offers traceability from its answers directly to real sources in the knowledge base, and updating this knowledge base does not require incremental training.  
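To make the retrieval flow concrete, here is a deliberately simplified Java sketch; the Retriever and ChatModel interfaces are hypothetical placeholders rather than any specific vendor or framework API, and a production system would add embedding-based search, prompt budgeting and error handling:

```java
// Illustrative only: shows the basic RAG flow of fetching offering-specific
// passages, prepending them to the prompt, and letting the foundation model
// answer from that grounded context.
import java.util.List;

public class RagChatSketch {

    // Hypothetical: returns the top-k passages most similar to the question.
    interface Retriever {
        List<String> topK(String query, int k);
    }

    // Hypothetical: wraps whatever chat/completion endpoint is in use.
    interface ChatModel {
        String complete(String prompt);
    }

    private final Retriever retriever;
    private final ChatModel model;

    RagChatSketch(Retriever retriever, ChatModel model) {
        this.retriever = retriever;
        this.model = model;
    }

    String answer(String question) {
        List<String> passages = retriever.topK(question, 3);

        StringBuilder prompt = new StringBuilder(
            "Answer using only the context below. If the context is "
                + "insufficient, say so.\n\nContext:\n");
        for (String passage : passages) {
            prompt.append("- ").append(passage).append('\n');
        }
        prompt.append("\nQuestion: ").append(question);

        return model.complete(prompt.toString());
    }
}
```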

It’s also important to consider the compute requirements and costs for ongoing inference operations. The transformer architecture that powers most chatbots is usually limited more by memory bandwidth than raw compute power. Model optimization techniques such as quantization can reduce the memory bandwidth requirements, which reduces latency and inference compute costs.

There are plenty of foundation models to choose from. Many come in different parameter sizes. Starting with a clearly defined use case helps choose the right starting point and dictates how to customize it from there.

Customize a chatbot foundation model with retrieval augmented generation (RAG).

Generative AI for retail: Virtual try-on use case

Retailers can use generative AI to offer their customers a better, more immersive online experience. An example is the ability to try on clothes virtually so they see how they look and fit before buying. This improves customer satisfaction and retail supply chain efficiency by reducing returns and better forecasting customers’ wants.

This use case is based on image generation, but the foundation model must be focused on generating images using the retailer’s clothing line. Fine-tuning image-based foundation models such as Stable Diffusion may only require a small number of images running on CPU platforms. Techniques such as Low-Rank Adaptation (LoRA) can more surgically insert the retailer’s offerings into the Stable Diffusion model.

The other key input to this use case is the imagery or scan of the customer’s body. The use case implications start with how to preserve the customer’s privacy. The images must stay on the local edge device, perhaps the customer’s phone or a locally installed image capture device.

Does this mean the entire generative AI pipeline must run on the edge, or can this application be architected in a way that encodes the necessary information from the images to upload to the rest of the model running in a data center or cloud? This type of architecture decision is the domain of MLOps professionals, who are vital to the successful development of generative AI applications.

Now, given that some amount of AI inference needs to run efficiently on a variety of edge devices, it becomes vital to choose a framework that can optimize for deployment without rewriting code for each type of device.

See a generative AI virtual try-on application in action.

Generative AI for healthcare: Patient monitoring use case

Pairing generative AI with real-time patient monitoring data can generate personalized reports, action plans or interventions. Synthesizing data, imagery and case notes into a summary or a recommendation can improve healthcare provider productivity while reducing the need for patients to travel to or stay in healthcare facilities.

This use case requires multimodal AI, which combines different types of models to process the heterogeneous input data, likely combined with an LLM to generate reports. Because this is a more complex use case, starting with a multimodal reference implementation for a similar use case may accelerate a project.

Training healthcare models typically raises patient data privacy questions. Often, patient data must remain with the provider, so collecting data from multiple providers to train or fine-tune a model becomes impossible. Federated learning addresses this by sending the model to the data locations for training locally and then combining the results from the various locally trained models.

Inference also needs to maintain patient privacy. The most straightforward approach would be to run inference locally to the patient. Given the size and complexity of a multimodal generative AI system, running entirely on edge devices may be challenging. It may be possible to architect the system to combine edge and data center processing, but model optimization techniques will likely still be required for the models running on edge devices.

Developing a hybrid MLOps architecture like this is much more efficient if the AI tools and frameworks run optimally on a variety of devices without having to rewrite low-level code to optimize for each type of device.

How to get started

Start by doing your best to define your use case, using the questions above as guidance to determine the data, compute, model and application requirements for the problem you are trying to solve with generative AI.

Then, determine what relevant foundation models, reference implementations and resources are available in the AI ecosystem. From there, identify and implement the fine-tuning and model optimization techniques most relevant to your use case.

Compute needs will likely not be apparent at the beginning of the project and typically evolve throughout the project. Intel® Developer Cloud offers access to a variety of CPUs, GPUs and AI accelerators to try out or to get started developing with.

Finally, to efficiently adapt to different compute platforms during development and then to deployment, use AI tools and frameworks that are open, standards-based and run optimally on any of the above devices without having to rewrite low-level code for each type of device.  

Learn More: Intel AI software, Intel Developer Cloud, Intel AI Reference Kits, oneAPI for Unified Programming

Jack Erickson is Principal Product Marketing Manager, AI Software at Intel.

Chandan Damannagari is Director, AI Software, at Intel.


Notices &amp; Disclaimers: Intel technologies may require enabled hardware, software, or service activation. No product or component can be absolutely secure. Your costs and results may vary. ©Intel Corporation.  Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.  Other names and brands may be claimed as the property of others.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
No takeover approach, says COVID-19 vaccine developer BioNTech

Germany’s BioNTech has said it has not received any takeover approaches after it began a high-profile coronavirus vaccine trial with Pfizer in Germany, following press reports over the weekend.

Press reports over the weekend suggest the Mainz-based biotech has been contacted by several industry players since the news of the trial emerged. 

But the company has reached out to pharmaphorum to clarify it had not received any takeover approach following the newspaper speculation and further reports online.

Shares in BioNTech have been soaring since the announcement on Thursday of the plans to begin the first human trial of a COVID-19 trial in Germany. 

The Paul-Ehrlich-Institut (PEI) gave the go-ahead in just four days for the phase 1/2 trial in 200 healthy volunteers, which will test four coronavirus vaccine candidates, which are based on different RNA formats and target different antigens. 

The trials are due to start before the end of this month, and the first results should become available around the end of June or early July. 

Pfizer licensed rights to BioNTech’s BNT162 vaccine development programme last month. 

Two of the vaccines are based on nucleoside modified mRNA (modeRNA), one has a uridine containing mRNA (uRNA) structure and the fourth uses self-amplifying mRNA (saRNA), each formulated in lipid nanoparticles. 

Two have the full sequence of the spike (s) protein of SARS-CoV-2, the virus which causes COVID-19, and two use a smaller sequence that BioNTech calls an optimised receptor binding domain (RBD) and is thought to be the most important for stimulating antibody response to the virus. 

The trial will enrol 200 volunteers aged 18 to 55 and test a range of vaccine doses from 1 µg to 100 µg, gauging safety and tolerability and how well it stimulates an antibody response, whilst also selecting a dose for further studies. 

The phase 2 portion of the study will include subjects with a higher risk for a severe COVID-19 infection, according to the PEI. 

(Corrected to clarify there has been no takeover approach)

Feature image courtesy of Rocky Mountain Laboratories/NIH

 

The 15 Best Indie Games of 2023

Following a stellar slate of titles in 2022, indie developers had a bit of a tough ... and a more non-linear approach to how players tackle its massive world and intimidating enemies.







LRP-614 Reviews by Customers

Customer Reviews help to evaluate the exam performance in the real test. Here, all the reviews, reputation, success stories and ripoff reports are provided.

LRP-614 Reviews

100% Valid and Up to Date LRP-614 Exam Questions

We hereby announce, in collaboration with the world's leader in Certification Exam Dumps and Real Exam Questions with Practice Tests, that we offer Real Exam Questions for thousands of Certification Exams as Free PDF together with an up-to-date VCE exam simulator.