Search results for: swarm intelligence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1607

887 A QoE-driven Cross-layer Resource Allocation Scheme for High Traffic Service over Open Wireless Network Downlink

Authors: Liya Shan, Qing Liao, Qinyue Hu, Shantao Jiang, Tao Wang

Abstract:

In this paper, a Quality of Experience (QoE)-driven cross-layer resource allocation scheme for high traffic service over the Open Wireless Network (OWN) downlink is proposed, and the related problem is solved for all users in the cell, including users in the overlap region of different cells. A method to calculate the Mean Opinion Score (MOS) value for high traffic service is introduced, adopting assessment models for the best-effort service and a no-reference assessment algorithm for the video service. The cross-layer architecture jointly considers parameters in the application layer, the media access control layer, and the physical layer. Based on this architecture and the MOS value, the Binary Constrained Particle Swarm Optimization (B_CPSO) algorithm is used to solve the cross-layer resource allocation problem. Simulation results show that the proposed scheme significantly outperforms other schemes in terms of maximizing the average users' MOS value for the whole system as well as maintaining fairness among users.
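
The abstract gives no implementation details for B_CPSO; the following is a minimal sketch of a binary particle swarm loop with a sigmoid transfer function and a penalty term for constraint violations. The per-decision MOS gains, the budget constraint, and all parameter values are hypothetical placeholders rather than the scheme described above.

```python
import numpy as np

# Illustrative binary PSO loop with a penalty for constraint violations.
# The fitness (sum of per-decision MOS gains) and the single budget constraint
# are hypothetical placeholders, not the paper's QoE model.

rng = np.random.default_rng(0)
n_particles, n_bits, n_iter = 30, 64, 200            # 64 binary allocation decisions
mos_gain = rng.uniform(1.0, 4.5, n_bits)              # hypothetical MOS gain per decision

def fitness(x):
    penalty = max(0.0, x.sum() - 32)                   # hypothetical resource budget of 32 units
    return x @ mos_gain - 10.0 * penalty               # penalize infeasible allocations

pos = rng.integers(0, 2, (n_particles, n_bits)).astype(float)
vel = rng.normal(0.0, 1.0, (n_particles, n_bits))
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                              # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))                  # sigmoid transfer function
    pos = (rng.random(pos.shape) < prob).astype(float)  # re-sample binary positions
    vals = np.array([fitness(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()

print("best penalized MOS objective:", round(pbest_val.max(), 3))
```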

Keywords: high traffic service, cross-layer resource allocation, QoE, B_CPSO, OWN

Procedia PDF Downloads 525
886 From Battles to Balance and Back: Document Analysis of EU Copyright in the Digital Era

Authors: Anette Alén

Abstract:

Intellectual property (IP) regimes have traditionally been designed to integrate various conflicting elements stemming from private entitlement and the public good. In IP laws and regulations, this design takes the form of specific uses of protected subject-matter without the right-holder’s consent, or exhaustion of exclusive rights upon market release, and the like. More recently, the pursuit of ‘balance’ has gained ground in the conceptualization of these conflicting elements both in terms of IP law and related policy. This can be seen, for example, in European Union (EU) copyright regime, where ‘balance’ has become a key element in argumentation, backed up by fundamental rights reasoning. This development also entails an ever-expanding dialogue between the IP regime and the constitutional safeguards for property, free speech, and privacy, among others. This study analyses the concept of ‘balance’ in EU copyright law: the research task is to examine the contents of the concept of ‘balance’ and the way it is operationalized and pursued, thereby producing new knowledge on the role and manifestations of ‘balance’ in recent copyright case law and regulatory instruments in the EU. The study discusses two particular pieces of legislation, the EU Digital Single Market (DSM) Copyright Directive (EU) 2019/790 and the finalized EU Artificial Intelligence (AI) Act, including some of the key preparatory materials, as well as EU Court of Justice (CJEU) case law pertaining to copyright in the digital era. The material is examined by means of document analysis, mapping the ways ‘balance’ is approached and conceptualized in the documents. Similarly, the interaction of fundamental rights as part of the balancing act is also analyzed. Doctrinal study of law is also employed in the analysis of legal sources. This study suggests that the pursuit of balance is, for its part, conducive to new battles, largely due to the advancement of digitalization and more recent developments in artificial intelligence. Indeed, the ‘balancing act’ rather presents itself as a way to bypass or even solidify some of the conflicting interests in a complex global digital economy. Indeed, such a conceptualization, especially when accompanied by non-critical or strategically driven fundamental rights argumentation, runs counter to the genuine acknowledgment of new types of conflicting interests in the copyright regime. Therefore, a more radical approach, including critical analysis of the normative basis and fundamental rights implications of the concept of ‘balance’, is required to readjust copyright law and regulations for the digital era. Notwithstanding the focus on executing the study in the context of the EU copyright regime, the results bear wider significance for the digital economy, especially due to the platform liability regime in the DSM Directive and with the AI Act including objectives of a ‘level playing field’ whereby compliance with EU copyright rules seems to be expected among system providers.

Keywords: balance, copyright, fundamental rights, platform liability, artificial intelligence

Procedia PDF Downloads 12
885 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID

Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis

Abstract:

Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest X-rays and unpredictable staff shortages resulted in a huge strain on the department's workload. There is a World Health Organisation estimate that two-thirds of the global population does not have access to diagnostic radiology. Therefore, there could be demand for a program that could detect acute changes in imaging compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a deidentified set of 77 normal and 77 abnormal chest X-rays in patients with confirmed coronavirus infection was used to generate an algorithm that could train, validate and then test itself. DICOM and PNG image formats were selected because both are lossless. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training of the model involved teaching the convolutional neural network what constituted a normal study and what changes on the X-rays were compatible with coronavirus infection. The weightings were then modified, and the model was executed again. The training samples were in batch sizes of 8 and underwent 25 epochs of training. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest X-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis. Following a discussion with our programmer, there are areas where the weighting of the algorithm can be modified in order to improve the detection rates. Given the high detection rate of the program and the potential ease of implementation, it would be effective in assisting staff who are not trained in radiology in detecting otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and application of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program and widening its scope to detecting multiple pathologies such as lung masses will greatly assist both the radiology department and our colleagues in increasing workflow and detection rate.
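
For readers unfamiliar with the training setup described above, a minimal sketch of a small convolutional classifier in Keras is shown below. Only the batch size (8) and number of epochs (25) come from the abstract; the layer configuration, image resolution, and directory layout are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative small convolutional network for two-class chest X-ray screening.
# Only the batch size (8) and number of epochs (25) come from the abstract;
# layer sizes, image resolution and directory layout are assumptions.

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr/train", image_size=IMG_SIZE, batch_size=8)    # e.g. 50 positive / 50 negative
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr/val", image_size=IMG_SIZE, batch_size=8)       # e.g. 14 positive / 14 negative

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),               # probability of COVID-compatible change
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])

model.fit(train_ds, validation_data=val_ds, epochs=25)
```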

Keywords: artificial intelligence, COVID, neural network, machine learning

Procedia PDF Downloads 69
884 Modeling and Optimization of Nanogenerator for Energy Harvesting

Authors: Fawzi Srairi, Abderrahmane Dib

Abstract:

Recently, the desire for self-powered micro- and nanodevices has attracted great interest in using sustainable energy sources. The ultimate goal of a nanogenerator is to harvest energy from the ambient environment for such self-powered devices. With the development of nanogenerator-based circuit design and optimization, new device simulators are needed for the study and synthesis of the electromechanical parameters of this type of model. In the present article, both numerical modeling and optimization of a piezoelectric nanogenerator based on zinc oxide have been carried out. They aim to improve the electromechanical performance, robustness, and synthesis process of the nanogenerator. The proposed model has been developed for a systematic study of the nanowire morphology parameters in stretching mode. In addition, a heuristic optimization technique, namely particle swarm optimization, has been implemented for the analytic modeling and optimization of the nanogenerator-based process in stretching mode. The obtained results have been tested and compared with a conventional model, and good agreement has been obtained for the excitation mode. The developed nanogenerator model can be generalized, extended, and integrated into device simulators to study nanogenerator-based circuits.

Keywords: electrical potential, heuristic algorithms, numerical modeling, nanogenerator

Procedia PDF Downloads 285
883 Important Factors for Successful Solution of Emotional Situations: Empirical Study on Young People

Authors: R. Lekaviciene, D. Antiniene

Abstract:

Attempts to split the construct of emotional intelligence (EI) into separate components – the ability to understand one's own and others' emotions and the ability to control one's own and others' emotions – may be more meaningful theoretically than practically. In real life, a person encounters various emotional situations that require the exhibition of complex EI to solve them. Emotional situation solution tests enable measurement of such undivided EI. The object of the present study is to determine sociodemographic and other factors that are important for emotional situation solutions. The study involved 1,430 participants from various regions of Lithuania, aged from 17 to 27 years. The emotional, social and interpersonal situation scale EI-DARL-V2 was used. Each situation had two mandatory answering formats: the first contained assignments associated with hypothetical theoretical knowledge of how the situation should be solved, while the second asked how the participant would personally resolve the given situation in reality. A questionnaire collecting various sociodemographic data of the subjects was also presented. Factors statistically significant for emotional situation solution have been determined: gender, family structure, the subject's relationship with his or her mother, the mother's occupation, the subjectively assessed financial situation of the family, the level of education of the subjects and their parents, academic achievement, etc. The best solvers of emotional situations are women with high academic achievements; according to their chosen study profile or acquired profession, they are related to fields in the social sciences and humanities. The worst solvers of emotional situations are men raised in foster homes, who are or were poor students and mostly choose blue-collar professions.

Keywords: emotional intelligence, emotional situations, solution of situation, young people

Procedia PDF Downloads 155
882 Drones, Rebels and Bombs: Explaining the Role of Private Security and Expertise in a Post-piratical Indian Ocean

Authors: Jessica Kate Simonds

Abstract:

The last successful hijacking perpetrated by Somali pirates in 2012 represented a critical turning point for the identity and brand of Indian Ocean (IO) insecurity, coined in this paper as the era of the post-piratical. This paper explores the broadening of the PMSC business model to account for and contribute to the design of a new IO security environment that prioritises foreign and insurgency drone activity and Houthi rebel operations as the main threat to merchant shipping in the post-2012 era. This study is situated within a longer history of analysing maritime insecurity and also contributes a bespoke conceptual framework that understands the sea as a space that is produced and reproduced relative to existing and emerging threats to merchant shipping based on bespoke models of information sharing and intelligence acquisition. This paper also makes a prominent empirical contribution by drawing on a post-positivist methodology, with data drawn from original semi-structured interviews with senior maritime insurers and active merchant seafarers that are triangulated with industry-produced guidance such as the BMP series as primary data sources. Each set is analysed through qualitative discourse and content analysis and supported by the quantitative data sets provided by the IMB Piracy Reporting Centre and intelligence networks. This analysis reveals that mechanisms such as the IGP&I Maritime Security Committee and intelligence divisions of PMSCs have driven the exchanges of knowledge between land and sea and thus the reproduction of the maritime security environment through new regulations and guidance that account for drones, rebels and bombs as the key challenges in the IO, beyond piracy. A contribution of this paper is the argument that experts who may not be in the highest-profile jobs are the architects of maritime insecurity based on their detailed knowledge and connections to vessels in transit. This paper shares the original insights of those who have served in critical decision-making spaces to demonstrate that the development and refinement of industry-produced deterrence guidance, credited with the mitigation of piracy, has shaped new editions such as BMP 5 that now serve to frame a new security environment that prioritises the mitigation of risks from drones and WBIEDs from both state and insurgency risk groups. By highlighting the experiences and perspectives of key players both on land and at sea, the key finding of this paper is that as pirates experienced a financial boom by profiteering from their bespoke business model during the peak of successful hijackings, the private security market encountered a similar level of financial success and a guaranteed risk environment in which to prospect for business. Thus, the reproduction of the Indian Ocean as a maritime security environment reflects a newfound purpose for PMSCs as part of the broader conglomerate of maritime insurers, regulators, shipowners and managers who continue to redirect the security consciousness and IO brand of insecurity.

Keywords: maritime security, private security, risk intelligence, political geography, international relations, political economy, maritime law, security studies

Procedia PDF Downloads 162
881 A Thorough Analysis on The Dialog Application Replika

Authors: Weeam Abdulrahman, Gawaher Al-Madwary, Fatima Al-Ammari, Razan Mohammad

Abstract:

This research discusses the AI features of Replika, a dialog application with customizable characters that provides the user with different ways of interacting and communicating with an AI. Spreading a survey with questions on how the AI worked was one approach to exposing the app to others to use, and we also carried out an analysis that led to the conclusions of our research; as a result, individuals will be able to try out the app. In the methodology, we explain each page that pops up on the screen while using Replika and specify each part and icon.

Keywords: Replika, AI, artificial intelligence, dialog app

Procedia PDF Downloads 151
880 Artificial Law: Legal AI Systems and the Need to Satisfy Principles of Justice, Equality and the Protection of Human Rights

Authors: Begum Koru, Isik Aybay, Demet Celik Ulusoy

Abstract:

The discipline of law is quite complex and has its own terminology. Apart from written legal rules, there is also living law, which refers to legal practice. Basic legal rules aim at the happiness of individuals in social life and have different characteristics in different branches such as public or private law. On the other hand, law is a national phenomenon. The law of one nation and the legal system applied on the territory of another nation may be completely different. People who are experts in a particular field of law in one country may have insufficient expertise in the law of another country. Today, in addition to the local nature of law, international and even supranational law rules are applied in order to protect basic human values and ensure the protection of human rights around the world. Systems that offer algorithmic solutions to legal problems using artificial intelligence (AI) tools will perhaps serve to produce very meaningful results in terms of human rights. However, algorithms to be used should not be developed by only computer experts, but also need the contribution of people who are familiar with law, values, judicial decisions, and even the social and political culture of the society to which it will provide solutions. Otherwise, even if the algorithm works perfectly, it may not be compatible with the values of the society in which it is applied. The latest developments involving the use of AI techniques in legal systems indicate that artificial law will emerge as a new field in the discipline of law. More AI systems are already being applied in the field of law, with examples such as predicting judicial decisions, text summarization, decision support systems, and classification of documents. Algorithms for legal systems employing AI tools, especially in the field of prediction of judicial decisions and decision support systems, have the capacity to create automatic decisions instead of judges. When the judge is removed from this equation, artificial intelligence-made law created by an intelligent algorithm on its own emerges, whether the domain is national or international law. In this work, the aim is to make a general analysis of this new topic. Such an analysis needs both a literature survey and a perspective from computer experts' and lawyers' point of view. In some societies, the use of prediction or decision support systems may be useful to integrate international human rights safeguards. In this case, artificial law can serve to produce more comprehensive and human rights-protective results than written or living law. In non-democratic countries, it may even be thought that direct decisions and artificial intelligence-made law would be more protective instead of a decision "support" system. Since the values of law are directed towards "human happiness or well-being", it requires that the AI algorithms should always be capable of serving this purpose and based on the rule of law, the principle of justice and equality, and the protection of human rights.

Keywords: AI and law, artificial law, protection of human rights, AI tools for legal systems

Procedia PDF Downloads 52
879 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues

Authors: Barna Arnold Keserű

Abstract:

In the last few years (what is called by many scholars the big data era) artificial intelligence (hereinafter AI) get more and more attention from the public and from the different branches of sciences as well. What previously was a mere science-fiction, now starts to become reality. AI and robotics often walk hand in hand, what changes not only the business and industrial life, but also has a serious impact on the legal system. The main research of the author focuses on these impacts in the field of private law, with special regards to liability and intellectual property issues. Many questions arise in these areas connecting to AI and robotics, where the boundaries are not sufficiently clear, and different needs are articulated by the different stakeholders. Recognizing the urgent need of thinking the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution A8-0005/2017 (of January 27th, 2017) in order to take some recommendations to the Commission on civil law rules on robotics and AI. This document defines some crucial usage of AI and/or robotics, e.g. the field of autonomous vehicles, the human job replacement in the industry or smart applications and machines. It aims to give recommendations to the safe and beneficial use of AI and robotics. However – as the document says – there are no legal provisions that specifically apply to robotics or AI in IP law, but that existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate some content what worth copyright protection, but the question came up: who is the author, and the owner of copyright? The AI itself can’t be deemed author because it would mean that it is legally equal with the human persons. But there is the programmer who created the basic code of the AI, or the undertaking who sells the AI as a product, or the user who gives the inputs to the AI in order to create something new. Or AI generated contents are so far from humans, that there isn’t any human author, so these contents belong to public domain. The same questions could be asked connecting to patents. The research aims to answer these questions within the current legal framework and tries to enlighten future possibilities to adapt these frames to the socio-economical needs. In this part, the proper license agreements in the multilevel-chain from the programmer to the end-user become very important, because AI is an intellectual property in itself what creates further intellectual property. This could collide with data-protection and property rules as well. The problems are similar in the field of liability. We can use different existing forms of liability in the case when AI or AI led robotics cause damages, but it is unsure that the result complies with economical and developmental interests.

Keywords: artificial intelligence, intellectual property, liability, robotics

Procedia PDF Downloads 179
878 Overweight and Neurocognitive Functioning: Unraveling the Antagonistic Relationship in Adolescents

Authors: Swati Bajpai, S. P. K Jena

Abstract:

Background: There is a dramatic increase in the prevalence and severity of overweight in adolescents, raising concerns about its psychosocial and cognitive consequences and indicating an immediate need to understand the effects of increased weight on scholastic performance. Although the body of research is currently limited, available results have identified an inverse relationship between obesity and cognition in adolescents. Aim: To examine the association between increased Body Mass Index in adolescents and their neurocognitive functioning. Methods: A case-control study of 28 subjects aged 11-17 years (14 males and 14 females) was conducted on the basis of the main inclusion criterion (Body Mass Index). All of them were assigned to an experimental group (overweight) or a control group (normal weight). A complete neurocognitive assessment was carried out using validated psychological scales, namely the Color Progressive Matrices (intelligence), the Bender Visual Motor Gestalt Test (perceptual motor functioning), the PGI Memory Scale for Children (memory functioning), and Malin's Intelligence Scale for Indian Children (verbal and performance ability). Results: Statistical analysis showed that 57% of the experimental group lagged in cognitive abilities compared with the control group, especially in general knowledge (99.1±12.0 vs. 102.8±6.7), working memory (91.5±8.4 vs. 93.1±8.7), concrete ability (82.3±11.5 vs. 92.6±1.7) and perceptual motor functioning (1.5±1.0 vs. 0.3±0.9). Conclusion: Our investigations suggest that weight gain results, at least in part, from a neurological predisposition characterized by reduced executive function, and that in turn obesity itself has a compounding negative impact on the brain. However, a larger sample is needed to make more affirmative claims.

Keywords: adolescents, body mass index, neurocognition, obesity

Procedia PDF Downloads 470
877 Impact of School Environment on Socio-Affective Development: A Quasi-Experimental Longitudinal Study of Urban and Suburban Gifted and Talented Programs

Authors: Rebekah Granger Ellis, Richard B. Speaker, Pat Austin

Abstract:

This study used two psychological scales to examine the level of social and emotional intelligence and moral judgment of over 500 gifted and talented high school students in various academic and creative arts programs in a large metropolitan area in the southeastern United States. For decades, numerous models and programs purporting to encourage socio-affective characteristics of adolescent development have been explored in curriculum theory and design. Socio-affective merges social, emotional, and moral domains. It encompasses interpersonal relations and social behaviors; development and regulation of emotions; personal and gender identity construction; empathy development; moral development, thinking, and judgment. Examining development in these socio-affective domains can provide insight into why some gifted and talented adolescents are not successful in adulthood despite advanced IQ scores. Particularly whether nonintellectual characteristics of gifted and talented individuals, such as emotional, social and moral capabilities, are as advanced as their intellectual abilities and how these are related to each other. Unique characteristics distinguish gifted and talented individuals; these may appear as strengths, but there is the potential for problems to accompany them. Although many thrive in their school environments, some gifted students struggle rather than flourish. In the socio-affective domain, these adolescents face special intrapersonal, interpersonal, and environmental problems. Gifted individuals’ cognitive, psychological, and emotional development occurs asynchronously, in multidimensional layers at different rates and unevenly across ability levels. Therefore, it is important to examine the long-term effects of participation in various gifted and talented programs on the socio-affective development of gifted and talented adolescents. This quasi-experimental longitudinal study examined students in several gifted and talented education programs (creative arts school, urban charter schools, and suburban public schools) for (1) socio-affective development level and (2) whether a particular gifted and talented program encourages developmental growth. The following research questions guided the study: (1) How do academically and artistically talented gifted 10th and 11th grade students perform on psychometric scales of social and emotional intelligence and moral judgment? Do they differ from their age or grade normative sample? Are their gender differences among gifted students? (2) Does school environment impact 10th and 11th grade gifted and talented students’ socio-affective development? Do gifted adolescents who participate in a particular school gifted program differ in their developmental profiles of social and emotional intelligence and moral judgment? Students’ performances on psychometric instruments were compared over time and by type of program. Participants took pre-, mid-, and post-tests over the course of an academic school year with Defining Issues Test (DIT-2) assessing moral judgment and BarOn EQ-I: YV assessing social and emotional intelligence. Based on these assessments, quantitative differences in growth on psychological scales (individual and school) were examined. Change scores between schools were also compared. If a school showed change, artifacts (culture, curricula, instructional methodology) provided insight as to environmental qualities that produced this difference.

Keywords: gifted and talented education, moral development, socio-affective development, socio-affective education

Procedia PDF Downloads 145
876 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted with the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each single stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence while obtaining a higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken in this stage imply delayed costs associated with them. Hence, it is necessary to have a clear definition of the problem under analysis, especially in its initial definition. This can be obtained thanks to a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals, who take decisions affecting one another. An effective coordination among these decision-makers is critical. Finding a mutually agreed solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology which aims to speed up the process of raising the mission's concept maturity level. This is obtained thanks to a guided negotiation space exploration, which involves autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused by game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process to reach the equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to search efficiently and rapidly for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness under changing values. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
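
A compressed illustration of the negotiation-space idea: every candidate design is scored by each stakeholder's utility, dominated candidates are filtered out, and a bargaining-style product of utilities (one common game-theoretic selection rule) picks a balanced design. The utilities below are random placeholders, not the CubeSat model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical negotiation space: 200 candidate designs, 3 stakeholders,
# each with a utility in [0, 1] for every candidate.
utilities = rng.random((200, 3))

def pareto_front(u):
    """Keep candidates not dominated by any other (>= in all utilities, > in at least one)."""
    keep = []
    for i, ui in enumerate(u):
        dominated = np.any(np.all(u >= ui, axis=1) & np.any(u > ui, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(utilities)

# Nash-bargaining-style selection: maximize the product of stakeholder utilities,
# which rewards balanced solutions over ones that favour a single stakeholder.
best = front[np.argmax(np.prod(utilities[front], axis=1))]
print("Pareto-efficient candidates:", len(front), "selected design index:", best)
```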

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 110
875 Ethical Artificial Intelligence: An Exploratory Study of Guidelines

Authors: Ahmad Haidar

Abstract:

The rapid adoption of Artificial Intelligence (AI) technology holds unforeseen risks like privacy violation, unemployment, and algorithmic bias, triggering research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of the evolution of principles developed in recent years. There are two fundamental purposes of this paper. The first is to provide insights into how the principles of AI ethics have been changed recently, including concepts like risk management and public participation. In doing so, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis will be presented. Second, offering a framework for building Ethical AI linked to sustainability. This research adopts an explorative approach, more specifically, an inductive approach to address the theoretical gap. Consequently, this paper tracks the different efforts to have “trustworthy AI” and “ethical AI,” concluding a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps. First, splitting the principles into two categories, technical and net benefit, and second, testing the frequency of each principle, providing the different technical principles that may be useful for stakeholders considering the lifecycle of AI, or what is known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, results suggest transparency, privacy, fairness, safety, autonomy, and accountability as recommended technical principles to include in the lifecycle of AI. Another contribution is to capture the different basis that aid the process of AI for sustainability (e.g., towards sustainable development goals). The results indicate data governance, do no harm, human well-being, and risk management as crucial AI for sustainability principles. This study’s last contribution clarifies how the principles evolved. To illustrate, in 2018, the Montreal declaration mentioned eight principles well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design will strengthen the validity of previous studies. Yet, we advance knowledge in trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.

Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI

Procedia PDF Downloads 71
874 Web and Smart Phone-based Platform Combining Artificial Intelligence and Satellite Remote Sensing Data to Geoenable Villages for Crop Health Monitoring

Authors: Siddhartha Khare, Nitish Kr Boro, Omm Animesh Mishra

Abstract:

Recent food price hikes may signal the end of an era of predictable global grain crop plenty due to climate change, population expansion, and dietary changes. Food consumption will treble in 20 years, requiring enormous production expenditures. The climate and the atmosphere have changed, along with rainfall and seasonal cycles, over the past decade. India's tropical agriculture relies on evapotranspiration and monsoons. In places with limited resources, global environmental change affects agricultural productivity and farmers' capacity to adjust to changing moisture patterns. Motivated by these difficulties, satellite remote sensing might be combined with near-surface imaging data (smartphones, UAVs, and PhenoCams) to enable phenological monitoring and fast evaluations of field-level consequences of extreme weather events on smallholder agriculture output. To accomplish this, we must digitally map all communities' agricultural boundaries and crop types. With the improvement of satellite remote sensing technologies, a geo-referenced database may be created for rural Indian agricultural fields. Using AI, we can design digital agricultural solutions for individual farms. The main objective is to geo-enable each farm, along with its seasonal crop information, by combining artificial intelligence (AI) with satellite and near-surface data, and then to prepare long-term crop monitoring through in-depth field analysis and scanning of fields with satellite-derived vegetation indices. We developed an AI-based algorithm to understand the time-lapse growth of vegetation using PhenoCam or smartphone images. We developed an Android application through which users can collect images of their fields. These images are sent to our local server, where further AI-based processing is carried out. We are creating digital boundaries of individual farms and connecting these farms with our smartphone application to collect information about farmers and their crops in each season. We extract satellite-based information for each farm from Google Earth Engine APIs and merge it with the crop data collected through our app according to each farm's location, creating a database that provides crop quality data for each location.
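
A minimal sketch of the satellite side of such a pipeline, using the Google Earth Engine Python API to compute a seasonal mean NDVI over a farm boundary. The farm polygon, date range, and cloud filter are placeholders, not the platform's actual configuration.

```python
import ee

ee.Initialize()  # assumes Earth Engine credentials are already configured

# Hypothetical farm boundary digitized from the mobile app (lon/lat pairs).
farm = ee.Geometry.Polygon([[
    [78.010, 26.200], [78.015, 26.200], [78.015, 26.205], [78.010, 26.205],
]])

def add_ndvi(img):
    # NDVI from Sentinel-2 bands: B8 (near infrared) and B4 (red).
    return img.addBands(img.normalizedDifference(["B8", "B4"]).rename("NDVI"))

season = (
    ee.ImageCollection("COPERNICUS/S2_SR")
    .filterBounds(farm)
    .filterDate("2023-06-01", "2023-10-31")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
    .map(add_ndvi)
)

mean_ndvi = (
    season.select("NDVI").mean()
    .reduceRegion(reducer=ee.Reducer.mean(), geometry=farm, scale=10)
    .get("NDVI")
)

print("Seasonal mean NDVI for this farm:", mean_ndvi.getInfo())
```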

Keywords: artificial intelligence, satellite remote sensing, crop monitoring, android and web application

Procedia PDF Downloads 79
873 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock

Authors: Vahid Bairami Rad

Abstract:

The intelligentization of agricultural fields makes it possible to control the temperature, humidity, and other variables affecting the growth of agricultural products online, from a mobile phone or computer. Making agricultural fields and gardens smart is one of the best ways to optimize agricultural equipment and has a direct effect on the growth of plants, agricultural products and farms. Smart farms, the Internet of Things and artificial intelligence are the topics we discuss here. Agriculture is becoming smarter every day. From large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data. Modern farmers have more tools to collect intelligent data than in previous years. Data related to soil chemistry also allow people to make informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers have allowed irrigation processes to be optimized while reducing the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on. Almost any process related to agriculture can use sensors that collect data to optimize existing processes and make informed decisions. The Internet of Things (IoT) is at the center of this great transformation. Internet of Things hardware has grown and developed rapidly to provide low-cost sensors for people's needs. These sensors are embedded in battery-powered IoT devices that can operate for years and have access to low-power, cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now be used to securely manage existing devices at scale. IoT cloud services also provide a set of application enablement services that can be easily used by developers and allow them to build application business logic. These developments have created powerful new applications in the field of the Internet of Things, and these programs can be used in various industries, such as agriculture, to build smart farms. But the question is, what makes today's farms truly smart farms? Let us put this question another way. When will the technologies associated with smart farms reach the point where the range of intelligence they provide can exceed the intelligence of experienced and professional farmers?

Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, arduino Uno

Procedia PDF Downloads 32
872 A Predictive Model of Supply and Demand in the State of Jalisco, Mexico

Authors: M. Gil, R. Montalvo

Abstract:

Business Intelligence (BI) has become a major source of competitive advantages for firms around the world. BI has been defined as the process of data visualization and reporting for understanding what happened and what is happening. Moreover, BI has been studied for its predictive capabilities in the context of trade and financial transactions. The current literature has identified that BI permits managers to identify market trends, understand customer relations, and predict demand for their products and services. This last capability of BI has been of special concern to academics. Specifically, due to its power to build predictive models adaptable to specific time horizons and geographical regions. However, the current literature of BI focuses on predicting specific markets and industries because the impact of such predictive models was relevant to specific industries or organizations. Currently, the existing literature has not developed a predictive model of BI that takes into consideration the whole economy of a geographical area. This paper seeks to create a predictive model of BI that would show the bigger picture of a geographical area. This paper uses a data set from the Secretary of Economic Development of the state of Jalisco, Mexico. Such data set includes data from all the commercial transactions that occurred in the state in the last years. By analyzing such data set, it will be possible to generate a BI model that predicts supply and demand from specific industries around the state of Jalisco. This research has at least three contributions. Firstly, a methodological contribution to the BI literature by generating the predictive supply and demand model. Secondly, a theoretical contribution to BI current understanding. The model presented in this paper incorporates the whole picture of the economic field instead of focusing on a specific industry. Lastly, a practical contribution might be relevant to local governments that seek to improve their economic performance by implementing BI in their policy planning.

Keywords: business intelligence, predictive model, supply and demand, Mexico

Procedia PDF Downloads 98
871 Computing Machinery and Legal Intelligence: Towards a Reflexive Model for Computer Automated Decision Support in Public Administration

Authors: Jacob Livingston Slosser, Naja Holten Moller, Thomas Troels Hildebrandt, Henrik Palmer Olsen

Abstract:

In this paper, we propose a model for human-AI interaction in public administration that involves legal decision-making. Inspired by Alan Turing’s test for machine intelligence, we propose a way of institutionalizing a continuous working relationship between man and machine that aims at ensuring both good legal quality and higher efficiency in decision-making processes in public administration. We also suggest that our model enhances the legitimacy of using AI in public legal decision-making. We suggest that case loads in public administration could be divided between a manual and an automated decision track. The automated decision track will be an algorithmic recommender system trained on former cases. To avoid unwanted feedback loops and biases, part of the case load will be dealt with by both a human case worker and the automated recommender system. In those cases an experienced human case worker will have the role of an evaluator, choosing between the two decisions. This model will ensure that the algorithmic recommender system is not compromising the quality of the legal decision making in the institution. It also enhances the legitimacy of using algorithmic decision support because it provides justification for its use by being seen as superior to human decisions when the algorithmic recommendations are preferred by experienced case workers. The paper outlines in some detail the process through which such a model could be implemented. It also addresses the important issue that legal decision making is subject to legislative and judicial changes and that legal interpretation is context sensitive. Both of these issues requires continuous supervision and adjustments to algorithmic recommender systems when used for legal decision making purposes.
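
A schematic sketch of the dual-track idea described above: most cases go through the manual track, an overlap share is decided by both the case worker and the recommender, and an experienced evaluator arbitrates. The recommender, the overlap share, and the deferral policy are illustrative assumptions only.

```python
import random

random.seed(0)
OVERLAP_SHARE = 0.2              # fraction of cases routed through both tracks (assumption)

def recommender(case):           # stand-in for a system trained on former cases
    return "grant" if case["score"] > 0.5 else "deny"

def case_worker(case):           # stand-in for the manual decision track
    return "grant" if case["merit"] else "deny"

def evaluator(human, machine):
    # Placeholder policy: the experienced reviewer picks one of the two decisions;
    # here we simply defer to the human decision whenever they disagree.
    return human

cases = [{"id": i, "score": random.random(), "merit": random.random() > 0.4}
         for i in range(100)]

dual, agreed = 0, 0
for case in cases:
    human = case_worker(case)
    if random.random() < OVERLAP_SHARE:          # dual-track case: both decide, evaluator arbitrates
        machine = recommender(case)
        case["decision"] = evaluator(human, machine)
        dual += 1
        agreed += human == machine               # monitored to detect drift or unwanted feedback loops
    else:                                        # manual track only
        case["decision"] = human

print(f"dual-track cases: {dual}, human-machine agreement: {agreed}/{dual}")
```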

Keywords: administrative law, algorithmic decision-making, decision support, public law

Procedia PDF Downloads 191
870 MAGNI Dynamics: A Vision-Based Kinematic and Dynamic Upper-Limb Model for Intelligent Robotic Rehabilitation

Authors: Alexandros Lioulemes, Michail Theofanidis, Varun Kanal, Konstantinos Tsiakas, Maher Abujelala, Chris Collander, William B. Townsend, Angie Boisselle, Fillia Makedon

Abstract:

This paper presents a home-based robot-rehabilitation instrument, called "MAGNI Dynamics", that utilizes a vision-based kinematic/dynamic module and an adaptive haptic feedback controller. The system is expected to provide personalized rehabilitation by adjusting its resistive and supportive behavior according to a fuzzy intelligence controller that acts as an inference system, correlating the user's performance to different stiffness factors. The vision module uses the Kinect's skeletal tracking to monitor the user's effort in an unobtrusive and safe way, by estimating the torque that acts on the user's arm. The system's torque estimations are justified by capturing electromyographic data from primitive hand motions (Shoulder Abduction and Shoulder Forward Flexion). Moreover, we present and analyze how the Barrett WAM generates a force field with a haptic controller to support or challenge the users. Experiments show that shifting the proportional value, which corresponds to different stiffness factors of the haptic path, can potentially help the user to improve his or her motor skills. Finally, potential areas for future research are discussed, addressing how a rehabilitation robotic framework may include multisensing data to improve the user's recovery process.
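
As a rough illustration of the vision-based torque idea, the static gravitational torque about the shoulder can be approximated from Kinect joint positions alone: the arm's weight acting at its centre of mass produces a moment about the shoulder joint. The segment mass and centre-of-mass simplification below are placeholders, not the paper's calibrated, EMG-validated model.

```python
import numpy as np

# 3-D joint positions from the Kinect skeleton (metres); illustrative values.
shoulder = np.array([0.00, 1.40, 0.0])
elbow    = np.array([0.25, 1.30, 0.0])
wrist    = np.array([0.45, 1.35, 0.0])

ARM_MASS = 3.5                      # kg, assumed whole-arm mass
g = np.array([0.0, -9.81, 0.0])     # gravity vector

# Approximate the arm's centre of mass as the midpoint between elbow and wrist
# (a simplification; anthropometric tables would place it per segment).
com = 0.5 * (elbow + wrist)

r = com - shoulder                  # lever arm from shoulder to centre of mass
tau = np.cross(r, ARM_MASS * g)     # static gravitational torque (N·m)

print("shoulder torque vector:", tau, "magnitude:", np.linalg.norm(tau))
```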

Keywords: human-robot interaction, kinect, kinematics, dynamics, haptic control, rehabilitation robotics, artificial intelligence

Procedia PDF Downloads 309
869 Integer Programming: Domain Transformation in the Nurse Scheduling Problem

Authors: Geetha Baskaran, Andrzej Barjiela, Rong Qu

Abstract:

Motivation: Nurse scheduling is a complex combinatorial optimization problem known to be NP-hard. It needs efficient re-scheduling to minimize a trade-off among the measures of violation, achieved by relaxing selected constraints into soft constraints whose violations are measured. Problem Statement: In this paper, we extend our novel approach to solving the nurse scheduling problem by transforming it through information granulation. Approach: This approach satisfies the rules of a typical hospital environment based on a standard benchmark problem. Generating good work schedules has a great influence on nurses' working conditions, which are strongly related to the quality of health care. Domain transformation, which combines the strengths of operations research and artificial intelligence, was proposed for the solution of the problem. Compared to conventional methods, our approach involves judicious grouping (information granulation) of shift types that transforms the original problem into a smaller solution domain. Later, the schedules from the smaller problem domain are converted back into the original problem domain by taking into account the constraints that could not be represented in the smaller domain. An Integer Programming (IP) package is used to solve the transformed scheduling problem by employing the branch-and-bound algorithm. We have used GNU Octave for Windows to solve this problem. Results: The scheduling problem has been solved in the proposed formalism, resulting in a high-quality schedule. Conclusion: Domain transformation represents a departure from the conventional one-shift-at-a-time scheduling approach. It offers the advantage of efficient and easily understandable solutions, as well as deterministic reproducibility of the results. We note, however, that it does not guarantee the global optimum.
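
The abstract uses GNU Octave on a benchmark instance; as a language-neutral illustration of the underlying integer program, the toy model below assigns nurses to granulated shift types with coverage and one-shift-per-day constraints (the staffing numbers and rules are invented for the sketch).

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, LpStatus

nurses = [f"n{i}" for i in range(6)]
days = range(7)
shifts = ["early", "late", "night"]           # granulated shift types
demand = {"early": 2, "late": 2, "night": 1}  # required nurses per shift per day (assumed)

x = LpVariable.dicts("assign", (nurses, days, shifts), cat=LpBinary)

prob = LpProblem("toy_nurse_schedule", LpMinimize)

# Objective: use no more assignments than necessary.
prob += lpSum(x[n][d][s] for n in nurses for d in days for s in shifts)

# Coverage: each shift on each day gets the demanded number of nurses.
for d in days:
    for s in shifts:
        prob += lpSum(x[n][d][s] for n in nurses) >= demand[s]

# Each nurse works at most one shift per day.
for n in nurses:
    for d in days:
        prob += lpSum(x[n][d][s] for s in shifts) <= 1

# Example rule: no early shift immediately after a night shift.
for n in nurses:
    for d in range(6):
        prob += x[n][d]["night"] + x[n][d + 1]["early"] <= 1

prob.solve()
print(LpStatus[prob.status])
```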

Keywords: domain transformation, nurse scheduling, information granulation, artificial intelligence, simulation

Procedia PDF Downloads 374
868 Artificial Intelligence Techniques for Enhancing Supply Chain Resilience: A Systematic Literature Review, Holistic Framework, and Future Research

Authors: Adane Kassa Shikur

Abstract:

Today’s supply chains (SC) have become vulnerable to unexpected and ever-intensifying disruptions from myriad sources. Consequently, the concept of supply chain resilience (SCRes) has become crucial to complement the conventional risk management paradigm, which has failed to cope with unexpected SC disruptions, resulting in severe consequences affecting SC performances and making business continuity questionable. Advancements in cutting-edge technologies like artificial intelligence (AI) and their potential to enhance SCRes by improving critical antecedents in the different phases have attracted the attention of scholars and practitioners. The research from academia and the practical interest of the industry have yielded significant publications at the nexus of AI and SCRes during the last two decades. However, the applications and examinations have been primarily conducted independently, and the extant literature is dispersed into research streams despite the complex nature of SCRes. To close this research gap, this study conducts a systematic literature review of 106 peer-reviewed articles by curating, synthesizing, and consolidating up-to-date literature and presents the state-of-the-art development from 2010 to 2022. Bayesian networks are the most topical ones among the 13 AI techniques evaluated. Concerning the critical antecedents, visibility is the first ranking to be realized by the techniques. The study revealed that AI techniques support only the first 3 phases of SCRes (readiness, response, and recovery), and readiness is the most popular one, while no evidence has been found for the growth phase. The study proposed an AI-SCRes framework to inform research and practice to approach SCRes holistically. It also provided implications for practice, policy, and theory as well as gaps for impactful future research.

Keywords: ANNs, risk, Bayesian networks, vulnerability, resilience

Procedia PDF Downloads 54
867 The Social Psychology of Illegal Game Room Addiction in the Historic Chinatown District of Honolulu, Hawaii: Illegal Compulsive Gambling, Chinese-Polynesian Organized Crime Syndicates, Police Corruption, and Loan Sharking Rings

Authors: Gordon James Knowles

Abstract:

Historically, the Chinatown district in the Sandwich Islands has been plagued by the traditional vice crimes of illegal drugs, gambling, and prostitution since the early 1800s. However, a new form of psychologically addictive arcade-style table gambling machine has become the dominant source of illegal revenue in Honolulu, Hawaii. This study attempts to document the drive, desire, or will to play and wager on arcade-style video gaming and to understand the role of illegal game rooms in facilitating pathological gambling addiction. Indicators of police corruption by Chinese organized crime syndicates related to protection rackets, bribery, and pay-offs were revealed. Information fusion from a police science and sociological intelligence perspective indicates insurgent warfare is being waged on the streets of Honolulu by the People’s Republic of China. This state-sponsored communist terrorism in the Hawaiian Islands used “contactless” irregular warfare entailing: (1) the deployment of psychologically addictive gambling machines, (2) the distribution of the physically addictive drug fentanyl as a lethal chemical weapon, and (3) psychological warfare by circulating pro-China, anti-American propaganda newspapers targeted at the small island populace.

Keywords: Chinese and Polynesian organized crime, China Daily newspaper, electronic arcade-style table games, gaming technology addiction, illegal compulsive gambling, police intelligence

Procedia PDF Downloads 51
866 Artificial Intelligence Based Online Monitoring System for Cardiac Patient

Authors: Syed Qasim Gilani, Muhammad Umair, Muhammad Noman, Syed Bilawal Shah, Aqib Abbasi, Muhammad Waheed

Abstract:

Cardiovascular diseases (CVDs) are the major cause of death in the world. The main reason for these deaths is the unavailability of first aid for heart failure; in many cases, patients die before reaching the hospital. In this paper, we present an innovative online health service for cardiac patients. The proposed online health system has two ends. Through a device developed by us, users can communicate with their doctor via a mobile application. This interface provides them with first aid, and by using this service they also have an easy channel to their doctors for obtaining medical advice. For the proposed system, we developed a device called Cardiac Care, a portable device that patients can use at home to monitor their heart condition. When a patient checks his or her heart condition, the electrocardiogram (ECG), blood pressure (BP), and temperature readings are sent to the central database. The severity of the patient's condition is assessed using an artificial intelligence algorithm at the database. If the patient is suffering from a minor problem, our algorithm will suggest a prescription. If the patient's condition is severe, the patient's record is sent to a doctor through the mobile Android application, and the doctor, after reviewing the patient's condition, suggests the next step. If the doctor identifies the patient's condition as critical, a message is sent to the central database to dispatch an ambulance, which then brings the patient to the hospital. We have implemented this model at the prototype level. This model will be life-saving for millions of people around the globe, as patients will be in contact with their doctors at all times.
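
The abstract does not specify the severity algorithm; a minimal stand-in is a rule-based triage over the three transmitted vitals (heart rate derived from the ECG, blood pressure, and temperature). The thresholds below are illustrative placeholders, not clinical guidance.

```python
def triage(heart_rate_bpm, systolic_bp_mmhg, temperature_c):
    """Toy severity rule for the transmitted vitals; thresholds are illustrative only."""
    flags = 0
    flags += not (50 <= heart_rate_bpm <= 110)
    flags += not (90 <= systolic_bp_mmhg <= 160)
    flags += not (35.5 <= temperature_c <= 38.5)

    if flags == 0:
        return "minor: send automated prescription/advice"
    if flags == 1:
        return "moderate: forward record to doctor via mobile app"
    return "critical: notify central database to dispatch ambulance"

print(triage(72, 120, 36.8))    # -> minor
print(triage(135, 120, 36.8))   # -> moderate
print(triage(140, 200, 39.5))   # -> critical
```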

Keywords: cardiovascular disease, classification, electrocardiogram, blood pressure

Procedia PDF Downloads 162
865 Inverse Heat Conduction Analysis of Cooling on Run-Out Tables

Authors: M. S. Gadala, Khaled Ahmed, Elasadig Mahdi

Abstract:

In this paper, we introduced a gradient-based inverse solver to obtain the missing boundary conditions based on the readings of internal thermocouples. The results show that the method is very sensitive to measurement errors, and becomes unstable when small time steps are used. The artificial neural networks are shown to be capable of capturing the whole thermal history on the run-out table, but are not very effective in restoring the detailed behavior of the boundary conditions. Also, they behave poorly in nonlinear cases and where the boundary condition profile is different. GA and PSO are more effective in finding a detailed representation of the time-varying boundary conditions, as well as in nonlinear cases. However, their convergence takes longer. A variation of the basic PSO, called CRPSO, showed the best performance among the three versions. Also, PSO proved to be effective in handling noisy data, especially when its performance parameters were tuned. An increase in the self-confidence parameter was also found to be effective, as it increased the global search capabilities of the algorithm. RPSO was the most effective variation in dealing with noise, closely followed by CRPSO. The latter variation is recommended for inverse heat conduction problems, as it combines the efficiency and effectiveness required by these problems.

Keywords: inverse analysis, function specification, neural networks, particle swarm, run-out table

Procedia PDF Downloads 217
864 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain’s subconscious and conscious functions, we must conquer the physics of Unity, which leads to duality’s algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like ‘time is relative,’ but we really do understand the meaning. In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles measurement around 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycle at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, the eyes every 33 milliseconds dump their sensory data into the thalamus every day. The thalamus is going to perform a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It is all just occurring in the time available because other observation times are slower than thalamic measurement time. For life to exist in the physical universe requires a linear measurement process, it just hides by operating at a faster time relativity. What’s interesting is time dilation is not the problem; it’s the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 107
863 Support Vector Regression Combined with Different Optimization Algorithms to Predict Global Solar Radiation on Horizontal Surfaces in Algeria

Authors: Laidi Maamar, Achwak Madani, Abdellah El Ahdj Abdellah

Abstract:

The aim of this work is to use Support Vector Regression (SVR) combined with the Dragonfly, Firefly, Bee Colony, and Particle Swarm Optimization algorithms to predict global solar radiation on horizontal surfaces in some cities in Algeria. Combining these optimization algorithms with SVR aims principally to enhance accuracy by fine-tuning its parameters, speeding up the convergence of the SVR model, and exploring a larger search space efficiently; these parameters are the regularization parameter (C), the kernel parameters, and the epsilon parameter. By doing so, the aim is to improve the generalization and predictive accuracy of the SVR model. Overall, the goal is to leverage the strengths of both SVR and the optimization algorithms to create a more powerful and effective regression model for various cities and under different climate conditions. Results demonstrate close agreement between predicted and measured data in terms of several metrics. In summary, SVR has proven to be a valuable tool for modeling global solar radiation, offering accurate predictions and demonstrating versatility when combined with other algorithms or used in hybrid forecasting models.
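
A hedged sketch of this tuning idea (not the authors' implementation): a basic PSO searches log-scaled values of C, epsilon, and the RBF kernel width gamma for scikit-learn's SVR, scored by cross-validated RMSE. The synthetic inputs, search bounds, and swarm settings are assumptions standing in for measured radiation data.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, (200, 4))    # stand-ins for meteorological inputs (assumed)
    y = 20 * X[:, 0] + 5 * np.sin(6 * X[:, 3]) + rng.normal(0, 0.5, 200)  # synthetic "radiation"

    def fitness(p):
        """Cross-validated RMSE of an SVR with log-scaled hyperparameters p."""
        C, eps, gamma = 10 ** p[0], 10 ** p[1], 10 ** p[2]
        model = SVR(C=C, epsilon=eps, gamma=gamma)
        return -cross_val_score(model, X, y, cv=3,
                                scoring="neg_root_mean_squared_error").mean()

    # Bounds for log10(C), log10(epsilon), log10(gamma); values are assumptions.
    lo, hi = np.array([-1.0, -3.0, -2.0]), np.array([3.0, 0.0, 1.0])
    n, iters, w, c1, c2 = 15, 30, 0.7, 1.5, 1.5
    pos = rng.uniform(lo, hi, (n, 3))
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pcost.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        costs = np.array([fitness(p) for p in pos])
        improved = costs < pcost
        pbest[improved], pcost[improved] = pos[improved], costs[improved]
        gbest = pbest[pcost.argmin()].copy()

    print("best (C, epsilon, gamma):", 10 ** gbest, "cross-validated RMSE:", pcost.min())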

Keywords: support vector regression (SVR), optimization algorithms, global solar radiation prediction, hybrid forecasting models

Procedia PDF Downloads 10
862 Genetic Algorithm to Construct and Enumerate 4×4 Pan-Magic Squares

Authors: Younis R. Elhaddad, Mohamed A. Alshaari

Abstract:

Since 2700 B.C., the problem of constructing magic squares has attracted many researchers; magic squares remain one of the most difficult challenges for mathematicians. In this work, we describe how to construct and enumerate 4×4 pan-magic squares using a genetic algorithm with a new chromosome encoding technique. The results were promising and were obtained within a reasonable time.
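
The paper's new chromosome encoding is not described in the abstract, so the sketch below uses only a generic permutation encoding to show the overall GA idea for 4×4 pan-magic squares; the population size, operators, and generation count are illustrative assumptions.

    import random

    MAGIC = 34  # required sum for a 4x4 square filled with 1..16

    def lines(sq):
        """All sums that must equal 34: rows, columns, and the 8 wrapping diagonals."""
        g = [sq[i * 4:(i + 1) * 4] for i in range(4)]
        sums = [sum(r) for r in g] + [sum(g[i][j] for i in range(4)) for j in range(4)]
        for d in range(4):   # broken diagonals in both directions
            sums.append(sum(g[i][(i + d) % 4] for i in range(4)))
            sums.append(sum(g[i][(d - i) % 4] for i in range(4)))
        return sums

    def fitness(sq):
        return -sum(abs(s - MAGIC) for s in lines(sq))   # 0 means pan-magic

    def mutate(sq):
        a, b = random.sample(range(16), 2)
        sq = sq[:]
        sq[a], sq[b] = sq[b], sq[a]
        return sq

    def crossover(p1, p2):
        """Order crossover that keeps the chromosome a permutation of 1..16."""
        cut = random.randint(1, 15)
        head = p1[:cut]
        return head + [x for x in p2 if x not in head]

    pop = [random.sample(range(1, 17), 16) for _ in range(200)]
    for gen in range(2000):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 0:
            break
        parents = pop[:50]
        pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                         for _ in range(150)]

    pop.sort(key=fitness, reverse=True)
    print("best square found:", pop[0], "fitness:", fitness(pop[0]))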

Keywords: genetic algorithm, magic square, pan-magic square, computational intelligence

Procedia PDF Downloads 555
861 Advancing UAV Operations with Hybrid Mobile Network and LoRa Communications

Authors: Annika J. Meyer, Tom Piechotta

Abstract:

Unmanned Aerial Vehicles (UAVs) have increasingly become vital tools in various applications, including surveillance, search and rescue, and environmental monitoring. One common approach to ensuring redundant communication when flying beyond visual line of sight is for UAVs to carry multiple mobile data modems from different providers. Although widely adopted, this approach suffers from several drawbacks, such as high costs, added weight, and potential increases in signal interference. In light of these challenges, this paper proposes a communication framework intermeshing mobile networks and LoRa (Long Range) technology, a low-power, long-range communication protocol. LoRaWAN (Long Range Wide Area Network) is commonly used in Internet of Things applications, relying on stationary gateways and Internet connectivity. This paper, however, utilizes the underlying LoRa protocol, taking advantage of its low-power and long-range capabilities while ensuring efficiency and reliability. In collaboration with the Potsdam Fire Department, we explore the implementation of mobile network technology in combination with the LoRa protocol in small UAVs (take-off weight < 0.4 kg) specifically designed for search and rescue and area monitoring missions. This research aims to test the viability of LoRa as an additional redundant communication system during UAV flights, as well as its intermeshing with the primary, mobile network-based controller. The methodology focuses on direct UAV-to-UAV and UAV-to-ground communications, employing different spreading factors optimized for specific operational scenarios: short-range settings for UAV-to-UAV interactions and long-range settings for UAV-to-ground commands. This use case also greatly mitigates one of the major drawbacks of LoRa communication systems, namely that a line of sight between the modules is necessary for reliable data transfer, something that UAVs are uniquely suited to provide, especially when deployed as a swarm. Additionally, swarm deployment may enable UAVs that have lost contact with their primary network to reestablish their connection through another, better-situated UAV. The experimental setup involves multiple phases of testing, starting with controlled environments to assess basic communication capabilities and gradually advancing to complex scenarios involving multiple UAVs. This staged approach allows for meticulous adjustment of parameters and optimization of the communication protocols to ensure reliability and effectiveness. Furthermore, the close partnership with the Fire Department assures the real-world applicability of the communication system. The expected outcomes of this paper include a detailed analysis of LoRa's performance as a communication tool for UAVs, focusing on aspects such as signal integrity, range, and reliability under different environmental conditions. Additionally, the paper seeks to demonstrate the cost-effectiveness and operational efficiency of using a single type of communication technology that reduces UAV payload and power consumption. By shifting from traditional cellular network communications to a more robust and versatile cellular and LoRa-based system, this research has the potential to significantly enhance UAV capabilities, especially in critical applications where reliability is paramount. The success of this paper could pave the way for broader adoption of LoRa in UAV communications, setting a new standard for UAV operational communication frameworks.
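
As a small illustration of the spreading-factor split described above (assumed values, not the paper's firmware), the sketch below picks a lower spreading factor for short UAV-to-UAV hops and a higher one for long UAV-to-ground links, then reports the standard LoRa symbol time T_sym = 2^SF / BW to show the airtime trade-off.

    # Illustrative sketch only: the spreading-factor choices are assumptions.
    BANDWIDTH_HZ = 125_000   # common LoRa bandwidth, assumed here

    def pick_spreading_factor(link: str) -> int:
        """Lower SF for short UAV-to-UAV hops (less airtime), higher SF for
        long UAV-to-ground command links (more range)."""
        return {"uav_to_uav": 7, "uav_to_ground": 11}[link]   # example values

    def symbol_time_ms(sf: int, bw_hz: int = BANDWIDTH_HZ) -> float:
        # Standard LoRa relation: symbol duration grows exponentially with SF.
        return (2 ** sf) / bw_hz * 1000.0

    for link in ("uav_to_uav", "uav_to_ground"):
        sf = pick_spreading_factor(link)
        print(f"{link}: SF{sf}, symbol time {symbol_time_ms(sf):.3f} ms")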

Keywords: LoRa communication protocol, mobile network communication, UAV communication systems, search and rescue operations

Procedia PDF Downloads 22
860 Thermodynamic Modeling of Three Pressure Level Reheat HRSG, Parametric Analysis and Optimization Using PSO

Authors: Mahmoud Nadir, Adel Ghenaiet

Abstract:

The main purpose of this study is the thermodynamic modeling, parametric analysis, and optimization of a three pressure level reheat HRSG (Heat Recovery Steam Generator) using the Particle Swarm Optimization (PSO) method. In this paper, a parametric analysis followed by a thermodynamic optimization is presented. The chosen objective function is the specific work of the steam cycle, which may be, in the case of a combined cycle (CC), a good criterion for thermodynamic performance analysis, unlike conventional steam turbines, for which thermal efficiency can also be an important criterion. Technological constraints such as maximum steam cycle temperature, minimum steam fraction at the steam turbine outlet, maximum steam pressure, minimum stack temperature, minimum pinch point, and maximum superheater effectiveness are also considered. The parametric analyses made it possible to understand the effect of the design parameters and the constraints on the variation of the steam cycle specific work. The PSO algorithm was used successfully for HRSG optimization, and the results achieved are in accordance with those of previous studies in which genetic algorithms were used. Moreover, this method is easy to implement compared with the other methods.
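
One common way to make a basic PSO respect such technological constraints, sketched below under stated assumptions, is to fold them into the objective as penalty terms so that infeasible designs score poorly. The toy specific_work() surrogate and all limit values are placeholders, not the paper's thermodynamic model.

    def specific_work(design):
        # Placeholder surrogate: a real HRSG model evaluates the full steam cycle.
        return 0.5 * design["steam_temp"] - 2.0 * design["pinch_point"]

    LIMITS = {"max_steam_temp": 873.0, "min_steam_fraction": 0.88,
              "min_pinch_point": 8.0, "max_steam_pressure": 160.0}   # assumed values

    def penalized_objective(design, penalty=1e4):
        """Value for PSO to maximize: specific work minus constraint violations."""
        violations = [
            max(0.0, design["steam_temp"] - LIMITS["max_steam_temp"]),
            max(0.0, LIMITS["min_steam_fraction"] - design["exit_steam_fraction"]),
            max(0.0, LIMITS["min_pinch_point"] - design["pinch_point"]),
            max(0.0, design["steam_pressure"] - LIMITS["max_steam_pressure"]),
        ]
        return specific_work(design) - penalty * sum(violations)

    example = {"steam_temp": 850.0, "exit_steam_fraction": 0.90,
               "pinch_point": 10.0, "steam_pressure": 120.0}
    print(penalized_objective(example))   # feasible design: no penalty applied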

Keywords: combined cycle, HRSG thermodynamic modeling, optimization, PSO, steam cycle specific work

Procedia PDF Downloads 357
859 Revolutionizing Healthcare Communication: The Transformative Role of Natural Language Processing and Artificial Intelligence

Authors: Halimat M. Ajose-Adeogun, Zaynab A. Bello

Abstract:

Artificial Intelligence (AI) and Natural Language Processing (NLP) have transformed computer language comprehension, allowing computers to comprehend spoken and written language with human-like cognition. NLP, a multidisciplinary area that combines rule-based linguistics, machine learning, and deep learning, enables computers to analyze and comprehend human language. NLP applications in medicine range from tackling issues in electronic health records (EHR) and psychiatry to improving diagnostic precision in orthopedic surgery and optimizing clinical procedures with novel technologies like chatbots. The technology shows promise in a variety of medical sectors, including quicker access to medical records, faster decision-making for healthcare personnel, diagnosing dysplasia in Barrett's esophagus, boosting radiology report quality, and so on. However, successful adoption requires training for healthcare workers, fostering a deep understanding of NLP components, and highlighting the significance of validation before actual application. Despite prevailing challenges, continuous multidisciplinary research and collaboration are critical for overcoming restrictions and paving the way for the revolutionary integration of NLP into medical practice. This integration has the potential to improve patient care, research outcomes, and administrative efficiency. The research methodology includes using NLP techniques for Sentiment Analysis and Emotion Recognition, such as evaluating text or audio data to determine the sentiment and emotional nuances communicated by users, which is essential for designing a responsive and sympathetic chatbot. Furthermore, the project includes the adoption of a Personalized Intervention strategy, in which chatbots are designed to personalize responses by merging NLP algorithms with specific user profiles, treatment history, and emotional states. The synergy between NLP and personalized medicine principles is critical for tailoring chatbot interactions to each user's demands and conditions, hence increasing the efficacy of mental health care. A detailed survey corroborated this synergy, revealing a remarkable 20% increase in patient satisfaction levels and a 30% reduction in workloads for healthcare practitioners. The poll, which focused on health outcomes and was administered to both patients and healthcare professionals, highlights the improved efficiency and favorable influence on the broader healthcare ecosystem.
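
A minimal, generic sketch of the sentiment-analysis routing step described above: production systems would use trained NLP models, so the tiny lexicon, thresholds, and canned replies below are purely illustrative assumptions.

    # Toy lexicon-based sentiment scorer; word lists are illustrative only.
    POSITIVE = {"better", "calm", "hopeful", "good", "relieved"}
    NEGATIVE = {"anxious", "sad", "worse", "hopeless", "tired"}

    def sentiment_score(text: str) -> float:
        """Returns a score in [-1, 1]; negative values suggest distress."""
        words = text.lower().split()
        hits = [1 for w in words if w in POSITIVE] + [-1 for w in words if w in NEGATIVE]
        return sum(hits) / len(hits) if hits else 0.0

    def choose_response(text: str) -> str:
        # Route the chatbot reply by the detected sentiment (thresholds assumed).
        score = sentiment_score(text)
        if score < -0.3:
            return "I'm sorry you're feeling this way. Would you like to talk about it?"
        if score > 0.3:
            return "I'm glad things are looking up. What helped today?"
        return "Thanks for sharing. Can you tell me a bit more?"

    print(choose_response("I feel anxious and tired today"))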

Keywords: natural language processing, artificial intelligence, healthcare communication, electronic health records, patient care

Procedia PDF Downloads 52
858 'Explainable Artificial Intelligence' and Reasons for Judicial Decisions: Why Justifications and Not Just Explanations May Be Required

Authors: Jacquelyn Burkell, Jane Bailey

Abstract:

Artificial intelligence (AI) solutions deployed within the justice system face the critical task of providing acceptable explanations for decisions or actions. These explanations must satisfy the joint criteria of public and professional accountability, taking into account the perspectives and requirements of multiple stakeholders, including judges, lawyers, parties, witnesses, and the general public. This research project analyzes and integrates existing bodies of literature on explanations in order to propose guidelines for explainable AI in the justice system. Specifically, we review three bodies of literature: (i) explanations of the purpose and function of 'explainable AI'; (ii) the relevant case law, judicial commentary, and legal literature focused on the form and function of reasons for judicial decisions; and (iii) the literature focused on the psychological and sociological functions of these reasons for judicial decisions from the perspective of the public. Our research suggests that while judicial ‘reasons’ (arguably accurate descriptions of the decision-making process and factors) do serve explanatory functions similar to those identified in the literature on 'explainable AI', they also serve an important ‘justification’ function (post hoc constructions that justify the decision that was reached). Further, members of the public look for both justification and explanation in reasons for judicial decisions, and the absence of either feature is likely to contribute to diminished public confidence in the legal system. Therefore, automated judicial decision-making systems that simply attempt to document the process of decision-making are unlikely in many cases to be useful to and accepted within the justice system. Instead, these systems should focus on the post hoc articulation of principles and precedents that support the decision or action, especially in cases where legal subjects’ fundamental rights and liberties are at stake.

Keywords: explainable AI, judicial reasons, public accountability, explanation, justification

Procedia PDF Downloads 102