Search results for: artificial neuron network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6311

1031 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare

Authors: Piret Pernik

Abstract:

Based on strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper examines the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAVs) for military purposes. The paper discusses the opportunities and risks in the information domain, encompassing both cyberattacks and electromagnetic interference. It draws conclusions on the possible strategic impact that the widespread use of dual-use UAVs may have on battlefield outcomes in modern armed conflicts, and it contributes to filling a gap in the literature by examining cyberattacks and electromagnetic interference on the basis of empirical data. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models, many of which are dual-use, available, and affordable to anyone, to high-cost combat UAVs (UCAVs) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. They thus function as force multipliers, enabling kinetic and electronic warfare attacks, and provide comparative and asymmetric operational and tactical advantages. Some go as far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs offer considerable opportunities for commanders; for example, because they can be operated without GPS navigation, they are less vulnerable to, and less dependent on, satellite communications. They can be, and have been, used to conduct cyberattacks, electromagnetic interference, and kinetic attacks; however, they are themselves highly vulnerable to those same attacks.
So far, strategic studies, the wider literature, and expert commentary have overlooked the cybersecurity and electromagnetic-interference dimensions of dual-use UAVs. Studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. The proliferation of dual-use commercial UAVs in armed and hybrid conflicts is expected to continue and accelerate. It is therefore important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper offers a unique analysis of small UAVs from the perspective of both the opportunities and the risks they present to commanders and other actors in armed conflict.

Keywords: dual-use technology, cyber attacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts

Procedia PDF Downloads 102
1030 The Minimum Patch Size Scale for Seagrass Canopy Restoration

Authors: Aina Barcelona, Carolyn Oldham, Jordi Colomer, Teresa Serra

Abstract:

The loss of seagrass meadows worldwide is being tackled by formulating coastal restoration strategies. Seagrass loss results in a network of vegetated patches that are barely interconnected, and consequently, the ecological services they provide may be highly compromised. Hence, there is a need to optimize coastal management efforts in order to implement successful restoration strategies, not only by modifying the architecture of the canopies but also by gathering information on the hydrodynamic conditions of the seabeds. To obtain information on the hydrodynamics within the patches of vegetation, this study analyses the scale of the minimum patch lengths on which management strategies can be applied effectively. To this aim, a set of laboratory experiments was conducted in a flume where plant densities, patch lengths, and hydrodynamic conditions were varied to discern the vegetated patch lengths that can provide optimal ecosystem services for canopy development. Two possible patch behaviours based on turbulent kinetic energy (TKE) production were identified: one where plants do not interact with the flow, and one where plants interact with waves and produce TKE. Furthermore, this study determines the minimum patch lengths that can support successful restoration management. A canopy will produce TKE depending on its density, the length of the vegetated patch, and the wave velocities. Therefore, a vegetated patch will produce plant-wave interaction under high wave velocities when it has a large length and a high canopy density.
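As a rough illustration of the TKE quantity used to distinguish the two patch behaviours, here is a minimal Python sketch: TKE per unit mass is taken as half the sum of the variances of the three velocity-fluctuation components. The velocity samples are hypothetical, not the flume measurements from the study.

```python
# Turbulent kinetic energy (TKE) per unit mass from a velocity time series:
# TKE = 0.5 * (var(u) + var(v) + var(w)), where var() is the variance of the
# fluctuations about the mean (u' = u - mean(u)). Illustrative sketch only.

def variance(samples):
    """Population variance of a sequence of velocity samples."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def turbulent_kinetic_energy(u, v, w):
    """TKE per unit mass (m^2/s^2) from three velocity-component series."""
    return 0.5 * (variance(u) + variance(v) + variance(w))

# Hypothetical oscillatory-flow samples (m/s); not measured data.
u = [0.10, 0.14, 0.06, 0.12, 0.08]
v = [0.01, -0.01, 0.02, 0.00, -0.02]
w = [0.00, 0.01, -0.01, 0.00, 0.00]
tke = turbulent_kinetic_energy(u, v, w)  # ~5.2e-4 m^2/s^2 for these samples
```

A steady current with no fluctuations yields zero TKE, which corresponds to the "plants do not interact with the flow" regime.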

Keywords: seagrass, minimum patch size, turbulent kinetic energy, oscillatory flow

Procedia PDF Downloads 197
1029 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucination", the generation of outputs not grounded in the input data, which hinders their adoption in production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts using vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG approach is not suitable for tabular data and subsequent data-analysis tasks, for several reasons, including information loss, data format, and the retrieval mechanism. In this study, we explored a novel methodology that combines planning-and-execution and code-generation agents to enhance LLMs' data-analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, and then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it was able to serve market-insight and data-visualization needs with high accuracy and extensive coverage, abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real-estate professionals, aiding market understanding and enhancement without the need for programming skills.
The implications extend beyond immediate analytics, paving the way for a new era in the real-estate industry characterized by efficiency and advanced data utilization.
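The dissect-generate-execute flow described above can be sketched in miniature. The planner and code-generation "agents" below are hard-coded stand-ins for LLM calls, and the task, sub-tasks, and data rows are hypothetical; only the pipeline shape reflects the abstract.

```python
# Sketch of a planning-and-execution pipeline with a code-generation step.
# plan() and generate_code() stand in for LLM calls (hypothetical outputs).

def plan(task):
    """Planner agent: split an analytical task into ordered sub-tasks."""
    if task == "median price by district":
        return ["group rows by district", "compute median price per group"]
    return [task]

def generate_code(subtask):
    """Code-generation agent: map a sub-task to an executable code segment."""
    snippets = {
        "group rows by district": (
            "groups = {}\n"
            "for r in rows:\n"
            "    groups.setdefault(r['district'], []).append(r['price'])"
        ),
        "compute median price per group": (
            "import statistics\n"
            "result = {d: statistics.median(p) for d, p in groups.items()}"
        ),
    }
    return snippets[subtask]

def execute(task, rows):
    """Executor: run each generated segment in a shared namespace."""
    env = {"rows": rows}
    for subtask in plan(task):
        exec(generate_code(subtask), env)
    return env["result"]

rows = [{"district": "A", "price": 100}, {"district": "A", "price": 300},
        {"district": "B", "price": 200}]
result = execute("median price by district", rows)  # {'A': 200.0, 'B': 200}
```

In a real deployment the final step would also pass `result` back to the LLM to phrase the answer; here the executed output is returned directly.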

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 87
1028 A Multi-Objective Reliable Location-Inventory Capacitated Disruption Facility Problem with Penalty Cost Solved with Efficient Metaheuristic Algorithms

Authors: Elham Taghizadeh, Mostafa Abedzadeh, Mostafa Setak

Abstract:

In a logistics network, opened facilities are expected to work continuously over a long time horizon without any failure; but in real-world problems, facilities may face disruptions. This paper studies a reliable joint inventory-location problem that optimizes the cost of facility locations, customer assignment, and inventory-management decisions when facilities face failure risks and stop working. In our model, we assume that when a facility is out of work, its customers may be reassigned to other operational facilities; otherwise, they must endure high penalty costs associated with losing service. To bring the model closer to real-world problems, it is formulated on the basis of the p-median problem, and the facilities are considered to have limited capacities. We define a new binary variable (Z_is) to indicate that customers are not assigned to any facility. The problem involves a bi-objective model: the first objective minimizes the sum of facility construction costs and expected inventory holding costs, and the second minimizes the maximum expected customer cost under normal and failure scenarios. To solve this model, the NSGA-II and MOSS algorithms were applied to find the Pareto-archive solutions. Response Surface Methodology (RSM) was also applied to optimize the NSGA-II algorithm parameters. We compare the performance of the two algorithms with three metrics, and the results show that NSGA-II is more suitable for our model.
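A minimal sketch of the Pareto-archive idea underlying NSGA-II and MOSS, assuming both objectives are minimized. The (objective 1, objective 2) cost pairs below are hypothetical, not results from the paper.

```python
# Pareto (non-dominated) archive for a bi-objective minimization problem:
# objective 1 = construction + expected inventory holding cost,
# objective 2 = maximum expected customer cost across scenarios.

def dominates(a, b):
    """True if solution a is at least as good as b on every objective
    (minimization) and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_archive(solutions):
    """Keep only the non-dominated (objective1, objective2) pairs."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# Hypothetical candidate solutions as (cost1, cost2) pairs.
candidates = [(120, 40), (100, 55), (150, 30), (130, 45), (100, 60)]
archive = pareto_archive(candidates)  # [(120, 40), (100, 55), (150, 30)]
```

Metaheuristics such as NSGA-II maintain exactly such an archive across generations; (130, 45) is dropped because (120, 40) beats it on both objectives.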

Keywords: joint inventory-location problem, facility location, NSGAII, MOSS

Procedia PDF Downloads 525
1027 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval

Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle

Abstract:

Information regarding the post-mortem interval (PMI) is vital in criminal investigations to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as 'grave wax', is formed when post-mortem adipose tissue is converted into a solid material heavily comprised of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation is able to slow down the decomposition process. Therefore, analysing the changes in the patterns of fatty acids during early decomposition may make it possible to estimate the period of burial, and hence the PMI. The current study investigated the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition are associated with the duration of burial and hence may be used to estimate PMI. Adipose tissue from the abdominal region of domestic pigs (Sus scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9), and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters on a gas chromatography system. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period. Levels rose (to 116 and 60.2 mg/mL, respectively) and then fell from the second week onwards, reaching 19.3 and 18.3 mg/mL, respectively, at week 6.
Levels increased again at week 9 (66.3 and 44.1 mg/mL, respectively), followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in the concentrations, several new fatty acids appeared in the later weeks, while other fatty acids that were detectable in the time-zero sample were lost. There are several probable ways to utilise fatty acid analysis as a basic technique for approximating PMI: the quantification of marker fatty acids, and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could be an additional tool to the techniques already available for estimating the PMI of a corpse.
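The quantification step ("levels of the marker fatty acids were quantified from their respective standards") can be illustrated with a single-point external-standard calibration. The peak areas and standard concentration below are hypothetical, chosen only so the result lands on the reported time-zero C16:0 level; the study's actual standards and areas are not given in the abstract.

```python
# Single-point external-standard calibration for a GC peak:
# concentration = (sample peak area / standard peak area) * standard conc.

def concentration_from_standard(sample_area, standard_area, standard_conc):
    """Concentration (same units as standard_conc) from peak-area ratio."""
    return (sample_area / standard_area) * standard_conc

# Hypothetical: a 50.0 mg/mL C16:0 standard gives peak area 1.0e6 counts.
c16_week0 = concentration_from_standard(1.384e6, 1.0e6, 50.0)  # 69.2 mg/mL
```

The same ratio is applied week by week to each marker acid, which is what makes the method semi-quantitative rather than absolute.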

Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval

Procedia PDF Downloads 131
1026 Servitization in Machine and Plant Engineering: Leveraging Generative AI for Effective Product Portfolio Management Amidst Disruptive Innovations

Authors: Till Gramberg

Abstract:

In the dynamic world of machine and plant engineering, stagnation in the growth of new product sales compels companies to reconsider their business models. The increasing shift toward service orientation, known as "servitization," along with challenges posed by digitalization and sustainability, necessitates an adaptation of product portfolio management (PPM). Against this backdrop, this study investigates the current challenges and requirements of PPM in this industrial context and develops a framework for the application of generative artificial intelligence (AI) to enhance agility and efficiency in PPM processes. The research approach of this study is based on a mixed-method design. Initially, qualitative interviews with industry experts were conducted to gain a deep understanding of the specific challenges and requirements in PPM. These interviews were analyzed using the Gioia method, painting a detailed picture of the existing issues and needs within the sector. This was complemented by a quantitative online survey. The combination of qualitative and quantitative research enabled a comprehensive understanding of the current challenges in the practical application of machine and plant engineering PPM. Based on these insights, a specific framework for the application of generative AI in PPM was developed. This framework aims to assist companies in implementing faster and more agile processes, systematically integrating dynamic requirements from trends such as digitalization and sustainability into their PPM process. Utilizing generative AI technologies, companies can more quickly identify and respond to trends and market changes, allowing for a more efficient and targeted adaptation of the product portfolio. The study emphasizes the importance of an agile and reactive approach to PPM in a rapidly changing environment. 
It demonstrates how generative AI can serve as a powerful tool to manage the complexity of a diversified and continually evolving product portfolio. The developed framework offers practical guidelines and strategies for companies to improve their PPM processes by leveraging the latest technological advancements while maintaining ecological and social responsibility. This paper significantly contributes to deepening the understanding of the application of generative AI in PPM and provides a framework for companies to manage their product portfolios more effectively and adapt to changing market conditions. The findings underscore the relevance of continuous adaptation and innovation in PPM strategies and demonstrate the potential of generative AI for proactive and future-oriented business management.

Keywords: servitization, product portfolio management, generative AI, disruptive innovation, machine and plant engineering

Procedia PDF Downloads 82
1025 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers

Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya

Abstract:

In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos that differ only in pregnancy outcome, i.e., embryos from a single clinic that are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see whether the rich spatiotemporal information embedded in these data would allow prediction of the pregnancy outcome regardless of such critical parameters. Methodology: We performed a retrospective analysis of time-lapse data from our IVF clinic, which uses an Embryoscope exclusively for embryo culture to the blastocyst stage, with known clinical outcomes of live birth vs. nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single-transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined into a 4th-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major Findings: The performance of the model was evaluated using 100 random subsampling cross-validations (train 80%, test 20%). The prediction accuracy, averaged across the 100 permutations, exceeded 80%. We also performed a random grouping analysis, in which labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy.
Conclusion: The high accuracy in the main analysis and the low accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern associated with pregnancy outcomes, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods in capturing the complex morphokinetic changes embedded in embryo time-lapse data that contribute to successful pregnancy outcomes. Results on a larger sample size, with complementary analysis on the prediction of other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting.
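The pipeline shape (stack per-embryo time-lapse data into a tensor, extract per-embryo feature vectors, classify) can be sketched as follows. The authors' tensor decomposition and classifier are not specified in the abstract, so a simple mode-1 unfolding and a nearest-centroid rule stand in for them here; all numbers are hypothetical.

```python
# Stand-in sketch: tensor of shape (embryos x frames x features) -> one
# feature row per embryo -> two-class nearest-centroid classification.

def unfold_mode1(tensor):
    """Flatten each embryo's (frames x features) slice into one feature row."""
    return [[v for frame in slice_ for v in frame] for slice_ in tensor]

def nearest_centroid_fit(rows, labels):
    """Mean feature vector per class label."""
    centroids = {}
    for lab in set(labels):
        members = [r for r, l in zip(rows, labels) if l == lab]
        centroids[lab] = [sum(col) / len(members) for col in zip(*members)]
    return centroids

def predict(centroids, row):
    """Assign row to the class with the nearest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], row))

# Hypothetical tensor: 4 embryos x 2 frames x 2 morphokinetic features.
tensor = [[[1, 2], [3, 4]], [[1, 2], [3, 5]],
          [[5, 6], [7, 8]], [[5, 6], [7, 9]]]
labels = ["live_birth", "live_birth", "nonpregnant", "nonpregnant"]
rows = unfold_mode1(tensor)
centroids = nearest_centroid_fit(rows, labels)
pred = predict(centroids, [1, 2, 3, 4])  # "live_birth"
```

A genuine tensor decomposition (e.g., CP or Tucker) would replace the raw unfolding with a low-rank factor per embryo, but the train-then-classify flow is the same.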

Keywords: IVF, embryo, machine learning, time-lapse imaging data

Procedia PDF Downloads 92
1024 Nano-Filled Matrix Reinforced by Woven Carbon Fibers Used as a Sensor

Authors: K. Hamdi, Z. Aboura, W. Harizi, K. Khellil

Abstract:

Improving the electrical properties of organic-matrix composites has been investigated in several studies. One of the current barriers to extending the use of composites to more varied applications is their poor electrical conductivity. In carbon fiber composites, the organic matrix is responsible for the insulating behaviour of the resulting composite, and the properties of continuous carbon fiber nano-filled composites are less investigated. This work characterizes the effect of carbon black nano-fillers on the properties of woven carbon fiber composites. First of all, SEM observations were performed to localize the nano-particles; they showed that the particles penetrated into the fiber zone (Figure 1). By reaching the fiber zone, the carbon black nano-fillers created network connectivity between fibers, i.e., an easy pathway for the current. This explains the observed improvement in the electrical conductivity of the composites when carbon black is added. The measurement was performed with a four-point electrical circuit. It shows that the electrical conductivity of the 'neat' matrix composite increased from 80 S/cm to 150 S/cm with the addition of 9 wt% carbon black, and to 250 S/cm with 17 wt% of the same nano-filler. These results suggest that the composite might be used as a strain gauge: the influence of a mechanical excitation (flexion, tension) on the electrical properties of the composite can be studied by recording the variation of an electrical current passing through the material during mechanical testing. Three different configurations were tested, depending on the rate of carbon black used as nano-filler. These investigations could lead to the development of an auto-instrumented material.
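The four-point measurement converts a current and voltage reading into a bulk conductivity via sigma = I·L / (V·A). A minimal sketch follows; the specimen dimensions and readings are hypothetical, chosen so the result lands near the reported neat-matrix value (~80 S/cm).

```python
# Bulk conductivity from a four-point measurement on a bar specimen:
# sigma = I * L / (V * A), with L the gauge length between the inner
# (voltage) probes and A the cross-sectional area.

def conductivity_S_per_cm(current_A, voltage_V, length_cm, area_cm2):
    """Conductivity in S/cm; the four-point layout removes contact resistance."""
    return current_A * length_cm / (voltage_V * area_cm2)

# Hypothetical specimen: 1.0 cm between inner probes, 0.025 cm^2 cross-section.
sigma = conductivity_S_per_cm(current_A=0.1, voltage_V=0.05,
                              length_cm=1.0, area_cm2=0.025)  # 80.0 S/cm
```

For the strain-gauge use, the same relation is monitored during loading: a change in the measured V at fixed I maps directly to a change in sigma.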

Keywords: carbon fibers composites, nano-fillers, strain-sensors, auto-instrumented

Procedia PDF Downloads 411
1023 The Roman Fora in North Africa: Towards a Decision-Support Protocol for Morphological Restitution

Authors: Dhouha Laribi Galalou, Najla Allani Bouhoula, Atef Hammouda

Abstract:

This research delves into the fundamental question of the morphological restitution of built archaeology, in order to place it in its paradigmatic context and to seek answers to it. Indeed, understanding the object of study, analysing it, and devising a methodology for solving the morphological problem posed are manageable only by means of a thoughtful strategy that draws on well-defined epistemological scaffolding. In this stream, the crisis of natural reasoning in archaeology has generated multiple changes in the field, ranging from the use of new tools to the integration of archaeological information systems, where urbanization involves the interplay of several disciplines. A built archaeological topic is also an architectural and morphological object: a set of articulated elementary data whose understanding can be approached from a logicist point of view. Morphological restitution is no exception to the rule, and the interchange between the different disciplines uses the capacity of each to frame reflection on the incomplete elements of a given architecture, or on its different phases and multiple states of existence. The logicist sequence is furnished by the set of scattered or destroyed elements found, but also by what can be called a rule base, which contains the set of rules for the architectural construction of the object. The knowledge base built from the archaeological literature also provides a reference that enters into the search for forms and articulations. The choice of the Roman forum in North Africa is justified by the great urban and architectural characteristics of this entity. Research on the forum draws on a fairly large knowledge base and also provides the researcher with material to study, from a morphological and architectural point of view, from the scale of the city down to the architectural detail.
The knowledge deduced at the paradigmatic level, as well as the deduced analysis model, is then tested on a well-defined context, which grounds the experimentation in the elaboration of the morphological information container attached to the rule base and the knowledge base. The use of logicist analysis and artificial intelligence allowed us first to question aspects already known, in order to measure the credibility of our system, which remains above all a decision-support tool for the morphological restitution of Roman fora in North Africa. This paper presents a first experimentation with the model elaborated during this research, a model framed by a paradigmatic discussion that seeks to position the research in relation to existing paradigmatic and experimental knowledge on the issue.

Keywords: classical reasoning, logicist reasoning, archaeology, architecture, roman forum, morphology, calculation

Procedia PDF Downloads 147
1022 Critical Analysis of International Protections for Children from Sexual Abuse and Examination of Indian Legal Approach

Authors: Ankita Singh

Abstract:

Sex trafficking and child pornography are borderless crimes that cannot be effectively prevented through the laws and efforts of one country alone, because they require proper and smooth collaboration among countries. The eradication of international human-trafficking syndicates, the criminalisation of international cyber offenders, and an effective ban on child pornography are not possible without effective universal laws; hence, the continuous collaboration of all countries is needed to adopt and routinely update such laws. Congregating countries on an international platform from time to time is necessary, so that they can adopt international agendas and create powerful universal laws to prevent sex trafficking and child pornography in this modern digital era. In the past, some international steps have been taken through the Convention on the Rights of the Child (CRC) and the Optional Protocol to the Convention on the Rights of the Child on the Sale of Children, Child Prostitution, and Child Pornography, but in reality, these measures are quite weak and are not capable of effectively protecting children from sexual abuse in this modern and highly advanced digital era. The uncontrolled growth and misuse of artificial intelligence (AI), the lack of proper legal jurisdiction over foreign child abusers and the difficulties in their extradition, and the inadequate control over the international trade in digital child pornographic content are some prominent issues that can only be controlled through new, effective, and powerful universal laws. Due to the lack of effective international standards and of proper collaboration among countries, Indian laws are also not capable of taking effective action against child abusers. This research will be conducted through both doctrinal and empirical methods.
Various literary sources will be examined, and a questionnaire survey of Indian university students will be conducted to analyse the effectiveness of international standards and Indian laws against child pornography. The existing international norms for protecting children from sexual abuse will be critically analysed, and the work will explore why effective and strong collaboration between countries is required in modern times. It will be analysed whether the existing international steps are enough to protect children from being trafficked or subjected to pornography; if these steps are found to be insufficient, suggestions will be given on how international standards and protections can be made more effective and powerful in this digital era. India's approach to the existing international standards, the Indian laws that protect children from being subjected to pornography, and India's contributions and capacity to strengthen the international standards will also be analysed.

Keywords: child pornography, prevention of children from sexual offences act, the optional protocol to the convention on the rights of the child on the sale of children, child prostitution and child pornography, the convention on the rights of the child

Procedia PDF Downloads 39
1021 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator

Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib

Abstract:

Introduction: The use of incubators equipped with time-lapse technology in Assisted Reproductive Technology (ART) allows continuous surveillance. With morphokinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the proposed time-lapse algorithms do not take missing data into account, and some embryos therefore cannot be classified. The aim of this work is to construct a predictive model that works even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the reproductive biology laboratory of the hospital 'Femme Mère Enfant' (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n=557) obtained from couples (n=108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphokinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA); a program in Visual Basic for Applications (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to find a value that allows the prediction of blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and of the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value so as to obtain two groups between which the difference in blastocyst formation rate was maximal. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% reached the blastocyst stage.
The synthetize value corresponds to the value calculated at time T = 99, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for regression coefficient 'a', 0.363 (p < 0.001) for 'b', 0.633 (p < 0.001) for 'c', and 0.659 (p < 0.001) for the synthetize value. The results are presented as the blastocyst formation rate below the cut-off value versus above the cut-off value. For coefficient 'a', the optimum cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001); for 'b', 0.26 (83.9% versus 63.1%, p < 0.001); for 'c', -4.4 (62.2% versus 83.1%, p < 0.001); and for the synthetize value, 8.89 (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression allows the outcome of an embryo to be predicted even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo. The 'a' coefficient represents the acceleration of cell division, and the 'b' coefficient the speed of cell division. We hypothesize that the 'c' coefficient could represent the intrinsic potential of an embryo, which could depend on the oocyte from which the embryo originated. These hypotheses should be confirmed by studies analyzing the relationship between the regression coefficients and ART parameters.
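The model N = a·T² + b·T + c and the synthetize value (the fitted N at T = 99 h) can be illustrated as follows. The least-squares solver is a generic normal-equations implementation (the authors used Excel/VBA); the cell-count series is hypothetical, not patient data.

```python
# Fit N = a*T^2 + b*T + c by least squares (3x3 normal equations, pure
# Python), then evaluate the fit at T = 99 h: the "synthetize value".

def fit_quadratic(times, counts):
    """Return (a, b, c) minimizing sum (a*T^2 + b*T + c - N)^2."""
    rows = [[t * t, t, 1.0] for t in times]          # design matrix columns
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * n for r, n in zip(rows, counts)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, 3):
            f = xtx[r][col] / xtx[col][col]
            for k in range(col, 3):
                xtx[r][k] -= f * xtx[col][k]
            xty[r] -= f * xty[col]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                              # back-substitution
        tail = sum(xtx[r][k] * coeffs[k] for k in range(r + 1, 3))
        coeffs[r] = (xty[r] - tail) / xtx[r][r]
    return coeffs

def synthetize_value(a, b, c, t=99.0):
    """Predicted cell number at T = 99 h."""
    return a * t * t + b * t + c

# Hypothetical embryo: cell counts over the first three days (missing frames
# simply drop out of the point list, so the fit tolerates missing data).
times = [0, 24, 48, 72]   # hours
counts = [1, 2, 4, 8]     # cells
a, b, c = fit_quadratic(times, counts)
```

This is exactly why the approach handles missing data: the regression uses whatever (T, N) points were observed, with no fixed time grid.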

Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model

Procedia PDF Downloads 306
1020 Design of Robust and Intelligent Controller for Active Removal of Space Debris

Authors: Shabadini Sampath, Jinglang Feng

Abstract:

With huge kinetic energy, space debris poses a major threat to astronauts' activities and to spacecraft in orbit if a collision happens. Active removal of space debris is required to avoid the frequent collisions that would otherwise occur; without it, the amount of space debris will increase uncontrollably, threatening the safety of the entire space system. However, the safe and reliable removal of large-scale space debris has remained a huge challenge to date. While capturing and deorbiting space debris, the space manipulator has to achieve high control precision. However, due to uncertainties and unknown disturbances, coordinating the control of the space manipulator is difficult. To address this challenge, this paper focuses on developing a robust and intelligent control algorithm that controls joint movement and restricts it to the sliding manifold by reducing uncertainties. A neural network adaptive sliding mode controller (NNASMC) is applied with the objective of finding a control law such that the joint motions of the space manipulator follow the given trajectory. Computed torque control (CTC), an effective motion control strategy, is used in this paper to compute the space manipulator arm torque required to generate the desired motion. Based on the Lyapunov stability theorem, the proposed intelligent NNASMC and CTC guarantee the robustness and global asymptotic stability of the closed-loop control system. Finally, the controllers are modeled and simulated using MATLAB Simulink, and the results are presented to demonstrate the effectiveness of the proposed approach.
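The idea of restricting joint motion to a sliding manifold can be shown with a generic single-joint sliding-mode loop. This is not the authors' NNASMC/CTC design; the plant (a unit-inertia joint), gains, and reference are hypothetical, and the switching term here is a plain sign function rather than a neural-network adaptive one.

```python
# Generic sliding-mode control for one joint: sliding variable
# s = de + lam*e; control u = equivalent term - K*sign(s).

def smc_step(q, dq, q_ref, dq_ref, ddq_ref, lam=2.0, K=5.0):
    """One control evaluation: commanded torque for a unit-inertia joint."""
    e, de = q - q_ref, dq - dq_ref
    s = de + lam * e                      # sliding variable
    sign = (s > 0) - (s < 0)
    return ddq_ref - lam * de - K * sign  # equivalent control + switching term

def simulate(q0, dq0, steps=4000, dt=0.001):
    """Regulate the joint to q_ref = 1.0 rad under the SMC law (Euler)."""
    q, dq = q0, dq0
    for _ in range(steps):
        u = smc_step(q, dq, q_ref=1.0, dq_ref=0.0, ddq_ref=0.0)
        dq += u * dt                      # unit-inertia joint: ddq = u
        q += dq * dt
    return q

final_q = simulate(q0=0.0, dq0=0.0)       # settles close to 1.0 rad
```

Once s reaches zero, the error obeys de = -lam·e and decays exponentially regardless of bounded disturbances; the NN-adaptive variant in the paper replaces the fixed switching gain to reduce chattering.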

Keywords: GNC, active removal of space debris, AI controllers, MATLAB Simulink

Procedia PDF Downloads 132
1019 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery

Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa

Abstract:

In order to diminish health risks, it is of major importance to monitor air quality; however, this process entails high costs in physical and human resources. In this context, this research was carried out with the main objective of developing a predictive model for concentrations of inhalable particles (PM10-2.5) using remote sensing. To develop the model, satellite images, mainly from Landsat 8, of Mexico City’s Metropolitan Area were used. Using historical PM10 and PM2.5 measurements from RAMA (the Automatic Environmental Monitoring Network of Mexico City) and by processing the available satellite images, a preliminary model was generated, revealing critical opportunity areas that will allow the generation of a robust model. Applied to scenes of Mexico City, the preliminary model identified three areas of great interest due to presumably high PM concentrations: zones with high plant density, bodies of water, and soil without constructions or vegetation. Work continues on this line to improve the proposed preliminary model. In addition, a brief analysis was made of six models presented in articles developed in different parts of the world, in order to identify the optimal bands for generating a model suitable for Mexico City. Infrared bands were found to aid modeling in other cities, but their effectiveness for the geographic and climatic conditions of Mexico City is still being evaluated.

Keywords: air quality, modeling pollution, particulate matter, remote sensing

Procedia PDF Downloads 155
1018 The Impact of Quality Cost on Revenue Sharing in Supply Chain Management

Authors: Fayza M. Obied-Allah

Abstract:

Meeting customers’ needs and creating quality and value while reducing costs through supply chain management presents both challenges and opportunities for companies and researchers, and modern ideas must help counter these challenges and exploit the opportunities. This paper aims to be one such contribution. It discusses the impact of quality cost on revenue sharing as one of the most important incentives for configuring business networks. Costs directly affect the income generated by a business network, so this paper investigates the impact of quality costs on business network revenue and on the decision to share that revenue among the companies in the supply chain. The paper develops the quality cost approach to align with the modern era: the developed model comprises five categories, the four well-known ones (prevention costs, appraisal costs, internal failure costs, and external failure costs) plus a new category developed in this research as a new vision of the relationship between quality costs and industrial innovation, namely recycle cost. The paper is organized into six sections. Section I gives an overview of quality costs in the supply chain. Section II discusses revenue sharing between the parties in the supply chain. Section III investigates the impact of quality costs on the revenue sharing decision between supply chain partners. Section IV presents a survey study and its statistical results. Section V discusses the results and outlines future research opportunities. Finally, Section VI summarizes the theoretical and practical results of the paper.

Keywords: quality cost, recycle cost, revenue sharing, supply chain management

Procedia PDF Downloads 443
1017 Ways to Sustaining Self-Care of Thai Community Women to Achieve Future Healthy Aging

Authors: Manee Arpanantikul, Pennapa Unsanit, Dolrat Rujiwatthanakorn, Aporacha Lumdubwong

Abstract:

Continuously performing self-care based on the sufficiency economy philosophy throughout a woman’s life is not easy; however, there are different ways women can carry out self-care activities regularly, some individually and others in groups. Little is known about ways of sustaining self-care among women based on this fundamental principle of Thai culture. The purpose of this study was to investigate ways of sustaining self-care, based on the sufficiency economy philosophy, among Thai middle-aged women living in the community, in order to achieve future healthy aging. The study employed a qualitative research design. Twenty women who were willing to participate were recruited. Data were collected through tape-recorded in-depth interviews, field notes, and observation. All interviews were transcribed verbatim, and the data were analyzed using content analysis. The findings revealed seven themes describing ways of sustaining self-care among Thai community women to achieve future healthy aging: 1) having determination, 2) having a model, 3) developing a leader, 4) carrying on performing activities, 5) setting up rules, 6) building a self-care culture, and 7) developing a self-care group/network. The findings suggest that to achieve self-care sustainability, women should get to know themselves and have intention and belief, together with the power of community and support. Constant self-care will prevent disease and promote health throughout women’s lives.

Keywords: qualitative research, sufficiency economy philosophy, Thai middle-aged women, ways to sustaining self-care

Procedia PDF Downloads 375
1016 An End-to-end Piping and Instrumentation Diagram Information Recognition System

Authors: Taekyong Lee, Joon-Young Kim, Jae-Min Cha

Abstract:

A piping and instrumentation diagram (P&ID) is an essential design drawing describing the interconnection of process equipment and the instrumentation installed to control the process. P&IDs are modified and managed throughout the whole life cycle of a process plant. For ease of data transfer, P&IDs are generally handed over from a design company to an engineering company in portable document format (PDF), which is hard to modify. Engineering companies therefore have to devote a great deal of time and human resources solely to manually converting P&ID images into a computer-aided design (CAD) file format. To reduce the inefficiency of this conversion, the various symbols and texts in P&ID images should be recognized automatically. However, recognizing the information in P&ID images is not an easy task: a P&ID image usually contains hundreds of symbol and text objects, most of which are quite small compared to the whole image and densely packed together. Traditional recognition methods based on geometrical features are not capable of recognizing every element of a P&ID image. To overcome these difficulties, the state-of-the-art deep learning models RetinaNet and the connectionist text proposal network (CTPN) were used to build a system for recognizing symbols and texts in a P&ID image. Using RetinaNet and CTPN models carefully modified and tuned for a P&ID image dataset, the developed system recognizes texts, equipment symbols, piping symbols, and instrumentation symbols from an input P&ID image and saves the recognition results in a pre-defined extensible markup language (XML) format. In a test on a commercial P&ID image, the system correctly recognized 97% of the symbols and 81.4% of the texts.
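Recognition rates like the 97% reported imply matching detected objects against ground-truth annotations; a minimal intersection-over-union (IoU) matching sketch follows (the 0.5 threshold is a common detection convention, not necessarily the authors' exact protocol):

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def recognition_rate(predicted, ground_truth, thr=0.5):
    # Fraction of ground-truth objects matched by at least one prediction.
    hits = sum(any(iou(g, p) >= thr for p in predicted) for g in ground_truth)
    return hits / len(ground_truth)
```

The same metric applies separately to symbol boxes (from RetinaNet) and text boxes (from CTPN), which is why the two rates can differ.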

Keywords: object recognition system, P&ID, symbol recognition, text recognition

Procedia PDF Downloads 153
1015 Cybersecurity Assessment of Decentralized Autonomous Organizations in Smart Cities

Authors: Claire Biasco, Thaier Hayajneh

Abstract:

A smart city is the integration of digital technologies into urban environments to enhance the quality of life. Smart cities capture real-time information from devices, sensors, and network data to analyze and improve city functions such as traffic analysis, public safety, and environmental impact. Current smart cities face controversy due to their reliance on real-time data tracking and surveillance. Internet of Things (IoT) devices and blockchain technology are converging to reshape smart city infrastructure away from its centralized model: connecting IoT data to blockchain applications would create a peer-to-peer, decentralized model. Furthermore, blockchain technology enables IoT device data to shift from the ownership and control of centralized entities to individuals or communities through Decentralized Autonomous Organizations (DAOs). In the context of smart cities, DAOs can govern cyber-physical systems and thereby exert greater influence over how urban services are provided. This paper explores how the core components of a smart city now apply to DAOs. We also analyze different definitions of DAOs to determine their most important aspects in relation to smart cities. Both categorizations provide a solid foundation for a cybersecurity assessment of DAOs in smart cities, identifying the benefits and risks of adopting DAOs as they currently operate. The paper then provides several mitigation methods to combat the cybersecurity risks of DAO integrations. Finally, we offer several insights into the challenges the DAO and blockchain spaces will face in the coming years before achieving a higher level of maturity.

Keywords: blockchain, IoT, smart city, DAO

Procedia PDF Downloads 121
1004 Quince Seed Mucilage (QSD)/Multiwall Carbon Nanotube Hybrid Hydrogels as Novel Controlled Drug Delivery Systems

Authors: Raouf Alizadeh, Kadijeh Hemmati

Abstract:

The aim of this study is to synthesize several series of hydrogels by combining a natural polymer (quince seed mucilage, QSD) and a synthetic copolymer containing methoxy poly(ethylene glycol)-polycaprolactone (mPEG-PCL) in the presence of different amounts of multi-walled carbon nanotubes (f-MWNT). Mono-epoxide-functionalized mPEG (mPEG-EP) was synthesized and reacted with sodium azide in the presence of NH4Cl to afford mPEG-N3(-OH). Ring-opening polymerization (ROP) of ε-caprolactone (CL) with mPEG-N3(-OH) as initiator and Sn(Oct)2 as catalyst then led to mPEG-PCL-N3(-OH), which was grafted onto propargylated f-MWNT by the click reaction to obtain mPEG-PCL-f-MWNT(-OH). In the presence of mPEG-N3(-Br) and a mixture of NHS/DCC/QSD, hybrid hydrogels were successfully synthesized. The copolymers and hydrogels were characterized using techniques such as scanning electron microscopy (SEM) and thermogravimetric analysis (TGA). The gel content of the hydrogels depended on the weight ratio QSD:mPEG-PCL:f-MWNT. The swelling behavior of the prepared hydrogels was also studied as a function of pH, immersion time, and temperature; it showed significant dependence on gel content, pH, immersion time, and temperature, with the highest swelling observed at room temperature, at 60 min, and at pH 8. The loading and in-vitro release of quercetin as a model drug were investigated at pH 2.2 and 7.4; the release rate at pH 7.4 was faster than at pH 2.2. Total loading and release depended on the network structure of the hydrogels and were in the range of 65-91%. In addition, the cytotoxicity and release kinetics of the prepared hydrogels were investigated.
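The swelling and cumulative-release percentages reported are conventionally computed as below; this is a generic sketch using the standard definitions, with placeholder weights rather than the study's measurements:

```python
def swelling_percent(swollen_g, dry_g):
    # Conventional swelling ratio: mass gained relative to the dry gel.
    return 100.0 * (swollen_g - dry_g) / dry_g

def cumulative_release_percent(released_mg, loaded_mg):
    # Fraction of the loaded drug released into the medium so far.
    return 100.0 * released_mg / loaded_mg
```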

Keywords: antioxidant, drug delivery, quince seed mucilage (QSD), swelling behavior

Procedia PDF Downloads 320
1013 Development of Mobile Application for Internship Program Management Using the Concept of Model View Controller (MVC) Pattern

Authors: Shutchapol Chopvitayakun

Abstract:

Over roughly the last five years, mobile devices, mobile applications, and mobile users have grown significantly, supported by the deployment of wireless communication and cellular networks, and have been integrated with one another for multiple purposes and pervasive deployment in every business and non-business sector, such as education, medicine, travel, finance, and real estate. The objective of this study was to develop a mobile application for final-year undergraduate students who enroll in the internship program at a tertiary school and practice on site at real organizations and workspaces. During the internship session, all student interns are required to practice, drill, and train on site at specific locations on specific tasks, possibly with assignments from their supervisors; their workplaces include both private and government corporations and enterprises. The mobile application is developed as a transactional processing system that enables users to keep a daily work or practice log, monitor true working locations, and follow the daily tasks of each trainee. Moreover, it provides useful guidance from each intern’s advisor in case of emergency. Finally, it can summarize all transactional data and calculate each intern’s cumulated hours from the field practice session.
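The Model-View-Controller separation the application is built on can be sketched with illustrative classes (plain Python here, purely hypothetical names, not the actual Android implementation):

```python
class WorkLogModel:
    """Model: holds the daily practice-log records."""
    def __init__(self):
        self.entries = []
    def add(self, date, hours, task):
        self.entries.append({"date": date, "hours": hours, "task": task})
    def total_hours(self):
        return sum(e["hours"] for e in self.entries)

class WorkLogView:
    """View: renders data for display; knows nothing about storage."""
    def render(self, total):
        return f"Cumulated internship hours: {total}"

class WorkLogController:
    """Controller: mediates user actions between model and view."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def log_day(self, date, hours, task):
        self.model.add(date, hours, task)
    def summary(self):
        return self.view.render(self.model.total_hours())
```

The point of the pattern is that the model (log storage) and the view (screen rendering) can change independently, with the controller as the only coupling point.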

Keywords: internship, mobile application, Android OS, smart phone devices, mobile transactional processing system, guidance and monitoring, tertiary education, senior students, model view controller (MVC)

Procedia PDF Downloads 315
1012 Aerodynamic Modeling Using Flight Data at High Angle of Attack

Authors: Rakesh Kumar, A. K. Ghosh

Abstract:

The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack near stall has been included in the aerodynamic model using Kirchhoff’s quasi-steady stall model. The NGN method is an algorithm that combines a feed-forward neural network (FFNN) with Gauss-Newton optimization to estimate the parameters; it requires neither an a priori postulated mathematical model nor the solution of the equations of motion. The NGN method was validated on real flight data generated at moderate angles of attack before being applied to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparison with wind tunnel values and maximum likelihood estimates. Validation was also carried out by comparing the response of the measured motion variables with the response generated from the estimates under a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained, in terms of stall characteristics and aerodynamic parameters, were encouraging and reasonably accurate, establishing NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.
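The Gauss-Newton update at the core of the NGN method can be sketched on a simple lift-curve fit; the synthetic data and the linear two-parameter model below are illustrative only, not the Hansa-3 neural-network formulation:

```python
import numpy as np

# Synthetic "flight data": lift coefficient vs. angle of attack (rad),
# generated from assumed true parameters CL0 = 0.3, CL_alpha = 5.0.
rng = np.random.default_rng(0)
alpha = np.linspace(0.0, 0.2, 50)
CL_meas = 0.3 + 5.0 * alpha + rng.normal(0.0, 0.01, alpha.size)

theta = np.zeros(2)                    # [CL0, CL_alpha] initial guess
for _ in range(10):
    # Residual between measurements and current model prediction.
    residual = CL_meas - (theta[0] + theta[1] * alpha)
    # Jacobian of the model output w.r.t. the parameters.
    J = np.column_stack([np.ones_like(alpha), alpha])
    # Gauss-Newton step: solve J * delta = residual in least squares.
    theta += np.linalg.lstsq(J, residual, rcond=None)[0]
```

In NGN, the Jacobian comes from differentiating the trained FFNN output with respect to the parameters, but the update step has this same least-squares structure.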

Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling

Procedia PDF Downloads 445
1011 Educational Fieldworks towards Urban Biodiversity Preservation: Case Study of Japanese Gardens Management of Kanazawa City, Japan

Authors: Aida Mammadova, Juan Pastor Ivars

Abstract:

Japanese gardens can be considered unique hubs for preserving urban biodiversity, as they provide habitat for a diverse network of living organisms, facilitate the movement of rare species around the urban landscape, and serve as a refuge for mosses and many endangered species. For centuries, Japanese gardens were ecologically sustainable and well-organized ecosystems thanks to skilled maintenance and management. Unfortunately, owing to depopulation and ageing in Japanese society, gardens are increasingly abandoned, and there is an urgent need to raise awareness of the importance of Japanese gardens for preserving urban biodiversity. In this study, we conducted participatory educational field trips for 12 students to five gardens protected by Kanazawa City and learned about the preservation activities conducted at the governmental, municipal, and local levels. After the courses, students found a strong linkage between the gardens and traditional culture. Kanazawa City has been famous for traditional craft making and tea ceremonies for more than 400 years, and it was noticed that the cultural diversity of the city is strongly supported by the biodiversity of the gardens; loss of the gardens would lead to loss of the traditional culture. Using an experiential approach during the fieldwork, the students observed that the linkage between biological and cultural diversity depends strongly on human activity: continuous management and maintenance of the gardens is a contributing factor in the preservation of urban diversity. However, garden management is a very time- and capital-consuming process, and there is a great need to attract all levels of society to preserve urban biodiversity through participatory urbanism.

Keywords: biodiversity, conservation, educational fieldwork, Japanese gardens

Procedia PDF Downloads 212
1010 An Exploration of Why Insider Fraud Is the Biggest Threat to Your Business

Authors: Claire Norman-Maillet

Abstract:

Insider fraud, otherwise known as occupational, employee, or internal fraud, is a financial crime threat perpetrated by defrauding (or attempting to defraud) one’s current, prospective, or past employer; ‘employee’ covers anyone employed by the company, including board members and contractors. The Coronavirus pandemic has forced insider fraud into the spotlight, and it isn’t dimming. As the focus of most academics and practitioners has historically been on ‘external fraud’, insider fraud is often overlooked or not considered a real threat. However, since COVID-19 changed the working world, pushing most of us into remote or hybrid working, employers cannot easily keep an eye on what their staff are doing, which has led to reliance on trust and transparency and thus an increased risk of insider fraud perpetration. The objective of this paper is to explore why insider fraud is therefore now the biggest threat to a business. To meet the research objective, individuals working in the financial crime sector (as practitioners or consultants) attended semi-structured interviews with the researcher; the principal recruitment strategy was the researcher’s LinkedIn network. The main findings suggest that insider fraud has been ignored and rejected as a threat to business, owing to a reluctance to admit that a colleague may perpetrate it. One positive of the Coronavirus pandemic is that it has forced insider fraud into a more prominent position, giving it more importance on businesses’ agendas and risk registers. Although insider fraud has always been a possibility (and therefore a risk) within any business, it is very rare that a business has given it the attention it requires until now, if at all. The research concludes that insider fraud needs to be prioritised by all businesses, even ahead of external fraud. The research also provides advice on how a business can add new controls, or enhance existing ones, to mitigate the risk of insider fraud.

Keywords: insider fraud, occupational fraud, COVID-19, COVID, coronavirus, pandemic, internal fraud, financial crime, economic crime

Procedia PDF Downloads 64
1009 Role of Kerala’s Diaspora Philanthropy Engagement During Economic Crises

Authors: Shibinu S, Mohamed Haseeb N

Abstract:

In times of crisis, the diaspora’s role and the help it offers are vital in determining how many countries recover, particularly low- and middle-income nations that rely significantly on remittances. Twenty-one lakh twenty thousand (2.12 million) Keralites have emigrated abroad, with 81.2 percent of these outflows going to the Gulf Cooperation Council (GCC) countries; most are semi-skilled or low-skilled laborers employed in GCC nations. Additionally, a sizeable portion of migrants are employed in industrialized nations such as the UK and the US, where a highly robust Indian diaspora has developed. India’s development depends in large part on the generosity of its diaspora, and the nation has benefited greatly from the substantial contributions made by several emigrant generations, whose strength was especially noticeable during COVID-19 and the Kerala floods. The 2018 Kerala floods displaced millions of people, damaged millions of properties, and caused many deaths; the Malayalee diaspora played a crucial role in the reconstruction of Kerala by supporting the rescue efforts on the ground through its extensive worldwide network. A similar outreach was noted during COVID-19, when the diaspora assisted stranded migrants across the globe. Alongside the diaspora’s work for the state’s development and recovery, there has also been a recent outpouring of assistance during the COVID-19 pandemic. The study focuses on the subtleties of diaspora philanthropy and how it enabled Kerala to recover from the COVID-19 pandemic and the floods. Semi-structured in-depth interviews were conducted with migrants, migrant organizations, and beneficiaries of diaspora support, recruited through snowball sampling, to better understand the role diaspora philanthropy plays in times of crisis.

Keywords: crises, diaspora, remittances, COVID-19, flood, economic development of Kerala

Procedia PDF Downloads 31
1008 Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind System: Case Study

Authors: Fatemeh Deldarabdolmaleki, Mohamad Fakri Zaky Bin Ja'afar

Abstract:

Having a daylit space together with a view results in a pleasant and productive environment for office employees. A daylit space uses daylight as the basic source of illumination to fulfill users’ visual demands while minimizing electric energy consumption. Malaysia’s weather is hot and humid all year round because of its location in the equatorial belt. Nevertheless, because most commercial buildings in Malaysia are air-conditioned, huge glass windows are normally installed to maintain the physical and visual relation between inside and outside. As a result of the climate and this trend, a typical office suffers large heat gains, glare, and occupant discomfort, and balancing occupant comfort with energy conservation in a tropical climate is a real challenge. This study evaluates a venetian blind system using per-pixel analysis tools based on cut-off metrics suggested in the literature. The workplace area of a private office room was selected as a case study. An eight-day measurement experiment was conducted to investigate the effect of different venetian blind angles on an office area under daylight conditions in Serdang, Malaysia. The goal was to explore the daylight comfort of a commercially available venetian blind system, its daylight sufficiency and excess (8:00 AM to 5:00 PM), and glare. Recently developed software for analyzing high dynamic range images (HDRI, captured by a CCD camera), such as the Radiance-based Evalglare and hdrscope, helps investigate luminance-based metrics. The main key factors are illuminance and luminance levels, mean and maximum luminance, daylight glare probability (DGP), and the luminance ratio of the selected mask regions. The findings show that in most cases the morning session needs artificial lighting in order to achieve daylight comfort; however, in some conditions (e.g., 10° and 40° slat angles) the workplane illuminance exceeds the maximum of 2000 lx in the second half of the day. Generally, a rising trend is observed in mean window luminance, and the most unpleasant cases occur after 2 PM; by the luminance criteria rating, the uncomfortable conditions occur in the afternoon session. Surprisingly, even with no blinds, extreme window/task ratios are not common, and no DGP value higher than 0.35 was recorded in this experiment.
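The comfort checks described can be collected into a simple per-measurement classifier. The 2000 lx limit and the 0.35 DGP threshold are taken from the abstract; the 500 lx sufficiency floor and the window/task luminance-ratio limit of 40 are assumed rules of thumb, not the paper's values:

```python
def daylight_flags(workplane_lux, mean_window_luminance, task_luminance, dgp):
    """Classify one measurement against luminance-based comfort criteria."""
    return {
        "needs_artificial_light": workplane_lux < 500,     # assumed floor
        "oversupply": workplane_lux > 2000,                # limit in abstract
        "window_task_ratio_high": mean_window_luminance / task_luminance > 40,
        "perceptible_glare": dgp > 0.35,                   # DGP threshold cited
    }
```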

Keywords: daylighting, energy simulation, office environment, Venetian blind

Procedia PDF Downloads 256
1007 Heat Vulnerability Index (HVI) Mapping in Extreme Heat Days Coupled with Air Pollution Using Principal Component Analysis (PCA) Technique: A Case Study of Amiens, France

Authors: Aiman Mazhar Qureshi, Ahmed Rachid

Abstract:

Extreme heat events are an emerging human environmental health concern in dense urban areas subject to intense anthropogenic activity. High-spatial- and temporal-resolution heat maps are important for urban heat adaptation and mitigation, indicating hotspots that require the attention of city planners. The Heat Vulnerability Index (HVI) is an important approach used by decision-makers and urban planners to identify heat-vulnerable communities and areas that require heat stress mitigation strategies. Amiens is a medium-sized French city whose average temperature has risen by 1°C since 2000; extreme heat events were recorded in July in three consecutive years, 2018, 2019, and 2020, and poor air quality, especially ground-level ozone, has been observed mainly during the same hot period. In this study, we evaluated the HVI in Amiens during the extreme heat days recorded in those three years. The principal component analysis (PCA) technique was used for fine-scale vulnerability mapping. The main data considered for developing the HVI model are (a) socio-economic and demographic data; (b) air pollution; (c) land use and cover; (d) elderly heat illness; (e) social vulnerability; and (f) remote sensing data (land surface temperature (LST), mean elevation, NDVI, and NDWI). The output maps identify the hot zones through comprehensive GIS analysis. The resulting map shows that high HVI occurs in three typical areas: (1) areas of high population density and little vegetation cover, (2) artificial (built-up) surfaces, and (3) industrial zones that release thermal energy and ground-level ozone, while low HVI is found in natural landscapes such as rivers and grasslands. The study also presents a causal diagram, in the spirit of systems theory, in which anthropogenic activities and air pollution correspond with extreme heat events in the city. The suggested index can be a useful tool to guide urban planners, municipalities, decision-makers, and public health professionals in targeting areas at high risk of extreme heat and air pollution for future adaptation and mitigation interventions.
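The PCA-based index construction can be sketched generically with a NumPy SVD; the indicator matrix below is made up, and the authors' variable set and weighting scheme may differ:

```python
import numpy as np

# Made-up indicator matrix: 100 spatial zones x 5 vulnerability indicators
# (e.g., population density, elderly share, LST, NDVI, pollution level).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))

# Standardize each indicator so units do not dominate the components.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via SVD: principal component scores and explained-variance weights.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt.T
var_ratio = s**2 / np.sum(s**2)

# One common composite: variance-weighted sum of component scores per zone,
# which can then be classed into quantiles and mapped in GIS.
hvi = scores @ var_ratio
```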

Keywords: heat vulnerability index, heat mapping, heat health-illness, remote sensing, urban heat mitigation

Procedia PDF Downloads 148
1006 Understanding Retail Benefits Trade-offs of Dynamic Expiration Dates (DED) Associated with Food Waste

Authors: Junzhang Wu, Yifeng Zou, Alessandro Manzardo, Antonio Scipioni

Abstract:

Dynamic expiration dates (DEDs) play an essential role in reducing food waste in the context of a sustainable cold chain and food system. However, the trade-offs in retail benefits when setting the expiration date of fresh food products are unknown. This study aims to develop a multi-dimensional decision-making model that integrates DEDs with food waste based on wireless sensor network technology. The model considers the initial quality of fresh food and the rate of change of food quality with storage temperature as cross-independent variables, to identify the potential impacts on food waste in retail of applying a DEDs system. The results show that the retail benefits of a DEDs system depend on the scenario, despite the advanced technology. Under DEDs, the storage temperature of the retail shelf is the leading driver of the food waste rate, followed by the rate of change of food quality and the initial quality of the food products. We found that a DEDs system can reduce food waste when food products are stored in lower-temperature areas, and that the potential for food savings over an extended replenishment cycle is significantly greater than with fixed expiration dates (FEDs). On the other hand, the information-sharing approach of a DEDs system is relatively limited in improving the sustainability performance of food waste in retail and may even mislead consumers’ choices. The research provides a comprehensive understanding to support the techno-economic choice of DEDs associated with food waste in retail.
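A minimal sketch of how a dynamic expiration date can follow shelf temperature, assuming zero-order quality loss with an Arrhenius-type rate; all constants are illustrative assumptions, not values from the study:

```python
import math

Q0, Q_min = 100.0, 60.0      # initial quality and acceptability limit (assumed)
k_ref, T_ref = 5.0, 277.15   # decay rate (quality units/day) at 4 °C (assumed)
Ea_over_R = 8000.0           # activation energy / gas constant, in K (assumed)

def decay_rate(T_kelvin):
    # Arrhenius temperature correction of the reference decay rate.
    return k_ref * math.exp(-Ea_over_R * (1.0 / T_kelvin - 1.0 / T_ref))

def dynamic_expiry_days(T_kelvin):
    """Days until quality falls from Q0 to Q_min at a constant temperature."""
    return (Q0 - Q_min) / decay_rate(T_kelvin)
```

With sensor-reported shelf temperature as the input, the expiry extends on cold shelves and shortens on warm ones, which is the mechanism behind the temperature-dependent waste results above.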

Keywords: dynamic expiry dates (DEDs), food waste, retail benefits, fixed expiration dates (FEDs)

Procedia PDF Downloads 114
1005 Global Healthcare Village Based on Mobile Cloud Computing

Authors: Laleh Boroumand, Muhammad Shiraz, Abdullah Gani, Rashid Hafeez Khokhar

Abstract:

Cloud computing, the delivery of hardware and software as a service over a network, has applications in health care. The emergency cases reported in most medical centers call for an efficient scheme that makes health data available with a short response time. To this end, we propose a mobile global healthcare village (MGHV) model that combines the components of three deployment models, namely country, continent, and global health clouds, to help solve this problem. In the continent model, two data centers are created, one local and one global: the local data center serves the requests of residents within the continent, whereas the global one serves the requests of others. With the methods adopted, relevant medical data are assured to be available to patients, specialists, and emergency staff regardless of location and time. From our intensive simulation experiments, the broker policy optimized for response time yields very good performance in terms of reduced response time. Our results remain comparable to others as the number of virtual machines increases (80-640 virtual machines), with the increase in response time staying within 9%. The simulation results show that utilizing MGHV reduces health care expenditures and helps address the shortage of qualified medical staff faced by both developed and developing countries.
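An "optimized response time" service broker of the kind evaluated can be sketched as routing each request to the data center with the lowest estimated response time; the latency and queue figures below are illustrative, not the study's simulation configuration:

```python
def pick_datacenter(latency_ms, queue_len, service_ms):
    # Estimated response time = network latency + queued work ahead of us.
    est = {dc: latency_ms[dc] + queue_len[dc] * service_ms[dc]
           for dc in latency_ms}
    return min(est, key=est.get)

# A congested local center can lose to a distant but idle global one.
choice = pick_datacenter(
    latency_ms={"local": 20, "global": 120},
    queue_len={"local": 50, "global": 5},
    service_ms={"local": 10, "global": 10},
)
# local: 20 + 50*10 = 520 ms; global: 120 + 5*10 = 170 ms
```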

Keywords: mobile cloud computing (MCC), e-healthcare, availability, response time, service broker policy

Procedia PDF Downloads 377
1004 Systematic Review of Digital Interventions to Reduce the Carbon Footprint of Primary Care

Authors: Anastasia Constantinou, Panayiotis Laouris, Stephen Morris

Abstract:

Background: Climate change has been reported as one of the worst threats to healthcare. The healthcare sector is a significant contributor to greenhouse gas emissions with primary care being responsible for 23% of the NHS’ total carbon footprint. Digital interventions, primarily focusing on telemedicine, offer a route to change. This systematic review aims to quantify and characterize the carbon footprint savings associated with the implementation of digital interventions in the setting of primary care. Methods: A systematic review of published literature was conducted according to PRISMA (Preferred Reporting Item for Systematic Reviews and Meta-Analyses) guidelines. MEDLINE, PubMed, and Scopus databases as well as Google scholar were searched using key terms relating to “carbon footprint,” “environmental impact,” “sustainability”, “green care”, “primary care,”, and “general practice,” using citation tracking to identify additional articles. Data was extracted and analyzed in Microsoft Excel. Results: Eight studies were identified conducted in four different countries between 2010 and 2023. Four studies used interventions to address primary care services, three studies focused on the interface between primary and specialist care, and one study addressed both. Digital interventions included the use of mobile applications, online portals, access to electronic medical records, electronic referrals, electronic prescribing, video-consultations and use of autonomous artificial intelligence. Only one study carried out a complete life cycle assessment to determine the carbon footprint of the intervention. It estimate that digital interventions reduced the carbon footprint at primary care level by 5.1 kgCO2/visit, and at the interface with specialist care by 13.4 kg CO₂/visit. 
When assessing the relationship between travel distance saved and savings in emissions, we identified a strong correlation, suggesting that most of the carbon footprint reduction is attributable to reduced travel. However, two studies also commented on environmental savings associated with reduced use of paper. Patient savings in the form of reduced fuel costs and reduced travel time were also identified. Conclusion: All studies identified significant reductions in carbon footprint following implementation of digital interventions. In the future, controlled, prospective studies incorporating complete life cycle assessments and accounting for double-consulting effects, use of additional resources, technical failures, quality of care, and cost-effectiveness are needed to fully appreciate the sustainability benefits of these interventions.
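The travel-driven mechanism behind the reported savings can be sketched in a few lines. This is an illustrative calculation, not taken from the review: the per-kilometre emission factor below is an assumed average-passenger-car figure, and the function names are hypothetical.

```python
# Illustrative sketch: CO2 avoided when one in-person visit is replaced
# by a remote consultation, driven entirely by avoided patient travel.
# The emission factor is an assumption, not a figure from the review.
CAR_EMISSION_FACTOR_KG_PER_KM = 0.17  # assumed average car, kg CO2 per km


def travel_co2_saved(round_trip_km: float,
                     factor: float = CAR_EMISSION_FACTOR_KG_PER_KM) -> float:
    """CO2 (kg) avoided for one visit replaced by a remote alternative."""
    return round_trip_km * factor


# Under this assumed factor, a 30 km avoided round trip saves about
# 5.1 kg CO2 -- the same order of magnitude as the per-visit saving
# reported for primary care.
print(round(travel_co2_saved(30), 1))  # → 5.1
```

A linear model like this also explains why travel distance saved and emission savings correlate so strongly across studies.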

Keywords: carbon footprint, environmental impact, primary care, sustainable healthcare

Procedia PDF Downloads 62
1003 The Effect of Global Value Chain Participation on Environment

Authors: Piyaphan Changwatchai

Abstract:

Global value chains are important for the current world economy through foreign direct investment. Multinational enterprises' search for efficient locations for each stage of production leads to global production networks and greater global value chain participation by several countries. Global value chain participation affects participating countries in several aspects, including the environment, and the direction of this environmental effect is ambiguous. As a result, this research aims to study the effect of global value chain participation on countries' CO₂ and methane emissions using quantitative analysis with secondary panel data for sixty countries. The analysis is divided into two types of global value chain participation: forward participation and backward participation. The results show that, for forward global value chain participation, GDP per capita affects both pollutants in an inverted-U (downward bell curve) pattern, and forward participation negatively affects both CO₂ and methane emissions. For backward global value chain participation, GDP per capita likewise affects both pollutants in an inverted-U pattern, but backward participation negatively affects methane emissions only. However, when considering Asian countries, forward global value chain participation positively affects CO₂ emissions. This research recommends that countries participating in global value chains promote production with effective environmental management at each stage of the value chain. Examples of such policies are providing incentives to private sectors, including domestic producers and MNEs, for green production technology and efficient environmental management, and engaging in international agreements on green production. Furthermore, governments should regulate each stage of production in the value chain toward green production, especially in Asian countries.
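The "inverted-U" relationship described above corresponds to a panel regression with a quadratic GDP term whose squared coefficient is negative, alongside the GVC participation term. A minimal sketch on synthetic data (the model form and variable names are illustrative assumptions, not the paper's specification):

```python
# Minimal sketch of the regression structure implied by the abstract:
# emissions ~ GDP + GDP^2 + GVC participation, where a negative GDP^2
# coefficient produces the "downward bell curve" and a negative GVC
# coefficient matches the reported forward-participation effect.
# Data are synthetic; coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 300
gdp = rng.uniform(1, 10, n)       # GDP per capita (arbitrary units)
gvc = rng.uniform(0, 1, n)        # forward GVC participation share
co2 = 2.0 * gdp - 0.15 * gdp**2 - 1.0 * gvc + rng.normal(0, 0.3, n)

# OLS with intercept, GDP, GDP^2, and GVC participation
X = np.column_stack([np.ones(n), gdp, gdp**2, gvc])
beta, *_ = np.linalg.lstsq(X, co2, rcond=None)

# Negative GDP^2 coefficient -> bell shape; negative GVC coefficient
# -> participation reduces emissions, as reported for forward GVC.
print(beta[2] < 0, beta[3] < 0)  # → True True
```

A real panel specification would add country and year fixed effects; this pooled OLS sketch only shows where the bell-curve and participation effects live in the model.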

Keywords: CO₂ emission, environment, global value chain participation, methane emission

Procedia PDF Downloads 191
1002 Short-Term Forecast of Wind Turbine Production with Machine Learning Methods: Direct Approach and Indirect Approach

Authors: Mamadou Dione, Eric Matzner-lober, Philippe Alexandre

Abstract:

The Energy Transition Act defined by the French State has precise implications for renewable energies, in particular for their remuneration mechanism. Until now, a purchase obligation contract permitted the sale of wind-generated electricity at a fixed rate. Tomorrow, it will be necessary to sell this electricity on the market (at variable rates) before obtaining additional compensation intended to reduce the risk. This sale on the market requires announcing, about 48 hours in advance, the production that will be delivered to the network, and therefore requires short-term prediction of this production. The fundamental problem remains the variability of the wind, accentuated by the geographical situation. The objective of the project is to provide, every day, short-term forecasts (48-hour horizon) of wind production using weather data. The predictions of the GFS model and those of the ECMWF model are used as explanatory variables. The variable to be predicted is the production of a wind farm. We take two approaches: a direct approach that predicts wind generation directly from weather data, and an indirect approach that estimates wind speed from weather data and converts it into wind power via power curves. We used machine learning techniques to predict this production. The models tested are random forests, CART + Bagging, CART + Boosting, and SVM (Support Vector Machine). The application is made on a wind farm of 22 MW (11 wind turbines) of the Compagnie du Vent (now Engie Green France). Our results are very conclusive compared to the literature.
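The direct and indirect approaches can be contrasted in a short sketch. This uses synthetic data and a simplified cubic power curve; the real study uses GFS/ECMWF forecasts and the farm's actual power curves, so all parameters below (cut-in speed, rated speed, rated power) are illustrative assumptions.

```python
# Schematic comparison of the two forecasting approaches on synthetic data.
# Direct:   NWP weather features -> production, in one learned model.
# Indirect: NWP weather features -> wind speed model -> power curve.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
wind_nwp = rng.uniform(0, 25, n)              # forecast wind speed (m/s)
wind_true = wind_nwp + rng.normal(0, 1.0, n)  # actual wind at the farm


def power_curve(v, rated_mw=22.0, cut_in=3.0, rated_v=12.0, cut_out=25.0):
    """Simplified farm power curve: cubic ramp between cut-in and rated speed,
    zero below cut-in and above cut-out. Parameters are illustrative."""
    v = np.clip(v, 0, None)
    p = rated_mw * ((v - cut_in) / (rated_v - cut_in)) ** 3
    return np.where((v < cut_in) | (v > cut_out), 0.0, np.clip(p, 0, rated_mw))


production = power_curve(wind_true)           # observed production (MW)
X = wind_nwp.reshape(-1, 1)

# Direct approach: learn weather -> production in one step
direct = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, production)
direct_pred = direct.predict(X)

# Indirect approach: learn weather -> wind speed, then apply the power curve
speed_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, wind_true)
indirect_pred = power_curve(speed_model.predict(X))

# Both forecasts stay within the farm's physical range [0, 22 MW]
print(direct_pred.min() >= 0, direct_pred.max() <= 22.0)  # → True True
```

The trade-off the abstract points at: the indirect route injects physical knowledge (the power curve) but compounds two error sources, while the direct route lets the model absorb the curve's nonlinearity from data.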

Keywords: forecast aggregation, machine learning, spatio-temporal dynamics modeling, wind power forecast

Procedia PDF Downloads 217