Search results for: artificial intelligence
35 Effectiveness of Gamified Simulators in the Health Sector
Authors: Nuno Biga
Abstract:
The integration of serious games with gamification in management education and training has gained significant importance in recent years as innovative strategies are sought to improve target audience engagement and learning outcomes. This research builds on the author's previous work in this field and presents a case study that evaluates the ex-post impact of a sample of applications of the BIGAMES management simulator in the training of top managers from various hospital institutions. The methodology includes evaluating the reaction of participants after each edition of BIGAMES Accident & Emergency (A&E) carried out over the last 3 years, as well as monitoring the career path of a significant sample of participants and their feedback more than a year after their experience with this simulator. Control groups will be set up, according to the type of role their members held when they took part in the BIGAMES A&E simulator: Administrators, Clinical Directors and Nursing Directors. Former participants are invited to answer a questionnaire structured for this purpose, where they are asked, among other questions, about the importance and impact that the BIGAMES A&E simulator has had on their professional activity. The research methodology also includes an exhaustive literature review, focusing on empirical studies in the field of education and training in management and business that investigate the effectiveness of gamification and serious games in improving learning, team collaboration, critical thinking, problem-solving skills and overall performance, with a focus on training contexts in the health sector. The results of the research carried out show that gamification and serious games that simulate real scenarios, such as Business Interactive Games - BIGAMES©, can significantly increase the motivation and commitment of participants, stimulating the development of transversal skills, the mobilization of group synergies and the acquisition and retention of knowledge through interactive user-centred scenarios. Individuals who participate in game-based learning series show a higher level of commitment to learning because they find these teaching methods more enjoyable and interactive. This research study aims to demonstrate that, as executive education and training programs develop to meet the current needs of managers, gamification and serious games stand out as effective means of bridging the gap between traditional teaching methods and modern educational and training requirements. To this end, this research evaluates the medium/long-term effects of gamified learning on the professional performance of participants in the BIGAMES simulator applied to healthcare. Based on the conclusions of the evaluation of the effectiveness of training using gamification and taking into account the results of the opinion poll of former A&E participants, this research study proposes an integrated approach for the transversal application of the A&E Serious Game in various educational contexts, covering top management (traditionally the target audience of BIGAMES A&E), middle and operational management in healthcare institutions (functional area heads and professionals with career development potential), as well as higher education in medicine and nursing courses. 
The integrated solution called “BIGAMES A&E plus”, developed as part of this research, includes the digitalization of key processes and the incorporation of AI.
Keywords: artificial intelligence (AI), executive training, gamification, higher education, management simulators, serious games (SG), training effectiveness
Procedia PDF Downloads 123
34 Application of the Pattern Method to Form the Stable Neural Structures in the Learning Process as a Way of Solving Modern Problems in Education
Authors: Liudmyla Vesper
Abstract:
The problems of modern education are large-scale and diverse. The aspirations of parents, teachers, and experts converge: everyone is interested in raising a generation of well-rounded, well-educated persons. Both the family and society expect the future generation to be self-sufficient, desirable in the labor market, and capable of lifelong learning. Today's children have a powerful potential that is difficult to realize under traditional school approaches. Focusing on STEM education in practice often ends with the simple use of computers and gadgets during class. "Science", "technology", "engineering" and "mathematics" are difficult to combine within school and university curricula, which have not changed much during the last 10 years. Solving the problems of modern education largely depends on teacher-innovators and teacher-practitioners who develop and implement effective educational methods and programs, and who propose innovative pedagogical practices that allow students to master large-scale knowledge and apply it in practice. Effective education requires the creation of stable neural structures during the learning process, which allow knowledge to be preserved and expanded throughout life. The author proposes a method of integrated lesson-cases based on maths patterns for forming a holistic perception of the world. This method and program are scientifically substantiated and have more than 15 years of practical application experience in school and university classrooms. The first results of the practical application of the author's methodology and curriculum were announced at the International Conference "Teaching and Learning Strategies to Promote Elementary School Success" (April 22-23, 2006, Yerevan, Armenia), held under the IREX-administered 2004-2006 Multiple Component Education Project. The program is based on the concept of interdisciplinary connections and their implementation in the process of continuous learning. This allows students to retain and build on knowledge throughout life according to a single pattern. The pattern principle stores information on different subjects according to one scheme (pattern), using long-term memory; this is how stable neural structures are created. The author also suggests that a similar method could be successfully applied to the training of artificial intelligence neural networks. However, this assumption requires further research and verification. The educational method and program proposed by the author meet modern requirements for education, which involve mastering various areas of knowledge from an early age. This approach makes it possible to engage the child's cognitive potential as fully as possible and direct it to the preservation and development of individual talents. According to the methodology, at the early stages of learning students understand the connections between school subjects (the so-called "sciences" and "humanities") and real life, and apply the knowledge gained in practice. This approach allows students to realize their natural creative abilities and talents, which makes it easier to navigate professional choices and find their place in life.
Keywords: science education, maths education, AI, neuroplasticity, innovative education problem, creativity development, modern education problem
Procedia PDF Downloads 61
33 Big Data Applications for Transportation Planning
Authors: Antonella Falanga, Armando Cartenì
Abstract:
"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning
Procedia PDF Downloads 60
32 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition
Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can
Abstract:
To effectively combat climate change, many countries around the world have committed to a decarbonisation of their electricity, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity to effectively combat climate change, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in the power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for its application in realistic-sized power system models. To meet these challenges, there is an increasing need for developing efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method. One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of mixed-integer nature, and therefore solving them requires significant amounts of time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems, and trained a binary classifier that determines the value of the binary variables, based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a study case based on the HRP 38-bus test system shows that the binary classifier has an accuracy above 97% for estimating the value of the binary variables. Since the linearly relaxed version of the subproblems can be solved with significantly less time than the integer programming counterpart, the integration of the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models.Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning
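A minimal, illustrative sketch of the classifier step described in this abstract, not the authors' implementation: a binary classifier is trained on features of linearly relaxed subproblem solutions and is only allowed to fix binary variables when its prediction is decisive, which is one simple way to keep an exact fallback available (the abstract does not detail the actual integration scheme). The feature names, synthetic labels and trust threshold below are assumptions.

```python
# Illustrative sketch (not the authors' code): a binary classifier that maps
# features of a linearly relaxed subproblem solution to values of the binary
# investment variables, with a fallback when the prediction is not trusted.
# Feature names, labels and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: each row holds features extracted from a relaxed
# subproblem solution (e.g., the fractional value of a candidate line and a
# reduced-cost proxy); the label is the value the binary variable took in the
# exact mixed-integer solution.
n_samples = 2000
fractional_value = rng.uniform(0.0, 1.0, n_samples)
reduced_cost = rng.normal(0.0, 1.0, n_samples)
X = np.column_stack([fractional_value, reduced_cost])
y = (fractional_value + 0.1 * reduced_cost
     + rng.normal(0.0, 0.05, n_samples) > 0.5).astype(int)  # stand-in labels

clf = LogisticRegression().fit(X, y)

def fix_binaries(relaxed_features, trust_threshold=0.9):
    """Return (fixed_values, mask_of_trusted_predictions).

    Variables whose predicted probability is not decisive are left to the
    exact mixed-integer solver, so the column-generation loop can still
    certify optimality for those variables.
    """
    proba = clf.predict_proba(relaxed_features)[:, 1]
    trusted = (proba > trust_threshold) | (proba < 1.0 - trust_threshold)
    fixed = (proba > 0.5).astype(int)
    return fixed, trusted

# Example: features from a new relaxed subproblem solution.
new_features = np.array([[0.93, 0.2], [0.48, -0.1], [0.07, 0.5]])
values, trusted = fix_binaries(new_features)
print(values, trusted)  # e.g. [1 0 0] [ True False  True]
```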
Procedia PDF Downloads 83
31 Variations in Spatial Learning and Memory across Natural Populations of Zebrafish, Danio rerio
Authors: Tamal Roy, Anuradha Bhat
Abstract:
Cognitive abilities aid fishes in foraging, avoiding predators & locating mates. Factors like predation pressure & habitat complexity govern learning & memory in fishes. This study aims to compare spatial learning & memory across four natural populations of zebrafish. Zebrafish, a small cyprinid inhabits a diverse range of freshwater habitats & this makes it amenable to studies investigating role of native environment in spatial cognitive abilities. Four populations were collected across India from waterbodies with contrasting ecological conditions. Habitat complexity of the water-bodies was evaluated as a combination of channel substrate diversity and diversity of vegetation. Experiments were conducted on populations under controlled laboratory conditions. A square shaped spatial testing arena (maze) was constructed for testing the performance of adult zebrafish. The square tank consisted of an inner square shaped layer with the edges connected to the diagonal ends of the tank-walls by connections thereby forming four separate chambers. Each of the four chambers had a main door in the centre. Each chamber had three sections separated by two windows. A removable coloured window-pane (red, yellow, green or blue) identified each main door. A food reward associated with an artificial plant was always placed inside the left-hand section of the red-door chamber. The position of food-reward and plant within the red-door chamber was fixed. A test fish would have to explore the maze by taking turns and locate the food inside the right-side section of the red-door chamber. Fishes were sorted from each population stock and kept individually in separate containers for identification. At a time, a test fish was released into the arena and allowed 20 minutes to explore in order to find the food-reward. In this way, individual fishes were trained through the maze to locate the food reward for eight consecutive days. The position of red door, with the plant and the reward, was shuffled every day. Following training, an intermission of four days was given during which the fishes were not subjected to trials. Post-intermission, the fishes were re-tested on the 13th day following the same protocol for their ability to remember the learnt task. Exploratory tendencies and latency of individuals to explore on 1st day of training, performance time across trials, and number of mistakes made each day were recorded. Additionally, mechanism used by individuals to solve the maze each day was analyzed across populations. Fishes could be expected to use algorithm (sequence of turns) or associative cues in locating the food reward. Individuals of populations did not differ significantly in latencies and tendencies to explore. No relationship was found between exploration and learning across populations. High habitat-complexity populations had higher rates of learning & stronger memory while low habitat-complexity populations had lower rates of learning and much reduced abilities to remember. High habitat-complexity populations used associative cues more than algorithm for learning and remembering while low habitat-complexity populations used both equally. The study, therefore, helped understand the role of natural ecology in explaining variations in spatial learning abilities across populations.Keywords: algorithm, associative cue, habitat complexity, population, spatial learning
Procedia PDF Downloads 284
30 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence
Authors: Gus Calderon, Richard McCreight, Tammy Schwartz
Abstract:
Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements which can be mitigated. Geospatial data from FireWatch’s defensible space maps was combined with Black Swan’s patented approach using 39 other risk characteristics into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: Fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction and it starts with high-quality data and education.Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk.
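A minimal sketch of the kind of spectral vegetation index computation mentioned above, shown here with the standard NDVI formula; it is illustrative only and not FireWatch's proprietary algorithm, and the band values and fuel-condition thresholds are assumptions.

```python
# Illustrative sketch (not FireWatch's proprietary pipeline): computing a
# spectral vegetation index (NDVI) from red and near-infrared bands and
# classifying pixels into coarse fuel-condition classes. Band values and
# class thresholds are hypothetical.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    denom[denom == 0] = np.nan  # avoid division by zero on empty pixels
    return (nir - red) / denom

def classify_fuel_condition(index: np.ndarray) -> np.ndarray:
    """Map NDVI to simple classes: 0 = bare/non-vegetated, 1 = cured/dry, 2 = green."""
    classes = np.zeros(index.shape, dtype=int)
    classes[(index >= 0.2) & (index < 0.5)] = 1   # stressed or cured vegetation
    classes[index >= 0.5] = 2                     # healthy green vegetation
    return classes

# Example with a tiny synthetic 2x2 tile (reflectance values in [0, 1]).
nir_band = np.array([[0.60, 0.45], [0.30, 0.10]])
red_band = np.array([[0.10, 0.25], [0.25, 0.09]])
index = ndvi(nir_band, red_band)
print(index)
print(classify_fuel_condition(index))
```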
Procedia PDF Downloads 106
29 Internet of Things, Edge and Cloud Computing in Rock Mechanical Investigation for Underground Surveys
Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo
Abstract:
Rock mechanical investigation is one of the most crucial activities in underground operations, especially in surveys related to hydrocarbon exploration and production, geothermal reservoirs, energy storage, mining, and geotechnics. There is a wide range of traditional methods for driving, collecting, and analyzing rock mechanics data. However, these approaches may not be suitable or work perfectly in some situations, such as fractured zones. Cutting-edge technologies have been provided to solve and optimize the mentioned issues. Internet of Things (IoT), Edge, and Cloud Computing technologies (ECt & CCt, respectively) are among the most widely used and new artificial intelligence methods employed for geomechanical studies. IoT devices act as sensors and cameras for real-time monitoring and mechanical-geological data collection of rocks, such as temperature, movement, pressure, or stress levels. Structural integrity, especially for cap rocks within hydrocarbon systems, and rock mass behavior assessment, to further activities such as enhanced oil recovery (EOR) and underground gas storage (UGS), or to improve safety risk management (SRM) and potential hazards identification (P.H.I), are other benefits from IoT technologies. EC techniques can process, aggregate, and analyze data immediately collected by IoT on a real-time scale, providing detailed insights into the behavior of rocks in various situations (e.g., stress, temperature, and pressure), establishing patterns quickly, and detecting trends. Therefore, this state-of-the-art and useful technology can adopt autonomous systems in rock mechanical surveys, such as drilling and production (in hydrocarbon wells) or excavation (in mining and geotechnics industries). Besides, ECt allows all rock-related operations to be controlled remotely and enables operators to apply changes or make adjustments. It must be mentioned that this feature is very important in environmental goals. More often than not, rock mechanical studies consist of different data, such as laboratory tests, field operations, and indirect information like seismic or well-logging data. CCt provides a useful platform for storing and managing a great deal of volume and different information, which can be very useful in fractured zones. Additionally, CCt supplies powerful tools for predicting, modeling, and simulating rock mechanical information, especially in fractured zones within vast areas. Also, it is a suitable source for sharing extensive information on rock mechanics, such as the direction and size of fractures in a large oil field or mine. The comprehensive review findings demonstrate that digital transformation through integrated IoT, Edge, and Cloud solutions is revolutionizing traditional rock mechanical investigation. These advanced technologies have empowered real-time monitoring, predictive analysis, and data-driven decision-making, culminating in noteworthy enhancements in safety, efficiency, and sustainability. Therefore, by employing IoT, CCt, and ECt, underground operations have experienced a significant boost, allowing for timely and informed actions using real-time data insights. The successful implementation of IoT, CCt, and ECt has led to optimized and safer operations, optimized processes, and environmentally conscious approaches in underground geological endeavors.Keywords: rock mechanical studies, internet of things, edge computing, cloud computing, underground surveys, geological operations
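A minimal sketch of the edge-side processing pattern described above, in which sensor readings are aggregated locally and only a compact summary or alert is passed upstream; it is illustrative only, and the sensor quantity, units and alert threshold are assumptions rather than values from any real deployment.

```python
# Illustrative sketch (not a specific commercial system): an edge-computing
# style aggregation of IoT stress-sensor readings, raising an alert when a
# rolling mean exceeds a threshold. Sensor names, units and the threshold
# are hypothetical.
from collections import deque
from statistics import mean

class EdgeStressMonitor:
    def __init__(self, window_size: int = 5, alert_threshold_mpa: float = 45.0):
        self.window = deque(maxlen=window_size)  # keep only recent readings
        self.alert_threshold_mpa = alert_threshold_mpa

    def ingest(self, reading_mpa: float) -> dict:
        """Process one reading locally and return a compact summary to send upstream."""
        self.window.append(reading_mpa)
        rolling_mean = mean(self.window)
        return {
            "rolling_mean_mpa": round(rolling_mean, 2),
            "alert": rolling_mean > self.alert_threshold_mpa,
        }

# Example stream from a downhole stress sensor (MPa).
monitor = EdgeStressMonitor()
for value in [38.2, 40.1, 44.7, 47.3, 49.0, 50.2]:
    print(monitor.ingest(value))
```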
Procedia PDF Downloads 59
28 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model
Authors: M. Reza Hashemi, Chris Small, Scott Hayward
Abstract:
The Northeast Coast of the US faces damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial levels of damage to the region; most notably of which were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and recently Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating Waves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcast). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual structure basis for an area of interest. The required inputs for the coastal risk model included a detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, effect of natural dunes on coastal risk was investigated. The resulting damage maps for the area (Charlestown) clearly showed that the dune eroded scenarios affected more structures, and increased the estimated damage. The system was also tested in forecast mode for a large Nor’Easters: Stella (March 2017). The results showed a good performance of the coupled model in forecast mode when compared to observations. Finally, a nearshore model XBeach was then nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach, on the basis of a unique beach profile dataset at the region. XBeach showed a relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods that were recommended in a recent study of coastal erosion in New England: beach nourishment, coastal bank (engineered core), and submerged breakwater as well as artificial surfing reef. It was shown that beach nourishment and coastal banks perform better to mitigate shoreline retreat and coastal erosion.Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines
Procedia PDF Downloads 115
27 A Case Study of a Rehabilitated Child by Joint Efforts of Parents and Community
Authors: Fouzia Arif, Arif S. Mohammad, Hifsa Altaf, Lubna Raees
Abstract:
Introduction: The term "disability", refers to any condition that impedes the completion of daily tasks using traditional methods. In developing countries like Pakistan, disable population is usually excluded from the mainstream. In squatter settlements the situation is more critical. Sultanabad is one of the squatter settlements of Karachi. Purpose of case study is to improve the health of disabled children’s, and create awareness among the parents and community. Through a household visit, Shiraz, a young disabled boy of 15.5 years old was identified. Her mother articulated that her son was living normally and happily with his parents two years back. When he was 13 years old and student of class 8th, both his legs were traumatized in a Railway Train Accident while playing cricket. He got both femoral shaft fractured severely. He was taken to Jinnah Post Graduate Medical Centre (JPMC) where his left leg was amputated at above knee level and right leg was opened & fixed by reduction internally, luckily bone healed moderately with the passage of time. Methods: In Squatter settlements of Karachi Sultanabad, a survey was conducted in two sectors. Disability screening questionnaire was developed, collaboration with community through household visits, outreach sessions 23cases of disabled were identified who were socialized through sports, Musical program and get-together was organized with stockholder for creating awareness among community and parent’s. Collaboration was established with different NGOs, Government, stakeholders and community support for establishment of Physiotherapy Center. During home visit it was identified that Shiraz was on bed since last 1 year, his family could not afforded cost of physiotherapist and medical consultation due to poverty. Parents counseling was done mentioning that Shiraz needed to take treatment. After motivation his parents agreed for treatment. He was consulted by an orthopedic surgeon in AKUH, Who referred to DMC University of Health Science for rehabilitation service. There he was assessed and referred for Community Based Physiotherapy Centre Sultanabad. Physiotherapist visited home along with Coordinator for Special children and assessed him regularly, planned Physiotherapy treatment for abdominal, high muscles strutting exercise foot muscles strengthening exercise, knee mobilization weight bearing from partial to full weight gradually, also strengthen exercise were given for residual limb as the boy was dependent on it. He was also provided by an artificial leg and training was done. Result: Shiraz is now fully mobile, he can walk independently even out of home, functional ability progress improved and dependency factors reduced. It was difficult but not impossible. We all have sympathy but if we have empathy then we can rehabilitate the community in a better way. His parents are very happy and also the community is surprised to see him in such better condition. Conclusion: Combined efforts of physiotherapist, Coordinator of special children, community and parents made a drastic change in Shiraz’s case by continuously motivating him for better outcome. He is going to school regularly without support. Since he belongs to a poor family he faces financial constraints for education and clinical follow ups regularly.Keywords: femoral shaft fracture, trauma, orthopedic surgeon, physiotherapy treatment
Procedia PDF Downloads 242
26 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality as observed in another study. Simulations of the spiking neurons are performed using the BindsNET framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and it is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Timing-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same weights through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other output neuron is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
Keywords: dendritic computation, spiking neural networks, point neuron model
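A minimal NumPy sketch of the mechanism described in this abstract, not the authors' BindsNET implementation: a pairwise relation matrix scales the post-synaptic impact of each spike by the recent activity of the other synapses, so the membrane response depends on input order. All parameter values and the random relation matrix are assumptions.

```python
# Minimal NumPy sketch of the described mechanism (not the authors' BindsNET
# code): a LIF-style neuron whose post-synaptic impact of a spike is scaled by
# the recent activity of other synapses via a pairwise "synaptic relation"
# matrix R. All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_inputs = 5

w = np.ones(n_inputs) * 0.4                        # synaptic weights
R = rng.uniform(-0.5, 0.5, (n_inputs, n_inputs))   # pairwise relation variables
np.fill_diagonal(R, 0.0)

def run(spike_train, tau_mem=20.0, tau_trace=10.0, dt=1.0):
    """Simulate the membrane potential for a (T, n_inputs) binary spike train."""
    v = 0.0
    trace = np.zeros(n_inputs)          # recent pre-synaptic activity
    voltages = []
    for spikes in spike_train:
        trace += -dt / tau_trace * trace + spikes
        # Effective weight of each active synapse depends on the traces of the others.
        modulation = 1.0 + R @ trace
        v += -dt / tau_mem * v + np.sum(spikes * w * modulation)
        voltages.append(v)
    return np.array(voltages)

# Five inputs firing in one order, then in the reversed order.
T = 5
forward = np.eye(T)                     # input i fires at time step i
backward = forward[::-1]
print(run(forward)[-1], run(backward)[-1])  # responses differ because of R
```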
Procedia PDF Downloads 132
25 GIS-Based Flash Flood Runoff Simulation Model of Upper Teesta River Basin Using ASTER DEM and Meteorological Data
Authors: Abhisek Chakrabarty, Subhraprakash Mandal
Abstract:
Flash flood is one of the catastrophic natural hazards in the mountainous region of India. The recent flood in the Mandakini River at Kedarnath (14-17 June 2013) is a classic example of a flash flood that devastated Uttarakhand, killing thousands of people. The disaster was an integrated effect of high-intensity rainfall, the sudden breach of Chorabari Lake and very steep topography. Every year in the Himalayan region, flash floods occur due to intense rainfall over a short period of time, cloudbursts, glacial lake outbursts and the collapse of artificial check dams that cause high river flows. In the Sikkim-Darjeeling Himalaya, one of the probable flash flood occurrence zones is the Teesta watershed. The Teesta River is a right tributary of the Brahmaputra, draining a mountain area of approximately 8600 sq. km. It originates in the Pauhunri massif (7127 m). The total length of the mountain section of the river amounts to 182 km. The Teesta is characterized by a complex hydrological regime: the river is fed not only by precipitation but also by melting glaciers and snow, as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for an excess rainfall by estimating the streamflow response at the outlet of the watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Moreover, rainfall time-series data were collected from the Indian Meteorological Department and processed in order to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover and geological data were also collected. The watershed was clipped from the entire area, streamlines were generated for the Teesta watershed, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using ERDAS IMAGINE 9.0 and ArcGIS 10.0 software. Different hydraulic models for detecting flash flood probability were analysed using HEC-RAS, Flow-2D and HEC-HMS software, which were of great importance in achieving the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation models show outbursts of lakes and check dams, individually or in combination with runoff, causing severe damage to downstream settlements. Model output shows that 313 sq. km were found to be most vulnerable to flash floods, including Melli, Jourthang, Chungthang and Lachung, and 655 sq. km moderately vulnerable, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar and the Thangu Valley. The model was validated by inserting the rainfall data of a flood event that took place in August 1968; 78% of the actual flooded area was reflected in the model output. Lastly, preventive and curative measures were suggested to reduce the losses from probable flash flood events.
Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin
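The abstract does not state which rainfall-loss method was configured in the HEC-HMS runs, so the sketch below uses the widely applied SCS Curve Number relation, a standard loss method in HEC-HMS, purely to illustrate how a daily rainfall depth such as the simulated 400 mm translates into direct runoff; the curve number is an assumption.

```python
# Illustrative sketch only: the SCS Curve Number relation for converting a
# rainfall depth into direct runoff. The curve number and rainfall depth
# below are hypothetical, not taken from the study.
def scs_runoff_mm(rainfall_mm: float, curve_number: float) -> float:
    """Direct runoff Q for rainfall P using Q = (P - 0.2S)^2 / (P + 0.8S),
    with potential maximum retention S = 25400/CN - 254 (all depths in mm)."""
    s = 25400.0 / curve_number - 254.0
    initial_abstraction = 0.2 * s
    if rainfall_mm <= initial_abstraction:
        return 0.0
    return (rainfall_mm - initial_abstraction) ** 2 / (rainfall_mm + 0.8 * s)

# Example: 400 mm/day rainfall (the simulated extreme input) over a steep
# mountain catchment with an assumed CN of 85.
daily_rain = 400.0
runoff = scs_runoff_mm(daily_rain, curve_number=85)
print(f"Direct runoff: {runoff:.1f} mm of the {daily_rain:.0f} mm rainfall")
```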
Procedia PDF Downloads 316
24 Computer-Integrated Surgery of the Human Brain, New Possibilities
Authors: Ugo Galvanetto, Pirto G. Pavan, Mirco Zaccariotto
Abstract:
The discipline of Computer-integrated surgery (CIS) will provide equipment able to improve the efficiency of healthcare systems and, which is more important, clinical results. Surgeons and machines will cooperate in new ways that will extend surgeons’ ability to train, plan and carry out surgery. Patient specific CIS of the brain requires several steps: 1 - Fast generation of brain models. Based on image recognition of MR images and equipped with artificial intelligence, image recognition techniques should differentiate among all brain tissues and segment them. After that, automatic mesh generation should create the mathematical model of the brain in which the various tissues (white matter, grey matter, cerebrospinal fluid …) are clearly located in the correct positions. 2 – Reliable and fast simulation of the surgical process. Computational mechanics will be the crucial aspect of the entire procedure. New algorithms will be used to simulate the mechanical behaviour of cutting through cerebral tissues. 3 – Real time provision of visual and haptic feedback A sophisticated human-machine interface based on ergonomics and psychology will provide the feedback to the surgeon. The present work will address in particular point 2. Modelling the cutting of soft tissue in a structure as complex as the human brain is an extremely challenging problem in computational mechanics. The finite element method (FEM), that accurately represents complex geometries and accounts for material and geometrical nonlinearities, is the most used computational tool to simulate the mechanical response of soft tissues. However, the main drawback of FEM lies in the mechanics theory on which it is based, classical continuum Mechanics, which assumes matter is a continuum with no discontinuity. FEM must resort to complex tools such as pre-defined cohesive zones, external phase-field variables, and demanding remeshing techniques to include discontinuities. However, all approaches to equip FEM computational methods with the capability to describe material separation, such as interface elements with cohesive zone models, X-FEM, element erosion, phase-field, have some drawbacks that make them unsuitable for surgery simulation. Interface elements require a-priori knowledge of crack paths. The use of XFEM in 3D is cumbersome. Element erosion does not conserve mass. The Phase Field approach adopts a diffusive crack model instead of describing true tissue separation typical of surgical procedures. Modelling discontinuities, so difficult when using computational approaches based on classical continuum Mechanics, is instead easy for novel computational methods based on Peridynamics (PD). PD is a non-local theory of mechanics formulated with no use of spatial derivatives. Its governing equations are valid at points or surfaces of discontinuity, and it is, therefore especially suited to describe crack propagation and fragmentation problems. Moreover, PD does not require any criterium to decide the direction of crack propagation or the conditions for crack branching or coalescence; in the PD-based computational methods, cracks develop spontaneously in the way which is the most convenient from an energy point of view. Therefore, in PD computational methods, crack propagation in 3D is as easy as it is in 2D, with a remarkable advantage with respect to all other computational techniques.Keywords: computational mechanics, peridynamics, finite element, biomechanics
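For reference, the peridynamic equation of motion referred to above can be stated in its standard bond-based form (a textbook formulation, not reproduced from the authors' paper); because the internal force is an integral of pairwise forces over the neighbourhood rather than a spatial derivative, the equation remains valid across cracks and other displacement discontinuities.

```latex
% Standard bond-based peridynamic equation of motion: the acceleration at a
% material point x depends on an integral of pairwise forces over its
% neighbourhood H_x of radius delta, so no spatial derivatives appear and
% displacement discontinuities (cuts, cracks) are admissible.
\begin{equation}
  \rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{H_{\mathbf{x}}} \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,
     \mathbf{x}'-\mathbf{x}\big)\, dV_{\mathbf{x}'}
  + \mathbf{b}(\mathbf{x},t),
  \qquad H_{\mathbf{x}} = \{\mathbf{x}' : \lVert \mathbf{x}'-\mathbf{x}\rVert \le \delta\}.
\end{equation}
```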
Procedia PDF Downloads 80
23 Made on Land, Ends Up in the Water "I-Clare" Intelligent Remediation System for Removal of Harmful Contaminants in Water using Modified Reticulated Vitreous Carbon Foam
Authors: Sabina Żołędowska, Tadeusz Ossowski, Robert Bogdanowicz, Jacek Ryl, Paweł Rostkowski, Michał Kruczkowski, Michał Sobaszek, Zofia Cebula, Grzegorz Skowierzak, Paweł Jakóbczyk, Lilit Hovhannisyan, Paweł Ślepski, Iwona Kaczmarczyk, Mattia Pierpaoli, Bartłomiej Dec, Dawid Nidzworski
Abstract:
The circular economy of water presents a pressing environmental challenge in our society. Water contains various harmful substances, such as drugs, antibiotics, hormones and dioxins, which can pose silent threats. Water pollution has severe consequences for aquatic ecosystems: it disrupts their balance by harming aquatic plants, animals and microorganisms. Water pollution also poses significant risks to human health; exposure to toxic chemicals through contaminated water can have long-term health effects, such as cancer, developmental disorders and hormonal imbalances. However, effective remediation systems can be implemented to remove these contaminants using electrocatalytic processes, which offer an environmentally friendly alternative to other treatment methods; one of them is the innovative iCLARE system. The project's primary focus revolves around a few main topics: reactor design and construction, selection of a specific type of reticulated vitreous carbon foam (RVC), analytical studies of harmful contaminant parameters, and AI implementation. This high-performance electrochemical reactor will be built around a novel type of electrode material. The proposed approach utilizes reticulated vitreous carbon foams (RVC) with deposited modified metal oxides (MMO) and diamond thin films. The resulting setup is characterized by a highly developed surface area and satisfactory mechanical and electrochemical properties, designed for high electrocatalytic process efficiency. The consortium has validated the electrode modification methods that form the basis of the iCLARE product and established the procedures for the detection of chemicals: deposition of the metal oxides WO3 and V2O5, and deposition of boron-doped diamond/nanowall structures by a CVD process. The chosen electrodes (porous Ferroterm electrodes) were stress-tested for various conditions that might occur inside the iCLARE machine: corrosion, the long-term stability of the electrode surface during electrochemical processes, and energy efficiency, using cyclic polarization and electrochemical impedance spectroscopy (before and after electrolysis) as well as dynamic electrochemical impedance spectroscopy (DEIS). This tool allows real-time monitoring of the changes at the electrode/electrolyte interface. On the other hand, the toxicity of the iCLARE chemicals and the products of electrolysis is evaluated before and after treatment using the MARA assay (IBMM) and HPLC-MS/MS (NILU), giving information about the harmfulness of the electrode material and the efficiency of the iCLARE system in removing pollutants. Implementation of the data into a system that uses artificial intelligence, and evaluation of the possibilities for practical application, are in progress (SensDx).
Keywords: wastewater treatment, RVC, electrocatalysis, paracetamol
Procedia PDF Downloads 86
22 An Intelligence-Led Methodology for Detecting Dark Actors in Human Trafficking Networks
Authors: Andrew D. Henshaw, James M. Austin
Abstract:
Introduction: Human trafficking is an increasingly serious transnational criminal enterprise and social security issue. Despite ongoing efforts to mitigate the phenomenon and a significant expansion of security scrutiny over past decades, it is not receding. This is true for many nations in Southeast Asia, widely recognized as the global hub for trafficked persons, including men, women, and children. Clearly, human trafficking is difficult to address because there are numerous drivers, causes, and motivators for it to persist, such as non-military and non-traditional security challenges, i.e., climate change, global warming displacement, and natural disasters. These make displaced persons and refugees particularly vulnerable. The issue is so large conservative estimates put a dollar value at around $150 billion-plus per year (Niethammer, 2020) spanning sexual slavery and exploitation, forced labor, construction, mining and in conflict roles, and forced marriages of girls and women. Coupled with corruption throughout military, police, and civil authorities around the world, and the active hands of powerful transnational criminal organizations, it is likely that such figures are grossly underestimated as human trafficking is misreported, under-detected, and deliberately obfuscated to protect those profiting from it. For example, the 2022 UN report on human trafficking shows a 56% reduction in convictions in that year alone (UNODC, 2022). Our Approach: To better understand this, our research utilizes a bespoke methodology. Applying a JAM (Juxtaposition Assessment Matrix), which we previously developed to detect flows of dark money around the globe (Henshaw, A & Austin, J, 2021), we now focus on the human trafficking paradigm. Indeed, utilizing a JAM methodology has identified key indicators of human trafficking not previously explored in depth. Being a set of structured analytical techniques that provide panoramic interpretations of the subject matter, this iteration of the JAM further incorporates behavioral and driver indicators, including the employment of Open-Source Artificial Intelligence (OS-AI) across multiple collection points. The extracted behavioral data was then applied to identify non-traditional indicators as they contribute to human trafficking. Furthermore, as the JAM OS-AI analyses data from the inverted position, i.e., the viewpoint of the traffickers, it examines the behavioral and physical traits required to succeed. This transposed examination of the requirements of success delivers potential leverage points for exploitation in the fight against human trafficking in a new and novel way. Findings: Our approach identified new innovative datasets that have previously been overlooked or, at best, undervalued. For example, the JAM OS-AI approach identified critical 'dark agent' lynchpins within human trafficking that are difficult to detect and harder to connect to actors and agents within a network. Our preliminary data suggests this is in part due to the fact that ‘dark agents’ in extant research have been difficult to detect and potentially much harder to directly connect to the actors and organizations in human trafficking networks. 
Our research demonstrates that using new investigative techniques such as OS-AI-aided JAM introduces a powerful toolset to increase understanding of human trafficking and transnational crime and illuminate networks that, to date, avoid global law enforcement scrutiny.
Keywords: human trafficking, open-source intelligence, transnational crime, human security, international human rights, intelligence analysis, JAM OS-AI, Dark Money
Procedia PDF Downloads 90
21 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains
Authors: Jing Jin
Abstract:
The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying the high-quality information challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information, contributing to stakeholder goals, can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?" the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders’ decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-quantitative method -a questionnaire survey. Data is systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical data analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson’s correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers emphasise strong negative influences on the security of accessing information and negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions. The moderately high density of ‘information distortion-by-information quality’ underscores the interconnected nature of these factors. 
In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a richer understanding of information's role in the Industry 4.0 landscape.
Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry
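A minimal sketch of the statistical analyses named above (Pearson's correlation and one-way ANOVA), run on hypothetical Likert-scale responses rather than the study's data; the variable names and values are assumptions.

```python
# Illustrative sketch (not the study's actual dataset): Pearson's correlation
# between perceived information distortion and one quality dimension, plus a
# one-way ANOVA across supplier tiers. All response values are hypothetical.
import numpy as np
from scipy.stats import pearsonr, f_oneway

# Hypothetical Likert-scale responses (1-5) from questionnaire items.
distortion = np.array([4, 5, 3, 4, 2, 5, 3, 4, 2, 1])
timeliness = np.array([4, 4, 3, 5, 2, 5, 2, 4, 3, 1])

r, p_value = pearsonr(distortion, timeliness)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")

# One-way ANOVA: does mean perceived distortion differ between tiers?
tier2 = [4, 5, 4, 5, 4]
tier1 = [2, 3, 2, 3, 3]
tier05 = [4, 4, 5, 3, 4]
f_stat, p_anova = f_oneway(tier2, tier1, tier05)
print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.3f}")
```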
Procedia PDF Downloads 63
20 The Influence of Screen Translation on Creative Audiovisual Writing: A Corpus-Based Approach
Authors: John D. Sanderson
Abstract:
The popularity of American cinema worldwide has contributed to the development of sociolects related to specific film genres in other cultural contexts by means of screen translation, in many cases eluding norms of usage in the target language, a process whose result has come to be known as 'dubbese'. A consequence for the reception in countries where local audiovisual fiction consumption is far lower than American imported productions is that this linguistic construct is preferred, even though it differs from common everyday speech. The iconography of film genres such as science-fiction, western or sword-and-sandal films, for instance, generates linguistic expectations in international audiences who will accept more easily the sociolects assimilated by the continuous reception of American productions, even if the themes, locations, characters, etc., portrayed on screen may belong in origin to other cultures. And the non-normative language (e.g., calques, semantic loans) used in the preferred mode of linguistic transfer, whether it is translation for dubbing or subtitling, has diachronically evolved in many cases into a status of canonized sociolect, not only accepted but also required, by foreign audiences of American films. However, a remarkable step forward is taken when this typology of artificial linguistic constructs starts being used creatively by nationals of these target cultural contexts. In the case of Spain, the success of American sitcoms such as Friends in the 1990s led Spanish television scriptwriters to include in national productions lexical and syntactical indirect borrowings (Anglicisms not formally identifiable as such because they include elements from their own language) in order to target audiences of the former. However, this commercial strategy had already taken place decades earlier when Spain became a favored location for the shooting of foreign films in the early 1960s. The international popularity of the then newly developed sub-genre known as Spaghetti-Western encouraged Spanish investors to produce their own movies, and local scriptwriters made use of the dubbese developed nationally since the advent of sound in film instead of using normative language. As a result, direct Anglicisms, as well as lexical and syntactical borrowings made up the creative writing of these Spanish productions, which also became commercially successful. Interestingly enough, some of these films were even marketed in English-speaking countries as original westerns (some of the names of actors and directors were anglified to that purpose) dubbed into English. The analysis of these 'back translations' will also foreground some semantic distortions that arose in the process. In order to perform the research on these issues, a wide corpus of American films has been used, which chronologically range from Stagecoach (John Ford, 1939) to Django Unchained (Quentin Tarantino, 2012), together with a shorter corpus of Spanish films produced during the golden age of Spaghetti Westerns, from una tumba para el sheriff (Mario Caiano; in English lone and angry man, William Hawkins) to tu fosa será la exacta, amigo (Juan Bosch, 1972; in English my horse, my gun, your widow, John Wood). The methodology of analysis and the conclusions reached could be applied to other genres and other cultural contexts.Keywords: dubbing, film genre, screen translation, sociolect
Procedia PDF Downloads 168
19 A Review on Biological Control of Mosquito Vectors
Authors: Asim Abbasi, Muhammad Sufyan, Iqra, Hafiza Javaria Ashraf
Abstract:
Vector-borne diseases (VBDs) account for almost 17% of the global burden of infectious diseases. New drugs and the latest research in medical science have helped mankind compete with these lethal diseases, but diseases transmitted by different mosquito species, including filariasis, malaria, viral encephalitis and dengue, remain serious threats for people living in disease-endemic areas. Injudicious and repeated use of pesticides has placed selection pressure on mosquitoes, leading to the development of resistance. Hence, biological control agents are under serious consideration by the scientific community for use in vector control programmes. Fish have a long history of predating the immature stages of different aquatic insects, including mosquitoes; noteworthy examples in Africa and Asia include Aphanius discolour and a fish in the Panchax group. Moreover, the common mosquito fish, Gambusia affinis, predates mostly on temporary-water mosquitoes such as anophelines, as compared to permanent-water breeders such as culicines. Mosquitoes of the genus Toxorhynchites have a worldwide distribution and are mostly associated with predation of other mosquito larvae sharing their natural and artificial water containers. These species are harmless to humans, as their adults do not suck human blood but feed on floral nectar. However, their activity is largely temperature dependent: Toxorhynchites brevipalpis consumes 359 Aedes aegypti larvae at 30-32 ºC, in contrast to 154 larvae at 20-26 ºC. Although many bacterial species have been isolated from mosquito cadavers, those belonging to the genus Bacillus are found to be highly pathogenic against them; the successful species of this genus include Bacillus thuringiensis and Bacillus sphaericus. The prime targets of B. thuringiensis are mostly the immatures of the genera Aedes, Culex, Anopheles and Psorophora, while B. sphaericus is specifically toxic against species of Culex, Psorophora and Culiseta. Entomopathogenic nematodes of the family Mermithidae are also pathogenic to different mosquito species: eighty mosquito species, including Anopheles, Aedes and Culex, proved highly vulnerable to attack by the two mermithid species Romanomermis culicivorax and R. iyengari. Cytoplasmic polyhedrosis virus was the first pathogenic virus described, isolated from cadavers of the mosquito species Culex tarsalis; other viruses pathogenic to culicines include iridoviruses, cytopolyhedrosis viruses, entomopoxviruses and parvoviruses. Protozoa of the division Microsporidia are common pathogens in mosquito populations, killing their hosts through the chronic effects of parasitism. Moreover, owing to their wide prevalence in anopheline mosquitoes and their transovarial and horizontal transmission from infected to healthy hosts, microsporidia of the genera Nosema and Amblyospora have received much attention in various mosquito control programmes. Fungus-based mycopesticides are also used in the biological control of insect pests, with 47 species reported virulent against different stages of mosquitoes; these include aquatic fungi, i.e. species of Coelomomyces, Lagenidium giganteum and Culicinomyces clavosporus, and the terrestrial fungi Metarhizium anisopliae and Beauveria bassiana.
Hence, it is concluded that the integrated use of all these biological control agents can make a healthy contribution to mosquito control programmes and is a dire need of the time to avoid the repeated use of pesticides.Keywords: entomopathogenic nematodes, protozoa, Toxorhynchites, vector-borne
Procedia PDF Downloads 264
18 Synthesis of Carbonyl Iron Particles Modified with Poly (Trimethylsilyloxyethyl Methacrylate) Nano-Grafts
Authors: Martin Cvek, Miroslav Mrlik, Michal Sedlacik, Tomas Plachy
Abstract:
Magnetorheological elastomers (MREs) are multi-phase composite materials containing micron-sized ferromagnetic particles dispersed in an elastomeric matrix. Their properties, such as modulus, damping, magneto-striction, and electrical conductivity, can be controlled by an external magnetic field and/or pressure. These features of MREs are used in the development of damping devices, shock attenuators, artificial muscles, sensors or active elements of electric circuits. However, imperfections at the particle/matrix interfaces result in lower performance of the MREs when compared with theoretical values. Moreover, magnetic particles are susceptible to corrosion agents such as acid rain or sea humidity. Therefore, the modification of the particles is an effective tool for improving MRE performance due to enhanced compatibility between particles and matrix, as well as improvements in their thermo-oxidation and chemical stability. In this study, carbonyl iron (CI) particles were controllably modified with poly(trimethylsilyloxyethyl methacrylate) (PHEMATMS) nano-grafts to develop magnetic core–shell structures exhibiting proper wetting with various elastomeric matrices, resulting in improved performance in terms of rheological, magneto-piezoresistance, pressure-piezoresistance, or radio-absorbing properties. The desired molecular weight of the PHEMATMS nano-grafts was precisely tailored using surface-initiated atom transfer radical polymerization (ATRP). The CI particles were first functionalized using a 3-aminopropyltriethoxysilane agent, followed by an esterification reaction with α-bromoisobutyryl bromide. The ATRP was performed in anisole medium using ethyl α-bromoisobutyrate as a macroinitiator, N,N,N′,N′′,N′′-pentamethyldiethylenetriamine as a ligand, and copper bromide as a catalyst. To explore the effect of PHEMATMS molecular weight on the final properties, two variants of core-shell structures with different nano-graft lengths were synthesized, with the reaction kinetics designed through proper reactant feed ratios and polymerization times. The PHEMATMS nano-grafts were characterized by nuclear magnetic resonance and gel permeation chromatography, providing information on their monomer conversions, molecular chain lengths, and the low polydispersity indexes (1.28 and 1.35) resulting from the executed ATRP. The successful modifications were confirmed via Fourier transform infrared and energy-dispersive spectroscopies, with the expected wavenumber signatures and elemental presences of the constituted PHEMATMS nano-grafts appearing in the respective spectra. The surface morphology of bare CI and its PHEMATMS-grafted analogues was further studied by scanning electron microscopy, and the thicknesses of the grafted polymeric layers were directly observed by transmission electron microscopy. The contact angles, as a measure of particle/matrix compatibility, were investigated employing the static sessile drop method. The PHEMATMS nano-grafts enhanced the compatibility of the hydrophilic CI with a low-surface-energy hydrophobic polymer matrix in terms of wettability and dispersibility in an elastomeric matrix. Thus, the presence of possible defects at the particle/matrix interface is reduced, and higher performance of the modified MREs is expected.Keywords: atom transfer radical polymerization, core-shell, particle modification, wettability
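The abstract notes that the graft molecular weight was precisely tailored through reactant feed ratios and polymerization times; the usual back-of-the-envelope relation behind such tailoring is that the theoretical number-average molecular weight scales with the monomer-to-initiator feed ratio and the conversion. The short Python sketch below illustrates that textbook relation only; the feed ratios, conversion and rounded molar masses are hypothetical and are not taken from this study.

```python
# Illustrative ATRP design calculation (hypothetical numbers, not the study's recipe).
# Mn,theory = ([M]0 / [I]0) * conversion * M_monomer + M_initiator

M_MONOMER = 202.3      # g/mol, approx. molar mass of 2-(trimethylsilyloxy)ethyl methacrylate
M_INITIATOR = 195.0    # g/mol, approx. molar mass of ethyl alpha-bromoisobutyrate

def theoretical_mn(monomer_conc, initiator_conc, conversion):
    """Predict the theoretical number-average molecular weight of the grafts."""
    degree_of_polymerization = (monomer_conc / initiator_conc) * conversion
    return degree_of_polymerization * M_MONOMER + M_INITIATOR

# Example: two target graft lengths from different feed ratios at 60 % conversion.
for feed_ratio in (50, 150):                       # [M]0 / [I]0, hypothetical
    mn = theoretical_mn(feed_ratio, 1.0, 0.60)
    print(f"[M]0/[I]0 = {feed_ratio:3d}  ->  Mn,theory ≈ {mn / 1000:.1f} kg/mol")
```

In practice, gel permeation chromatography would then be used, as in the study, to check how closely the measured chain lengths and dispersity match such targets.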
Procedia PDF Downloads 199
17 Production, Characterisation, and in vitro Degradation and Biocompatibility of a Solvent-Free Polylactic-Acid/Hydroxyapatite Composite for 3D-Printed Maxillofacial Bone-Regeneration Implants
Authors: Carlos Amnael Orozco-Diaz, Robert David Moorehead, Gwendolen Reilly, Fiona Gilchrist, Cheryl Ann Miller
Abstract:
The current gold-standard for maxillofacial reconstruction surgery (MRS) utilizes auto-grafted cancellous bone as a filler. This study was aimed towards developing a polylactic-acid/hydroxyapatite (PLA-HA) composite suitable for fused-deposition 3D printing. Functionalization of the polymer through the addition of HA was directed to promoting bone-regeneration properties so that the material can rival the performance of cancellous bone grafts in terms of bone-lesion repair. This kind of composite enables the production of MRS implants based off 3D-reconstructions from image studies – namely computed tomography – for anatomically-correct fitting. The present study encompassed in-vitro degradation and in-vitro biocompatibility profiling for 3D-printed PLA and PLA-HA composites. PLA filament (Verbatim Co.) and Captal S hydroxyapatite micro-scale HA powder (Plasma Biotal Ltd) were used to produce PLA-HA composites at 5, 10, and 20%-by-weight HA concentration. These were extruded into 3D-printing filament, and processed in a BFB-3000 3D-Printer (3D Systems Co.) into tensile specimens, and were mechanically challenged as per ASTM D638-03. Furthermore, tensile specimens were subjected to accelerated degradation in phosphate-buffered saline solution at 70°C for 23 days, as per ISO-10993-13-2010. This included monitoring of mass loss (through dry-weighing), crystallinity (through thermogravimetric analysis/differential thermal analysis), molecular weight (through gel-permeation chromatography), and tensile strength. In-vitro biocompatibility analysis included cell-viability and extracellular matrix deposition, which were performed both on flat surfaces and on 3D-constructs – both produced through 3D-printing. Discs of 1 cm in diameter and cubic 3D-meshes of 1 cm3 were 3D printed in PLA and PLA-HA composites (n = 6). The samples were seeded with 5000 MG-63 osteosarcoma-like cells, with cell viability extrapolated throughout 21 days via resazurin reduction assays. As evidence of osteogenicity, collagen and calcium deposition were indirectly estimated through Sirius Red staining and Alizarin Red staining respectively. Results have shown that 3D printed PLA loses structural integrity as early as the first day of accelerated degradation, which was significantly faster than the literature suggests. This was reflected in the loss of tensile strength down to untestable brittleness. During degradation, mass loss, molecular weight, and crystallinity behaved similarly to results found in similar studies for PLA. All composite versions and pure PLA were found to perform equivalent to tissue-culture plastic (TCP) in supporting the seeded-cell population. Significant differences (p = 0.05) were found on collagen deposition for higher HA concentrations, with composite samples performing better than pure PLA and TCP. Additionally, per-cell-calcium deposition on the 3D-meshes was significantly lower when comparing 3D-meshes to discs of the same material (p = 0.05). These results support the idea that 3D-printable PLA-HA composites are a viable resorbable material for artificial grafts for bone-regeneration. Degradation data suggests that 3D-printing of these materials – as opposed to other manufacturing methods – might result in faster resorption than currently-used PLA implants.Keywords: bone regeneration implants, 3D-printing, in vitro testing, biocompatibility, polymer degradation, polymer-ceramic composites
Procedia PDF Downloads 154
16 Screening and Improved Production of an Extracellular β-Fructofuranosidase from Bacillus Sp
Authors: Lynette Lincoln, Sunil S. More
Abstract:
With the rising demand for sugar today, world sugar demand is expected to escalate to 203 million tonnes by 2021. Hydrolysis of sucrose (table sugar) into an equimolar mixture of glucose and fructose is catalyzed by β-D-fructofuranoside fructohydrolase (EC 3.2.1.26), commonly called invertase. Invertase is widely and extensively applied for fluid-filled centres in chocolates, the preparation of artificial honey and as a sweetener, and especially to ensure that foodstuffs remain fresh, moist and soft for longer spans. From an industrial perspective, properties such as increased solubility, higher osmotic pressure and prevention of crystallization of sugar in food products are highly desired. Screening for invertase does not involve a plate assay or other qualitative test to determine enzyme production. In this study, we used a three-step screening strategy for the identification of a novel bacterial isolate from soil that is positive for invertase production. The primary step was serial dilution of soil collected from sugarcane fields (black soil, Maddur region of Mandya district, Karnataka, India), which was grown on a Czapek-Dox medium (pH 5.0) containing sucrose as the sole C-source. Only colonies with the capability to utilize or break down sucrose exhibited growth; bacterial isolates released invertase in order to take up sucrose, splitting the disaccharide into simple sugars. Secondly, invertase activity was determined from the cell-free extract by measuring the glucose released into the medium at 540 nm. Morphological observation of the most potent bacterium was performed through several identification tests using Bergey's manual, which indicated the genus of the isolate to be Bacillus. Furthermore, this potent bacterial colony was subjected to 16S rDNA PCR amplification, and a single discrete PCR amplicon band of 1500 bp was observed. The 16S rDNA sequence was used in a BLAST alignment search against the NCBI GenBank database to obtain the sequence with the maximum identity score. Molecular sequencing and identification were performed by Xcelris Labs Ltd. (Ahmedabad, India). The colony was identified as Bacillus sp. BAB-3434, indicating it to be the first novel strain reported for extracellular invertase production. Molasses, a by-product of the sugarcane industry, is a dark viscous liquid obtained upon crystallization of sugar. Enhanced invertase production and optimization studies were carried out using a one-factor-at-a-time approach. Crucial parameters such as time course (24 h), pH (6.0), temperature (45 °C), inoculum size (2% v/v), N-source (yeast extract, 0.2% w/v) and C-source (molasses, 4% v/v) were found to be optimum, demonstrating an increased yield. The findings of this study reveal a simple screening method for an extracellular invertase from a rapidly growing Bacillus sp. and the selection of the factors that most elevate enzyme activity; in particular, the utilization of molasses, which served as an ideal substrate and C-source, results in cost-effective production under submerged conditions. The invert mixture could be a replacement for table sugar, which is an economic advantage and reduces the tedious work of sugar growers. On-going studies involve purification of the extracellular invertase and determination of its transfructosylating activity, as at high concentrations of sucrose invertase produces fructooligosaccharides (FOS), which possess prebiotic properties.Keywords: Bacillus sp., invertase, molasses, screening, submerged fermentation
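Because the screening quantifies the glucose released into the medium at 540 nm, converting an absorbance reading into enzyme units via a glucose standard curve is the natural companion calculation. The sketch below shows one way such a conversion could be coded; the standard-curve slope, reaction time and assay volumes are hypothetical placeholders rather than values from this work.

```python
# Illustrative conversion of A540 readings to invertase activity (units/mL).
# One unit is taken here as 1 micromole of glucose released per minute.
# The standard-curve slope and assay volumes below are hypothetical.

STD_SLOPE = 0.55        # A540 per (mg/mL glucose), hypothetical calibration
GLUCOSE_MW = 180.16     # g/mol

def invertase_activity(a540, blank, reaction_min=10.0,
                       enzyme_ml=0.5, assay_ml=1.0):
    """Return activity in units per mL of crude enzyme extract."""
    glucose_mg_per_ml = (a540 - blank) / STD_SLOPE            # from the standard curve
    glucose_umol = glucose_mg_per_ml * assay_ml * 1000.0 / GLUCOSE_MW
    return glucose_umol / (reaction_min * enzyme_ml)

print(f"Activity ≈ {invertase_activity(0.82, 0.05):.2f} U/mL")
```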
Procedia PDF Downloads 231
15 Organic Tuber Production Fosters Food Security and Soil Health: A Decade of Evidence from India
Authors: G. Suja, J. Sreekumar, A. N. Jyothi, V. S. Santhosh Mithra
Abstract:
Worldwide concerns regarding food safety, environmental degradation and threats to human health have generated interest in alternative systems like organic farming. Tropical tuber crops, namely cassava, sweet potato, yams and aroids, are food-cum-nutritional security-cum-climate resilient crops. These form the staple or subsidiary food of about 500 million people globally. Cassava, yams (white yam, greater yam, and lesser yam) and edible aroids (elephant foot yam, taro, and tannia) are high-energy tuberous vegetables with good taste and nutritive value. Seven on-station field experiments at ICAR-Central Tuber Crops Research Institute, Thiruvananthapuram, India, and seventeen on-farm trials in three districts of Kerala were conducted over a decade (2004-2015) to compare the varietal response, yield, quality and soil properties under organic vs. conventional systems in these crops and to develop a learning system based on the data generated. The industrial as well as domestic varieties of cassava, the elite and local varieties of elephant foot yam and taro, and the three species of Dioscorea (yams) were on a par under both systems. Organic management promoted yield by 8%, 20%, 9%, 11% and 7% over conventional practice in cassava, elephant foot yam, white yam, greater yam and lesser yam respectively. Elephant foot yam was the most responsive to organic management, followed by yams and cassava. In taro, a slight yield reduction (5%) was noticed under organic farming, with almost similar tuber quality. The tuber quality was improved, with higher dry matter, starch, crude protein, K, Ca and Mg contents. The anti-nutritional factors, oxalate content in elephant foot yam and cyanogenic glucoside content in cassava, were lowered by 21 and 12.4% respectively. Organic plots had significantly higher water holding capacity, pH, available K, Fe, Mn and Cu, higher soil organic matter, available N, P, exchangeable Ca and Mg, dehydrogenase enzyme activity and microbial count. Organic farming scored a significantly higher soil quality index (1.93) than conventional practice (1.46). The soil quality index was driven by water holding capacity, pH and available Zn, followed by soil organic matter. Organic management enhanced net profit by 20-40% over chemical farming. A case in point is the cost-benefit analysis in elephant foot yam, which indicated that the net profit was 28% higher and an additional income of Rs. 47,716 ha-1 was obtained due to organic farming. Cost-effective technologies were field validated. The on-station technologies developed were validated and popularized through on-farm trials at 10 sites (5 ha) under a National Horticulture Mission funded programme in elephant foot yam and at seven sites in yams and taro. The technologies are included in the Package of Practices Recommendations for crops of Kerala Agricultural University. A learning system developed using artificial neural networks (ANN) predicted the performance of the elephant foot yam organic system. The use of organically produced seed materials, seed treatment in cow dung, neem cake and bio-inoculant slurry, farmyard manure incubated with bio-inoculants, green manuring, and the use of neem cake, bio-fertilizers and ash formed the strategies for organic production. Organic farming is an eco-friendly management strategy that enables 10-20% higher yield, quality tubers and maintenance of soil health in tuber crops.Keywords: eco-agriculture, quality, root crops, healthy soil, yield
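The ANN-based learning system is described above only at a high level; purely as an illustration of the idea, the sketch below fits a small neural network relating a few soil and management descriptors to tuber yield. The feature names, data and coefficients are synthetic placeholders and do not reproduce the authors' model.

```python
# Illustrative ANN yield model in the spirit of the learning system mentioned above.
# All features, values and coefficients are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 300
X = np.column_stack([
    rng.normal(1.5, 0.3, n),   # soil organic matter (%)
    rng.normal(6.0, 0.5, n),   # soil pH
    rng.normal(250, 60, n),    # available K (kg/ha)
    rng.integers(0, 2, n),     # management: 1 = organic, 0 = conventional
])
# Synthetic yield loosely echoing the ~20% organic response reported for elephant foot yam.
y = 30 + 4 * X[:, 0] + 0.02 * X[:, 2] + 6 * X[:, 3] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
ann.fit(X_tr, y_tr)
print(f"held-out R^2: {ann.score(X_te, y_te):.2f}")
```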
Procedia PDF Downloads 333
14 Marketing and Business Intelligence and Their Impact on Products and Services Through Understanding Based on Experiential Knowledge of Customers in Telecommunications Companies
Authors: Ali R. Alshawawreh, Francisco Liébana-Cabanillas, Francisco J. Blanco-Encomienda
Abstract:
Collaboration between marketing and business intelligence (BI) is crucial in today's ever-evolving business landscape. These two domains play pivotal roles in molding customers' experiential knowledge. Marketing insights offer valuable information regarding customer needs, preferences, and behaviors. Conversely, BI facilitates data-driven decision-making, leading to heightened operational efficiency, product quality, and customer satisfaction. Customer experiential knowledge (CEK) encompasses customers' implicit comprehension of consumption experiences influenced by diverse factors, including social and cultural influences. This study focuses primarily on telecommunications companies in Jordan, scrutinizing how customer experiential knowledge mediates the relationship between marketing and business intelligence and innovation outcomes. Drawing on theoretical frameworks such as the resource-based view (RBV) and service-dominant logic (SDL), the research aims to comprehend how organizations utilize their resources, particularly knowledge, to foster innovation. Employing a quantitative research approach, the study collected and analyzed primary data to test its hypotheses. Structural equation modeling (SEM), facilitated by Smart PLS software, was used to evaluate the relationships between the constructs, followed by mediation analysis to assess the indirect associations in the model. The study findings offer insights into the intricate dynamics of organizational innovation, uncovering the interconnected relationships between business intelligence, customer experiential knowledge-driven innovation (CEK-DI), marketing intelligence (MI), and product and service innovation (PSI), and underscoring the pivotal role of advanced intelligence capabilities in developing innovative practices rooted in a profound understanding of customer experiences. Furthermore, the positive impact of BI on PSI reaffirms the significance of data-driven decision-making in shaping the innovation landscape. The significant impact of CEK-DI on PSI highlights the critical role of customer experiences in driving organizational innovation: companies that actively integrate customer insights into their opportunity creation processes are more likely to create offerings that match customer expectations, which drives higher levels of product and service sophistication. Additionally, the positive and significant impact of MI on CEK-DI underscores the critical role of market insights in shaping innovation strategies. While the relationship between MI and PSI is positive, the slightly weaker significance level indicates a more subtle association, suggesting that although MI contributes to the development of ideas, its direct effect on product and service innovation is more limited. In conclusion, the study emphasizes the fundamental role of intelligence capabilities, especially artificial intelligence, and the need for organizations to leverage market and customer intelligence to achieve effective and competitive innovation practices. Collaborative efforts between marketing and business intelligence serve as pivotal drivers of innovation, influencing customer experiential knowledge and shaping organizational strategies and practices. Future research could adopt longitudinal designs and gather data from various sectors to offer broader insights. Additionally, the study focuses on the effects of marketing intelligence, business intelligence, customer experiential knowledge, and innovation, but other unexamined variables may also influence innovation processes.
Future studies could investigate additional factors, mediators, or moderators, including the role of emerging technologies like AI and machine learning in driving innovation.Keywords: marketing intelligence, business intelligence, product, customer experiential knowledge-driven innovation
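The mediation logic tested above (marketing intelligence acting on product and service innovation through customer experiential knowledge-driven innovation) can be made concrete with a simple bootstrap estimate of the indirect effect. The sketch below uses ordinary least squares on synthetic data as a stand-in for the PLS-SEM analysis actually run in Smart PLS; the variable names mirror the constructs, but the numbers are invented.

```python
# Illustrative bootstrap test of an indirect (mediated) effect: MI -> CEK-DI -> PSI.
# Synthetic data; the study itself used PLS-SEM (Smart PLS), not this OLS sketch.
import numpy as np

rng = np.random.default_rng(7)
n = 400
mi = rng.normal(size=n)                                 # marketing intelligence (standardized)
cek_di = 0.5 * mi + rng.normal(scale=0.8, size=n)       # experiential-knowledge-driven innovation
psi = 0.6 * cek_di + 0.1 * mi + rng.normal(scale=0.8, size=n)  # product and service innovation

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(mi, cek_di, psi):
    a = slope(mi, cek_di)                               # path a: MI -> CEK-DI
    X = np.column_stack([np.ones(len(mi)), cek_di, mi])
    b = np.linalg.lstsq(X, psi, rcond=None)[0][1]       # path b: CEK-DI -> PSI, controlling for MI
    return a * b

boot = [indirect_effect(*(arr[idx] for arr in (mi, cek_di, psi)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(mi, cek_di, psi):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero would support mediation; PLS-SEM reports an analogous bootstrapped indirect path.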
Procedia PDF Downloads 29
13 Amphiphilic Compounds as Potential Non-Toxic Antifouling Agents: A Study of Biofilm Formation Assessed by Micro-titer Assays with Marine Bacteria and Eco-toxicological Effect on Marine Algae
Authors: D. Malouch, M. Berchel, C. Dreanno, S. Stachowski-Haberkorn, P-A. Jaffres
Abstract:
Biofilm is a predominant lifestyle chosen by bacteria. Whether it develops on an immersed surface or as mobile aggregates known as flocs, the bacteria within this form of life show properties different from those of their planktonic counterparts. Within the biofilm, the self-formed matrix of Extracellular Polymeric Substances (EPS) offers hydration, resource capture and enhanced resistance to antimicrobial agents, and allows cell communication. Biofouling is a complex natural phenomenon that involves biological, physical and chemical properties related to the environment, the submerged surface and the living organisms involved. Bio-colonization of artificial structures can cause various economic and environmental impacts. The increase in costs associated with the over-consumption of fuel by biocolonized vessels has been widely studied. Measurement drift in submerged sensors, as well as obstructions in heat exchangers and deterioration of offshore structures, are major difficulties that industries are dealing with. Therefore, surfaces that inhibit biocolonization are required in different areas (water treatment, marine paints, etc.), and many efforts have been devoted to producing efficient and eco-compatible antifouling agents. The different steps of surface fouling are widely described in the literature, and studying the biofilm and its stages provides a better understanding of how to elaborate more efficient antifouling strategies. Several approaches are currently applied, such as the use of biocidal anti-fouling paints (mainly with copper derivatives) and super-hydrophobic coatings. While these two processes are proving to be the most effective, they are not entirely satisfactory, especially in a context of changing legislation. Nowadays, the challenge is to prevent biofouling with non-biocidal compounds, offering a cost-effective solution but with no toxic effects on marine organisms. Since the micro-fouling phase plays an important role in the regulation of the following steps of biofilm formation, it is desirable to reduce or delay the biofouling of a given surface by inhibiting micro-fouling at its early stages. In our recent work, we reported that some amphiphilic compounds exhibited bacteriostatic or bactericidal properties at a concentration that did not affect eukaryotic cells. These remarkable properties invited us to assess this type of bio-inspired phospholipid for preventing the colonization of surfaces by marine bacteria. Of note, other studies have reported that amphiphilic compounds interact with bacteria, leading to a reduction of their development. An amphiphilic compound is a molecule consisting of a hydrophobic domain and a polar head (ionic or non-ionic). These compounds appear to have interesting antifouling properties: some ionic compounds have shown antimicrobial activity, and zwitterions can reduce the nonspecific adsorption of proteins. Herein, we investigate the potential of amphiphilic compounds as inhibitors of bacterial growth and marine biofilm formation. The aim of this study is to compare the efficacy of four synthetic phospholipids that feature a cationic charge (BSV36, KLN47) or a zwitterionic polar-head group (SL386, MB2871) in preventing microfouling by marine bacteria.
We also study the toxicity of these compounds in order to identify the most promising compound, which must feature high anti-adhesive properties and low cytotoxicity towards two representative links of coastal marine food webs: phytoplankton and oyster larvae.Keywords: amphiphilic phospholipids, bacterial biofilm, marine microfouling, non-toxic antifouling
Procedia PDF Downloads 145
12 Speciation of Bacteria Isolated from Clinical Canine and Feline Urine Samples by Using ChromID CPS Elite Agar: A Preliminary Study
Authors: Delsy Salinas, Andreia Garcês, Augusto Silva, Paula Brilhante Simões
Abstract:
Urinary tract infection (UTI) is a common disease affecting dogs and cats in both community and hospital environments. Bacteria are the most frequently isolated agents; fewer than 1% of infections are due to parasitic, fungal, or viral agents. Common symptoms and laboratory abnormalities include abdominal pain, pyrexia, renomegaly, and neutrophilia with left shift. Rapid and precise identification of the bacterial agent is still a challenge in veterinary laboratories. Therefore, this cross-sectional study aims to describe the bacterial colony patterns of urine samples cultured on chromID™ CPS® Elite Agar (BioMérieux, France) from canine and feline specimens submitted to a veterinary laboratory in Portugal (INNO Veterinary Laboratory, Braga) from January to March 2022. All urine samples were cultivated on CPS Elite Agar with a calibrated 1 µL inoculating loop and incubated at 37ºC for 18-24h. Color, size, and shape (regular or irregular outline) were recorded for all samples. All colonies were classified as Gram-negative or Gram-positive bacteria using Gram stain (PREVI® Color, BioMérieux, France), and it was determined whether they were pure colonies. Identification of bacterial species was performed using GP and GN cards in the Vitek 2® Compact (BioMérieux, France). A total of 256/1003 submitted urine samples presented bacterial growth, from which 172 isolates were included in this study. The sample population included 111 dogs (n=45 males and n=66 females) and 61 cats (n=35 males and n=26 females). The most frequently isolated bacterium was Escherichia coli (44.7%), followed by Proteus mirabilis (13.4%). All Escherichia coli isolates presented red to burgundy colonies, a colony diameter between 2 and 6 mm, and regular or irregular outlines. Similarly, 100% of Proteus mirabilis isolates were dark yellow colonies with a diffuse pigment and the same size and shape as Escherichia coli. White and pale pink colonies were exclusively Staphylococcus species, of which S. pseudintermedius was the most frequent (8.2%). Cyan to blue colonies were mostly Enterococcus spp. (8.2%) and Streptococcus spp. (4.6%). Beige to brown colonies were Pseudomonas aeruginosa (2.9%) and Citrobacter spp. (1.2%). Klebsiella spp., Serratia spp. and Enterobacter spp. were green colonies. All Gram-positive isolates were 1 to 2 mm in diameter and had a regular outline, while Gram-negative rods presented variable patterns. These results showed that the prevalence of E. coli and P. mirabilis as uropathogenic agents follows the same trends in Europe as previously described in other studies. Both agents presented a particular color pattern on CPS Elite Agar, allowing them to be identified without complementary tests. No other bacterial genus could be strongly correlated to a specific color pattern, and similar results have been observed in studies using human samples. Chromogenic media show a clear advantage for the isolation of common urinary bacteria over traditional COS, MacConkey, and CLED agar media in a routine context, especially when mixed fermentative Gram-negative agents grow simultaneously. In addition, CPS Elite Agar is versatile for artificial intelligence plate-reading systems. Routine veterinary laboratories could use CPS Elite Agar as a rapid screen for bacterial identification, mainly of E. coli and P. mirabilis, saving the 6h to 10h required for automated identification.Keywords: cats, CPS elite agar, dogs, urine pathogens
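For quick reference, the colony-colour associations reported above can be collapsed into a small lookup table. The sketch below merely restates this study's observations as a presumptive-identification helper; it is not a validated identification scheme, and confirmatory testing (Gram stain, Vitek 2) remains necessary for most genera.

```python
# Presumptive ID helper restating only the colour patterns reported in this study.
# chromID CPS Elite colony colour -> organisms most often observed here.
COLOUR_PATTERNS = {
    "red to burgundy":              ["Escherichia coli"],
    "dark yellow, diffuse pigment": ["Proteus mirabilis"],
    "white or pale pink":           ["Staphylococcus spp. (mostly S. pseudintermedius)"],
    "cyan to blue":                 ["Enterococcus spp.", "Streptococcus spp."],
    "beige to brown":               ["Pseudomonas aeruginosa", "Citrobacter spp."],
    "green":                        ["Klebsiella spp.", "Serratia spp.", "Enterobacter spp."],
}

def presumptive_id(colour: str) -> list[str]:
    """Return the organisms associated with a colony colour in this dataset."""
    return COLOUR_PATTERNS.get(colour, ["no colour-specific association; confirm with Vitek 2"])

print(presumptive_id("red to burgundy"))
```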
Procedia PDF Downloads 102
11 An Innovation Decision Process View in an Adoption of Total Laboratory Automation
Authors: Chia-Jung Chen, Yu-Chi Hsu, June-Dong Lin, Kun-Chen Chan, Chieh-Tien Wang, Li-Ching Wu, Chung-Feng Liu
Abstract:
With fast advances in healthcare technology, various total laboratory automation (TLA) processes have been proposed. However, adopting TLA requires quite high funding. This study explores an early adoption experience by Taiwan's large-scale hospital group, the Chimei Hospital Group (CMG), which owns three branch hospitals (Yongkang, Liouying and Chiali, in order of service scale), based on the five stages of Everett Rogers' Innovation-Decision Process. 1. Knowledge stage: Over the years, two weaknesses existed in the laboratory department of CMG: 1) only a few examination categories (e.g., sugar testing and HbA1c) could be completed and reported within a day during an outpatient clinical visit; 2) the Yongkang Hospital laboratory space was dispersed across three buildings, resulting in duplicated investment in analysis instruments and inconvenient artificial specimen transportation. Thus, the senior management of the department raised a crucial question: was it time to proceed with the redesign of the laboratory department? 2. Persuasion stage: At the end of 2013, Yongkang Hospital's new building and restructuring project created a great opportunity for the redesign of the laboratory department. However, not all laboratory colleagues had reached a consensus for change. Thus, the top managers arranged a series of benchmark visits to make colleagues aware of and accepting of TLA. Later, the director of the department submitted a formal report to the top management of CMG with the results of the benchmark visits, a preliminary feasibility analysis, potential benefits and so on. 3. Decision stage: This TLA suggestion was well supported by the top management of CMG, and they finally made a decision to carry out the project with an instrument-leasing strategy. After the announcement of a request for proposals and several vendor briefings, CMG confirmed their laboratory automation architecture and finally completed the contracts. At the same time, a cross-department project team was formed, and the laboratory department assigned a section leader to the National Taiwan University Hospital for one month of relevant training. 4. Implementation stage: During the implementation, the project team called for regular meetings to review the results of the operations and to respond immediately with adjustments. The main project tasks included: 1) completion of the preparatory work for beginning the automation procedures; 2) ensuring information security and privacy protection; 3) formulating automated examination process protocols; 4) evaluating the performance of the new instruments and the instrument connectivity; 5) ensuring good integration with hospital information systems (HIS)/laboratory information systems (LIS); and 6) ensuring continued compliance with ISO 15189 certification. 5. Confirmation stage: In short, the core process changes include: 1) cancellation of signature seals on the specimen tubes; 2) transfer of daily examination reports to a data warehouse; 3) incorporation of routine pre-admission blood drawing and formal inpatient morning blood drawing into an automatically-prepared tube mechanism. The study summarizes below the continuous improvement orientations: (1) flexible reference range set-up for new instruments in the LIS; (2) restructuring of the specimen categories; (3) continuous review and improvement of the examination process; (4) whether to install tube (specimen) delivery tracks needs further evaluation.Keywords: innovation decision process, total laboratory automation, health care
Procedia PDF Downloads 418
10 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU
Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais
Abstract:
Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), the 'sins' of opacity and unfairness must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification; discrimination against some groups can be amplified; and a hetero-personalized identity can be imposed on the individual(s) affected. Also, autonomous CWA sometimes lacks transparency when black-box models are used. However, for this intended purpose, human analysts 'on the loop' might not be the best remedy consumers are looking for in credit. This study explores the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of the EU Directive 2023/2225, of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA's Art. 2. Consequently, engineering the law of consumers' CWA is imperative. The proposed MAS framework consists of several layers arranged in a specific sequence: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose neural network model is trained using k-fold cross-validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' allowing analysts to intervene in a timely manner. From the analysis, a vital component of this software is the XAI layer. It acts as a transparent curtain over the AI's decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion: it uses lawful alternative sources, such as share of wallet, among others, to search, based on genetic programming, for more advantageous solutions to incomplete evaluation appraisals. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, once put into service, credit analysts no longer exert full control over the data-driven entities programmers have given 'birth' to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be inherently vilified; the issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking
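To make the Decision Support System and XAI layers more tangible, the sketch below trains a small neural-network scorer with k-fold cross-validation and attaches a model-agnostic explanation step; permutation importance is used here as a lightweight stand-in for the LIME and SHAP agents named in the abstract. The feature names and data are synthetic, and this is an illustration of the general idea rather than the framework's implementation.

```python
# Minimal sketch of the scoring + explanation idea (synthetic data, illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, KFold
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
# Hypothetical "legitimate predictor set" from traditional sources.
features = ["income", "debt_ratio", "payment_delays", "account_age_months"]
X = np.column_stack([
    rng.normal(3000, 900, n),
    rng.uniform(0, 1, n),
    rng.poisson(1.0, n),
    rng.uniform(1, 240, n),
])
y = (X[:, 1] + 0.3 * X[:, 2] - 0.0002 * X[:, 0] + rng.normal(0, 0.3, n) > 0.6).astype(int)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))

# Decision Support System layer: k-fold cross-validated training of the scorer.
scores = cross_val_score(model, X, y, scoring="roc_auc",
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))
print("5-fold AUC:", scores.round(3))

# XAI layer stand-in: which inputs drive the score?
model.fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, mean_imp in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:20s} {mean_imp:.3f}")
```

In the full framework described above, LIME- and SHAP-style agents would replace the permutation-importance step to provide per-applicant counterfactual and attribution-based explanations for the Oversight Layer.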
Procedia PDF Downloads 33
9 Evaluating Forecasting Strategies for Day-Ahead Electricity Prices: Insights From the Russia-Ukraine Crisis
Authors: Alexandra Papagianni, George Filis, Panagiotis Papadopoulos
Abstract:
The liberalization of the energy market and the increasing penetration of fluctuating renewables (e.g., wind and solar power) have heightened the importance of the spot market for ensuring efficient electricity supply. This is further emphasized by the EU’s goal of achieving net-zero emissions by 2050. The day-ahead market (DAM) plays a key role in European energy trading, accounting for 80-90% of spot transactions and providing critical insights for next-day pricing. Therefore, short-term electricity price forecasting (EPF) within the DAM is crucial for market participants to make informed decisions and improve their market positioning. Existing literature highlights out-of-sample performance as a key factor in assessing EPF accuracy, with influencing factors such as predictors, forecast horizon, model selection, and strategy. Several studies indicate that electricity demand is a primary price determinant, while renewable energy sources (RES) like wind and solar significantly impact price dynamics, often lowering prices. Additionally, incorporating data from neighboring countries, due to market coupling, further improves forecast accuracy. Most studies predict up to 24 steps ahead using hourly data, while some extend forecasts using higher-frequency data (e.g., half-hourly or quarter-hourly). Short-term EPF methods fall into two main categories: statistical and computational intelligence (CI) methods, with hybrid models combining both. While many studies use advanced statistical methods, particularly through different versions of traditional AR-type models, others apply computational techniques such as artificial neural networks (ANNs) and support vector machines (SVMs). Recent research combines multiple methods to enhance forecasting performance. Despite extensive research on EPF accuracy, a gap remains in understanding how forecasting strategy affects prediction outcomes. While iterated strategies are commonly used, they are often chosen without justification. This paper contributes by examining whether the choice of forecasting strategy impacts the quality of day-ahead price predictions, especially for multi-step forecasts. We evaluate both iterated and direct methods, exploring alternative ways of conducting iterated forecasts on benchmark and state-of-the-art forecasting frameworks. The goal is to assess whether these factors should be considered by end-users to improve forecast quality. We focus on the Greek DAM using data from July 1, 2021, to March 31, 2022. This period is chosen due to significant price volatility in Greece, driven by its dependence on natural gas and limited interconnection capacity with larger European grids. The analysis covers two phases: pre-conflict (January 1, 2022, to February 23, 2022) and post-conflict (February 24, 2022, to March 31, 2022), following the Russian-Ukraine conflict that initiated an energy crisis. We use the mean absolute percentage error (MAPE) and symmetric mean absolute percentage error (sMAPE) for evaluation, as well as the Direction of Change (DoC) measure to assess the accuracy of price movement predictions. Our findings suggest that forecasters need to apply all strategies across different horizons and models. Different strategies may be required for different horizons to optimize both accuracy and directional predictions, ensuring more reliable forecasts.Keywords: short-term electricity price forecast, forecast strategies, forecast horizons, recursive strategy, direct strategy
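Since the paper's central question is whether the iterated or the direct strategy should be preferred for multi-step day-ahead forecasts, a compact illustration of the two strategies and of the evaluation measures (MAPE, sMAPE and direction of change) may help. The sketch below uses a plain autoregressive model on synthetic hourly prices; it is not one of the benchmark or state-of-the-art frameworks evaluated in the study.

```python
# Iterated vs. direct multi-step forecasting with a simple AR(p) model (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression

def lagged_matrix(series, p):
    """Rows of [y_{t-1}, ..., y_{t-p}] paired with targets y_t."""
    X = np.column_stack([series[p - i - 1:len(series) - i - 1] for i in range(p)])
    return X, series[p:]

def iterated_forecast(history, p, horizon):
    X, y = lagged_matrix(history, p)
    model = LinearRegression().fit(X, y)            # one 1-step model, fed its own predictions
    window = list(history[-p:])
    preds = []
    for _ in range(horizon):
        yhat = model.predict(np.array(window[::-1]).reshape(1, -1))[0]
        preds.append(yhat)
        window = window[1:] + [yhat]
    return np.array(preds)

def direct_forecast(history, p, horizon):
    preds = []
    for h in range(1, horizon + 1):                 # one dedicated model per horizon h
        X, y = lagged_matrix(history, p)
        model = LinearRegression().fit(X[:len(X) - h + 1], y[h - 1:])
        preds.append(model.predict(history[-p:][::-1].reshape(1, -1))[0])
    return np.array(preds)

def mape(a, f):  return 100 * np.mean(np.abs((a - f) / a))
def smape(a, f): return 100 * np.mean(2 * np.abs(f - a) / (np.abs(a) + np.abs(f)))
def doc(a, f, last):                                # share of correctly predicted directions
    return np.mean(np.sign(np.diff(np.r_[last, a])) == np.sign(np.diff(np.r_[last, f])))

rng = np.random.default_rng(1)
t = np.arange(24 * 60)
prices = 150 + 40 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 8, t.size)  # synthetic EUR/MWh
history, actual = prices[:-24], prices[-24:]

for name, fc in [("iterated", iterated_forecast(history, 48, 24)),
                 ("direct", direct_forecast(history, 48, 24))]:
    print(f"{name:9s} MAPE={mape(actual, fc):5.2f}%  sMAPE={smape(actual, fc):5.2f}%  "
          f"DoC={doc(actual, fc, history[-1]):.2f}")
```

Running both strategies across several horizons and models, as the paper recommends, shows how the preferred strategy can differ depending on whether point accuracy or directional accuracy matters most.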
Procedia PDF Downloads 4
8 EcoTeka, an Open-Source Software for Urban Ecosystem Restoration through Technology
Authors: Manon Frédout, Laëtitia Bucari, Mathias Aloui, Gaëtan Duhamel, Olivier Rovellotti, Javier Blanco
Abstract:
Ecosystems must be resilient to ensure cleaner air, better water and soil quality, and thus healthier citizens. Technology can be an excellent tool to support urban ecosystem restoration projects, especially when based on Open Source and promoting Open Data. This is the goal of the ecoTeka application: a single digital tool for tree management that allows decision-makers to improve their urban forestry practices, enabling more responsible urban planning and climate change adaptation. EcoTeka provides city councils with three main functionalities tackling three of their challenges: easier biodiversity inventories, better green space management, and more efficient planning. To answer the cities' need for reliable tree inventories, the application was first built with open data coming from the websites OpenStreetMap and OpenTrees, but it will also soon include the possibility of creating new data. To achieve this, a multi-source algorithm will be elaborated, based on the existing artificial intelligence model Deep Forest, integrating open-source satellite images, 3D representations from LiDAR, and street views from Mapillary. This data processing will permit the identification of individual trees' position, height, crown diameter, and taxonomic genus. To support urban forestry management, ecoTeka offers a dashboard for monitoring the city's tree inventory and triggers alerts about upcoming due interventions. This tool was co-constructed with the green space departments of the French cities of Alès, Marseille, and Rouen. The third functionality of the application is a decision-making tool for urban planning, promoting biodiversity and landscape connectivity metrics to drive ecosystem restoration roadmaps. Based on landscape graph theory, we are currently experimenting with new methodological approaches to scale down regional ecological connectivity principles to local biodiversity conservation and urban planning policies. This methodological framework will couple a graph-theoretic approach with biological data, mainly biodiversity occurrence (presence/absence) data available on international (e.g., GBIF), national (e.g., Système d'Information Nature et Paysage) and local (e.g., Atlas de la Biodiversité Communale) biodiversity data-sharing platforms, in order to help inform new decisions for the conservation and restoration of ecological networks in urban areas. An experiment on this subject is currently ongoing with Montpellier Mediterranee Metropole. These projects and studies have shown that only 26% of tree inventory data is currently geo-localized in France - the rest is still kept on paper or in Excel sheets. It seems that technology is not yet used enough to enrich the knowledge city councils have about biodiversity in their city, and that existing biodiversity open data (e.g., occurrences, telemetry, or genetic data), species distribution models, and landscape graph connectivity metrics are still underexploited for making rational decisions in landscape and urban planning projects. This is the goal of ecoTeka: to support easier inventories of urban biodiversity and better management of urban spaces through rational planning and decisions relying on open databases.
Future studies and projects will focus on the development of tools for reducing the artificialization of soils, selecting plant species adapted to climate change, and highlighting the need for ecosystem and biodiversity services in cities.Keywords: digital software, ecological design of urban landscapes, sustainable urban development, urban ecological corridor, urban forestry, urban planning
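As an illustration of the graph-theoretic connectivity reasoning mentioned above, the sketch below computes the Integral Index of Connectivity (IIC) for a toy set of habitat patches with networkx. The patch names, areas, coordinates and dispersal threshold are invented, and this is not ecoTeka's algorithm; it only shows how such a metric can rank restoration options when patches are added or removed.

```python
# Toy Integral Index of Connectivity (IIC) computation, illustrative only.
import itertools
import math
import networkx as nx

# Hypothetical habitat patches: id -> (x in km, y in km, area in ha)
patches = {
    "park_A":    (0.0, 0.0, 12.0),
    "park_B":    (1.2, 0.4, 5.5),
    "riverbank": (2.5, 0.1, 8.0),
    "cemetery":  (0.3, 2.8, 3.0),
    "campus":    (3.9, 1.0, 6.5),
}
DISPERSAL_THRESHOLD_KM = 1.8   # patches closer than this are considered linked

G = nx.Graph()
G.add_nodes_from(patches)
for a, b in itertools.combinations(patches, 2):
    (xa, ya, _), (xb, yb, _) = patches[a], patches[b]
    if math.dist((xa, ya), (xb, yb)) <= DISPERSAL_THRESHOLD_KM:
        G.add_edge(a, b)

landscape_area = 60.0          # total landscape area (ha), hypothetical
paths = dict(nx.all_pairs_shortest_path_length(G))

iic = 0.0
for i in patches:
    for j in patches:
        if j in paths.get(i, {}):          # unreachable pairs contribute nothing
            nl = paths[i][j]               # number of links on the shortest path
            iic += patches[i][2] * patches[j][2] / (1 + nl)
iic /= landscape_area ** 2

print(f"IIC = {iic:.4f}")   # recomputing after adding or removing a patch ranks restoration options
```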
Procedia PDF Downloads 69
7 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning
Authors: Pei Yi Lin
Abstract:
Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium in order to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the mortality rate has been about 2.22% over the past three years. Therefore, this study set out to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This retrospective study used an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding cases with a GCS assessment <4 points, ICU admission for less than 24 hours, or no CAM-ICU evaluation. The CAM-ICU delirium assessment results obtained every 8 hours within 30 days of hospitalization were regarded as events, and the cumulative data from ICU admission to the prediction time point were extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 research case data were collected, and 12 features were selected to train the model, including age, sex, average ICU stay hours, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, restraint, and sedative and hypnotic drugs. Through feature data cleaning, processing and KNN interpolation, a total of 54,595 research case events were extracted for machine learning model analysis. The research events from May 1 to November 30, 2022, were used as the model development data, of which 80% formed the training set for model training and 20% the internal verification set; the ICU research events from December 1 to December 31, 2022, formed the external verification set. Finally, model inference and performance evaluation were performed, and the model was then retrained with adjusted parameters. Results: In this study, four machine learning models, XGBoost, Random Forest, Logistic Regression, and Decision Tree, were analyzed and compared. The average accuracy of internal verification was highest for Random Forest (AUC=0.86), the average accuracy of external verification was highest for Random Forest and XGBoost (AUC=0.86), and the average accuracy of cross-validation was highest for Random Forest (ACC=0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with real-time assessment of ICU patients, so clinical staff are not provided with more objective and continuous monitoring data to help them more accurately identify and predict the occurrence of delirium in patients.
It is hoped that the development and construction of predictive models through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and, in cooperation with PADIS delirium care measures, provide individualized non-drug interventional care measures to maintain patient safety and thereby improve the quality of care.Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model
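For readers who want to see the modelling workflow in code, the sketch below strings together the steps described above: KNN imputation of missing feature values, a temporal split into development and external-validation periods, a random-forest classifier and AUC reporting. The feature names follow the abstract, but the data, thresholds and parameters are synthetic placeholders, not the study's cohort.

```python
# Illustrative delirium-risk classifier pipeline (synthetic data, not the study's cohort).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(68, 12, n),
    "icu_hours": rng.gamma(3, 24, n),
    "rass_score": rng.integers(-5, 2, n).astype(float),
    "apache_ii": rng.normal(20, 6, n),
    "n_catheters": rng.poisson(2, n).astype(float),
    "restraint": rng.integers(0, 2, n).astype(float),
    "sedative": rng.integers(0, 2, n).astype(float),
    "month": rng.integers(5, 13, n),            # May..December admissions
})
# Synthetic outcome loosely tied to a few plausible risk factors.
risk = 0.03 * (df.age - 60) + 0.02 * df.apache_ii + 0.5 * df.sedative + 0.3 * df.restraint
df["delirium"] = (risk + rng.normal(0, 1, n) > 1.5).astype(int)
# Inject missingness to motivate the KNN imputation step.
df.loc[rng.random(n) < 0.1, "apache_ii"] = np.nan

features = ["age", "icu_hours", "rass_score", "apache_ii",
            "n_catheters", "restraint", "sedative"]
train = df[df.month <= 11]                      # May-November: model development
external = df[df.month == 12]                   # December: external validation

model = make_pipeline(KNNImputer(n_neighbors=5),
                      RandomForestClassifier(n_estimators=300, random_state=0))
model.fit(train[features], train.delirium)
auc = roc_auc_score(external.delirium, model.predict_proba(external[features])[:, 1])
print(f"External-validation AUC: {auc:.2f}")
```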
Procedia PDF Downloads 73
6 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA as a low-level GPU programming paradigm allows optimizing performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods without annoying waiting times. Thereby, they enable the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms such as computer vision or machine learning to evolve and optimize specific Lenia configurations. We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, and CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation compared to the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate the ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. 
The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on github and made freely accessible to the Alife community for further development.Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis.
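To make the algorithm being ported concrete, the sketch below is a minimal NumPy reference of one Lenia update step: an FFT convolution with a ring-shaped kernel followed by a smooth growth mapping and clipping. The kernel shape and growth parameters are illustrative defaults, and this CPU code is exactly the kind of baseline the CUDA C++ implementation is meant to outperform, not a substitute for it.

```python
# Minimal CPU reference of one Lenia time step (NumPy); parameters are illustrative.
import numpy as np

N, R, DT = 256, 13, 0.1          # grid size, kernel radius, time step
MU, SIGMA = 0.15, 0.017          # growth function centre and width (Orbium-like values)

# Ring ("bell") kernel, normalized to sum to 1.
y, x = np.ogrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y) / R
kernel = np.exp(-((r - 0.5) / 0.15) ** 2 / 2) * (r < 1)
kernel /= kernel.sum()
kernel_fft = np.fft.fft2(np.fft.fftshift(kernel))   # pre-compute for circular convolution

def growth(u):
    """Smooth growth mapping in [-1, 1], peaking where the neighbourhood sum is near MU."""
    return 2.0 * np.exp(-((u - MU) ** 2) / (2 * SIGMA ** 2)) - 1.0

def step(world):
    """One Lenia update: convolve, apply growth, integrate over DT and clip to [0, 1]."""
    u = np.real(np.fft.ifft2(np.fft.fft2(world) * kernel_fft))
    return np.clip(world + DT * growth(u), 0.0, 1.0)

# Start from a random disk of activation and run a few steps.
mask = np.hypot(*np.ogrid[-N // 2:N // 2, -N // 2:N // 2]) < 20
world = np.random.default_rng(0).random((N, N)) * mask
for _ in range(100):
    world = step(world)
print(f"mean activation after 100 steps: {world.mean():.4f}")
```

On the GPU, the convolution, growth mapping and clipping each map naturally onto CUDA kernels (or cuFFT), and rendering the state through OpenGL interoperability avoids copying the grid back to host memory every frame, which is where the reported order-of-magnitude speed-up comes from.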
Procedia PDF Downloads 40