Search results for: artificial neuronal networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4381

151 Big Data Applications for Transportation Planning

Authors: Antonella Falanga, Armando Cartenì

Abstract:

"Big data" refers to extremely large and complex datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, and efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represents a transformative force reshaping industries worldwide. Its pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data has an impact across multiple sectors, such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment, and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications span vehicle routing optimization, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of overall transportation systems, and also mitigation of pollutant emissions, contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. 
Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data help enhance rational decision-making for mobility choices and are imperative for adeptly planning and allocating investments in transportation infrastructures and services.

Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning

Procedia PDF Downloads 36
150 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition

Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can

Abstract:

To effectively combat climate change, many countries around the world have committed to the decarbonisation of their electricity supply, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for application to realistic-sized power system models. To meet these challenges, there is an increasing need to develop efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method. 
One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of a mixed-integer nature, and therefore solving them requires significant amounts of time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the value of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% for estimating the value of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, the integration of the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models.
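The core idea of this abstract (round the LP-relaxed subproblem solution with a learned classifier, but defer to an exact solve whenever the classifier is uncertain, so that optimality can still be guaranteed) can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the threshold classifier, the margin rule, and the toy data are all invented stand-ins for their trained model and the actual TNEP subproblems.

```python
# Sketch: predict binary variables from LP-relaxed values, deferring
# low-confidence cases to an exact mixed-integer solve.

def train_threshold_classifier(relaxed_values, true_binaries):
    """Learn a cutoff on the LP-relaxed variable value that best
    reproduces the known binary values from exact (MIP) solves."""
    best_t, best_acc = 0.5, -1.0
    for t in [i / 100 for i in range(1, 100)]:
        preds = [1 if v >= t else 0 for v in relaxed_values]
        acc = sum(p == b for p, b in zip(preds, true_binaries)) / len(true_binaries)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

def classify(relaxed_values, threshold, margin=0.15):
    """Round each relaxed value to 0/1; values within `margin` of the
    threshold are returned as None (uncertain), so the exact
    mixed-integer subproblem can be solved for them instead."""
    decisions = []
    for v in relaxed_values:
        if abs(v - threshold) < margin:
            decisions.append(None)          # defer to the exact MIP solve
        else:
            decisions.append(1 if v >= threshold else 0)
    return decisions

# Toy training data: LP-relaxed values and the binaries from exact solves.
relaxed = [0.05, 0.92, 0.40, 0.88, 0.10, 0.75]
exact = [0, 1, 0, 1, 0, 1]
t, acc = train_threshold_classifier(relaxed, exact)
print(classify([0.03, 0.97, 0.55], t))
```

In the paper's setting, the deferred (None) variables would be fixed by solving the mixed-integer subproblem exactly, which is what preserves the optimality guarantee while the confident predictions save solve time.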

Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning

Procedia PDF Downloads 54
149 A Top-down vs a Bottom-up Approach on Lower Extremity Motor Recovery and Balance Following Acute Stroke: A Randomized Clinical Trial

Authors: Vijaya Kumar, Vidayasagar Pagilla, Abraham Joshua, Rakshith Kedambadi, Prasanna Mithra

Abstract:

Background: Post-stroke rehabilitation aims to accelerate optimal sensorimotor recovery and functional gain and to reduce long-term dependency. Intensive physical therapy interventions can enhance this recovery, as experience-dependent neuroplastic changes act either directly at cortical neural networks or at the distal peripheral level (muscular components). Neuromuscular electrical stimulation (NMES), a traditional bottom-up approach, and mirror therapy (MT), a relatively new top-down approach, have been found to be effective adjuvant treatment methods for lower extremity motor and functional recovery in stroke rehabilitation. However, there is a scarcity of evidence comparing their therapeutic gain in stroke recovery. Aim: To compare the efficacy of neuromuscular electrical stimulation (NMES) and mirror therapy (MT) in the very early phase of post-stroke rehabilitation addressed to lower extremity motor recovery and balance. Design: Observer-blinded randomized clinical trial. Setting: Neurorehabilitation Unit, Department of Physical Therapy, Tertiary Care Hospitals. Subjects: 32 acute stroke subjects with a first episode of unilateral stroke with hemiparesis, referred for rehabilitation (onset < 3 weeks), with Brunnstrom lower extremity recovery stage ≥3 and MMSE score more than 24, were randomized into two groups [Group A-NMES and Group B-MT]. Interventions: Both groups received an eclectic approach to remediate lower extremity recovery, which included treatment components of Roods, Bobath and motor learning approaches, for 30 minutes a day for 6 days, following which Group A (N=16) received 30 minutes of surface NMES training for six major paretic muscle groups (gluteus maximus and medius, quadriceps, hamstrings, tibialis anterior and gastrocnemius). Group B (N=16) received 30 minutes of mirror therapy sessions to facilitate lower extremity motor recovery. 
Outcome measures: Lower extremity motor recovery, balance and activities of daily living (ADLs) were measured by the Fugl-Meyer Assessment (FMA-LE), Berg Balance Scale (BBS) and Barthel Index (BI) before and after the intervention. Results: Pre-post analysis across time revealed statistically significant improvement (p < 0.001) in all outcome variables for both groups. The NMES group had greater change scores than the MT group on all parameters, as follows: FMA-LE (25.12±3.01 vs. 23.31±2.38), BBS (35.12±4.61 vs. 34.68±5.42) and BI (40.00±10.32 vs. 37.18±7.73). Between-group comparison of pre-post values showed no significance, with FMA-LE (p=0.09), BBS (p=0.80) and BI (p=0.39), respectively. Conclusion: Though both groups showed significant improvement from pre- to post-intervention, neither was superior to the other in lower extremity motor recovery and balance among acute stroke subjects. We conclude that the eclectic approach is an effective treatment irrespective of whether NMES or MT is used as an adjunct.

Keywords: balance, motor recovery, mirror therapy, neuromuscular electrical stimulation, stroke

Procedia PDF Downloads 257
148 Variations in Spatial Learning and Memory across Natural Populations of Zebrafish, Danio rerio

Authors: Tamal Roy, Anuradha Bhat

Abstract:

Cognitive abilities aid fishes in foraging, avoiding predators, and locating mates. Factors like predation pressure and habitat complexity govern learning and memory in fishes. This study aims to compare spatial learning and memory across four natural populations of zebrafish. Zebrafish, a small cyprinid, inhabits a diverse range of freshwater habitats, which makes it amenable to studies investigating the role of native environment in spatial cognitive abilities. Four populations were collected across India from water bodies with contrasting ecological conditions. Habitat complexity of the water bodies was evaluated as a combination of channel substrate diversity and diversity of vegetation. Experiments were conducted on populations under controlled laboratory conditions. A square-shaped spatial testing arena (maze) was constructed for testing the performance of adult zebrafish. The square tank consisted of an inner square-shaped layer with the edges connected to the diagonal ends of the tank walls by connecting partitions, thereby forming four separate chambers. Each of the four chambers had a main door in the centre. Each chamber had three sections separated by two windows. A removable coloured window-pane (red, yellow, green or blue) identified each main door. A food reward associated with an artificial plant was always placed inside the left-hand section of the red-door chamber. The position of the food reward and plant within the red-door chamber was fixed. A test fish would have to explore the maze by taking turns and locate the food inside the left-hand section of the red-door chamber. Fishes were sorted from each population stock and kept individually in separate containers for identification. One at a time, a test fish was released into the arena and allowed 20 minutes to explore in order to find the food reward. In this way, individual fishes were trained through the maze to locate the food reward for eight consecutive days. 
The position of the red door, with the plant and the reward, was shuffled every day. Following training, an intermission of four days was given, during which the fishes were not subjected to trials. Post-intermission, the fishes were re-tested on the 13th day, following the same protocol, for their ability to remember the learnt task. Exploratory tendencies and latency of individuals to explore on the 1st day of training, performance time across trials, and the number of mistakes made each day were recorded. Additionally, the mechanism used by individuals to solve the maze each day was analyzed across populations. Fishes could be expected to use an algorithm (a sequence of turns) or associative cues in locating the food reward. Individuals of the populations did not differ significantly in latencies and tendencies to explore. No relationship was found between exploration and learning across populations. High habitat-complexity populations had higher rates of learning and stronger memory, while low habitat-complexity populations had lower rates of learning and much reduced abilities to remember. High habitat-complexity populations used associative cues more than the algorithm for learning and remembering, while low habitat-complexity populations used both equally. The study, therefore, helped understand the role of natural ecology in explaining variations in spatial learning abilities across populations.

Keywords: algorithm, associative cue, habitat complexity, population, spatial learning

Procedia PDF Downloads 270
147 Understanding the Impact of Out-of-Sequence Thrust Dynamics on Earthquake Mitigation: Implications for Hazard Assessment and Disaster Planning

Authors: Rajkumar Ghosh

Abstract:

Earthquakes pose significant risks to human life and infrastructure, highlighting the importance of effective earthquake mitigation strategies. Traditional earthquake modelling and mitigation efforts have largely focused on the primary fault segments and their slip behaviour. However, earthquakes can exhibit complex rupture dynamics, including out-of-sequence thrust (OOST) events, which occur on secondary or subsidiary faults. This abstract examines the impact of OOST dynamics on earthquake mitigation strategies and their implications for hazard assessment and disaster planning. OOST events challenge conventional seismic hazard assessments by introducing additional fault segments and potential rupture scenarios that were previously unrecognized or underestimated. Consequently, these events may increase the overall seismic hazard in affected regions. The study reviews recent case studies and research findings that illustrate the occurrence and characteristics of OOST events. It explores the factors contributing to OOST dynamics, such as stress interactions between fault segments, fault geometry, and mechanical properties of fault materials. Moreover, it investigates the potential triggers and precursory signals associated with OOST events to enhance early warning systems and emergency response preparedness. The abstract also highlights the significance of incorporating OOST dynamics into seismic hazard assessment methodologies. It discusses the challenges associated with accurately modelling OOST events, including the need for improved understanding of fault interactions, stress transfer mechanisms, and rupture propagation patterns. Additionally, the abstract explores the potential for advanced geophysical techniques, such as high-resolution imaging and seismic monitoring networks, to detect and characterize OOST events. Furthermore, the abstract emphasizes the practical implications of OOST dynamics for earthquake mitigation strategies and urban planning. 
It addresses the need for revising building codes, land-use regulations, and infrastructure designs to account for the increased seismic hazard associated with OOST events. It also underscores the importance of public awareness campaigns to educate communities about the potential risks and safety measures specific to OOST-induced earthquakes. This study sheds light on the impact of out-of-sequence thrust dynamics on earthquake mitigation. By recognizing and understanding OOST events, researchers, engineers, and policymakers can improve hazard assessment methodologies, enhance early warning systems, and implement effective mitigation measures. By integrating knowledge of OOST dynamics into urban planning and infrastructure development, societies can strive for greater resilience in the face of earthquakes, ultimately minimizing the potential for loss of life and infrastructure damage.

Keywords: earthquake mitigation, out-of-sequence thrust, seismic, satellite imagery

Procedia PDF Downloads 61
146 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem to support safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context in human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: The contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear, racist, etc. 
words), and thus context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations both of the data they are trained on (the same problems stated above) and of the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements as compared to the baselines that are non-contextual or contextual but agnostic of the conversation structure.
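To make the overt/covert distinction above concrete, here is a toy, purely illustrative scorer (not the authors' model, which uses learned neural encoders over the conversation hierarchy): it shows why a tweet containing only covert terms can receive different toxicity judgements depending on the conversational thread. Both lexicons are hypothetical stand-ins.

```python
# Toy context-dependent toxicity scorer: overt terms decide alone,
# covert (ambiguous) terms are resolved using the preceding thread.

AMBIGUOUS = {"trash", "dead", "burn"}          # hypothetical covert terms
HOSTILE_CONTEXT = {"hate", "idiot", "stupid"}  # hypothetical hostility cues

def toxicity(target, context_thread):
    """Score a target tweet given the preceding tweets in its thread."""
    words = set(target.lower().split())
    overt = {"****"} & words        # placeholder for an overt-slur lexicon
    if overt:
        return 1.0                  # overt toxicity: context not needed
    covert = AMBIGUOUS & words
    if not covert:
        return 0.0
    # Covert terms: decide using the conversational context (the hierarchy).
    ctx_words = set(" ".join(context_thread).lower().split())
    return 1.0 if ctx_words & HOSTILE_CONTEXT else 0.2

same_tweet = "that take is trash"
print(toxicity(same_tweet, ["you are such an idiot"]))          # hostile thread
print(toxicity(same_tweet, ["what do you think of my essay?"])) # benign thread
```

The same target string flips between a high and a low score purely because of the thread, which is the data property the proposed augmentation is designed to create at scale.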

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 142
145 Exploring Accessible Filmmaking and Video for Deafblind Audiences through Multisensory Participatory Design

Authors: Aikaterini Tavoulari, Mike Richardson

Abstract:

Objective: This abstract presents a multisensory participatory design project, inspired by a deafblind PhD student's ambition to climb Mount Everest. The project aims to explore accessible routes for filmmaking and video content creation, catering to the needs of individuals with hearing and sight loss. By engaging participants from the Southwest area of England, recruited through multiple networks, the project seeks to gather qualitative data and insights to inform the development of inclusive media practices. Design: A community-based participatory research design will be used. The workshop will feature various stations that stimulate different senses, such as scent, touch, sight and hearing, as well as movement. Participants will have the opportunity to engage with these multisensory experiences, providing valuable feedback on their effectiveness and potential for enhancing accessibility in filmmaking and video content. Methods: Brief semi-structured interviews will be conducted to collect qualitative data, allowing participants to share their perspectives, challenges, and suggestions for improvement. The participatory design approach emphasizes the importance of involving the target audience in the creative process. By actively engaging individuals with hearing and sight loss, the project aims to ensure that their needs and preferences are central to the development of accessible filmmaking techniques and video content. This collaborative effort seeks to bridge the gap between content creators and diverse audiences, fostering a more inclusive media landscape. Results: The findings from this study will contribute to the growing body of research on accessible filmmaking and video content creation. Via inductive thematic analysis of the qualitative data collected through interviews and observations, the researchers aim to identify key themes, challenges, and opportunities for creating engaging and inclusive media experiences for deafblind audiences. 
The insights will inform the development of best practices and guidelines for accessible filmmaking, empowering content creators to produce more inclusive and immersive video content. Conclusion: The abstract targets the hybrid International Conference for Disability and Diversity in Canada (January 2025), as this platform provides an excellent opportunity to share the outcomes of the project with a global audience of researchers, practitioners, and advocates working towards inclusivity and accessibility in various disability domains. By presenting this research at the conference in person, the authors aim to contribute to the ongoing discourse on disability and diversity, highlighting the importance of multisensory experiences and participatory design in creating accessible media content for the deafblind community and the community with sensory impairments more broadly.

Keywords: vision impairment, hearing impairment, deafblindness, accessibility, filmmaking

Procedia PDF Downloads 22
144 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. 
To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements which can be mitigated. Geospatial data from FireWatch’s defensible space maps was combined with Black Swan’s patented approach using 39 other risk characteristics into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: Fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
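As an illustration of the kind of spectral vegetation index map mentioned above, the sketch below computes NDVI, a standard index derived from near-infrared and red reflectance, and applies a simple greenness threshold to label pixels as healthy vegetation or cured (drier, more flammable) fuel. The band values and the threshold are illustrative only; FireWatch's proprietary classification algorithms are not public.

```python
# NDVI-based fuel labeling sketch over per-pixel NIR and red reflectance.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

def classify_fuel(nir_band, red_band, green_threshold=0.4):
    """Label each pixel as healthy vegetation or cured (drier, more
    flammable) fuel based on its NDVI against an assumed threshold."""
    out = []
    for nir, red in zip(nir_band, red_band):
        v = ndvi(nir, red)
        out.append("healthy" if v >= green_threshold else "cured")
    return out

# Three toy pixels: dense green vegetation, stressed vegetation, bare soil.
print(classify_fuel([0.8, 0.5, 0.3], [0.1, 0.4, 0.3]))
```

In practice these labels would be computed per pixel of the georeferenced orthomosaic and then aggregated per parcel, e.g., by proximity of cured fuel to structures.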

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 83
143 Internet of Things, Edge and Cloud Computing in Rock Mechanical Investigation for Underground Surveys

Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo

Abstract:

Rock mechanical investigation is one of the most crucial activities in underground operations, especially in surveys related to hydrocarbon exploration and production, geothermal reservoirs, energy storage, mining, and geotechnics. There is a wide range of traditional methods for deriving, collecting, and analyzing rock mechanics data. However, these approaches may not be suitable or work perfectly in some situations, such as fractured zones. Cutting-edge technologies have been introduced to address and optimize these issues. Internet of Things (IoT), Edge, and Cloud Computing technologies (ECt and CCt, respectively) are among the newest and most widely used artificial intelligence methods employed for geomechanical studies. IoT devices act as sensors and cameras for real-time monitoring and mechanical-geological data collection of rocks, such as temperature, movement, pressure, or stress levels. Other benefits of IoT technologies include assessment of structural integrity, especially for cap rocks within hydrocarbon systems, and of rock mass behavior, supporting further activities such as enhanced oil recovery (EOR) and underground gas storage (UGS), or improving safety risk management (SRM) and potential hazard identification (PHI). EC techniques can process, aggregate, and analyze data immediately as it is collected by IoT on a real-time scale, providing detailed insights into the behavior of rocks in various situations (e.g., stress, temperature, and pressure), establishing patterns quickly, and detecting trends. Therefore, this state-of-the-art and useful technology can support autonomous systems in rock mechanical surveys, such as drilling and production (in hydrocarbon wells) or excavation (in the mining and geotechnics industries). In addition, ECt allows all rock-related operations to be controlled remotely and enables operators to apply changes or make adjustments. It must be mentioned that this feature is very important for environmental goals. 
More often than not, rock mechanical studies consist of different data, such as laboratory tests, field operations, and indirect information like seismic or well-logging data. CCt provides a useful platform for storing and managing a great volume and variety of information, which can be very useful in fractured zones. Additionally, CCt supplies powerful tools for predicting, modeling, and simulating rock mechanical information, especially in fractured zones within vast areas. It is also a suitable source for sharing extensive information on rock mechanics, such as the direction and size of fractures in a large oil field or mine. The comprehensive review findings demonstrate that digital transformation through integrated IoT, Edge, and Cloud solutions is revolutionizing traditional rock mechanical investigation. These advanced technologies have empowered real-time monitoring, predictive analysis, and data-driven decision-making, culminating in noteworthy enhancements in safety, efficiency, and sustainability. Therefore, by employing IoT, CCt, and ECt, underground operations have experienced a significant boost, allowing for timely and informed actions using real-time data insights. The successful implementation of IoT, CCt, and ECt has led to safer operations, optimized processes, and environmentally conscious approaches in underground geological endeavors.
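A minimal sketch of the edge-computing pattern this abstract describes: an edge node processes raw IoT readings locally and forwards only an alert or summary to the cloud when a trend crosses a threshold. The sensor values, window size, and stress threshold below are hypothetical, chosen purely to illustrate the pattern.

```python
# Edge aggregation sketch: local rolling-window analysis of IoT readings,
# with only threshold-crossing events surfaced for the cloud tier.

from collections import deque

class EdgeAggregator:
    def __init__(self, window=5, alert_threshold=80.0):
        self.readings = deque(maxlen=window)   # rolling window of stress (MPa)
        self.alert_threshold = alert_threshold

    def ingest(self, stress_mpa):
        """Process one reading at the edge; return an alert record only
        when the windowed mean exceeds the threshold, else None."""
        self.readings.append(stress_mpa)
        mean = sum(self.readings) / len(self.readings)
        if mean > self.alert_threshold:
            return {"alert": "high-stress", "window_mean": round(mean, 1)}
        return None   # nothing forwarded to the cloud

node = EdgeAggregator()
event = None
for r in [60, 65, 70, 90, 120]:   # rising stress trend from a hypothetical sensor
    event = node.ingest(r)
print(event)
```

Keeping the rolling statistics at the edge is what lets such a system react on a real-time scale (and control operations remotely) without shipping every raw sample to cloud storage first.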

Keywords: rock mechanical studies, internet of things, edge computing, cloud computing, underground surveys, geological operations

Procedia PDF Downloads 34
142 Clinical Staff Perceptions of the Quality of End-of-Life Care in an Acute Private Hospital: A Mixed Methods Design

Authors: Rosemary Saunders, Courtney Glass, Karla Seaman, Karen Gullick, Julie Andrew, Anne Wilkinson, Ashwini Davray

Abstract:

Current literature demonstrates that most Australians receive end-of-life care in a hospital setting, despite most hoping to die in their own homes. The necessity for high-quality end-of-life care has been emphasised by the Australian Commission on Safety and Quality in Health Care, and the National Safety and Quality in Health Services Standards set out the requirement for comprehensive care at the end of life (Action 5.20), reinforcing the obligation for continual organisational assessment to determine whether these standards are suitably achieved. Limited research exploring clinical staff perspectives of end-of-life care delivery has been conducted within an Australian private health context. This study aimed to investigate clinical staff members' perceptions of end-of-life care delivery at a private hospital in Western Australia. The study comprised a multi-faceted mixed-methods design and was part of a larger study. Data were obtained from clinical staff utilising surveys and focus groups. A total of 133 questionnaires were completed by clinical staff, including registered nurses (61.4%), enrolled nurses (22.7%), allied health professionals (9.9%), non-palliative care consultants (3.8%) and junior doctors (2.2%). A total of 14.7% of respondents were palliative care ward staff members. Additionally, seven staff focus groups were conducted with physicians (n=3), nurses (n=26) and allied health professionals, including social workers (n=1), dietitians (n=2), physiotherapists (n=5) and speech pathologists (n=3). Key findings from the surveys highlighted that the majority of staff agreed it was part of their role to talk to doctors about the care of patients who they thought may be dying, and recognised the importance of communication, appropriate training and support for clinical staff to provide quality end-of-life care. 
Thematic analysis of the qualitative data generated three key themes: creating the setting, which highlighted the importance of adequate resourcing and conducive physical environments for end-of-life care and for supporting staff and families; planning and care delivery, which emphasised the necessity for collaboration between staff, families and patients to develop care plans and treatment directives; and collaborating in end-of-life care, with effective communication and teamwork leading to achievable care delivery expectations. These findings contribute to health professionals' better understanding of end-of-life care provision and the importance of collaborating with patients and families in care delivery. It is crucial that health care providers implement strategies to overcome gaps in care so that quality end-of-life care is provided. Findings from this study have been translated into practice, with the development and implementation of resources, training opportunities, support networks and guidelines for the delivery of quality end-of-life care.

Keywords: clinical staff, end-of-life care, mixed-methods, private hospital.

Procedia PDF Downloads 121
141 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model

Authors: M. Reza Hashemi, Chris Small, Scott Hayward

Abstract:

The Northeast Coast of the US faces damaging effects of coastal flooding and winds from Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial damage to the region, most notable of which were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and more recently Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating WAves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small, vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, the effect of natural dunes on coastal risk was investigated.
The resulting damage maps for Charlestown clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter, Stella (March 2017). The results showed good performance of the coupled model in forecast mode when compared to observations. Finally, the nearshore model XBeach was nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach on the basis of a unique beach profile dataset for the region. XBeach showed relatively good performance, estimating eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods recommended in a recent study of coastal erosion in New England: beach nourishment, a coastal bank (engineered core), a submerged breakwater, and an artificial surfing reef. It was shown that beach nourishment and coastal banks perform best in mitigating shoreline retreat and coastal erosion.
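The abstract describes generating per-structure damage estimates from modeled inundation levels. A minimal sketch of that kind of step is shown below, using a linear depth-damage curve; the curve shape, structure values, and first-floor elevations are illustrative assumptions, not the study's actual data or code.

```python
# Hypothetical per-structure damage estimation: a depth-damage curve maps
# inundation depth above a structure's first-floor elevation to a fractional
# damage estimate; all numbers here are illustrative assumptions.

def damage_fraction(inundation_m, first_floor_m, max_depth_m=3.0):
    """Fraction of structure value lost, via a linear depth-damage curve."""
    depth = inundation_m - first_floor_m   # water depth above first floor
    if depth <= 0:
        return 0.0
    return min(depth / max_depth_m, 1.0)   # saturates at total loss

structures = [
    {"id": "A", "value": 250_000, "first_floor_m": 1.2},
    {"id": "B", "value": 180_000, "first_floor_m": 0.5},
]
surge_level_m = 2.0  # modeled inundation level (e.g., from a surge model)

for s in structures:
    frac = damage_fraction(surge_level_m, s["first_floor_m"])
    print(s["id"], round(frac * s["value"], 2))
```

In practice such tools replace the linear curve with empirically fitted fragility or depth-damage functions per structure class, and add wave and wind damage terms as the abstract notes.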

Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines

Procedia PDF Downloads 85
140 Change of Education Business in the Age of 5G

Authors: Heikki Ruohomaa, Vesa Salminen

Abstract:

Regions face intense competition to attract companies, businesses, inhabitants, students, etc., and must therefore improve their living and business environments, which are changing rapidly due to digitalization. From industry's point of view, on the other hand, the availability of a skilled labor force and an innovative environment are crucial factors. In this context, qualified staff are needed to utilize the opportunities of digitalization and respond to future skills needs. The World Manufacturing Forum stated in its 2019 report that, within the next five years, 40% of workers will have to change their core competencies. Through digital transformation, with new technologies like cloud, mobile, big data, 5G infrastructure, platform technology, data analysis, and social networks with increasing intelligence and automation, enterprises can capitalize on new opportunities and optimize existing operations to achieve significant business improvement. Digitalization will be an important part of the everyday life of citizens and present in the working day of the average citizen and employee in the future. For that reason, the education system and education programs at all levels, from diaper age to doctorate, have been directed to fulfill this ecosystem strategy. Goal: The Fourth Industrial Revolution will bring unprecedented change to societies, education organizations, and business environments. This article aims to identify how education, education content, the way education proceeds, and the education business as a whole are changing. Most important is how we should respond to this inevitable co-evolution. Methodology: The study aims to verify how the learning process is boosted by new digital content, new learning software and tools, and customer-oriented learning environments. The change of education programs and individual education modules can be supported by applied research projects.
These projects can be used to create proofs of concept for new technology and new ways to teach and train, and, through the experiences gathered, to change education content, ways of educating, and finally the education business as a whole. Major findings: Applied research projects can run proof-of-concept phases in real-environment field labs to test technology opportunities and new tools for training purposes. Customer-oriented applied research projects are also excellent opportunities for students to complete assignments using new knowledge and content, and for teachers to test new tools and create new ways to educate. New content and problem-based learning are used in future education modules. This article introduces some case study experiences of customer-oriented digital transformation projects and how the knowledge gathered on new digital content and new ways to educate has influenced education. The case study relates to experiences of research projects, customer-oriented field labs/learning environments, and education programs of Häme University of Applied Sciences.

Keywords: education process, digitalization content, digital tools for education, learning environments, transdisciplinary co-operation

Procedia PDF Downloads 152
139 Evaluation of the Performance Measures of Two-Lane Roundabout and Turbo Roundabout with Varying Truck Percentages

Authors: Evangelos Kaisar, Anika Tabassum, Taraneh Ardalan, Majed Al-Ghandour

Abstract:

The economy of any country depends on its ability to accommodate the movement and delivery of goods. The demand for goods movement and services increases truck traffic on highways and inside cities. The livability of most cities is directly affected by the congestion and environmental impacts of trucks, which are the backbone of the urban freight system. Better operation of heavy vehicles on highways and arterials could improve the network's efficiency and reliability. In many cases, roundabouts can respond better than at-grade intersections, enabling traffic operations with increased safety for both cars and heavy vehicles. The recently emerged concept of the turbo-roundabout is a viable alternative to the two-lane roundabout, aiming to improve traffic efficiency. The primary objective of this study is to evaluate the operation and performance level of an at-grade intersection, a conventional two-lane roundabout, and a basic turbo-roundabout for freight movements. To analyze and evaluate the performance of the signalized intersections and the roundabouts, microsimulation models were developed in PTV VISSIM. The networks chosen for this study were used to experiment with and evaluate changes in vehicle movement performance under different geometric and flow scenarios. Several scenarios were examined to assess the impacts of various geometric designs on vehicle movements. The overall traffic efficiency depends on the geometric layout of the intersections, together with the traffic congestion rate, hourly volume, frequency of heavy vehicles, type of road, and the ratio of major-street to side-street traffic. Traffic performance was determined by evaluating the delay time, number of stops, and queue length of each intersection for varying truck percentages.
The results indicate that turbo-roundabouts can replace signalized intersections and two-lane roundabouts only when traffic demand is low, even with high truck volume. More specifically, two-lane roundabouts are seen to have shorter queue lengths than signalized intersections and turbo-roundabouts. For instance, in the scenario where the volume is highest and the truck and left-turn movements are at their maximum, the signalized intersection has 3 times, and the turbo-roundabout 5 times, the queue length of a two-lane roundabout on major roads. Similarly, on minor roads, signalized intersections and turbo-roundabouts have 11 times longer queue lengths than two-lane roundabouts for the same scenario. Across all developed scenarios, as traffic demand decreases, the queue lengths of turbo-roundabouts shorten, showing that turbo-roundabouts perform well for low and medium traffic demand. Finally, this study provides recommendations on the conditions under which each intersection type performs better than the others.
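The queue-length comparisons above can be summarized as ratios relative to the two-lane roundabout baseline. The snippet below sketches that post-processing step; the queue lengths are hypothetical values chosen only to mirror the kind of comparison reported, not the study's simulation outputs.

```python
# Illustrative post-processing of simulation outputs: given average queue
# lengths per intersection type (hypothetical numbers), compute ratios
# relative to the two-lane roundabout baseline.

major_road_queues_m = {
    "two_lane_roundabout": 20.0,
    "signalized": 60.0,         # ~3x the roundabout queue
    "turbo_roundabout": 100.0,  # ~5x the roundabout queue
}

baseline = major_road_queues_m["two_lane_roundabout"]
ratios = {k: v / baseline for k, v in major_road_queues_m.items()}
for name, r in sorted(ratios.items(), key=lambda kv: kv[1]):
    print(f"{name}: {r:.1f}x")
```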

Keywords: at-grade intersection, simulation, turbo-roundabout, two-lane roundabout

Procedia PDF Downloads 120
138 A Case Study of a Rehabilitated Child by Joint Efforts of Parents and Community

Authors: Fouzia Arif, Arif S. Mohammad, Hifsa Altaf, Lubna Raees

Abstract:

Introduction: The term "disability" refers to any condition that impedes the completion of daily tasks using traditional methods. In developing countries like Pakistan, the disabled population is usually excluded from the mainstream, and in squatter settlements the situation is even more critical. Sultanabad is one of the squatter settlements of Karachi. The purpose of this case study is to improve the health of disabled children and create awareness among parents and the community. Through a household visit, Shiraz, a disabled boy of 15.5 years, was identified. His mother related that her son had been living normally and happily with his parents two years earlier. When he was 13 years old and a student of class 8, both his legs were injured in a railway accident while he was playing cricket, and he sustained severe fractures of both femoral shafts. He was taken to Jinnah Post Graduate Medical Centre (JPMC), where his left leg was amputated above the knee and his right leg underwent open reduction and internal fixation; fortunately, the bone healed moderately well with the passage of time. Methods: A survey was conducted in two sectors of Sultanabad, a squatter settlement of Karachi. A disability screening questionnaire was developed, and through collaboration with the community via household visits and outreach sessions, 23 cases of disability were identified. The children were socialized through sports, and a musical program and get-together were organized with stakeholders to create awareness among the community and parents. Collaboration was established with different NGOs, government bodies, stakeholders, and the community to support the establishment of a physiotherapy center. During a home visit, it was identified that Shiraz had been bed-bound for the past year, as his family could not afford the cost of a physiotherapist and medical consultation due to poverty. The parents were counseled that Shiraz needed treatment, and after motivation they agreed.
He was seen by an orthopedic surgeon at AKUH, who referred him to DMC University of Health Science for rehabilitation services. There he was assessed and referred to the Community Based Physiotherapy Centre, Sultanabad. A physiotherapist visited his home along with the Coordinator for Special Children and assessed him regularly, planning physiotherapy treatment that included abdominal and hip muscle strengthening exercises, foot muscle strengthening exercises, knee mobilization, and weight bearing progressing gradually from partial to full; strengthening exercises were also given for the residual limb, as the boy was dependent on it. He was also provided with an artificial leg and trained in its use. Result: Shiraz is now fully mobile; he can walk independently, even outside the home, his functional ability has improved, and his dependency has reduced. It was difficult but not impossible. We all have sympathy, but if we have empathy, then we can rehabilitate the community in a better way. His parents are very happy, and the community is surprised to see him in such improved condition. Conclusion: The combined efforts of the physiotherapist, the Coordinator for Special Children, the community, and the parents made a drastic change in Shiraz's case by continuously motivating him toward a better outcome. He now goes to school regularly without support. Since he belongs to a poor family, he faces financial constraints for education and regular clinical follow-ups.

Keywords: femoral shaft fracture, trauma, orthopedic surgeon, physiotherapy treatment

Procedia PDF Downloads 217
137 The Lived Experiences and Coping Strategies of Women with Attention Deficit and Hyperactivity Disorder (ADHD)

Authors: Oli Sophie Meredith, Jacquelyn Osborne, Sarah Verdon, Jane Frawley

Abstract:

PROJECT OVERVIEW AND BACKGROUND: Over one million Australians are affected by ADHD, at an economic and social cost of over $20 billion per annum. Despite their health outcomes being significantly worse than men's, women have historically been overlooked in ADHD diagnosis and treatment. While research suggests physical activity and other non-prescription options can help with ADHD symptoms, the frontline response to ADHD remains expensive stimulant medications that can have adverse side effects. By interviewing women with ADHD, this research will examine women's self-directed approaches to managing symptoms, including alternatives to prescription medications. It will investigate barriers and affordances to potentially helpful approaches and identify any concerning strategies pursued in lieu of diagnosis. SIGNIFICANCE AND INNOVATION: Despite the economic and societal impact of ADHD on women, research investigating how women manage their symptoms is scant. This project is significant because, although women's ADHD symptoms are markedly different from those of men, mainstream treatment has been based on the experiences of men. Further, it is thought that in developing nuanced coping strategies, women may have masked their symptoms. Thus, this project will highlight strategies which women deem effective for 'thriving' rather than just 'hiding'. By investigating the health service use, self-care, and physical activity of women with ADHD, this research aligns with priority research areas identified by the November 2023 Senate ADHD inquiry report. APPROACH AND METHODS: Semi-structured interviews will be conducted with up to 20 women with ADHD. Interviews will be conducted in person and online to capture experiences across rural and metropolitan Australia. Participants will be recruited in partnership with the peak representative body, ADHD Australia. The research will use an intersectional framework, and data will be analysed thematically.
This project is led by an interdisciplinary and cross-institutional team of women with ADHD. Reflexive interviewing skills will be employed to help interviewees feel more comfortable disclosing their experiences, especially where they share common ground. ENGAGEMENT, IMPACT AND BENEFIT: This research will benefit women with ADHD by increasing knowledge of strategies and alternative treatments to prescription medications, reducing the social and economic burden of ADHD on Australia and on individuals. It will also benefit women by identifying risks involved in some self-directed approaches taken in lieu of medical advice. The project has an accessible impact plan to directly benefit end-users, which includes the development of a podcast and a PDF resource translating findings. The resources will reach a wide audience through ADHD Australia's extensive national networks. We will collaborate with Charles Sturt's Accessibility and Inclusion Division of Safety, Security and Well-being to create a targeted resource for students with ADHD.

Keywords: ADHD, women's health, self-directed strategies, health service use, physical activity, public health

Procedia PDF Downloads 47
136 [Keynote Talk]: New Generations and Employment: An Exploratory Study about Tensions between the Psycho-Social Characteristics of the Generation Z and Expectations and Actions of Organizational Structures Related with Employment (CABA, 2016)

Authors: Esteban Maioli

Abstract:

Generational studies have an important research tradition in the social and human sciences. On the one hand, the speed of social change in the context of globalization imposes the need to research the transformations identified both in the subjectivity of the agents involved and in their inclusion in the institutional matrix, specifically employment. Generation Z (generally considered the population group born after 1995) has unique psycho-social characteristics. Gen Z is characterized by a different set of values, beliefs, attitudes, and ambitions that shape its members' concrete action in organizational structures. On the other hand, managers often have to deal with generational differences in the workplace. Organizations have members who belong to different generations and have never before faced the challenge of managing such a diverse group of members. The members of each historical generation are characterized by a different set of values, beliefs, attitudes, and ambitions that are manifest in their concrete action in organizational structures. Gen Z is the only generation that can fully be considered "global," since its members were born in the consolidated context of globalization. Some salient features of Generation Z can be summarized as follows. They are the first generation born fully into a digital world: social networks and technology are integrated into their lives. They are concerned about the challenges of the modern world (poverty, inequality, climate change, among others). They are self-expressive, more liberal, and open to change. They often get bored easily, have short attention spans, and dislike routine tasks. They want to achieve a good work-life balance and are interested in a flexible work environment, as opposed to a traditional work schedule. They are critical thinkers who come up with innovative and creative ideas. The research design considered methodological triangulation.
Data were collected with two techniques: a self-administered survey with multiple-choice questions and attitudinal scales, applied over a non-probabilistic sample selected by reasoned decision. In line with the multi-method strategy, in-depth interviews were also conducted. Organizations constantly face new challenges, and one of the biggest is learning to manage a multi-generational workforce. While Gen Z has not yet been fully incorporated (it is expected to be within five years or so), many organizations have already begun to implement a series of changes in their recruitment and development. The main obstacle to retaining young talent is the gap between the expectations of iGen applicants and what companies offer. Members of the iGen expect not only a good salary and job stability but also a clear career plan. Generation Z needs immediate feedback on their tasks. However, many organizations have yet to improve both motivation and monitoring practices. It is essential for companies to review organizational practices anchored in the culture of the organization.

Keywords: employment, expectations, generation Z, organizational culture, organizations, psycho-social characteristics

Procedia PDF Downloads 186
135 Bioinformatic Prediction of Hub Genes by Analysis of Signaling Pathways, Transcriptional Regulatory Networks and DNA Methylation Pattern in Colon Cancer

Authors: Ankan Roy, Niharika, Samir Kumar Patra

Abstract:

An anomalous nexus of complex topological assemblies and spatiotemporal epigenetic choreography at the chromosomal territory may form the most sophisticated regulatory layer of gene expression in cancer. Colon cancer is one of the leading malignant neoplasms of the lower gastrointestinal tract worldwide, yet there is still a paucity of information about the complex molecular mechanisms of colonic cancerogenesis. Bioinformatic prediction and analysis help to identify essential genes and significant pathways for monitoring and conquering this deadly disease. The present study investigates and explores potential hub genes as biomarkers and effective therapeutic targets for colon cancer treatment. Colon cancer patient gene expression profile datasets, namely GSE44076, GSE20916, and GSE37364, were downloaded from the Gene Expression Omnibus (GEO) database and thoroughly screened using the GEO2R tool and Funrich software to find common differentially expressed genes (DEGs). Other approaches, including Gene Ontology (GO) and KEGG pathway analysis, protein-protein interaction (PPI) network construction and hub gene investigation, overall survival (OS) analysis, gene correlation analysis, methylation pattern analysis, and hub gene-transcription factor regulatory network construction, were performed and validated using various bioinformatics tools. Initially, we identified 166 DEGs, including 68 up-regulated and 98 down-regulated genes. Up-regulated genes are mainly associated with cytokine-cytokine receptor interaction, the IL17 signaling pathway, ECM-receptor interaction, focal adhesion, and the PI3K-Akt pathway. Down-regulated genes are enriched in metabolic pathways, retinol metabolism, steroid hormone biosynthesis, and bile secretion. From the protein-protein interaction network, thirty hub genes with high connectivity were selected using the MCODE and cytoHubba plugins.
Survival analysis, expression validation, correlation analysis, and methylation pattern analysis were further verified using TCGA data. Finally, we predicted COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as potential master regulators in colonic cancerogenesis. Moreover, our experimental data highlight that disruption of lipid rafts and the RAS/MAPK signaling cascade affects this gene hub at the mRNA level. We identified COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as determinant hub genes in colon cancer progression. They can be considered biomarkers for diagnosis and promising therapeutic targets in colon cancer treatment. Additionally, our experimental data suggest that signaling pathways act as a connecting link between the membrane hub and the gene hub.
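The DEG-screening step described above (shared genes flagged as up- or down-regulated across datasets) typically reduces to thresholding on log2 fold change and adjusted p-value. The sketch below illustrates that logic; the cutoffs (|log2FC| ≥ 1, adjusted p < 0.05) are common defaults assumed here, and the study itself used GEO2R and Funrich rather than this code.

```python
# Hypothetical DEG classification by log2 fold change and adjusted p-value.
# Thresholds are assumed common defaults, not the study's stated cutoffs.

def classify_deg(log2fc, p_adj, fc_cut=1.0, p_cut=0.05):
    """Label a gene as 'up', 'down', or 'not_significant'."""
    if p_adj >= p_cut or abs(log2fc) < fc_cut:
        return "not_significant"
    return "up" if log2fc > 0 else "down"

# Toy (log2FC, adjusted p) values for illustration only
genes = {"COL1A1": (2.3, 0.001), "SPP1": (3.1, 0.0004),
         "CYP2B6": (-1.8, 0.01), "ACTB": (0.2, 0.6)}
for name, (fc, p) in genes.items():
    print(name, classify_deg(fc, p))
```

Genes passing the filter in every dataset would then feed into GO/KEGG enrichment and PPI network construction, as the abstract outlines.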

Keywords: hub genes, colon cancer, DNA methylation, epigenetic engineering, bioinformatic predictions

Procedia PDF Downloads 103
134 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Through this interaction, the cells work in a coordinated and collaborative way that facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a wide spectrum from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools, from mathematical to computational, have been developed. The study of cellular and molecular processes in cancer has also found valuable support in different simulation tools that, covering the spectrum mentioned above, have allowed the in silico experimentation of this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way.
The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB, and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work, we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way proposed key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role the Wnt/β-catenin signaling pathway plays in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells surrounding a cancerous cell from being transformed.
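The abstract names Gillespie's algorithm as one of Cellulat's two key elements. Below is a minimal, self-contained sketch of the direct-method stochastic simulation algorithm applied to a toy reversible ligand-receptor binding system; the reaction network and rate constants are chosen for illustration only and are not Cellulat's actual model.

```python
# Direct-method Gillespie SSA on a toy system; the reactions and rates are
# illustrative assumptions, not taken from the Cellulat tool itself.
import math
import random

def gillespie(x, rates, stoich, t_end, seed=1):
    """x: species counts; rates: propensity functions of the state;
    stoich: state-change vectors; returns trajectory [(t, state), ...]."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, tuple(x))]
    while t < t_end:
        props = [r(x) for r in rates]
        a0 = sum(props)
        if a0 == 0:          # no reaction can fire; system is exhausted
            break
        # exponential waiting time to the next reaction event
        t += -math.log(1.0 - rng.random()) / a0
        # choose which reaction fires, proportionally to its propensity
        u, cum, j = rng.random() * a0, 0.0, 0
        for j, a in enumerate(props):
            cum += a
            if u < cum:
                break
        x = [xi + d for xi, d in zip(x, stoich[j])]
        traj.append((t, tuple(x)))
    return traj

# Toy system: L + R -> LR (k1), LR -> L + R (k2); species order [L, R, LR]
k1, k2 = 0.01, 0.1
rates = [lambda x: k1 * x[0] * x[1], lambda x: k2 * x[2]]
stoich = [(-1, -1, +1), (+1, +1, -1)]
traj = gillespie([100, 50, 0], rates, stoich, t_end=5.0)
print(len(traj), traj[-1])
```

Each step draws an exponentially distributed waiting time from the total propensity and picks a reaction with probability proportional to its own propensity, which is exactly the efficiency-and-accuracy trade-off the abstract attributes to the algorithm.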

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 230
133 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model

Authors: Danjuma Bawa

Abstract:

This paper explores the capabilities of the location-allocation model in complementing existing physical planning models in the location and distribution of facilities for regional consumption. The paper was designed to provide a blueprint for the Nigerian government and other donor agencies, especially the federal government's Fertilizer Distribution Initiative (FDI), for the revitalization of terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcased how the location-allocation model (L-AM), alongside central place theory (CPT), was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, explore their physical and economic interrelationships, and examine their hierarchical and opportunistic influences. The study was purely spatial qualitative research that largely used secondary data such as the spatial location and distribution of settlements, their population figures, the network of roads linking them, and other landform features. These were sourced from government ministries and an open source consortium. GIS was used as a tool for processing and analyzing such spatial features within the dictum of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as the yardstick for selecting suitable settlements that could serve as service centers for other hinterlands; this was accomplished using query syntax in ArcMap. The ArcGIS Network Analyst extension was used to conduct location-allocation analysis, apportioning groups of settlements around such service centers within a given threshold distance.
Most of the techniques and models previously used by utility planners have centered on straight-line (Euclidean) distances to settlements. Such models neglect impedance cutoffs and the routing capabilities of networks, whereas CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa State. Four (4) existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers within a 2 km impedance cutoff. This study complements the strides of the federal government of Nigeria by providing a blueprint for ensuring proper distribution of these public goods, in the spirit of bringing succor to the terrorism-ravaged populace. It will at the same time help boost agricultural activities, thereby reducing food shortages and raising per capita income, as espoused by the government.
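The allocation step described above (each settlement assigned to its nearest service center within an impedance cutoff) can be sketched as follows. The study used road-network distances via ArcGIS Network Analyst; Euclidean distance is used here only to keep the illustration self-contained, even though the text notes its limitations, and all coordinates are hypothetical.

```python
# Hedged sketch of location-allocation assignment: nearest center within a
# cutoff, else unallocated. Euclidean distance stands in for the study's
# road-network impedance; coordinates (in km) are hypothetical.
import math

def allocate(settlements, centers, cutoff_km=2.0):
    """Map each settlement name to its nearest center within cutoff, else None."""
    result = {}
    for name, (sx, sy) in settlements.items():
        best, best_d = None, cutoff_km
        for cname, (cx, cy) in centers.items():
            d = math.hypot(sx - cx, sy - cy)
            if d <= best_d:
                best, best_d = cname, d
        result[name] = best
    return result

centers = {"depot_A": (0.0, 0.0), "depot_B": (5.0, 0.0)}
settlements = {"v1": (0.5, 0.5), "v2": (4.5, 0.2), "v3": (10.0, 10.0)}
print(allocate(settlements, centers))
```

Settlements left as None (like the roughly 20 of 300 unallocated in the study) fall outside every center's cutoff and would signal where an additional depot is needed.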

Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics

Procedia PDF Downloads 125
132 Conservation Detection Dogs to Protect Europe's Native Biodiversity from Invasive Species

Authors: Helga Heylen

Abstract:

With dogs saving wildlife in New Zealand since 1890 and governments in Africa, Australia, and Canada trusting them to give the best results, Conservation Dogs Ireland wants to introduce more detection dogs to protect Europe's native wildlife. Conservation detection dogs are fast, portable, and endlessly trainable. They are a cost-effective, highly sensitive, and non-invasive way to detect protected and invasive species and wildlife disease. Conservation dogs find targets up to 40 times faster than any other method. They give results instantly, with near-perfect accuracy, and can search for multiple targets simultaneously with no reduction in efficacy. The European Red List indicates that the decline in biodiversity has been most rapid in the past 50 years, and the risk of extinction has never been higher. Two examples of major threats dogs are trained to tackle are: (I) Japanese knotweed (Fallopia japonica) is not only a serious threat to ecosystems, crops, and structures like bridges and roads; it can wipe out the entire value of a house. The property industry and homeowners are only just waking up to the full extent of the nightmare. When road construction workers move topsoil containing a trace of Japanese knotweed, that suffices to start a new colony. Japanese knotweed grows up to 7 cm a day and can stay dormant and resprout after 20 years. In the UK, the cost of removing Japanese knotweed from the London Olympic site in 2012 was around £70m (€83m), and UK banks already no longer lend on a house that has Japanese knotweed on site. Legally, landowners are now obliged to excavate Japanese knotweed and have it removed to a landfill. More and more, we see Japanese knotweed grow where a new house has been constructed and topsoil has been brought in. Conservation dogs are trained to detect small fragments of any part of the plant on sites and in topsoil. (II) Zebra mussels (Dreissena polymorpha) are a threat to many waterways in the world.
They colonize rivers, canals, docks, lakes, reservoirs, water pipes and cooling systems. They live up to 3 years, and each will release up to one million eggs per year. Zebra mussels attach to surfaces like rocks, anchors, boat hulls, intake pipes and boat engines. They cause changes in nutrient cycles, reduction of plankton and increased plant growth around lake edges, leading to the decline of Europe's native mussel and fish populations. There is no solution, only costly measures to keep them at bay. With many interconnected networks of waterways, they have spread uncontrollably. Conservation detection dogs detect the Zebra mussel from its early larval stage, which is still invisible to the human eye. Detection dogs are more thorough and cost-effective than any other conservation method, and will greatly complement and speed up the work of biologists, surveyors, developers, ecologists and researchers.

Keywords: native biodiversity, conservation detection dogs, invasive species, Japanese Knotweed, zebra mussel

Procedia PDF Downloads 168
131 GIS-Based Flash Flood Runoff Simulation Model of the Upper Teesta River Basin Using ASTER DEM and Meteorological Data

Authors: Abhisek Chakrabarty, Subhraprakash Mandal

Abstract:

Flash flood is one of the catastrophic natural hazards in the mountainous region of India. The recent flood in the Mandakini River in Kedarnath (14-17th June, 2013) is a classic example of flash floods that devastated Uttarakhand by killing thousands of people. The disaster was an integrated effect of high-intensity rainfall, the sudden breach of Chorabari Lake and very steep topography. Every year in the Himalayan Region, flash floods occur due to intense rainfall over a short period of time, cloud bursts, glacial lake outbursts and the collapse of artificial check dams that cause high flow of river water. In the Sikkim-Darjeeling Himalaya, one of the probable flash flood occurrence zones is the Teesta Watershed. The Teesta River is a right tributary of the Brahmaputra, draining a mountain area of approximately 8,600 sq. km. It originates in the Pauhunri massif (7,127 m). The total length of the mountain section of the river amounts to 182 km. The Teesta is characterized by a complex hydrological regime. The river is fed not only by precipitation, but also by melting glaciers and snow as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for an excess rainfall by estimating the stream flow response at the outlet of the watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Moreover, rainfall time-series data were collected from the Indian Meteorological Department and processed in order to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover and geological data were also collected.
The watershed was clipped from the entire area, streamlines were generated for the Teesta watershed, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using ERDAS IMAGINE 9.0 and ArcGIS 10.0 software. Different hydraulic models to detect flash flood probability were analysed using HEC-RAS, FLO-2D and HEC-HMS software, which were of great importance in order to achieve the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation models show outbursts of lakes and check dams, individually or in combination with run-off, causing severe damage to the downstream settlements. Model output shows that an area of 313 sq. km was found to be most vulnerable to flash floods, including Melli, Jorethang, Chungthang and Lachung, and 655 sq. km moderately vulnerable, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar and Thangu Valley. The model was validated by inserting the rainfall data of a flood event that took place in August 1968, and 78% of the actual flooded area was reflected in the output of the model. Lastly, preventive and curative measures were suggested to reduce the losses from a probable flash flood event.
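The excess-rainfall (direct runoff) estimation feeding a unit hydrograph, as described above, is commonly done with the SCS Curve Number method. The sketch below is a minimal illustration of that standard formula only; the curve number used is hypothetical, chosen to exercise the 400 mm/day rainfall threshold cited in the abstract, and is not taken from the study.

```python
def scs_runoff_mm(rainfall_mm, curve_number):
    """Estimate direct runoff depth (mm) via the SCS Curve Number method.

    S is the potential maximum retention; initial abstraction Ia = 0.2 * S.
    Rainfall at or below Ia produces no runoff.
    """
    s = 25400.0 / curve_number - 254.0   # retention (mm) for a given CN
    ia = 0.2 * s
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# A 400 mm/day storm over a steep, thin-soiled catchment
# (hypothetical CN = 85): most of the rainfall becomes direct runoff.
print(round(scs_runoff_mm(400.0, 85), 1))  # ≈ 350.8 mm
```

The runoff depth would then be convolved with the basin's direct unit hydrograph to produce the outlet flow response.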

Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin

Procedia PDF Downloads 287
130 The Potential Impact of Big Data Analytics on Pharmaceutical Supply Chain Management

Authors: Maryam Ziaee, Himanshu Shee, Amrik Sohal

Abstract:

Big Data Analytics (BDA) in supply chain management has recently drawn the attention of academics and practitioners. Big data refers to a massive amount of data from different sources, in different formats, generated at high speed through transactions in business environments and supply chain networks. Traditional statistical tools and techniques find it difficult to analyse this massive data. BDA can assist organisations to capture, store, and analyse data, specifically in the field of supply chain. Currently, there is a paucity of research on BDA in the pharmaceutical supply chain context. In this research, the Australian pharmaceutical supply chain was selected as the case study. This industry is highly significant since the right medicine must reach the right patients, at the right time, in the right quantity, in good condition, and at the right price to save lives. However, drug shortages remain a substantial problem for hospitals across Australia, with implications for patient care, staff resourcing, and expenditure. Furthermore, a massive volume and variety of data is generated at fast speed from multiple sources in the pharmaceutical supply chain, which needs to be captured and analysed to benefit operational decisions at every stage of supply chain processes. As the pharmaceutical industry lags behind other industries in using BDA, it raises the question of whether the use of BDA can improve transparency across the pharmaceutical supply chain by enabling the partners to make informed decisions across their operational activities. This presentation explores the impacts of BDA on supply chain management. An exploratory qualitative approach was adopted to analyse data collected through interviews. This study also explores the BDA potential in the whole pharmaceutical supply chain rather than focusing on a single entity.
Twenty semi-structured interviews were undertaken with top managers in fifteen organisations (five pharmaceutical manufacturers, five wholesalers/distributors, and five public hospital pharmacies) to investigate their views on the use of BDA. The findings revealed that BDA can enable pharmaceutical entities to have improved visibility over the whole supply chain and also the market; it enables entities, especially manufacturers, to monitor consumption and the demand rate in real time and make accurate demand forecasts, which reduces drug shortages. Timely and precise decision-making can allow the entities to source and manage their stocks more effectively. This is likely to address drug demand at hospitals and enable responses to unanticipated issues such as drug shortages. Earlier studies explore BDA in the context of clinical healthcare; this presentation, however, investigates the benefits of BDA in the Australian pharmaceutical supply chain. Furthermore, this research enhances managers' insight into the potential of BDA at every stage of supply chain processes and helps to improve decision-making in their supply chain operations. The findings will turn the rhetoric of data-driven decision-making into a reality where managers may opt for analytics for improved decision-making in supply chain processes.
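As a minimal, purely illustrative sketch of the real-time demand forecasting the interviewees described, the following applies simple exponential smoothing to a weekly drug-demand series. Both the data and the smoothing constant are hypothetical and not drawn from the study; this is one of the simplest analytics a manufacturer could run on consumption data.

```python
def exp_smooth_forecast(demand, alpha=0.4):
    """One-step-ahead forecasts by simple exponential smoothing.

    forecasts[i] is the smoothed level after observing demand up to
    period i, i.e. the forecast for period i + 1.
    """
    level = demand[0]
    forecasts = [level]
    for d in demand[1:]:
        level = alpha * d + (1 - alpha) * level  # blend new data with history
        forecasts.append(level)
    return forecasts

# Hypothetical weekly dispensed units for one drug line.
weekly_units = [120, 130, 128, 150, 170, 165, 180]
f = exp_smooth_forecast(weekly_units)
print(round(f[-1], 1))  # forecast for the coming week
```

A rising forecast against a known stock position would flag a potential shortage early, which is the visibility benefit the interviewees pointed to.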

Keywords: big data analytics, data-driven decision, pharmaceutical industry, supply chain management

Procedia PDF Downloads 84
129 The Transformation of Hot Spring Destinations in Taiwan in a Post-pandemic Future: Exploring the COVID-19 Impacts on Hot Spring Experiences, Individual, and Community Resilience of Residents From a Posttraumatic Growth Perspective

Authors: Hsin-Hung Lin, Janet Chang, Te-Yi Chang, You-Sheng Huang

Abstract:

Natural and man-made disasters have become huge challenges for tourism destinations, emphasizing the fragility of the industry. Hot springs, among all destinations, are prone to disasters due to their dependence on natural resources and locations. After the COVID-19 outbreak, hot spring destinations experienced not only the loss of business but also psychological trauma. However, evidence has also shown that the impacts may not necessarily reduce people's resilience but may be converted into posttraumatic growth. In Taiwan, a large proportion of hot springs are located in rural or indigenous areas. As a result, hot spring resources are associated with community cohesion for local residents. Yet prior research on hot spring destinations has mainly focused on visitors, whereas residents have been overlooked. More specifically, the relationship between hot spring resources and resident resilience in the face of the COVID-19 impacts remains unclear. To fill this knowledge gap, this paper aims to explore the COVID-19 impacts on residents' hot spring experiences as well as individual and community resilience from the perspective of posttraumatic growth. A total of 315 residents of the 13 hot spring destinations that are most popular in Taiwan were recruited. Online questionnaires were distributed over travel forums and social networks after the COVID-19 outbreak. This paper subsequently used Partial Least Squares Structural Equation Modeling for data analysis, as the technique offers significant advantages in addressing non-normal data and small sample sizes. A preliminary test was conducted, and the results showed acceptable internal consistency and no serious common method variance. The path analysis demonstrated that the COVID-19 impacts strengthened residents' perceptions of hot spring resources and experiences, implying that the pandemic had propelled the residents to visit hot springs for the healing benefits.
In addition, the COVID-19 impacts significantly enhanced residents' individual and community resilience, which indicates that the residents at hot springs are more resilient thanks to their awareness of external risks. Thirdly, residents' individual resilience was positively associated with hot spring experiences, while community resilience was not affected by hot spring experiences. Such findings may suggest that hot spring experiences are more related to individual-level experiences and, consequently, have an insignificant influence on community resilience. Finally, individual resilience proved to be the most relevant factor that helps foster community resilience. To conclude, the authorities may consider exploiting the hot spring resources so as to increase individual resilience for local residents. Such implications can be used as a reference for other post-disaster tourist destinations as well. As for future research, longitudinal studies with qualitative methods are suggested to better understand how the hot spring experiences have changed individuals and communities over the long term. It should be noted that the main subjects of this paper were the hot spring communities in Taiwan. Therefore, the results cannot be generalized to all types of tourism destinations. That is, more diverse tourism destinations may be investigated to provide a broader perspective of post-disaster recovery.

Keywords: community resilience, hot spring destinations, individual resilience, posttraumatic growth

Procedia PDF Downloads 54
127 Leveraging Multimodal Neuroimaging Techniques to Address in vivo Compensatory and Disintegration Patterns in Neurodegenerative Disorders: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis

Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos

Abstract:

Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and its role in the pathophysiology and symptoms' heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate the structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC) who underwent MRI in a 3.0T MRI scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted imaging and echo-planar imaging sequences for the analysis of volumetric, tractography and functional resting-state data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric (cerebellar gray matter density) and functional (cortico-cerebellar resting-state functional connectivity) data, respectively. The Brainance suite was used for the analysis of tractography data (cortico-cerebellar white matter integrity; fractional anisotropy [FA]; axial and radial diffusivity [AD; RD]) to reconstruct the cerebellar tracts. Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC. However, they showed decreased FA and increased diffusivity measures in cortico-cerebellar tracts, and increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, as well as a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared to HC.
The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA and increased diffusivity measures in WM cortico-cerebellar tracts, and increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, and the pattern of decreased GM density, decreased WM integrity and mostly decreased functional connectivity in RRMS patients, emphasizes the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: In conclusion, our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. An extension and future opportunity to leverage multimodal neuroimaging data inevitably remains the integration of such data into the recently applied mathematical approaches of machine learning to more accurately classify and predict patients' disease course.
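The diffusion metrics compared above (FA, AD, RD, plus mean diffusivity) are scalar functions of the diffusion tensor's eigenvalues. The sketch below shows their standard definitions; the eigenvalues in the example are illustrative values for a healthy-looking white-matter voxel, not data from this study.

```python
import math

def dti_metrics(l1, l2, l3):
    """Scalar diffusion metrics from tensor eigenvalues (l1 >= l2 >= l3)."""
    md = (l1 + l2 + l3) / 3.0                    # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den)              # fractional anisotropy, 0..1
    ad = l1                                      # axial diffusivity
    rd = (l2 + l3) / 2.0                         # radial diffusivity
    return fa, md, ad, rd

# Illustrative white-matter voxel, units of 1e-3 mm^2/s.
fa, md, ad, rd = dti_metrics(1.7, 0.4, 0.3)
```

Decreased FA with increased AD/RD, the pattern the study reports in cortico-cerebellar tracts, corresponds to diffusion becoming less directionally constrained along the tract.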

Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis

Procedia PDF Downloads 118
126 Encephalon: An Implementation of a Handwritten Mathematical Expression Solver

Authors: Shreeyam, Ranjan Kumar Sah, Shivangi

Abstract:

Recognizing and solving handwritten mathematical expressions can be a challenging task, particularly when certain characters are segmented and classified. This project proposes a solution that uses Convolutional Neural Network (CNN) and image processing techniques to accurately solve various types of equations, including arithmetic, quadratic, and trigonometric equations, as well as logical operations like logical AND, OR, NOT, NAND, XOR, and NOR. The proposed solution also provides a graphical solution, allowing users to visualize equations and their solutions. In addition to equation solving, the platform, called CNNCalc, offers a comprehensive learning experience for students. It provides educational content, a quiz platform, and a coding platform for practicing programming skills in different languages like C, Python, and Java. This all-in-one solution makes the learning process engaging and enjoyable for students. The proposed methodology includes horizontal compact projection analysis and survey for segmentation and binarization, as well as connected component analysis and integrated connected component analysis for character classification. The compact projection algorithm compresses the horizontal projections to remove noise and obtain a clearer image, contributing to the accuracy of character segmentation. Experimental results demonstrate the effectiveness of the proposed solution in solving a wide range of mathematical equations. CNNCalc provides a powerful and user-friendly platform for solving equations, learning, and practicing programming skills. With its comprehensive features and accurate results, CNNCalc is poised to revolutionize the way students learn and solve mathematical equations. The platform utilizes a custom-designed Convolutional Neural Network (CNN) with image processing techniques to accurately recognize and classify symbols within handwritten equations. 
The compact projection algorithm effectively removes noise from horizontal projections, leading to clearer images and improved character segmentation. Experimental results demonstrate the accuracy and effectiveness of the proposed solution in solving a wide range of equations, including arithmetic, quadratic, trigonometric, and logical operations. CNNCalc features a user-friendly interface with a graphical representation of the equations being solved, making it an interactive and engaging learning experience for users. The platform also includes tutorials, testing capabilities, and programming features in languages such as C, Python, and Java, and users can track their progress as they work towards improving their skills.
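The projection-based segmentation step described above can be sketched as follows. This is a minimal illustration of splitting a binarised line image into glyph spans from its projection profile (ink count per column); the abstract's compact projection additionally compresses the profile to suppress noise, which this sketch omits.

```python
def segment_by_projection(binary_rows):
    """Split a binarised line image into character spans using the
    projection profile: count ink pixels per column, then cut at
    blank columns. Each span is a (start, end) column range."""
    width = len(binary_rows[0])
    profile = [sum(row[x] for row in binary_rows) for x in range(width)]
    spans, start = [], None
    for x, count in enumerate(profile):
        if count and start is None:
            start = x                    # entering an ink region
        elif not count and start is not None:
            spans.append((start, x))     # blank column ends the glyph
            start = None
    if start is not None:
        spans.append((start, width))     # glyph touching the right edge
    return spans

# Tiny synthetic "image": two glyphs separated by a blank column.
img = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
]
print(segment_by_projection(img))  # → [(0, 2), (3, 4)]
```

Each returned span would then be cropped and passed to the CNN for symbol classification.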

Keywords: AI, ML, handwritten equation solver, maths, computer, CNNCalc, convolutional neural networks

Procedia PDF Downloads 88
125 Investigating the Strategies for Managing On-plot Sanitation Systems’ Faecal Waste in Developing Regions: The Case of Ogun State, Nigeria

Authors: Olasunkanmi Olapeju

Abstract:

A large share of the global population is not yet connected to water-borne faecal management systems that rely on flush mechanisms and sewer networks linked to a central treatment plant. Only about 10% of sub-Saharan African countries are connected to central sewage systems. In Nigeria, not only does the majority of the population depend on on-plot sanitation systems, a large share also lacks access to safe and improved toilets. Apart from organizational challenges and technical capacity, the other major factors that account for why faecal waste management remains unimproved in developing countries are faulty planning frameworks that fail to maintain a balance between urbanization dynamics and infrastructure, and misconceptions about what modern sanitation is all about. In most cases, the quest to implement development patterns that integrate modern sewer-based sanitation systems has huge financial and political costs. Faecal waste management in poor countries largely lacks the needed political attention and budgetary prioritization. Yet the on-plot sanitation systems mainly relied upon need to be managed in a manner that is sustainable and healthy, pending when development would embrace a more sustainable off-site central sewage system. This study is aimed at investigating existing strategies for managing on-plot sanitation systems' faecal waste in Ogun State, Nigeria, with the aim of recommending sustainable sanitation management systems. The study adopted the convergent parallel variant of the mixed-methods technique, which involves both quantitative and qualitative methods of data collection. Adopting a four-level multi-stage approach, inclusive of all political divisions in the study area, a total of 330 questionnaires were administered in the study area. Moreover, the qualitative data adopted a purposive approach in scoping down to 33 key informants.
SPSS software (Version 22.0) was employed for descriptive analysis. The study shows that about 52% of households adopt the non-recovery management (NRM) means of burying their latrines with sand or shrinking the sludge with chemicals such as carbides. The dominance of non-recovery management seriously constrains the quest for faecal resource recovery. Essentially, the management techniques adopted by households depend largely on the technology of their sanitary containments, the emptying means available, the ability of households to pay for the cost of emptying, and the social acceptability of the reuse of faecal waste, which determines faecal resource recoverability. The study suggests that municipal authorities in the study area need to urgently intervene in the sanitation sector and consider it a key element of the planning process. There is a need for a comprehensive plan that would ensure a seamless transition to the adoption of a modern sanitation management system.

Keywords: faecal, management, planning, waste, sanitation, sustainability

Procedia PDF Downloads 80
124 Computer-Integrated Surgery of the Human Brain, New Possibilities

Authors: Ugo Galvanetto, Pirto G. Pavan, Mirco Zaccariotto

Abstract:

The discipline of computer-integrated surgery (CIS) will provide equipment able to improve the efficiency of healthcare systems and, more importantly, clinical results. Surgeons and machines will cooperate in new ways that will extend surgeons' ability to train, plan and carry out surgery. Patient-specific CIS of the brain requires several steps: 1 - Fast generation of brain models. Based on image recognition of MR images and equipped with artificial intelligence, image recognition techniques should differentiate among all brain tissues and segment them. After that, automatic mesh generation should create the mathematical model of the brain in which the various tissues (white matter, grey matter, cerebrospinal fluid ...) are clearly located in the correct positions. 2 - Reliable and fast simulation of the surgical process. Computational mechanics will be the crucial aspect of the entire procedure. New algorithms will be used to simulate the mechanical behaviour of cutting through cerebral tissues. 3 - Real-time provision of visual and haptic feedback. A sophisticated human-machine interface based on ergonomics and psychology will provide the feedback to the surgeon. The present work addresses point 2 in particular. Modelling the cutting of soft tissue in a structure as complex as the human brain is an extremely challenging problem in computational mechanics. The finite element method (FEM), which accurately represents complex geometries and accounts for material and geometrical nonlinearities, is the most used computational tool to simulate the mechanical response of soft tissues. However, the main drawback of FEM lies in the mechanical theory on which it is based, classical continuum mechanics, which assumes matter is a continuum with no discontinuities. FEM must resort to complex tools such as pre-defined cohesive zones, external phase-field variables, and demanding remeshing techniques to include discontinuities.
However, all approaches that equip FEM with the capability to describe material separation, such as interface elements with cohesive zone models, X-FEM, element erosion and phase-field, have drawbacks that make them unsuitable for surgery simulation. Interface elements require a priori knowledge of crack paths. The use of X-FEM in 3D is cumbersome. Element erosion does not conserve mass. The phase-field approach adopts a diffusive crack model instead of describing the true tissue separation typical of surgical procedures. Modelling discontinuities, so difficult with computational approaches based on classical continuum mechanics, is instead easy for novel computational methods based on peridynamics (PD). PD is a non-local theory of mechanics formulated with no use of spatial derivatives. Its governing equations are valid at points or surfaces of discontinuity, and it is therefore especially suited to describe crack propagation and fragmentation problems. Moreover, PD does not require any criterion to decide the direction of crack propagation or the conditions for crack branching or coalescence; in PD-based computational methods, cracks develop spontaneously in the way that is most convenient from an energy point of view. Therefore, in PD computational methods, crack propagation in 3D is as easy as in 2D, a remarkable advantage with respect to all other computational techniques.
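As a minimal illustration of why PD avoids spatial derivatives, the sketch below computes the internal force density in a 1-D bond-based peridynamic bar: each node interacts with every neighbour within a finite horizon through pairwise bond stretches, so the sum remains well-defined across a discontinuity. The micromodulus c, the discretisation and the micro-elastic bond law are all illustrative choices; a production code would add volume and surface corrections and a bond-breaking criterion.

```python
def pd_internal_force(u, dx, horizon, c):
    """Internal force density at each node of a 1-D bond-based
    peridynamic bar (micro-elastic model, force proportional to
    bond stretch). No spatial derivatives of u are taken."""
    n = len(u)
    m = int(round(horizon / dx))          # neighbours on each side
    force = [0.0] * n
    for i in range(n):
        for j in range(max(0, i - m), min(n, i + m + 1)):
            if j == i:
                continue
            xi = (j - i) * dx             # signed reference bond length
            eta = u[j] - u[i]             # relative displacement of the bond
            stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
            direction = 1.0 if xi + eta > 0 else -1.0
            force[i] += c * stretch * direction * dx  # dx = nodal volume in 1-D
    return force

# Uniformly stretched bar: interior nodes feel zero net force,
# while the free ends are pulled back toward the bar.
nodes, dx = 11, 0.1
u = [0.001 * i * dx for i in range(nodes)]
f = pd_internal_force(u, dx, horizon=3 * dx, c=1.0)
```

Because the force is a sum over bonds rather than a derivative, deleting the bonds that cross a cut models material separation directly, with no remeshing or enrichment.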

Keywords: computational mechanics, peridynamics, finite element, biomechanics

Procedia PDF Downloads 52
123 Made on Land, Ends Up in the Water: "iCLARE" Intelligent Remediation System for Removal of Harmful Contaminants in Water Using Modified Reticulated Vitreous Carbon Foam

Authors: Sabina Żołędowska, Tadeusz Ossowski, Robert Bogdanowicz, Jacek Ryl, Paweł Rostkowski, Michał Kruczkowski, Michał Sobaszek, Zofia Cebula, Grzegorz Skowierzak, Paweł Jakóbczyk, Lilit Hovhannisyan, Paweł Ślepski, Iwona Kaczmarczyk, Mattia Pierpaoli, Bartłomiej Dec, Dawid Nidzworski

Abstract:

The circular economy of water presents a pressing environmental challenge for our society. Water contains various harmful substances, such as drugs, antibiotics, hormones and dioxides, which can pose silent threats. Water pollution has severe consequences for aquatic ecosystems: it disrupts the balance of ecosystems by harming aquatic plants, animals and microorganisms, and it poses significant risks to human health. Exposure to toxic chemicals through contaminated water can have long-term health effects, such as cancer, developmental disorders and hormonal imbalances. However, effective remediation systems can be implemented to remove these contaminants using electrocatalytic processes, which offer an environmentally friendly alternative to other treatment methods; one of them is the innovative iCLARE system. The project's primary focus revolves around a few main topics: reactor design and construction, selection of a specific type of reticulated vitreous carbon (RVC) foam, analytical studies of harmful contaminant parameters, and AI implementation. This high-performance electrochemical reactor will be built on a novel type of electrode material. The proposed approach utilizes RVC foams with deposited modified metal oxides (MMO) and diamond thin films. The resulting setup is characterized by a highly developed surface area and satisfactory mechanical and electrochemical properties, designed for high electrocatalytic process efficiency. The consortium validated the electrode modification methods that are the base of the iCLARE product and established the procedures: deposition of the metal oxides WO3 and V2O5, and deposition of boron-doped diamond/nanowall structures by a CVD process.
The chosen electrodes (porous Ferroterm electrodes) were stress-tested for various conditions that might occur inside the iCLARE machine: corrosion, the long-term structure of the electrode surface during electrochemical processes, and energy efficiency, using cyclic polarization and electrochemical impedance spectroscopy (before and after electrolysis) and dynamic electrochemical impedance spectroscopy (DEIS). This tool allows real-time monitoring of the changes at the electrode/electrolyte interphase. On the other hand, the toxicity of the iCLARE chemicals and products of electrolysis is evaluated before and after the treatment using MARA examination (IBMM) and HPLC-MS-MS (NILU), giving us information about the harmfulness of the electrode material used and the efficiency of the iCLARE system in the disposal of pollutants. Implementation of the data into a system that uses artificial intelligence, and assessment of its practical application, is in progress (SensDx).
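As background for the impedance measurements mentioned above, the sketch below evaluates the complex impedance of a simplified Randles equivalent circuit, a textbook model of an electrode/electrolyte interface: solution resistance in series with a charge-transfer resistance in parallel with the double-layer capacitance. The parameter values are hypothetical, and DEIS additionally resolves these quantities in time, which this static sketch does not capture.

```python
import math

def randles_impedance(freq_hz, r_s, r_ct, c_dl):
    """Complex impedance of a simplified Randles cell at one frequency:
    Z = Rs + (Rct || Z_Cdl), with Z_Cdl = 1 / (j * omega * Cdl)."""
    omega = 2 * math.pi * freq_hz
    z_c = 1 / (1j * omega * c_dl)          # double-layer capacitor impedance
    z_par = (r_ct * z_c) / (r_ct + z_c)    # parallel combination with Rct
    return r_s + z_par

# Illustrative parameters: Rs = 10 ohm, Rct = 100 ohm, Cdl = 20 uF.
z_lo = randles_impedance(0.01, 10, 100, 20e-6)  # low frequency: |Z| -> Rs + Rct
z_hi = randles_impedance(1e6, 10, 100, 20e-6)   # high frequency: |Z| -> Rs
```

Fitting such a model to a measured spectrum before and after electrolysis is how changes at the interphase (e.g. growing corrosion products) show up as shifts in Rct and Cdl.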

Keywords: wastewater treatment, RVC, electrocatalysis, paracetamol

Procedia PDF Downloads 52
122 Risk and Protective Factors for the Health of Primary Care-Givers of Children with Autism Spectrum Disorders or Intellectual Disability: A Narrative Review and Discussion

Authors: Jenny Fairthorne, Yuka Mori, Helen Leonard

Abstract:

Background: Primary care-givers of children with autism spectrum disorder (ASD) or intellectual disability (ID) have poorer health and quality of life (QoL) than primary care-givers (hereafter referred to simply as care-givers) of typically developing children. We aimed to review original research describing factors that impact the health of care-givers of children with ASD or ID and to discuss how these factors might influence care-giver health. Methods: We searched Web of Knowledge, Medline, Scopus and Google Scholar using selections of words from each of three groups. The first comprised terms associated with ASD and ID and included autism, pervasive development disorder, intellectual disability, mental retardation, disability, disabled, Down and Asperger. The second included terms related to health, such as depression, physical, mental, psychiatric, psychological and well-being. The third comprised terms related to care-givers, such as mother, parent and care-giver. We included an original paper in our review if it was published between 1st January, 1990 and 31st December, 2016, described original research in a peer-reviewed journal and was written in English. Additional criteria were that the research used a study population of 15 persons or more and described a risk or protective factor for the health of care-givers of a child with ASD, ID or a sub-type (such as ASD with ID or Down syndrome). Using previous research, we developed a simple and objective five-level tool to assess the strength of evidence provided by the reviewed papers. Results: We retained 33 papers. Factors impacting primary care-giver health included child behaviour, level of support, socio-economic status (SES) and diagnostic issues. Challenging child behaviour, the most commonly identified risk factor for poorer care-giver health and QoL, was reported in ten of the studies. A higher level of support was associated with improved care-giver health and QoL.
For example, substantial evidence indicated that family support reduced care-giver burden in families with a child with ASD and that family and neighbourhood support was associated with improved care-giver mental health. Higher SES was a protective factor for care-giver health, and particularly for maternal health. Diagnostic uncertainty and an unclear prognosis are factors that can cause the greatest concern to care-givers of children with ASD and to those for whom a cause of their child’s ID has not been identified. We explain how each of these factors might impact care-giver health and how they might act differentially in care-givers of children with different types of ASD or ID (such as Down syndrome and ASD without ID). Conclusion: Care-givers of children with ASD may be more likely to experience the risk factors, and less likely to experience the protective factors, that we identified for poorer mental health. Interventions to reduce risk factors and increase protective factors could pave the way for improved care-giver health. For example, workshops to train care-givers to better manage challenging child behaviours, and earlier diagnosis of ASD (particularly ASD without ID), would seem likely to improve care-giver well-being. Similarly, helping to expand support networks might reduce care-giver burden and stress, leading to improved health.

Keywords: autism, caregivers, health, intellectual disability, mothers, review

Procedia PDF Downloads 141