130 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Nevertheless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the BindsNET framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Timing-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same patterns through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are then made to fire in a particular order. The membrane potentials are reset, and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
Keywords: dendritic computation, spiking neural networks, point neuron model
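The synaptic-relation mechanism described above can be illustrated with a minimal sketch, a plain NumPy stand-in rather than the authors' BindsNET implementation; the relation matrix R, time constants and gains below are illustrative assumptions:

```python
import numpy as np

def run_lif(spike_times, w, R=None, T=100, dt=1.0, tau_m=20.0, tau_s=10.0):
    """Leaky integrate-and-fire soma driven by several input neurons.

    spike_times[i] is the time step at which input i fires once.
    R is an optional pairwise synaptic-relation matrix; R[j, i] scales how
    a recent spike on synapse j potentiates/depresses synapse i (a simple
    stand-in for the short-term plasticity described in the abstract).
    """
    n = len(w)
    v, trace = 0.0, np.zeros(n)       # membrane potential, synaptic traces
    v_peak = 0.0
    for t in range(T):
        trace *= np.exp(-dt / tau_s)  # traces of recent presynaptic activity
        for i in range(n):
            if spike_times[i] == t:
                gain = 1.0 if R is None else 1.0 + R[:, i] @ trace
                v += w[i] * gain      # impact depends on neighbours' activity
                trace[i] += 1.0
        v *= np.exp(-dt / tau_m)      # leaky integration
        v_peak = max(v_peak, v)
    return v_peak

rng = np.random.default_rng(0)
w = np.full(5, 1.0)
R = rng.uniform(-0.5, 0.5, (5, 5))    # random dendritic relations
np.fill_diagonal(R, 0.0)

forward = np.arange(0, 25, 5)         # inputs fire in order 0..4
backward = forward[::-1]              # same spikes, reversed order

print(run_lif(forward, w), run_lif(backward, w))        # plain LIF: equal peaks
print(run_lif(forward, w, R), run_lif(backward, w, R))  # with relations: differ
```

With R = None both orderings produce the same peak membrane potential; with a non-zero relation matrix the forward and reversed sequences produce different responses, which is the order-discrimination effect the abstract describes.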
Procedia PDF Downloads 133
129 The Use of Artificial Intelligence in the Context of a Space Traffic Management System: Legal Aspects
Authors: George Kyriakopoulos, Photini Pazartzis, Anthi Koskina, Crystalie Bourcha
Abstract:
The need for securing safe access to and return from outer space, as well as ensuring the viability of outer space operations, keeps alive the debate over organizing space traffic through a Space Traffic Management System (STM). The proliferation of outer space activities in recent years, as well as the dynamic emergence of the private sector, has gradually resulted in a diverse universe of actors operating in outer space. These developments have had an increasingly adverse impact on outer space sustainability, as the growing amount of space debris clearly demonstrates. This landscape poses considerable threats to the outer space environment and its operators, which need to be addressed by a combination of scientific-technological measures and regulatory interventions. In this context, recourse to recent technological advancements and, in particular, to Artificial Intelligence (AI) and machine learning systems could achieve exponential results in promoting space traffic management with respect to collision avoidance as well as launch and re-entry procedures/phases. New technologies can support the prospects of a successful space traffic management system at an international scale by enabling, inter alia, timely, accurate and analytical processing of large data sets and rapid decision-making, more precise space debris identification and tracking, and overall minimization of collision risks and reduction of operational costs. What is more, a significant part of space activities (i.e., the launch and/or re-entry phase) takes place in airspace rather than in outer space; hence the overall discussion also involves the international (and national) Air Traffic Management System (ATM), which is highly developed both technically and legally. Nonetheless, from a regulatory perspective, the use of AI for the purposes of space traffic management puts forward implications that merit particular attention. Key issues in this regard include the delimitation of AI-based activities as space activities, the designation of the applicable legal regime (international space or air law, national law), the assessment of the nature and extent of international legal obligations regarding space traffic coordination, and the appropriate liability regime applicable to AI-based technologies when operating for space traffic coordination, taking into particular consideration the dense regulatory developments at EU level. In addition, the prospects of institutionalizing international cooperation and promoting an international governance system, together with the challenges of establishing a comprehensive international STM regime, are revisited in the light of the intervention of AI technologies. This paper aims at examining the regulatory implications advanced by the use of AI technology in the context of space traffic management operations and its key correlated concepts (SSA, space debris mitigation), drawing in particular on international and regional considerations in the field of STM (e.g., UNCOPUOS, the International Academy of Astronautics, the European Space Agency, among other actors), the promising advancements of the EU approach to AI regulation and, last but not least, national approaches regarding the use of AI in the context of space traffic management.
Acknowledgment: The present work was co-funded by the European Union and Greek national funds through the Operational Program "Human Resources Development, Education and Lifelong Learning" (NSRF 2014-2020), under the call "Supporting Researchers with an Emphasis on Young Researchers – Cycle B" (MIS: 5048145).
Keywords: artificial intelligence, space traffic management, space situational awareness, space debris
Procedia PDF Downloads 258
128 Design Aspects for Developing a Microfluidics Diagnostics Device Used for Low-Cost Water Quality Monitoring
Authors: Wenyu Guo, Malachy O’Rourke, Mark Bowkett, Michael Gilchrist
Abstract:
Many devices for real-time monitoring of surface water have been developed in the past few years to provide early warning of pollution and so decrease the risk of environmental damage efficiently. One of the most common methodologies used in such detection systems is a colorimetric process, in which a container of fixed volume is filled with the target ions and reagents, which combine to form a colorimetric dye. The coloured product sensitively absorbs a radiation beam of a specific wavelength, and its absorbance is proportional to the concentration of the fully developed product, indicating the concentration of target nutrients in the pre-mixed water sample. In order to achieve precise and rapid detection, channels with dimensions in the order of micrometers, i.e., microfluidic systems, have been developed and introduced into these diagnostic studies. Microfluidics technology greatly increases the surface-to-volume ratio and decreases sample/reagent consumption significantly. However, species transport in such miniaturized channels is limited by the low Reynolds numbers in these regimes. The flow is thus extremely laminar, and diffusion is the dominant mass transport process throughout the microfluidic channels. The objective of the present work has been to analyse the mixing effect and chemical kinetics in a stop-flow microfluidic device measuring nitrite concentrations in fresh water samples. In order to improve the temporal resolution of the nitrite microfluidic sensor, we have used computational fluid dynamics to investigate the influence that the effectiveness of the mixing process between the sample and reagent within a microfluidic device exerts on the time to completion of the resulting chemical reaction. This computational approach has been complemented by physical experiments. The kinetics of the Griess reaction, involving the conversion of sulphanilic acid to a diazonium salt by reaction with nitrite in acidic solution, is set as a laminar finite-rate chemical reaction in the model. Initially, a methodology was developed to assess the degree of mixing of the sample and reagent within the device. This enabled different designs of the mixing channel to be compared, such as straight, square wave and serpentine geometries. Thereafter, the time to completion of the Griess reaction within a straight mixing channel device was modeled and the reaction time validated with experimental data. Further simulations have compared the time to effective mixing and reaction completion within straight, square wave and serpentine geometries. Results show that square wave channels significantly improve the mixing effect and provide low standard deviations of the concentrations of nitrite and reagent, while for straight-channel microfluidic patterns the corresponding values are 2-3 orders of magnitude greater and the streams consequently less efficiently mixed. This has allowed us to design novel channel patterns for micro-mixers with more effective mixing that can be used to detect and monitor levels of nutrients present in water samples, in particular nitrite. Future generations of water quality monitoring and diagnostic devices will easily exploit this technology.
Keywords: nitrite detection, computational fluid dynamics, chemical kinetics, mixing effect
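A back-of-the-envelope sketch shows why the mixing geometry dominates in this regime: at low Reynolds number the only transverse transport is diffusion, so the mixing time scales with the square of the distance the species must diffuse. The parameter values below are illustrative assumptions, not the dimensions of the reported device:

```python
# Order-of-magnitude check for diffusive mixing in a laminar microchannel.
# All values are illustrative assumptions, not the actual device parameters.

w = 200e-6          # channel width [m]
u = 1e-3            # mean flow speed [m/s]
D = 1e-9            # diffusivity of a small ion in water [m^2/s]
rho, mu = 1000.0, 1e-3

Re = rho * u * w / mu            # Reynolds number: deep in the laminar regime
t_mix = w**2 / (2 * D)           # transverse diffusion time across the channel
L_mix = u * t_mix                # straight-channel length needed to mix

print(f"Re = {Re:.3f}")          # ~0.2 -> no turbulent mixing available
print(f"t_mix = {t_mix:.1f} s, L_mix = {L_mix*100:.1f} cm")
# A square-wave or serpentine geometry folds and stretches the interface,
# shrinking the effective striation width s and cutting t_mix ~ s^2 / D,
# which is consistent with those layouts mixing far better in the simulations.
```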
Procedia PDF Downloads 202
127 Preliminary Results on Marine Debris Classification in The Island of Mykonos (Greece) via Coastal and Underwater Clean up over 2016-20: A Successful Case of Recycling Plastics into Useful Daily Items
Authors: Eleni Akritopoulou, Katerina Topouzoglou
Abstract:
Over the last 20 years, marine debris has been identified as one of the main sources of marine pollution caused by anthropogenic activities. Plastic has reached the farthest marine areas of the planet, affecting all marine trophic levels, from the recently discovered amphipod Eurythenes plasticus inhabiting the Mariana Trench to large cetaceans, marine reptiles and sea birds, causing immunodeficiency disorders, deteriorating health and death over time. For the period 2016-20, in the framework of the national initiative 'Keep Aegean Blue', the All for Blue team has been collecting marine debris (coastline and underwater) from eight Greek islands, following a modified in situ MEDSEALITTER monitoring protocol. After collection, marine debris was weighed, sorted and categorised according to material: plastic (PL), glass (G), metal (M), wood (W), rubber (R), cloth (CL), paper (P), mixed (MX). The goals of the project included the documentation of marine debris sources, human trends, waste management and public marine environmental awareness. Waste management focused on recycling plastics and turning them into useful daily products. This research focuses on the island of Mykonos due to its continuous touristic activity and the lack of scientific information. In total, a fieldwork area of 1,832,856 m² was cleaned up, yielding 5092 kg of marine debris. The preliminary results indicated PL as the main component of marine debris (62.8%), followed by M (15.5%), G (13.2%) and MX (2.8%). The main items found were fishing tools (lines, nets), disposable cutlery, cups and straws, cigarette butts, flip flops and other items such as plastic boat compartments. In collaboration with a local company for plastic management and the Circular Economy and Eco Innovation Institute (Sweden), all plastic debris was recycled. A granulation process was applied, transforming the plastic into building materials used for refugees' houses, litter bins bought by municipalities and schools, and other items such as shower components. In terms of volunteering and attendance of public awareness seminars, there was a rise in interest of 63% across different age ranges and professions. Although the research is fairly new for Mykonos island and logistics issues potentially affected systematic sampling, plastic debris appeared to be the main littering source, attributable possibly to the intense touristic activity of the island all year round. Marine environmental awareness activities proved to be an effective tool in shaping public perception of marine debris and altering the daily habits of the local society. Since the beginning of this project, three new local environmental teams have been formed against marine pollution, supported by the local authorities and stakeholders. The continuous need and request for the production of items made from recycled marine debris appears to be socio-economically beneficial to the local community, and actions are being taken to expand the project nationally. Finally, as this is an ongoing project and new scientific information is still being collected, further funding and research are needed.
Keywords: Greece, marine debris, marine environmental awareness, Mykonos island, plastics debris, plastic granulation, recycled plastic, tourism, waste management
Procedia PDF Downloads 110
126 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation
Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya
Abstract:
The estimation of accumulated radiation doses in people professionally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body radiation counter, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After radionuclide determination in urine using radiochemical and WBC methods, it was shown that the total effective dose of personnel internal exposure did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the chromosomal aberration frequency in staff was 4.27±0.22%, which is significantly higher than in people from the non-polluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. The cytogenetic analysis of group radiosensitivity among the "professionals" by different criteria (age, sex, ethnic group, epidemiological data) revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated as 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the general frequency of chromosomal aberrations, obtained after irradiation of blood samples by gamma radiation at a dose rate of 0.1 Gy/min, were used. Here, assuming individual variation of chromosomal aberration frequency of 1-10%, the accumulated radiation dose varied between 0 and 0.3 Gy. The main problem in the interpretation of individual dosimetry results comes down to the different reactions of subjects to irradiation, i.e., radiosensitivity, which dictates the need to quantify this individual reaction and take it into account when calculating the received radiation dose. The entire examined contingent was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals showed the highest frequency of chromosomal aberrations (5.72%) at the lowest received dose in a year. Conversely, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). The cohort was distributed according to the radiosensitivity criterion as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). The dispersion for radioresistant individuals is 2.3; for the group with medium radiosensitivity, 3.3; and for the radiosensitive group, 9. These data indicate the highest variation of the characteristic (reaction to radiation) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n=48, β ≥ 0.999) between dose values determined from the cytogenetic analysis and the external radiation dose obtained with thermoluminescent dosimeters.
Mathematical models of the received radiation dose as a function of the professionals' radiosensitivity level were proposed.
Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity
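The dose-reconstruction step can be sketched as inverting a linear-quadratic calibration curve Y = c + αD + βD², where Y is the dicentric plus centric ring frequency per cell. The coefficients below are placeholders, not the laboratory's own calibration values:

```python
import numpy as np

def dose_from_aberrations(Y, c, alpha, beta):
    """Invert the calibration curve Y = c + alpha*D + beta*D**2
    for the absorbed dose D [Gy]; returns the positive root."""
    disc = alpha**2 - 4 * beta * (c - Y)
    return (-alpha + np.sqrt(disc)) / (2 * beta)

# Placeholder gamma-ray calibration coefficients (illustrative only; real
# values come from irradiating blood samples at a dose rate of 0.1 Gy/min):
c, alpha, beta = 0.001, 0.02, 0.06   # per cell, per Gy, per Gy^2

# Observed frequency of dicentrics + centric rings: 0.27 per 100 cells
Y_obs = 0.0027
D = dose_from_aberrations(Y_obs, c, alpha, beta)
print(f"Estimated cumulative dose: {D:.3f} Gy")
```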
Procedia PDF Downloads 184
125 Unity in Diversity: Exploring the Psychological Processes and Mechanisms of the Sense of Community for the Chinese Nation in Ethnic Inter-embedded Communities
Authors: Jiamin Chen, Liping Yang
Abstract:
In 2007, the sociologist Putnam proposed a pessimistic forecast based on the United States' "Social Capital Community Benchmark Survey," suggesting that "ethnic diversity would challenge social unity and undermine social cohesion." If this pessimistic assumption were proven true, it would indicate a risk of division in diverse societies. China, with 56 ethnic groups, is a multi-ethnic country. On May 26, 2014, General Secretary Xi Jinping proposed "building ethnically inter-embedded communities to promote deeper development in interactions, exchanges, and integration among ethnic groups." Researchers unanimously agree that ethnic inter-embedded communities can serve as practical arenas and pathways for solidifying the sense of the Chinese national community. However, no research has yet provided evidence that ethnic inter-embedded communities can foster the sense of the Chinese national community, and the influencing factors remain unclear. This study adopts a constructivist grounded theory research approach. Convenience sampling and snowball sampling were used. Data were collected in three communities in Kunming City. Twelve individuals were eventually interviewed, and the transcribed interviews totaled 187,000 words. The research obtained ethical approval from the Ethics Committee of Nanjing Normal University (NNU202310030). The study analyzed the data and constructed theory, employing strategies such as coding, constant comparison, and theoretical sampling. The study found the following. First, ethnic inter-embedded communities exhibit characteristics of diversity, including ethnic diversity, cultural diversity, and linguistic diversity. Diversity has positive functions, including increased opportunities for contact, promoting self-expansion, and increasing happiness; its negative functions include highlighting ethnic differences, causing ethnic conflicts, and reminding residents of ethnic boundaries. Second, individuals typically engage in interactions within the community using active embedding or passive embedding strategies. Active embedding strategies include maintaining openness, focusing on similarities, and pro-diversity beliefs, which can increase external group identification and intergroup relational identity and promote ethnic integration. Individuals using passive embedding strategies tend to focus on ethnic stereotypes, perceive stigmatization of their own ethnic group, and adopt an authoritarian-oriented approach to interactions, leading to the perception of more identity threats and ultimately a rejection of ethnic integration. Third, the commonality of the Chinese nation is reflected in the 56 ethnic groups as an "identity community" and an "interest community," and both the active and passive embedding paths affect an individual's understanding of the commonality of the Chinese nation. Finally, community work and environment can influence the embedding process. The research constructed a social psychological model of the processes and mechanisms for solidifying the sense of the Chinese national community in ethnic inter-embedded communities. Based on this theoretical model, future research can conduct more micro-level tests of psychological mechanisms and intervention studies to enhance Chinese national cohesion.
Keywords: diversity, sense of the Chinese national community, ethnic inter-embedded communities, ethnic group
Procedia PDF Downloads 38
124 Residential Building Facade Retrofit
Authors: Galit Shiff, Yael Gilad
Abstract:
The need to retrofit old buildings lies in the fact that buildings are responsible for the bulk of energy use and CO₂ emissions. Existing old structures are more dominant in their effect than new energy-efficient buildings. Nevertheless, not every case of urban renewal that aims to replace old buildings with new neighbourhoods necessarily has a financial or sustainability justification. Facade design plays a vital role in a building's energy performance and the units' comfort conditions. A residential facade retrofit methodology and feasibility study has been carried out over the past four years, with two projects already fully renovated. The intention of this study is to serve as a case study for limited-budget facade retrofits in Mediterranean-climate urban areas. The two case-study buildings are in Israel, but in different local climatic conditions: one in Sderot in the south of the country, and one in Migdal HaEmek in the north. The building typology is similar. The budget of the projects is around $14,000 per unit and covers interventions on the buildings' envelope while tenants continue living in them. Extensive research and analysis of the existing conditions were carried out. The buildings' components, materials and envelope sections were mapped, examined and compared to the relevant updated standards. Solar radiation simulations for the buildings in their surroundings during winter and summer days were performed. The energy rating of each unit, as well as of the building as a whole, was calculated according to the Israeli Energy Code. The buildings' facades were documented with a thermal camera at different hours of the day. This information was superimposed with data on electricity use and thermal comfort collected from the residential units. Later in the process, similar tools were used to compare the effectiveness of different design options and to evaluate the chosen solutions. Both projects showed that the most problematic units were the ones below the roof and the ones on top of the elevated entrance floor (pilotis). Old buildings tend to have poor insulation on those two horizontal surfaces, which therefore require treatment. Different radiation levels and wall sections in the two projects influenced the design strategies. In the southern project, there was an extreme difference in solar radiation levels between the main facade and the back elevation. Eventually, it was decided to invest in insulating the main south-west facade and the side facades, leaving the back north-east facade almost untouched. Lower levels of radiation in the northern project led to a different tactic: a combination of basic insulation on all facades, together with intense treatment of areas with problematic thermal behavior. While poor execution of construction details and bad installation of windows in the northern project required replacing them all, in the southern project it was found to be more essential to shade the windows than to replace them. Although the buildings and the construction typology chosen for this study are similar, the research shows that there are large differences due to the location in different climatic zones and variations in local conditions. Therefore, in order to reach a systematic and cost-effective method of work, a more extensive catalogue database is needed.
Such a catalogue will enable public housing companies in the Mediterranean climate to promote massive projects of renovating existing old buildings, drawing on minimal analysis and planning processes.
Keywords: facade, low budget, residential, retrofit
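The envelope comparison underlying such retrofit decisions can be sketched with a steady-state heat-loss estimate, Q = U·A·ΔT, recomputing the U-value from the layer resistances before and after insulation. The layer build-ups below are illustrative assumptions, not the audited wall sections of the two buildings:

```python
# Steady-state heat loss through a wall section: Q = U * A * dT.
# Layer thicknesses/conductivities are illustrative assumptions, not the
# documented sections of the Sderot or Migdal HaEmek buildings.

def u_value(layers, rsi=0.13, rse=0.04):
    """layers: list of (thickness [m], conductivity [W/mK]);
    rsi/rse: internal/external surface resistances [m^2K/W]."""
    r_total = rsi + rse + sum(t / k for t, k in layers)
    return 1.0 / r_total

existing = [(0.02, 1.0), (0.20, 1.1), (0.02, 1.0)]   # plaster/concrete/plaster
retrofit = existing + [(0.05, 0.035)]                # + 5 cm mineral wool

A, dT = 12.0, 15.0   # facade area of one unit [m^2], winter indoor-outdoor [K]
for name, wall in [("existing", existing), ("retrofit", retrofit)]:
    U = u_value(wall)
    print(f"{name}: U = {U:.2f} W/m2K, Q = {U * A * dT:.0f} W")
```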
Procedia PDF Downloads 208
123 Numerical Analysis of Mandible Fracture Stabilization System
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented work is to determine the impact of the mini-plate application approach on the stress and displacement within the stabilization devices and the surrounding bone. The mini-plate osteosynthesis technique is widely used by craniofacial surgeons as an improved replacement for the wire connection approach. Many different types of metal plates and screws are used for the physical connection of fractured bones. The investigation below is based on clinical observation of a patient hospitalized with a mini-plate stabilization system. The analysis was conducted on a solid mandible geometry, modeled on the basis of a computed tomography scan of the hospitalized patient. In order to achieve the most realistic behavior of the connected system, cortical and cancellous bone layers were assumed. The temporomandibular joint was simplified to an elastic element to allow physiological movement of the loaded bone. The muscles of the mastication system were reduced to three pairs, modeled as shell structures. The finite element grid was created with the ANSYS software, using hexahedral and tetrahedral variants of the SOLID185 element. A set of nonlinear contact conditions was applied to the common surfaces of the connecting devices and bone. The properties of each contact pair depend on the screw/mini-plate connection type and possible gaps between the fractured bone surfaces around the osteosynthesis region. Some of the investigated cases include prestress introduced to the mini-plate during application, corresponding to the initial bending of the connecting device to fit the retromolar fossa region. The assumed bone fracture occurs within the mandible angle zone. Due to the significant deformation of the connecting plate in some of the assembly cases, an elastic-plastic model of the titanium alloy was assumed. The bone tissues were modeled as orthotropic material. The loading was a gauge force of magnitude 100 N applied at three different locations. The conducted analysis shows a significant impact of the mini-plate application methodology on the stress distribution within the mini-plate. The prestress effect introduces additional loading, which locally exceeds the titanium alloy yield limit. Stress in the surrounding bone increases rapidly around the screw application region, exceeding the assumed bone yield limit, which indicates local bone destruction. The approach with a doubled mini-plate shows increased stress within the connector due to the overly rigid connection, where the main load path leads through the mini-plates instead of the plates and the connected bones. Clinical observations confirm more frequent plate failure in stiffer connections. Some of these failures could be an effect of decreased low-cycle fatigue capability caused by the overloading. The executed analysis proves that the mini-plate system provides sufficient support for mandible fracture treatment; however, many applicable solutions push the entire system to the allowable material limits. The results show that connector application with initial loading needs to be carefully established due to the small material capability tolerances. Comparison with the clinical observations allows the entire connection to be optimized to prevent future incidents.
Keywords: mandible fracture, mini-plate connection, numerical analysis, osteosynthesis
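A statement such as "prestress locally exceeds the titanium alloy yield limit" rests on comparing the von Mises equivalent stress at a point with the material's yield strength. A minimal sketch follows; the stress tensor and the yield value (typical of a Ti-6Al-4V alloy) are illustrative assumptions, not results of the reported model:

```python
import numpy as np

def von_mises(s):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor [MPa]."""
    dev = s - np.trace(s) / 3.0 * np.eye(3)       # deviatoric part
    return np.sqrt(1.5 * np.tensordot(dev, dev))  # sqrt(3/2 * dev:dev)

# Illustrative stress state at a mini-plate bend after prestress [MPa];
# the yield limit is a typical Ti-6Al-4V value, not the study's material data.
sigma = np.array([[650.0, 120.0,  30.0],
                  [120.0, -80.0,  15.0],
                  [ 30.0,  15.0,  40.0]])
yield_limit = 880.0

sv = von_mises(sigma)
state = "plastic" if sv > yield_limit else "elastic"
print(f"von Mises stress: {sv:.0f} MPa -> {state}")
```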
Procedia PDF Downloads 273
122 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)
Authors: Salvatore Luongo, Carlo Luongo
Abstract:
This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system called TCAS, based on classic transponder technology, which has dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, and achieving this reduction is the main objective of the TSAA application. The major difference between the original conflict detection application and the TSAA application is that conflict detection focuses on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases the flight crew's traffic situational awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort was spent on the design process and the code generation in order to maximize efficiency and performance in terms of computational load and memory consumption. The TSAA architecture is divided into two high-level systems: the "Threats database" and the "Conflict detector". The first receives the traffic data from the ADS-B device and stores each target's data history. The conflict detector module estimates ownship and target trajectories in order to detect possible future loss of separation between ownship and each target. Finally, the alerts are checked by additional conflict verification logic, in order to prevent possible undesirable behaviors of the alert flag. In order to reduce the computational load, a pre-check evaluation module is used. This pre-check is purely a computational optimization, so the performance of the conflict detector system is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both target and ownship. This provides greater accuracy and avoids the step-by-step propagation, which requires a larger computational load. Furthermore, the pre-check permits the exclusion of targets that are certainly not threats, using an analytical and efficient geometrical approach, in order to decrease the computational load of the subsequent modules. This software improvement is not suggested by FAA documents, and it is therefore the main innovation of this work. The efficiency and efficacy of this enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard.
The computational load reduction allows the TSAA application to be installed also on devices hosting multiple applications and/or having limited memory and computational capabilities.
Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification
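The analytical pre-check can be sketched as a closest-point-of-approach (CPA) test under a constant-velocity assumption: if even the straight-line miss distance stays above a protection threshold within the look-ahead window, the target cannot become a threat and is discarded before the more expensive step-by-step propagation. The thresholds and states below are illustrative, not TSAA parameters:

```python
import numpy as np

def cpa_precheck(p_rel, v_rel, horizon=120.0, r_protect=150.0):
    """Analytical closest-point-of-approach filter (constant velocity).

    p_rel, v_rel: target position/velocity relative to ownship [m, m/s].
    Returns True if the target can be safely excluded from detailed
    conflict detection within the look-ahead horizon [s].
    """
    vv = v_rel @ v_rel
    # Time of closest approach, clamped to the look-ahead window.
    t_cpa = 0.0 if vv < 1e-9 else np.clip(-(p_rel @ v_rel) / vv, 0.0, horizon)
    d_min = np.linalg.norm(p_rel + v_rel * t_cpa)   # miss distance at CPA
    return d_min > r_protect                        # True -> not a threat

# Illustrative encounter: target 5 km ahead, closing at 70 m/s but offset.
p_rel = np.array([5000.0, 800.0, 100.0])
v_rel = np.array([-70.0, 0.0, 0.0])
print("exclude target:", cpa_precheck(p_rel, v_rel))
```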
Procedia PDF Downloads 285
121 Recent Findings of Late Bronze Age Mining and Archaeometallurgy Activities in the Mountain Region of Colchis (Southern Lechkhumi, Georgia)
Authors: Rusudan Chagelishvili, Nino Sulava, Tamar Beridze, Nana Rezesidze, Nikoloz Tatuashvili
Abstract:
The South Caucasus is one of the most important centers of prehistoric metallurgy, known for its Colchian bronze culture. Modern Lechkhumi, historical Mountainous Colchis, where the existence of prehistoric metallurgy is confirmed by the discovery of many artifacts, is part of this area. Studies focused on prehistoric smelting sites, related artefacts, and ore deposits have been conducted in Lechkhumi during the last ten years. More than 20 prehistoric smelting sites and artefacts associated with metallurgical activities (ore roasting furnaces, slags, crucible and tuyère fragments) have been identified so far. Integrated studies established that these sites were operating in the 13th-9th centuries B.C. and were used for copper smelting. Palynological studies of slags revealed that chestnut (Castanea sativa) and hornbeam (Carpinus sp.) wood were used as smelting fuel. Geological exploration and analytical studies revealed that copper ore mining, processing, and smelting sites were distributed close to each other. Despite these recent complex data, signs of prehistoric mines (trenches) had not been found in this part of the study area so far. Since 2018, the archaeological-geological exploration has focused on the southern part of Lechkhumi, covering the areas of the villages of Okureshi and Opitara. Several copper smelting sites (Okureshi 1 and 2, Opitara 1), as well as a Colchian Bronze culture settlement, have been identified here. Three mine workings have been found in the narrow gorge of the river Rtkhmelebisgele in the vicinity of the village of Opitara. In order to establish a link between the Opitara-Okureshi archaeometallurgical sites, the Late Bronze Age settlements, and the mines, various analytical methods have been applied: petrography of mineralized rocks and slags, and atomic absorption spectrophotometry (AAS). Careful examination of the Opitara mine workings revealed a striking difference between mine #1 on the right bank of the river and mines #2 and #3 on the left bank. The first has all the characteristic features of a Soviet-period mine working (e.g., a high portal with angular ribs and a roof showing signs of blasting). In contrast, mines #2 and #3, which are located very close to each other, have round-shaped portals/entrances, low roofs, and fairly smooth ribs, and are filled with thick layers of river sediments and collapsed weathered rock mass. A thorough review of the publications related to prehistoric mine workings revealed striking similarities between mines #2 and #3 and their worldwide analogues. Apparently, ore extraction from these mines was conducted by fire-setting, applying primitive tools. It was also established that the mines are cut into Jurassic mineralized volcanic rocks. The ore minerals (chalcopyrite, pyrite, galena) are related to calcite and quartz veins. The results obtained through the petrochemical and petrographic studies of mineralized rock samples from the Opitara mines and of the prehistoric slags are in complete agreement with each other, establishing a direct link between copper mining and smelting within the study area. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation of Georgia (grant # FR-19-13022).
Keywords: archaeometallurgy, Mountainous Colchis, mining, ore minerals
Procedia PDF Downloads 179
120 Planning Railway Assets Renewal with a Multiobjective Approach
Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida
Abstract:
Transportation infrastructure systems are fundamental to modern society and the economy. However, they require modernizing, maintaining, and reinforcing interventions, which demand large investments. In many countries, accumulated intervention backlogs arise from aging and intense use, magnified by the financial constraints of the past. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructure (e.g., railways, roads). This problem requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model minimizes three objectives: (i) the yearly investment peak, by spreading investment evenly across multiple years; (ii) the total cost, which includes the extra maintenance costs incurred by renewal backlogs; (iii) priority delays, related to postponing work starts on the higher-priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by railway line sections being under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for this research. The methodology, inspired by a real case study and tested with real data, reflects the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services. The model cannot, therefore, allow an accumulation of works on the same line, which could cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over 5 years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest in the world), so considerably larger problems are not expected to appear in real life; an average 25-year backlog and a ten-year project horizon were considered. Despite the very large increase in the number of decision variables (200 times as many), the computational time did not increase very significantly. It is thus expected that just about any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., degradation of objective (i)), solutions improve considerably in the remaining two objectives.
Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling
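The flavour of the model can be conveyed by a toy weighted-sum version: a binary start-year variable per section and a peak-investment variable minimized together with backlog cost and priority delay. This PuLP sketch uses invented data and omits the operational (speed-restriction) constraints, so it is a much-reduced stand-in for the paper's model, not a reproduction of it:

```python
# Toy weighted-sum renewal scheduler (invented data). pip install pulp
import pulp

years = range(5)
sections = {            # name: (renewal cost, extra maintenance/yr, priority)
    "A": (10.0, 1.0, 3), "B": (6.0, 0.5, 1), "C": (8.0, 0.8, 2),
}

m = pulp.LpProblem("renewal", pulp.LpMinimize)
x = {(s, y): pulp.LpVariable(f"x_{s}_{y}", cat="Binary")
     for s in sections for y in years}
peak = pulp.LpVariable("peak_investment", lowBound=0)

backlog_cost = pulp.lpSum(sections[s][1] * y * x[s, y]      # objective (ii)
                          for s in sections for y in years)
prio_delay = pulp.lpSum(sections[s][2] * y * x[s, y]        # objective (iii)
                        for s in sections for y in years)
w1, w2, w3 = 1.0, 1.0, 0.5          # trade-off weights chosen by the planner
m += w1 * peak + w2 * backlog_cost + w3 * prio_delay        # objective (i)+...

for s in sections:      # every section renewed exactly once
    m += pulp.lpSum(x[s, y] for y in years) == 1
for y in years:         # peak bounds the investment of every year
    m += pulp.lpSum(sections[s][0] * x[s, y] for s in sections) <= peak

m.solve(pulp.PULP_CBC_CMD(msg=False))
for (s, y), var in x.items():
    if var.value() == 1:
        print(f"section {s} starts in year {y}")
print("max yearly investment:", peak.value())
```

Sweeping the weights traces the trade-off curve discussed in the abstract: allowing a higher peak yields schedules with lower total cost and fewer priority delays.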
Procedia PDF Downloads 145
119 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms
Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli
Abstract:
Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, the biochemical oxygen demand (BOD) poses one of the greatest challenges. Delayed BOD5 results from the lab, which take 7 to 8 days of analysis, hinder a wastewater treatment plant's (WWTP's) ability to react to different situations and meet treatment goals; this work presents a solution. Reducing BOD turnaround time from days to hours is our quest. The solution is based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision-making. A DT is a virtual and dynamic replica of a production process. A DT requires the ability to collect and store real-time sensor data related to the operating environment. Furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process so as to catch anomalies sooner. In our system for continuous-time monitoring of the BOD removed by the effluent treatment process, the DT algorithm analyzes the data using ML on a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access for wastewater samples (influent and effluent), hydraulic conduction tubes, pumps and valves for batch sample and dilution water, an air supply for dissolved oxygen (DO) saturation, a cooler/heater for sample thermal stability, an optical DO sensor based on fluorescence quenching, pH, ORP, temperature, and atmospheric pressure sensors, and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, the biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically from its initial conditions: DO (saturated) and the initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in São Paulo, Brazil.
Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning
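The kinetic core, four coupled first-order ODEs integrated from the measured initial conditions, can be sketched with SciPy; the rate law, rate constants, yield and initial estimates below are illustrative placeholders, not the calibrated parameters of the deployed system:

```python
import numpy as np
from scipy.integrate import solve_ivp

def kinetics(t, y, k, Yb, kd):
    """Coupled first-order ODEs for dissolved oxygen DO, organic matter S,
    biomass X and oxidation products P (concentration basis, mg/L).
    The rate law and all parameters are illustrative assumptions."""
    DO, S, X, P = y
    r = k * S * X                 # substrate oxidation rate
    dDO = -(1.0 - Yb) * r         # oxygen consumed by the oxidised fraction
    dS = -r                       # organic matter consumed
    dX = Yb * r - kd * X          # biomass growth minus decay
    dP = (1.0 - Yb) * r           # CO2 + H2O produced
    return [dDO, dS, dX, dP]

k, Yb, kd = 1e-4, 0.4, 1e-3       # placeholder kinetic parameters
DO0, S0, X0 = 8.5, 150.0, 5.0     # saturated DO; S0, X0 from least squares fit
sol = solve_ivp(kinetics, (0.0, 48.0), [DO0, S0, X0, 0.0],
                args=(k, Yb, kd), dense_output=True)

for t in np.linspace(0.0, 48.0, 5):
    bod_exerted = DO0 - sol.sol(t)[0]    # oxygen demand exerted so far
    print(f"t = {t:4.1f} h   BOD exerted ~ {bod_exerted:5.2f} mg/L")
```

In the deployed system the ML layer would, in effect, refit parameters such as k, Yb and the initial S and X against the streaming DO sensor data before issuing each BOD bulletin.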
Procedia PDF Downloads 73
118 How the Writer Tells the Story Should Be the Primary Concern rather than Who Can Write about Whom: The Limits of Cultural Appropriation Vis-à-Vis The Ethics of Narrative Empathy
Authors: Alexandra Cheira
Abstract:
Cultural appropriation has been theorised as a form of colonialism in which members of a dominant culture reduce cultural elements that are deeply meaningful to a minority culture to the category of the "exotic other", since they do not experience the oppression and discrimination faced by members of the minority culture. Yet, in the particular case of literature, writers such as Lionel Shriver and Bernardine Evaristo have argued that authors from a cultural majority have a right to write in the voice of someone from a cultural minority, hence attacking the idea that this is a form of cultural appropriation. By definition, Shriver and Evaristo claim, writers are supposed to write beyond their own culture, gender, class, and/or race. In this light, this paper discusses the limits of cultural appropriation vis-à-vis the ethics of narrative empathy by addressing the mixed critical reception of Kathryn Stockett's The Help (2009) and Jeanine Cummins's American Dirt (2020). In fact, both novels were acclaimed as global eye-openers regarding the struggles of, respectively, African American maids and Mexican migrants. At the same time, both novelists have been accused of cultural appropriation for telling a story that is not theirs to tell, given that they are white women telling these stories in what critics have argued is really an American voice addressing American readers. These claims will be investigated within the framework of Edward Said's foundational examination of Orientalism in the field of postcolonial studies as a Western style for authoritatively restructuring the Orient. This means that Orientalist stereotypes regarding Eastern cultures have implicitly validated colonial and imperial pursuits, in the specific context of literary representations of African American and Mexican cultures by white writers. At the same time, the conflicted reception of American Dirt and The Help will be examined within the critical framework of narrative empathy as theorised by Suzanne Keen. Hence, there will be a particular focus on the way a reader's heated perception that the author's perspective is purely dishonest can result from a friction between an author's intention and a reader's experience of narrative empathy, while a shared sense of empathy between authors and readers can be a rousing impetus to move beyond literary response to social action. Finally, in order to assess that "the key question should not be who can write about whom, but how the writer tells the story", the recent controversy surrounding Dutch author Marieke Lucas Rijneveld's decision to resign from translating American poet Amanda Gorman's work into Dutch will be duly investigated. In fact, Rijneveld stepped down after journalist and activist Janice Deul criticised Dutch publisher Meulenhoff for choosing a translator who was not also Black, despite the fact that 22-year-old Gorman had selected the 29-year-old Rijneveld herself, as a fellow young writer who had likewise come to fame early in life. In this light, the critical argument that the controversial reception of The Help reveals as much about US race relations in the early twenty-first century as about the complex literary transactions between individual readers and the novel itself will also be discussed in the extended context of American Dirt and white author Marieke Rijneveld's withdrawal from the projected translation of Black poet Amanda Gorman.
Keywords: cultural appropriation, cultural stereotypes, narrative empathy, race relations
Procedia PDF Downloads 70
117 Harnessing the Benefits and Mitigating the Challenges of Neurosensitivity for Learners: A Mixed Methods Study
Authors: Kaaryn Cater
Abstract:
People vary in how they perceive, process, and react to internal, external, social, and emotional environmental factors; some are more sensitive than others. Highly sensitive people have a highly reactive nervous system and are more impacted by positive and negative environmental conditions (Differential Susceptibility). Further, some sensitive individuals are disproportionately able to benefit from positive and supportive environments without necessarily suffering negative impacts in less supportive environments (Vantage Sensitivity). Environmental sensitivity is underpinned by physiological, genetic, and personality/temperamental factors, and the phenotypic expression of high sensitivity is Sensory Processing Sensitivity. The hallmarks of Sensory Processing Sensitivity are deep cognitive processing, emotional reactivity, high levels of empathy, noticing environmental subtleties, a tendency to observe new and novel situations, and a propensity to become overwhelmed when over-stimulated. Several educational advantages are associated with high sensitivity, including creativity, enhanced memory, divergent thinking, giftedness, and metacognitive monitoring. High sensitivity can also lead to some educational challenges, particularly managing multiple conflicting demands and negotiating low sensory thresholds. A mixed methods study was undertaken. In the first, quantitative study, participants completed the Perceived Success in Study Survey (PSISS) and the Highly Sensitive Person Scale (HSPS-12). The inclusion criterion was current or previous post-secondary education experience. The survey was distributed on social media, and snowball recruitment was employed (n=365). The spreadsheets were uploaded to the Statistical Package for the Social Sciences (SPSS) v26, and descriptive statistics found normal distribution. T-tests and analysis of variance (ANOVA) calculations found no difference in the responses of demographic groups, and principal components analysis and post-hoc Tukey calculations identified positive associations between high sensitivity and three of the five PSISS factors. Further ANOVA calculations found positive associations between the PSISS and two of the three sensitivity subscales. The study included a response field to register interest in further research, and respondents who scored in the 70th percentile on the HSPS-12 were invited to participate in a semi-structured interview. Thirteen interviews were conducted remotely (12 female). Reflexive inductive thematic analysis was employed to analyse the data, and a descriptive approach was used to present data reflective of participant experience. The results of this study found that highly sensitive students prioritize work-life balance; employ a range of practical metacognitive study and self-care strategies; value independent learning; connect with learning that is meaningful; and are bothered by aspects of the physical learning environment, including lighting, noise, and indoor environmental pollutants. There is a dearth of research investigating sensitivity in the educational context, and these studies highlight the need to promote widespread education-sector awareness of environmental sensitivity, and the need to include sensitivity in sector and institutional diversity and inclusion initiatives.
Keywords: differential susceptibility, highly sensitive person, learning, neurosensitivity, sensory processing sensitivity, vantage sensitivity
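The quantitative pipeline (score the HSPS-12, test group differences with ANOVA, test sensitivity-success associations, then select the interview pool at the 70th percentile) can be sketched as follows; the data are simulated and the scoring is simplified, so nothing below reproduces the study's results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 365

# Simulated stand-ins for the survey variables (not the real dataset):
hsps = rng.normal(4.5, 1.0, n)                    # HSPS-12 mean item score
psiss = 2.0 + 0.35 * hsps + rng.normal(0, 1, n)   # one PSISS factor
age_group = rng.integers(0, 3, n)                 # three demographic bands

# ANOVA: do demographic groups differ on perceived study success?
groups = [psiss[age_group == g] for g in range(3)]
f, p_anova = stats.f_oneway(*groups)
print(f"ANOVA across age groups: F = {f:.2f}, p = {p_anova:.3f}")

# Association between sensitivity and perceived success:
r, p_corr = stats.pearsonr(hsps, psiss)
print(f"HSPS vs PSISS: r = {r:.2f}, p = {p_corr:.2g}")

# Selecting interviewees scoring at the 70th percentile or above:
cutoff = np.percentile(hsps, 70)
print(f"interview pool: {np.sum(hsps >= cutoff)} respondents")
```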
Procedia PDF Downloads 65
116 Feasibility of Implementing Digital Healthcare Technologies to Prevent Disease: A Mixed-Methods Evaluation of a Digital Intervention Piloted in the National Health Service
Authors: Rosie Cooper, Tracey Chantler, Ellen Pringle, Sadie Bell, Emily Edmundson, Heidi Nielsen, Sheila Roberts, Michael Edelstein, Sandra Mounier Jack
Abstract:
Introduction: In line with the National Health Service's (NHS) long-term plan, the NHS is looking to implement more digital health interventions. This study explores one case in this area: a digital intervention used by NHS Trusts in London to consent adolescents for Human Papilloma Virus (HPV) immunisation. Methods: The electronic consent intervention was implemented in 14 secondary schools in inner-city London. These schools were statistically matched with 14 schools from the same area that were consenting using paper forms. Schools were matched on deprivation and English as an additional language. Consent form return rates and HPV vaccine uptake were compared quantitatively between intervention and matched schools. Data from observations of immunisation sessions and school feedback forms were analysed thematically. Individual and group interviews were undertaken with implementers, parents, and adolescents, and a focus group with adolescents was undertaken; all were analysed thematically. Results: Twenty-eight schools (14 e-consent schools and 14 paper consent schools) comprising 3219 girls (1733 in paper consent schools and 1486 in e-consent schools) were included in the study. The proportion of pupils eligible for free school meals, the proportion with English as an additional language, and the students' ethnicity profile were similar between the e-consent and paper consent schools. Return of consent forms was not increased by the implementation of the e-consent intervention. There was no difference in the proportion of pupils vaccinated at the scheduled vaccination session between the paper (n=14) and e-consent (n=14) schools (80.6% vs. 81.3%, p=0.93). The transition to using the system was not straightforward: whilst schools and staff understood the potential benefits, they found it difficult to adapt to new ways of working that removed some level of control from schools. Part of the reason for lower consent form return in e-consent schools was that some parents found the intervention difficult to use, due to limited access to the internet, finding it hard to open the weblink, language barriers, and, in some cases, the system closing a few days prior to sessions. Adolescents also highlighted the potential for e-consent interventions to bypass their information needs. Discussion: We would advise caution against dismissing the e-consent intervention because it did not achieve its goal of increasing the return of consent forms. Given the problems of embedding a new service, it was encouraging that HPV vaccine uptake remained stable. Introducing change requires stakeholders to understand, buy in, and work together with others. Schools and staff understood the potential benefits of using e-consent but found it hard to adapt to the new ways of working, which removed some level of control from schools; this suggests that implementing digital technology requires an embedding process. Conclusion: The future direction of the NHS will require the implementation of digital technology. Obtaining electronic consent from parents could help streamline school-based adolescent immunisation programmes. Findings from this study suggest that when implementing new digital technologies, it is important to allow for a period of embedding, to enable them to become incorporated into everyday practice.
Keywords: consent, digital, immunisation, prevention
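The headline uptake comparison is a two-proportion test; a sketch with statsmodels, using counts back-calculated from the reported percentages and cohort sizes rather than the study's exact session-level denominators (so the p-value will not match the reported one exactly):

```python
# Two-proportion z-test of vaccine uptake in paper vs e-consent schools.
# Counts are illustrative, back-calculated from the reported percentages
# and cohort sizes, not the study's exact session-level denominators.
from statsmodels.stats.proportion import proportions_ztest

vaccinated = [1397, 1208]     # ~80.6% of 1733 and ~81.3% of 1486
enrolled = [1733, 1486]

z, p = proportions_ztest(vaccinated, enrolled)
print(f"uptake: {vaccinated[0]/enrolled[0]:.1%} vs {vaccinated[1]/enrolled[1]:.1%}")
print(f"z = {z:.2f}, p = {p:.2f}")   # non-significant difference, as reported
```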
Procedia PDF Downloads 146
115 Meta-Analysis of Previously Unsolved Cases of Aviation Mishaps Employing Molecular Pathology
Authors: Michael Josef Schwerer
Abstract:
Background: Analyzing any aircraft accident is mandatory under the regulations of the International Civil Aviation Organization and of the respective country's criminal prosecution authorities. Legal medicine investigations are unavoidable when fatalities involve the flight crew or when doubts arise concerning the pilot's aeromedical health status before the event. As a result of the frequently tremendous blunt and sharp force trauma of the aircraft's impact with the ground, subsequent blast or fire exposure of the occupants, or putrefaction of the bodies in cases of delayed recovery, relevant findings can be masked or destroyed and therefore inaccessible to standard pathology practice, which comprises just forensic autopsy and histopathology. Such cases are at considerable risk of remaining unsolved, without legal consequences for those responsible. Further, no lessons can be drawn from these scenarios to improve flight safety and prevent future mishaps. Aims and Methods: To learn from previously unsolved aircraft accidents, re-evaluations of the investigation files and modern molecular pathology studies were performed. Genetic testing involved predominantly PCR-based analysis of gene regulation, studying DNA promoter methylation, RNA transcription, and post-transcriptional regulation. In addition, the presence or absence of infective agents, particularly DNA and RNA viruses, was studied. Technical adjustments of molecular genetic procedures were necessary when working with archived sample material, and standards for the proper interpretation of the respective findings had to be established. Results and Discussion: Additional molecular genetic testing contributes significantly to the quality of forensic pathology assessment in aviation mishaps. Previously undetected cardiotropic viruses can potentially explain, for example, a pilot's sudden incapacitation resulting from cardiac failure or myocardial arrhythmia. Conversely, negative results for infective agents help rule out concerns about an accident pilot's fitness to fly and support the aeromedical examiner's prior decision to issue him or her an aeromedical certificate. Care must be taken in the interpretation of genetic testing for pre-existing diseases such as hypertrophic cardiomyopathy or ischemic heart disease. Molecular markers such as mRNAs or miRNAs, which can establish these diagnoses in clinical patients, might be misleading in flight crew members because of adaptive changes in their tissues resulting, for instance, from repeated mild hypoxia during flight. Military pilots especially demonstrate significant physiological adjustments to their somatic burdens in flight, such as cardiocirculatory stress and air combat maneuvers. Their non-pathogenic alterations in gene regulation and expression are likely to be misinterpreted as genuine disease by inexperienced investigators. Conclusions: The growing influence of molecular pathology on legal medicine practice has found its way into aircraft accident investigation. Provided that appropriate quality standards for laboratory work and data interpretation are in place, forensic genetic testing supports the medico-legal analysis of aviation mishaps and can potentially reduce the number of unsolved events in the future.
Keywords: aviation medicine, aircraft accident investigation, forensic pathology, molecular pathology
114 Investigation of the Controversial Immunomodulatory Potential of Trichinella spiralis Excretory-Secretory Products versus Extracellular Vesicles Derived from These Products in vitro
Authors: Natasa Ilic, Alisa Gruden-Movsesijan, Maja Kosanovic, Sofija Glamoclija, Marina Bekic, Ljiljana Sofronic-Milosavljevic, Sergej Tomic
Abstract:
As a promising candidate for modulating the immune response towards an anti-inflammatory type, Trichinella spiralis infection has been shown to successfully alleviate the severity of experimental autoimmune encephalomyelitis (EAE), the animal model of the human disease multiple sclerosis. This effect is achieved via its muscle larvae excretory-secretory products (ES L1), which affect the maturation status and function of dendritic cells (DCs) by inducing a tolerogenic DC status, leading to mitigation of the Th1 type of response and activation of a regulatory type of immune response both in vitro and in vivo. ES L1, alone or via treated DCs, successfully mitigated EAE in the same manner as the infection itself. On the other hand, it has been shown that T. spiralis infection slows down tumour growth and significantly reduces tumour size in a mouse melanoma model, while ES L1 possesses pro-apoptotic and anti-survival effects on melanoma cells in vitro. Hence, although the mechanisms remain to be revealed, T. spiralis infection and its ES L1 products have a somewhat controversial potential to modulate both inflammatory diseases and malignancies. The recent discovery of T. spiralis extracellular vesicles (TsEVs) suggested that the induction of complex regulation of the immune response requires simultaneous delivery of different signals in nano-sized packages. This study aimed to explore whether TsEVs bear a potential similar to ES L1 to influence the status of DCs in the initiation, progression and regulation of the immune response, and also to investigate the effect of both ES L1 and TsEVs on myeloid-derived suppressor cells (MDSCs), which are a regular component of the tumour tissue environment. TsEVs were enriched from the conditioned medium of T. spiralis muscle larvae by differential centrifugation and used for the treatment of human monocyte-derived DCs and MDSCs. On DCs, TsEVs induced low expression of HLA-DR and CD40, moderate expression of CD83 and CD86, and increased expression of ILT3 and CCR7, i.e., they induced tolerogenic DCs. Such DCs possess the capacity to polarize the T cell immune response towards a regulatory type, with an increased proportion of IL-10- and TGF-β-producing cells, similarly to ES L1. These findings indicated that the ability of TsEVs to induce tolerogenic DCs favoring anti-inflammatory responses may be helpful in coping with diseases that involve Th1/Th17- but also Th2-mediated inflammation. In the in vitro MDSC model, although both ES L1 and TsEVs had the same suppressive impact on the MDSC phenotype, ES L1-treated MDSCs, unlike TsEV-treated ones, induced a T cell response characterized by increased RORγt and IFN-γ, while the proportion of regulatory cells was decreased, accompanied by a decrease in the proportion of IL-10- and TGF-β-positive cells within this population. These findings indicate an interesting ability of ES L1 to modulate the T cell response via MDSCs towards a pro-inflammatory type, suggesting that, unlike TsEVs, which consistently demonstrate a suppressive effect on the inflammatory response, it could also be used for the development of new approaches aimed at the treatment of malignant diseases. Acknowledgment: This work was funded by the Promis project – Nano-MDCS-Thera, Science Fund, Republic of Serbia. Keywords: dendritic cells, myeloid derived suppressor cells, immunomodulation, Trichinella spiralis
113 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer's disease, Parkinson's disease, and multiple sclerosis. While some treatment options exist, there are no objective measurement tools that allow for monitoring iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping the magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem using regularization via the injection of prior belief. The end result of Process II depends strongly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³ voxels), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease. Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
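As an illustration of the architecture described, here is a minimal 3D convolutional U-Net sketch in PyTorch for voxel-wise regression of iron maps from simulated MRI volumes. Channel counts, depth, patch size, and the training snippet are illustrative assumptions, not the authors' configuration.

```python
# Minimal 3D U-Net sketch for voxel-wise regression (PyTorch assumed).
# All hyperparameters below are illustrative, not the study's values.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with batch norm and ReLU: the usual U-Net unit
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_channels=1, out_channels=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, out_channels, 1)  # voxel-wise regression output

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Training-step sketch: synthetic heads give paired (MRI measurement, iron map) patches
model = UNet3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(2, 1, 32, 32, 32)   # stand-in for simulated MRI patches
y = torch.randn(2, 1, 32, 32, 32)   # stand-in for ground-truth iron maps
loss = nn.functional.mse_loss(model(x), y)
loss.backward(); opt.step()
```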
112 Genotoxic Effect of the Tricyclic Antidepressant Drug “Clomipramine Hydrochloride” on Somatic and Germ Cells of Male Mice
Authors: Samia A. El-Fiky, Fouad A. Abou-Zaid, Ibrahim M. Farag, Naira M. El-Fiky
Abstract:
Clomipramine hydrochloride is one of the most widely used tricyclic antidepressant drugs in Egypt. This drug contains two benzene rings in its chemical structure, and benzene is considered a toxic and clastogenic agent. The present study was therefore designed to assess the genotoxic effect of clomipramine hydrochloride on somatic and germ cells in mice. Three dose levels were used: 0.195 (low), 0.26 (medium), and 0.65 (high) mg/kg body weight. Seven groups of male mice were utilized in this work. The first group served as a control. In the remaining six groups, each of the above doses was orally administered to two groups, one treated for 5 days and the other given the same dose for 30 days. At the end of the experiments, the animals were sacrificed for cytogenetic and sperm examination as well as histopathological investigations using hematoxylin and eosin stains (H and E) and electron microscopy. The sperm studies were confined to the 5-day treatment with the different dose levels, while the ultrastructural investigation by electron microscopy was restricted to the 30-day treatment. The results of the dose-dependent effect of clomipramine showed that treatment with the three different doses induced increases in the frequencies of chromosome aberrations in bone marrow and spermatocyte cells as compared to the control. In addition, the mitotic and meiotic activities of somatic and germ cells declined. Treatment with the medium or high dose was more effective in inducing significant increases in chromosome aberrations and significant decreases in cell division than treatment with the low dose, and the effect of the high dose was more pronounced than that of the medium dose. Moreover, the results of the time-dependent effect of clomipramine showed that treatment with the different dose levels for 30 days led to greater increases in genetic aberrations than treatment for 5 days. Sperm examinations revealed that treatment with clomipramine at the different dose levels caused a significant increase in sperm shape abnormalities and a significant decrease in sperm count as compared to the control. The adverse effects on sperm shape and count were more obvious with the medium and high doses than with the low dose. The group of mice treated with the high dose had the highest rate of sperm shape abnormalities and the lowest sperm count compared to mice receiving the medium dose. In the histopathological investigation, hematoxylin and eosin staining showed that use of the low dose of clomipramine for 5 or 30 days caused few pathological changes in liver tissue, whereas the medium and high doses for 5 or 30 days induced more severe damage; treatment with the high dose for 30 days produced the most severe pathological changes in hepatic cells. Moreover, ultrastructural examination revealed that mice treated with the low dose of clomipramine showed few differences in liver histological architecture compared to the control group, confined to cytoplasmic inclusions, whereas prominent pathological changes in nuclei as well as dilation of the rough endoplasmic reticulum (rER) were observed in mice treated with the medium or high dose of the drug.
In conclusion, the present study adds evidence that treatment with medium or high doses of clomipramine has genotoxic effects on the somatic and germ cells of mice as unwanted side effects. However, the low dose (especially for a short time, 5 days) can be utilized as a therapeutic dose, as it caused proportions of genetic, sperm, and histopathological changes relatively similar to those found in normal controls. Keywords: chromosome aberrations, clomipramine, mice, histopathology, sperm abnormalities
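The abstract does not name the statistical test used for the aberration comparisons; a common choice for such count data is a chi-square test on a dose-group contingency table, sketched below with wholly illustrative counts.

```python
# Chi-square comparison of chromosome-aberration frequencies across dose groups.
# The test choice and all counts are illustrative assumptions, not study data.
from scipy.stats import chi2_contingency

# rows: control, low, medium, high dose; columns: aberrant vs. normal cells scored
table = [
    [6, 494],    # control: 6 aberrant cells out of 500 scored
    [14, 486],   # low dose
    [31, 469],   # medium dose
    [52, 448],   # high dose
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # small p -> dose effect
```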
111 Long-Term Tillage, Lime Matter and Cover Crop Effects under Heavy Soil Conditions in Northern Lithuania
Authors: Aleksandras Velykis, Antanas Satkus
Abstract:
Clay loam and clay soils are typical for northern Lithuania. These soils are susceptible to physical degradation when heavy machinery is used intensively for field operations. However, clayey soils, having poor physical properties by origin, require more intensive tillage to maintain a proper physical condition for the crops grown. Therefore, not only is the choice of a suitable tillage system very important for these soils in the region, but the search for additional measures to maintain a good soil physical state is also essential. Research objective: To evaluate the long-term effects of tillage of different intensities, as well as its combinations with supplementary agronomic practices, on the improvement of soil physical conditions and environmental sustainability. The experiment examined the influence of deep and shallow ploughing, ploughless tillage, combinations of ploughless tillage with incorporation of lime sludge and a cover crop for green manure, and application of the same cover crop for mulch without autumn tillage, under spring and winter crop growing conditions on a clay loam (27% clay, 50% silt, 23% sand) Endocalcaric Endogleyic Cambisol. Methods: The indicators characterizing the impact of the investigated measures were determined using the following methods and devices: soil dry bulk density by Eijkelkamp cylinder (100 cm³), soil water content by weighing, soil structure by Retsch sieve shaker, aggregate stability by Eijkelkamp wet sieving apparatus, and soil mineral nitrogen in 1 N KCl extract using a colorimetric method. Results: The clay loam soil physical state (dry bulk density, structure, aggregate stability, water content) depends on the tillage system and its combination with the additional practices used. Application of cover crop winter mulch without autumn tillage, ploughless tillage, and shallow ploughing causes compaction of the bottom (15-25 cm) topsoil layer. However, under ploughless tillage the soil dry bulk density in the subsoil (25-35 cm) layer is lower than under deep ploughing. Soil structure in the upper (0-15 cm) topsoil layer and in the seedbed (0-5 cm) prepared for spring crops is usually worse when applying ploughless tillage or cover crop mulch without autumn tillage. Application of lime sludge under ploughless tillage conditions helped to avoid compaction and structure deterioration in the upper topsoil layer, as well as to increase aggregate stability. Application of reduced tillage increased soil water content in the upper topsoil layer directly after spring crop sowing. However, under reduced tillage the water content in the whole topsoil markedly decreased during prolonged droughty periods. The combination of reduced tillage with a cover crop for green manure and winter mulch is significant for preserving the environment: such application of cover crops reduces the leaching of mineral nitrogen into the deeper soil layers and environmental pollution. This work was supported by the National Science Program 'The effect of long-term, different-intensity management of resources on the soils of different genesis and on other components of the agro-ecosystems' [grant number SIT-9/2015] funded by the Research Council of Lithuania. Keywords: clay loam, endocalcaric endogleyic cambisol, mineral nitrogen, physical state
110 A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe
Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira
Abstract:
Supporting cross-border secure and reliable exchange of data and documents and promoting data interoperability are critical for Europe to enhance sectors like eFinance, eJustice and eHealth. This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 access points and SMPs in Europe to provide secure document exchanges using the eDelivery DSI (Digital Service Infrastructure) among both private and public entities. Moreover, the project demonstrates the feasibility and value of the solution by providing several months of interoperability among the providers of the six partners in different EU countries. To achieve those goals, we have followed a methodology that first establishes a common background for the requirements in the partner countries and the European regulations. Then, the partners have implemented access points in each country, including their service metadata publishers (SMPs), to give their clients access to the pan-European network. Finally, we have set up interoperability tests with the other access points of the consortium. The tests include the use of each entity's production-ready information systems that process the data, to confirm all steps of the data exchange. For the access points, we have chosen AS4 instead of other existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern cryptographic algorithms, and more authentication types, such as username-password, X.509, and SAML authentication. The main contribution of the MADE project is to open the path for European companies to use eDelivery services with cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 profile. It also includes the development and integration of new components, integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support those profiles; thus, further efforts will be needed to promote this technology among companies. The consortium includes the following 9 partners. Of them, 2 are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other 7 (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in the secure delivery of electronic documents and information integration brokerage in their respective countries. To achieve cross-border operativity, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental in testing the feasibility of cross-border document eDelivery in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged through Europe, and it will be the basis for extending the network to the whole of Europe. This project has been funded under the Connecting Europe Facility Agreement number: INEA/CEF/ICT/A2016/1278042. Action No: 2016-EU-IA-0063. Keywords: security, e-delivery, e-invoicing, e-document exchange, trust
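To make the discovery step concrete, the sketch below builds the SMP hostname an AS4 access point resolves via the SML before fetching a receiver's service metadata, following the "B-" + MD5 convention of the PEPPOL SML specification as we understand it. The participant identifier and SML zone shown are illustrative, not values from the project.

```python
# Sketch of the eDelivery/PEPPOL discovery step performed before sending:
# hash the receiver's participant identifier to build the SMP hostname
# registered in the SML. Identifier and zone below are illustrative.
import hashlib

def smp_hostname(participant_id, scheme="iso6523-actorid-upis",
                 sml_zone="edelivery.tech.ec.europa.eu"):
    """Build the DNS name under which a participant's SMP is published."""
    digest = hashlib.md5(participant_id.lower().encode("utf-8")).hexdigest()
    return f"B-{digest}.{scheme}.{sml_zone}"

participant = "9920:ESA12345678"   # hypothetical identifier, for illustration only
print(smp_hostname(participant))
# The sender then queries this SMP over HTTP for the receiver's service metadata
# (supported document types and the AS4 endpoint URL) before the exchange itself.
```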
109 Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance
Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi
Abstract:
This paper explores a kinetic building facade designed for optimal energy capture and architectural expression. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. The growing interest in dynamic building envelopes necessitates the exploration of such facade systems. Integrating photovoltaic (PV) panels as kinetic elements offers potential benefits in increased energy generation and the regulation of energy flow within buildings. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, the design leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential for employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was used experimentally to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, three-dimensional (3D) printing, and laser cutting, were utilized to fabricate the physical components. A modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of this facade system to an existing library building at the Polytechnic University of Milan is presented. The system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable net structure. Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted to the limitations of a 3D printer. Sil 15 casting silicone rubber was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. This work demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, this approach paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability. Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building
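As a flavour of the surface-generation script described, here is a minimal Python stand-in: a plane perturbed by smoothed random heights with user-defined parameters. Function and parameter names are illustrative assumptions; in the actual workflow the output grid would feed a NURBS surface in Grasshopper.

```python
# Sketch of a parametric "random surface" generator of the kind the authors
# prompted an LLM to produce for Grasshopper. NumPy stands in for GH geometry;
# all names and defaults are illustrative assumptions.
import numpy as np

def random_surface(n_u=20, n_v=20, width=10.0, depth=10.0, amplitude=1.5, seed=42):
    """Return an (n_v, n_u, 3) grid of 3D points: a plane with smoothed random heights."""
    rng = np.random.default_rng(seed)        # user-defined seed -> reproducible form
    u = np.linspace(0.0, width, n_u)
    v = np.linspace(0.0, depth, n_v)
    X, Y = np.meshgrid(u, v)
    Z = rng.uniform(-1.0, 1.0, X.shape)      # raw random heights
    Zp = np.pad(Z, 1, mode="edge")           # 3x3 box blur for a fair, buildable surface
    Zs = sum(Zp[i:i + Z.shape[0], j:j + Z.shape[1]]
             for i in range(3) for j in range(3)) / 9.0
    return np.dstack([X, Y, amplitude * Zs])

pts = random_surface(n_u=30, n_v=30, amplitude=2.0)
print(pts.shape)  # (30, 30, 3): usable as a control-point grid for a NURBS surface
```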
108 Pathophysiological Implications in Immersion Treatment Methods of Ichthyophthiriasis Disease in African Catfish (Clarias gariepinus) Using Moringa oleifera Extract
Authors: Ikele Chika Bright, Mgbenka Bernard Obialo, Ikele Chioma Faith
Abstract:
Ichthyophthiriasis is caused by a prevalent protozoan ectoparasite mostly affecting cultured and aquarium fishes. The majority of chemotherapeutants lack the efficacy to completely eliminate the Ich parasite without affecting the environment, and they are not safe for human health. The present work focuses on evaluating different immersion treatments of African catfish (Clarias gariepinus) infected with ichthyophthiriasis and treated with a non-chemical, environmentally friendly parasiticide, Moringa oleifera. A total of 800 apparently healthy, parasite-free (examined) post-juvenile catfish were obtained from a reputable farm and disinfected with potassium permanganate in a quarantine tank to remove any possible external parasites. The fish were then challenged with approximately 44,000 infective-stage theronts, which were obtained through serial passages by cohabitation. Seven groups (A-G) of post-juveniles were used for the experiment, which was carried out in three stages: dips (60 minutes), short-term treatment (24-96 h), and prolonged bath treatment (0-15 days). The concentrations selected depended on the outcome of the LC50 of the plant material, from which dose-dependent factors were used to select the various treatment concentrations. In the dip treatment, groups D-G were treated with 1,500 mg/L, 2,500 mg/L, 3,500 mg/L and 4,500 mg/L; the short-term treatment groups received 150 mg/L, 250 mg/L, 350 mg/L and 450 mg/L; and the prolonged bath groups received 15 mg/L, 25 mg/L, 35 mg/L and 45 mg/L of the plant extract, whereas groups A, B and C were the normal control, Ich-infested untreated, and Ich-infested treated with a standard drug (acriflavine), respectively. The various treatment types with their corresponding concentrations showed almost complete elimination of the adult parasites (trophonts) both in the gills and in body smears, making M. oleifera a potential parasiticide. There were serious pathological alterations in the skin and gills, which are usually the main entry points for Ich parasite invasion, but no significant morphological differences were noted among the treated groups subjected to the different immersion treatment patterns. Epitheliocystis, aneurysm, oedema, hemorrhage, and localization of the adult parasite in the gills were the overall common observations made in the gills, whereas degeneration of muscle fibre, dermatitis, hemorrhage, oedema, abscess formation and keratinisation were observed in the skin. However, there were no pathological changes in the control group. Moreover, biochemical parameters (urea, creatinine, albumin, globulin, total protein, ALT, AST), blood chemistry (sodium, chloride, potassium, bicarbonate), antioxidants (CAT, SOD, GPx, LPO), enzymatic activities (myeloperoxidase, thioredoxin reductase), inflammatory response (C-reactive protein), stress markers (lactate dehydrogenase), haematological parameters (RBC, PCV, WBC, HB and differential count), and lipid profile (total cholesterol, triglycerides, high-density lipoprotein and low-density lipoprotein) all showed significant (P<0.05) and non-significant (P>0.05) responses among the Ich-infested fish treated under the three immersion treatments. It is suggested that M. oleifera may serve as an alternative to chemotherapeutants for the control of ichthyophthiriasis in the African catfish Clarias gariepinus. Keywords: Ichthyophthirius multifiliis, immersion treatment, pathophysiology, African catfish
107 Numerical Prediction of Crack Width of Concrete Dapped-End Beams
Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo
Abstract:
Several methods have been utilized to study the prediction of cracking of concrete structures under loading, and finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends, since cracks that exceed the allowable widths are unacceptable in environments aggressive to reinforcing steel. For simulating the crack width, the discrete crack approach was considered by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two dapped-end cases were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as vertical and horizontal rings; in the second case, 50% of the vertical stirrups in the dapped end to the main part of the beam were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading to reach the service load. The models were built using the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements (SOLID65) capable of cracking in tension and crushing in compression, and a Drucker-Prager yield surface was used to include the plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods, such as the nodal release technique, adopting softening relationships between tractions and separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces; this technique is called the CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode-I-dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the crack opening was characterized by the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the re-entrant corner of the dapped end. To validate the proposed approach, the results obtained with this procedure were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained, and the numerical models also allowed obtaining the load-crack width curves. In these two cases, the proposed model confirms the capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental for the prediction of crack width. The results regarding the crack width can be considered good from the practical point of view, as the load-displacement curve of the test and the location of the crack were reproduced with favorable results. Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis
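To make the cohesive law concrete, the sketch below implements a generic bilinear, Mode-I traction-separation relation of the kind used for the contact debonding, together with the fracture-energy identity it implies. The numerical values are illustrative, not the calibrated parameters of the study.

```python
# Generic bilinear, Mode-I-dominated cohesive traction-separation law.
# sigma_max, delta_n0 and delta_nc below are illustrative values only.
import numpy as np

def bilinear_czm_traction(delta_n, sigma_max=3.0e6, delta_n0=1.0e-5, delta_nc=1.0e-4):
    """Normal traction [Pa] vs. normal separation [m] for a bilinear cohesive law."""
    delta_n = np.asarray(delta_n, dtype=float)
    k0 = sigma_max / delta_n0                            # initial (penalty) stiffness
    rising = k0 * delta_n                                # linear elastic branch
    softening = sigma_max * (delta_nc - delta_n) / (delta_nc - delta_n0)
    t = np.where(delta_n <= delta_n0, rising, softening)
    return np.clip(t, 0.0, None)                         # zero traction once debonded

# The area under the curve is the critical fracture energy released at full debonding:
# G_c = 0.5 * sigma_max * delta_nc
sigma_max, delta_nc = 3.0e6, 1.0e-4
print("G_c =", 0.5 * sigma_max * delta_nc, "J/m^2")      # 150 J/m^2 for these values
```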
106 Developing an Integrated Clinical Risk Management Model
Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei
Abstract:
Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that can compensate for the limitations of each risk assessment and management tool with the advantages of the others. Methods: The procedure comprised two main stages: development of an initial model through meetings with professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a quantitative-qualitative study. For the qualitative dimension, focus groups with an inductive approach were used. To evaluate the results of the qualitative study, quantitative assessment of the two parts of the fourth phase and the seven phases of the research was conducted. Purposive and stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through the application of activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT), and Eindhoven Classification Model (ECM) tools. The model was applied to patients admitted for surgery to a day-clinic ward of a public hospital from October 2012 to June. Statistical Analysis Used: Qualitative data analysis was done through content analysis, and quantitative analysis through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patients' admission process for surgery was laid out by focus discussion group (FDG) members in five main phases. Then, following the FMEA methodology, 85 failure modes, along with their causes, effects, and preventability, were set out in tables. The tables developed to calculate the RPN index contain three criteria for severity, two criteria for probability, and two criteria for preventability. Three failure modes were above the determined significant-risk limit (RPN > 250). After a 3-month period, patient misidentification incidents were the most frequently reported events. Each RPN criterion of the misidentification events was compared, and the RPN values of the three reported misidentification events could be checked against the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. The wrong-side surgery event was selected by the focus discussion group to propose improvement actions. The most important causes were a lack of planning for the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system within the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Therefore, applying only one of the retrospective or prospective tools for risk management does not work, and each organization must provide conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance. Keywords: failure mode and effects analysis, risk management, root cause analysis, model
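The RPN screening step can be illustrated in a few lines. Classic FMEA computes the RPN as the product of severity, occurrence, and detectability (here, preventability) scores; the study's tables aggregate several criteria per factor, so the single per-factor scores below stand in for those aggregates. All failure-mode entries are illustrative; the 250 threshold is the one reported.

```python
# RPN screening sketch: classic FMEA product of three factor scores.
# Failure-mode entries are illustrative; 250 is the reported threshold.
def risk_priority_number(severity, probability, preventability):
    """Return the RPN for one failure mode (each factor typically scored 1-10)."""
    return severity * probability * preventability

failure_modes = [
    # (name, severity, probability, preventability) -- illustrative entries only
    ("patient misidentification", 9, 6, 5),
    ("incomplete consent form", 5, 4, 3),
    ("surgical site not marked", 8, 3, 4),
]

THRESHOLD = 250  # failure modes above this limit proceed to RCA / fault tree
for name, s, o, d in failure_modes:
    rpn = risk_priority_number(s, o, d)
    flag = "-> significant risk" if rpn > THRESHOLD else ""
    print(f"{name}: RPN = {rpn} {flag}")
```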
105 A Microwave Heating Model for Endothermic Reaction in the Cement Industry
Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
Microwave technology has been gaining importance in contributing to decarbonization processes in high-energy-demand industries. Despite the several numerical models presented in the literature, a proper verification and validation exercise is still lacking. This is important and required to evaluate the accuracy and adequacy of the physical process model. Another issue concerns impedance matching, an important mechanism used in microwave experiments to increase electromagnetic efficiency. Such a mechanism is not available in current computational tools, thus requiring an external numerical procedure. A numerical model was implemented to study the continuous processing of limestone with microwave heating. This process requires the material to be heated to a temperature that prompts a highly endothermic reaction. Both a 2D and a 3D model were built in COMSOL Multiphysics to solve the two-way coupling between the Maxwell and energy equations, along with the coupling between both heat transfer phenomena and the endothermic limestone reaction. The 2D model was used to study and evaluate the required numerical procedure, also serving as a benchmark test that allows other authors to implement impedance matching procedures. To achieve this goal, a controller built in MATLAB was used to continuously match the cavity impedance and predict the required energy for the system, thus successfully avoiding energy inefficiencies. The 3D model reproduces realistic results and therefore supports the main conclusions of this work. Limestone was modeled as a continuous flow under the transport of concentrated species, whose material and kinetic properties were taken from the literature. Verification and validation of the coupled model were carried out separately from the chemical kinetic model. The chemical kinetic model was found to correctly describe the chosen kinetic equation by comparing numerical results with experimental data. A solution verification was made for the electromagnetic interface, where second-order and fourth-order accurate schemes were found for linear and quadratic elements, respectively, with a numerical uncertainty lower than 0.03%. Regarding the coupled model, it was demonstrated that the numerical error diverges for the heat transfer interface with the mapped mesh. Results showed numerical stability for the triangular mesh, and the numerical uncertainty was less than 0.1%. This study evaluated the influence of limestone velocity, heat transfer, and load on thermal decomposition and overall process efficiency. The velocity and heat transfer coefficient were studied with the 2D model, while different material loads were studied with the 3D model. Both models proved to be highly unstable when solving non-linear temperature distributions. High-velocity flows exhibited a propensity to thermal runaways, and the thermal efficiency tended to stabilize for the higher velocities and higher filling ratios. Microwave efficiency showed an optimal velocity for each heat transfer coefficient, pointing out that electromagnetic efficiency is a consequence of energy distribution uniformity. The 3D results indicated inefficient development of the electric field for low filling ratios. Thermal efficiencies higher than 90% were found for the higher loads, and microwave efficiencies up to 75% were accomplished. The 80% fill ratio was demonstrated to be the optimal load, with an associated global efficiency of 70%. Keywords: multiphysics modeling, microwave heating, verification and validation, endothermic reactions modeling, impedance matching, limestone continuous processing
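The solution-verification step (observed order of accuracy and numerical uncertainty) follows the standard Richardson-extrapolation recipe, sketched below with illustrative values rather than the study's data.

```python
# Solution-verification sketch: observed order of accuracy from three
# systematically refined meshes, plus a GCI-style uncertainty estimate.
# The sample values are illustrative, not the study's data.
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed convergence order p from solutions on meshes refined by ratio r."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def numerical_uncertainty(f_medium, f_fine, p, r=2.0, safety=1.25):
    """Grid Convergence Index style uncertainty estimate for the fine-mesh solution."""
    return safety * abs(f_fine - f_medium) / (r**p - 1.0)

f1, f2, f3 = 401.2, 400.3, 400.075   # e.g. a field value on coarse/medium/fine meshes
p = observed_order(f1, f2, f3)
print(f"observed order ~ {p:.2f}")    # ~2 expected for linear elements
print(f"uncertainty ~ {numerical_uncertainty(f2, f3, p):.4f}")
```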
104 A Multimodal Discourse Analysis of Gender Representation on Health and Fitness Magazine Cover Pages
Authors: Nashwa Elyamany
Abstract:
In visual cultures, namely that of the United States, media representations are reflections of societal norms and expectations so influential and pervasive that they impact the manner in which both genders view themselves. Health and fitness magazines fall within the realm of visual culture. Since the main goal of communication is to ensure the proper dissemination of information so that the target audience grasps the intended messages, it becomes imperative that magazine publishers, editors, advertisers and image producers use the different modes of communication within their reach to convey messages to their readers and viewers. A rapidly waxing flow of multimodality floods popular discourse, particularly health and fitness magazine cover pages. The use of well-crafted cover lines and visual images is imbued with agendas, consumerist ideologies and properties capable of effectively conveying implicit and explicit meaning to potential readers and viewers. In essence, the primary goal of this thesis is to interrogate the multi-semiotic operations and manifestations of hegemonic masculinity and femininity in male and female body culture, particularly on the cover pages of the twin American magazines Men's Health and Women's Health, using corpora that span from 2011 to mid-2016. The researcher explores the semiotic resources that contribute to shaping and legitimizing a new form of postmodern, consumerist, gendered discourse that positions the reader-viewer ideologically. Methodologically, the researcher carries out analysis on the macro and micro levels. On the macro level, the researcher takes a critical stance to illuminate the ideological nature of the multimodal ensemble of the cover pages, and, on the micro level, seeks to put forward new theoretical and methodological routes through which the semiotic choices invested in the media texts can be more objectively scrutinized. On the macro level, a 'themes' analysis is initially conducted to isolate the overarching themes that dominate the fitness discourse on the cover pages under study. It is argued that variation in the frequencies of such themes indicates, broadly speaking, which facets of hegemonic masculinity and femininity are infused in the fitness discourse on the cover pages. On the micro level, this research work encompasses three sub-levels of analysis. The researcher follows an SF-MMDA approach, drawing on a trio of analytical frameworks: Halliday's SFG for the verbal analysis; Kress & van Leeuwen's VG for the visual analysis; and CMT in relation to Sperber & Wilson's RT for the pragma-cognitive analysis of multimodal metaphors and metonymies. The data are presented as detailed descriptions in conjunction with frequency tables, ANOVA with alpha=0.05, and MANOVA in the multiple phases of analysis. Insights and findings from this multi-faceted, social-semiotic analysis are interpreted in light of Cultivation Theory, Self-objectification Theory and the literature to date. Implications for future research include the implementation of a multi-dimensional approach whereby linguistic and visual analytical models are deployed with special regard to cultural variation. Keywords: gender, hegemony, magazine cover page, multimodal discourse analysis, multimodal metaphor, multimodal metonymy, systemic functional grammar, visual grammar
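For the quantitative phase, a one-way ANOVA on per-issue theme frequencies can be sketched as follows; the group labels and counts are illustrative assumptions, not the thesis data.

```python
# One-way ANOVA sketch (alpha = 0.05) on theme frequencies per cover page.
# Group labels and counts below are illustrative assumptions only.
from scipy.stats import f_oneway

# frequency of, e.g., a "body transformation" theme per issue, by magazine title
mens_health   = [7, 9, 6, 8, 7, 10]
womens_health = [11, 9, 12, 10, 13, 11]

F, p = f_oneway(mens_health, womens_health)
print(f"F = {F:.2f}, p = {p:.4f}")   # p < 0.05 -> theme frequency differs by title
```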
103 Application of Satellite Remote Sensing in Support of Water Exploration in the Arab Region
Authors: Eman Ghoneim
Abstract:
The Arabian deserts include some of the driest areas on Earth, yet their landforms preserve a record of past wet climates. During humid phases, the desert was green and contained permanent rivers, inland deltas and lakes. Some of their water would have seeped down and replenished the groundwater aquifers. When the wet periods came to an end several thousand years ago, the entire region transformed into an extended band of desert, and its original fluvial surface was totally covered by windblown sand. In this work, radar and thermal infrared images were used to reveal numerous hidden surface and subsurface features. Radar's long wavelength has the unique ability to penetrate dry surface sands and uncover buried subsurface terrain. Thermal infrared has also proven capable of spotting cooler moist areas, particularly in hot dry surfaces. Integrating Radarsat images and GIS revealed several previously unknown paleoriver and lake basins in the region. One of these systems, known as the Kufrah, is the largest yet identified river basin in the Eastern Sahara. This river, which straddles the border between Egypt and Libya, flowed north parallel to the adjacent Nile River, with an extensive drainage area of 235,500 km2 and a massive valley width of 30 km in some parts. This river most probably served as a spillway for an overflow from Megalake Chad to the Mediterranean Sea and, thus, may have acted as a natural water corridor used by human ancestors to migrate northward across the Sahara. The Gilf Kebir is another large paleoriver system, located just east of Kufrah, that emanates from the Gilf Plateau in Egypt. Both river systems terminate in vast inland deltas at the southern margin of the Great Sand Sea. The trends of their distributary channels indicate that both rivers drained to a topographic depression that was periodically occupied by a massive lake. During dry climates, the lake dried up and was roofed by sand deposits, which today form the Great Sand Sea. The enormity of the lake basin explains why continuous extraction of groundwater in this area is possible. A similar lake basin, delimited by former shorelines, was detected by spaceborne radar data just across the border in Sudan. This lake, called the Northern Darfur Megalake, has a massive size of 30,750 km2. These former lakes and rivers could potentially hold vast reservoirs of groundwater, oil and natural gas at depth. Like radar data, thermal infrared images have proven useful in detecting potential locations of subsurface water accumulation in desert regions. Analysis of both ASTER and daily MODIS thermal channels reveals several subsurface cool moist patches in the sandy desert of the Arabian Peninsula. The analysis indicated that such evaporative cooling anomalies resulted from the subsurface transmission of monsoonal rainfall from the mountains to the adjacent plain. Drilling a number of wells in several locations proved the presence of productive water aquifers, confirming the validity of the data used and the approaches adopted for water exploration in dry regions. Keywords: radarsat, SRTM, MODIS, thermal infrared, near-surface water, ancient rivers, desert, Sahara, Arabian peninsula
102 Effect of Cerebellar High Frequency rTMS on the Balance of Multiple Sclerosis Patients with Ataxia
Authors: Shereen Ismail Fawaz, Shin-Ichi Izumi, Nouran Mohamed Salah, Heba G. Saber, Ibrahim Mohamed Roushdi
Abstract:
Background: Multiple sclerosis (MS) is a chronic, inflammatory, mainly demyelinating disease of the central nervous system, more common in young adults. Cerebellar involvement is one of the most disabling lesions in MS and is usually a sign of disease progression. The cerebellum plays a major role in the planning, initiation, and organization of movement via its influence on the motor cortex and corticospinal outputs. It therefore contributes to movement control, motor adaptation, and motor learning, in addition to its vast connections with other major pathways controlling balance, such as the cerebellopropriospinal and cerebellovestibular pathways. Hence, stimulating the cerebellum with facilitatory protocols may add to motor control and balance function. Non-invasive brain stimulation, both repetitive transcranial magnetic stimulation (rTMS) and transcranial direct current stimulation (tDCS), has recently emerged as an effective neuromodulation approach to influence motor and non-motor functions of the brain. Anodal tDCS has been shown to improve motor skill learning and motor performance beyond the training period. Similarly, rTMS, when used at high frequency (>5 Hz), has a facilitatory effect on the motor cortex. Objective: Our aim was to determine the effect of high-frequency rTMS over the cerebellum in improving the balance and functional ambulation of multiple sclerosis patients with ataxia. Patients and methods: This was a randomized, single-blinded, placebo-controlled prospective trial on 40 patients. The active group (N=20) received real rTMS sessions, and the control group (N=20) received sham rTMS using a placebo program designed for this treatment. Both groups received 12 sessions of high-frequency rTMS over the cerebellum, followed by an intensive exercise training program. Sessions were given three times per week for four weeks. The active group protocol had a frequency of 10 Hz rTMS over the cerebellar vermis, a work period of 5 s, 25 trains, and an intertrain interval of 25 s, giving a total of 1250 pulses per session. Both groups of patients received an intensive exercise program, which included generalized strengthening exercises, endurance and aerobic training, trunk and abdominal exercises, generalized balance training exercises, and task-oriented training such as boxing. The modified ICARS was used as the primary outcome measure, and static posturography was performed with patients tested both with eyes open and eyes closed. Secondary outcome measures included the Expanded Disability Status Scale (EDSS) and the 8-meter walk test (8MWT). Results: The active group showed significant improvements in all the functional scales (modified ICARS, EDSS, and 8-meter walk test), in addition to significant differences in static posturography with eyes open, while the control group did not show such differences. Conclusion: Cerebellar high-frequency rTMS could be effective in the functional improvement of balance in MS patients with ataxia. Keywords: brain neuromodulation, high frequency rTMS, cerebellar stimulation, multiple sclerosis, balance rehabilitation
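The stimulation dose implied by the protocol is easy to verify; the sketch below recomputes the per-session pulse count from the abstract's parameters (the 12-session total is a derived figure, not one reported there).

```python
# Dose check for the reported protocol: 10 Hz trains of 5 s, 25 trains/session.
# Parameter values are taken from the abstract; the 12-session total is derived.
frequency_hz = 10      # pulses per second
train_s = 5            # work period per train
trains = 25            # trains per session

pulses_per_session = frequency_hz * train_s * trains
print(pulses_per_session)        # 1250, matching the reported figure
print(pulses_per_session * 12)   # 15000 pulses over the 12-session course
```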
101 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors
Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov
Abstract:
Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the addition of metal salts, polymers, or polyelectrolytes for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as 'sacrificial electrodes'. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor in its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distributions in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by wall shear friction and accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot-scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow. The observed head loss was also compared to the head loss predicted by several known conceptual, theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross section reactor configurations and one multiple concentric annular cross section configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the assumed value of the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity and that the flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of such assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are actually not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were obtained. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants. Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model
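As a reference point for the comparisons described, the sketch below computes the conventional Darcy-Weisbach head-loss estimate for a concentric annulus using the hydraulic diameter D_h = D_outer - D_inner. The laminar branch uses the plain circular-pipe value 64/Re, whereas annulus-specific laminar constants (up to roughly 96 for f·Re, depending on the radius ratio) and the study's best-fitting correlations would refine it; all dimensions are illustrative.

```python
# Darcy-Weisbach head loss for a concentric annulus via the hydraulic-diameter
# approximation. Laminar 64/Re is the circular-pipe value; annulus-specific
# constants would refine it. All dimensions below are illustrative.
import math

def annulus_head_loss(Q, D_out, D_in, L, nu=1.0e-6, eps=1.5e-6, g=9.81):
    """Friction head loss [m] for flow rate Q [m^3/s] through a concentric annulus."""
    area = math.pi * (D_out**2 - D_in**2) / 4.0   # annular flow area
    v = Q / area                                  # mean velocity
    D_h = D_out - D_in                            # hydraulic diameter of the annulus
    Re = v * D_h / nu
    if Re < 2300:
        f = 64.0 / Re                             # laminar, circular-pipe approximation
    else:                                         # turbulent: Swamee-Jain explicit formula
        f = 0.25 / math.log10(eps / (3.7 * D_h) + 5.74 / Re**0.9) ** 2
    return f * (L / D_h) * v**2 / (2.0 * g)

print(annulus_head_loss(Q=2.0e-4, D_out=0.05, D_in=0.03, L=1.0))  # head loss in m
```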