Search results for: A Man Called Ove
296 Analysis and Comparison of Asymmetric H-Bridge Multilevel Inverter Topologies
Authors: Manel Hammami, Gabriele Grandi
Abstract:
In recent years, multilevel inverters have become more attractive for single-phase photovoltaic (PV) systems due to their known advantages over conventional H-bridge pulse width-modulated (PWM) inverters: improved output waveforms, smaller filter size, lower total harmonic distortion (THD), and higher output voltages, among others. The most common multilevel converter topologies presented in the literature are the neutral-point-clamped (NPC), flying capacitor (FC), and cascaded H-bridge (CHB) converters. In both the NPC and FC configurations, the number of components drastically increases with the number of levels, which leads to a complex control strategy, high volume, and high cost. In contrast, increasing the number of levels is a flexible option in the cascaded H-bridge configuration; however, it needs isolated power sources for each stage, and it can be applied to PV systems only in the case of PV sub-fields. In order to improve the ratio between the number of output voltage levels and the number of components, several hybrid and asymmetric multilevel inverter topologies have been proposed in the literature, such as the FC asymmetric H-bridge (FCAH) and the NPC asymmetric H-bridge (NPCAH) topologies. Another asymmetric multilevel inverter configuration that could have interesting applications is the cascaded asymmetric H-bridge (CAH), which is based on a modular half-bridge (two switches and one capacitor, also called a level doubling network, LDN) cascaded to a full H-bridge in order to double the number of output voltage levels. This solution has the same number of switches as the above-mentioned AH configurations (i.e., six) and just one capacitor (as in the FCAH). The CAH is becoming popular due to its simple, modular, and reliable structure, and it can be considered a retrofit that can be added in series to an existing H-bridge configuration in order to double the output voltage levels. In this paper, an original and effective method for the analysis of the DC-link voltage ripple is given for single-phase asymmetric H-bridge multilevel inverters based on the level doubling network (LDN). Different possible configurations of the asymmetric H-bridge multilevel inverter are considered, and the input voltage and current are analytically determined and numerically verified by Matlab/Simulink for the case of cascaded asymmetric H-bridge multilevel inverters. The FCAH and CAH configurations are compared on the basis of the DC-link current and voltage ripple seen by the DC source (i.e., the PV system). The peak-to-peak current and voltage ripple amplitudes are analytically calculated over the fundamental period as a function of the modulation index. On the basis of the maximum peak-to-peak values of the low-frequency and switching ripple voltage components, the DC capacitors can be designed. Reference is made to unity output power factor, as in most grid-connected PV generation systems. Simulation results will be presented in the full paper in order to prove the effectiveness of the proposed developments in all operating conditions.
Keywords: asymmetric inverters, dc-link voltage, level doubling network, single-phase multilevel inverter
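As a rough illustration of the capacitor design step, the sketch below estimates the DC-link capacitance needed to bound the low-frequency (double-line-frequency) voltage ripple of a single-phase inverter at unity power factor. It uses the textbook relation for the double-frequency power pulsation, not the paper's LDN-specific derivation, and the ratings are hypothetical.

```python
import math

def dc_link_capacitance(p_out, v_dc, f_grid, dv_pp):
    """Estimate the DC-link capacitance needed to keep the low-frequency
    (double-line-frequency) voltage ripple below a peak-to-peak target.

    At unity power factor, the instantaneous AC power of a single-phase
    inverter pulsates as p(t) = P * (1 - cos(2*w*t)), so the capacitor
    buffers a current ripple of amplitude P / V_dc at 2*w, giving a
    peak-to-peak voltage ripple of roughly P / (w * C * V_dc).
    """
    w = 2 * math.pi * f_grid
    return p_out / (w * v_dc * dv_pp)

# Hypothetical 3 kW PV inverter, 400 V DC link, 50 Hz grid, 5% ripple target
C = dc_link_capacitance(p_out=3000, v_dc=400, f_grid=50, dv_pp=0.05 * 400)
print(f"Required DC-link capacitance: {C * 1e6:.0f} uF")
```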
295 Methods for Early Detection of Invasive Plant Species: A Case Study of Hueston Woods State Nature Preserve
Authors: Suzanne Zazycki, Bamidele Osamika, Heather Craska, Kaelyn Conaway, Reena Murphy, Stephanie Spence
Abstract:
Managing Invasive Plant Species (IPS) is an important component of effective preservation and conservation of natural lands. IPS are non-native plants that can aggressively encroach upon native species and pose a significant threat to the ecology, public health, and social welfare of a community. The presence of IPS in U.S. nature preserves has caused economic costs estimated to exceed $26 billion a year. While different methods have been identified to control IPS, few methods have been recognized for the early detection of IPS. This study examined methods for the early detection of IPS in Hueston Woods State Nature Preserve. A mixed-methods research design was adopted in this four-phase study. The first phase entailed data gathering and described the characteristics and qualities of IPS and the importance of early detection (ED). The second phase explored ED methods; Geographic Information Systems (GIS) and citizen science were identified as ED methods for IPS. The third phase involved the creation of hotspot maps to identify likely areas for IPS growth, while the fourth phase involved testing and evaluating mobile applications that can support the efforts of citizen scientists in IPS detection. Literature reviews were conducted on IPS and ED methods, and four regional experts from ODNR and Miami University were interviewed. A questionnaire was used to gather information about ED methods used across the state. The findings revealed that geospatial methods, including Unmanned Aerial Vehicles (UAVs), Multispectral Satellites (MSS), and the Normalized Difference Vegetation Index (NDVI), are not feasible for the early detection of IPS, as they require GIS expertise, are still emerging technologies, and are not suitable for every habitat. Therefore, other ED methods were explored, including predicting areas where IPS will grow by monitoring areas that resemble the species' native habitat. Through the literature review and interviews, IPS are known to grow in frequently disturbed areas such as along trails, shorelines, and streambanks. The research team called these areas "hotspots" and created maps of them specifically for Hueston Woods to support and narrow the efforts of citizen scientists and staff in the ED of IPS; a simplified version of this mapping step is sketched below. The results further showed that utilizing citizen scientists in the ED of IPS is feasible, especially through single-day events or passive monitoring challenges. The study concluded that the creation of hotspot maps to direct the efforts of citizen scientists is effective for the early detection of IPS. Several recommendations were made, among which are continuing to create hotspot maps to narrow ED efforts as citizen scientists work in the preserves, and utilizing citizen science volunteers to identify and record emerging IPS.
Keywords: early detection, Hueston Woods State Nature Preserve, invasive plant species, hotspots
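The hotspot-mapping step can be approximated with a few lines of GIS scripting: buffer the disturbance corridors (trails, shorelines, streambanks) and merge the buffers into candidate hotspot polygons. This is a minimal sketch assuming geopandas is available; the file names, coordinate system, and buffer distances are illustrative, not the study's actual parameters.

```python
import geopandas as gpd
from shapely.ops import unary_union

# Hypothetical input layers in a projected CRS (metres); EPSG code illustrative
trails = gpd.read_file("hw_trails.shp").to_crs(epsg=32616)
shorelines = gpd.read_file("hw_shorelines.shp").to_crs(epsg=32616)

# Frequently disturbed corridors become candidate IPS hotspots
hotspot_geom = unary_union([
    trails.buffer(10).unary_union,      # 10 m either side of trails
    shorelines.buffer(20).unary_union,  # 20 m along shorelines/streambanks
])
hotspots = gpd.GeoDataFrame(geometry=[hotspot_geom], crs=trails.crs)
hotspots.to_file("hw_hotspots.shp")
```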
294 Biophysical Analysis of the Interaction of Polymeric Nanoparticles with Biomimetic Models of the Lung Surfactant
Authors: Weiam Daear, Patrick Lai, Elmar Prenner
Abstract:
The human body offers many avenues that could be used for drug delivery. The pulmonary route, in which drugs are delivered through the lungs, presents many advantages that have sparked interest in the field. These advantages include: 1) direct access to the lungs and the large surface area they provide, and 2) close proximity to the blood circulation. The air-blood barrier of the alveoli is about 500 nm thick. It consists of cells and a monolayer of lipids and a few proteins, called the lung surfactant. This monolayer consists of ~90% lipids and ~10% proteins, produced by the alveolar epithelial cells. The two major lipid classes, phosphatidylcholine (PC) and phosphatidylglycerol (PG) of various saturations and chain lengths, represent 80% of the total lipid component. The major role of the lung surfactant monolayer is to reduce the surface tension experienced during breathing cycles in order to prevent lung collapse. In terms of the pulmonary drug delivery route, drugs pass through various parts of the respiratory system before reaching the alveoli; it is at this location that the lung surfactant functions as the air-blood barrier for drugs. As the field of nanomedicine advances, the use of nanoparticles (NPs) as drug delivery vehicles is becoming very important, due to the advantages NPs provide with their large surface area and potential for specific targeting. Therefore, studying the interaction of NPs with the lung surfactant, and whether they affect its stability, becomes essential. The aim of this research is to develop a biomimetic model of the human lung surfactant, followed by a biophysical analysis of its interaction with polymeric NPs. This biomimetic model will function as a fast initial mode of testing for whether NPs affect the stability of the human lung surfactant. The model developed thus far is an 8-component lipid system that contains the major PC and PG lipids. Recently, custom-made 16:0/16:1 PC and PG lipids were added to the model system; in the human lung surfactant, these lipids constitute 16% of the total lipid component. To the authors' knowledge, there is little monolayer data on the biophysical analysis of the 16:0/16:1 lipids, so further analysis is discussed here. Biophysical techniques such as the Langmuir trough are used for stability measurements, monitoring changes in a monolayer's surface pressure upon NP interaction. Furthermore, Brewster Angle Microscopy (BAM) is employed to visualize changes in the lateral domain organization. Results show preferential interactions of NPs with different lipid groups that are also dependent on monolayer fluidity. Furthermore, results show that film stability upon compression is unaffected, but there are significant changes in the lateral domain organization of the lung surfactant upon NP addition. This research is significant in the field of pulmonary drug delivery. NPs within a certain size range are known to be safe for the pulmonary route, but little is known about the mode of interaction of these polymeric NPs. Moreover, this work will provide additional information about the nanotoxicology of the NPs tested.
Keywords: Brewster angle microscopy, lipids, lung surfactant, nanoparticles
293 Improved Signal-To-Noise Ratio by the 3D-Functionalization of Fully Zwitterionic Surface Coatings
Authors: Esther Van Andel, Stefanie C. Lange, Maarten M. J. Smulders, Han Zuilhof
Abstract:
False outcomes of diagnostic tests are a major concern in medical health care. To improve the reliability of surface-based diagnostic tests, it is of crucial importance to diminish background signals that arise from the non-specific binding of biomolecules, a process called fouling. The aim is to create surfaces that repel all biomolecules except the molecule of interest. This can be achieved by incorporating antifouling, protein-repellent coatings between the sensor surface and its recognition elements (e.g., antibodies, sugars, aptamers). Zwitterionic polymer brushes are considered excellent antifouling materials; however, to be able to bind the molecule of interest, the polymer brushes have to be functionalized, and so far this was only achieved at the expense of either antifouling or binding capacity. To overcome this limitation, we combined both features into one single monomer: a zwitterionic sulfobetaine, ensuring antifouling capability, equipped with a clickable azide moiety which allows for further functionalization. By copolymerizing this monomer together with a standard sulfobetaine, the number of azides (and with that the number of recognition elements) can be tuned depending on the application. First, the clickable azido-monomer was synthesized and characterized, followed by copolymerization to yield functionalizable antifouling brushes. The brushes were fully characterized using surface characterization techniques such as XPS, contact angle measurements, G-ATR-FTIR, and XRR. As a proof of principle, the brushes were subsequently functionalized with biotin via strain-promoted alkyne-azide click reactions, which yielded a fully zwitterionic, biotin-containing, 3D-functionalized coating. The sensing capacity was evaluated by reflectometry using avidin- and fibrinogen-containing protein solutions. The surfaces showed excellent antifouling properties, as illustrated by the complete absence of non-specific fibrinogen binding, while at the same time clear responses were seen for the specific binding of avidin. A great increase in signal-to-noise ratio was observed, even when the amount of functional groups was lowered to 1%, compared to the traditional modification of sulfobetaine brushes that relies on a 2D approach in which only the top layer can be functionalized. This study was performed on stoichiometric silicon nitride surfaces for future microring resonator-based assays; however, the methodology can be transferred to other biosensor platforms, which is currently being investigated. The approach presented herein enables a highly efficient strategy for selective binding with retained antifouling properties for improved signal-to-noise ratios in binding assays. The number of recognition units can be adjusted to a specific need, e.g., depending on the size of the analyte to be bound, widening the scope of these functionalizable surface coatings.
Keywords: antifouling, signal-to-noise ratio, surface functionalization, zwitterionic polymer brushes
292 Brief Cognitive Behavior Therapy (BCBT) in a Japanese School Setting: Preliminary Outcomes on a Single Arm Study
Authors: Yuki Matsumoto, Yuma Ishimoto
Abstract:
Cognitive Behavior Therapy (CBT) with children has been applied effectively to various problems such as anxiety and depression. Although there are barriers to accessing mental health services, including a lack of professional services in communities and parental concerns about stigma, schools have a significant role in addressing children's health problems and are regarded as a suitable arena for the prevention and early intervention of mental health problems. Along these lines, CBT can be adapted to school education and used to enhance students' social and emotional skills. However, the Japanese school curriculum is rigorous, which limits the time available for implementing CBT in schools. This paper describes Brief Cognitive Behavior Therapy (BCBT) with children in a Japanese school setting. The program was developed to facilitate the acceptability of CBT in schools and aimed to enhance students' skills in managing anxiety and difficult behaviors. The present research used a single-arm design in which 30 students aged 9-10 years participated. The authors provided teachers with a CBT training workshop (two hours) at two primary schools in the Tokyo metropolitan area and recruited participants for the research. A homeroom teacher voluntarily delivered a 6-session BCBT program (15 minutes each) in classroom periods known as Kaerinokai, a meeting before leaving school. Students completed a questionnaire at pre- and post-intervention under the supervision of the teacher. The questionnaire included the Spence Child Anxiety Scale (SCAS), the Depression Self-Rating Scale for Children (DSRS), and the Strengths and Difficulties Questionnaire (SDQ). The teacher was asked for feedback after completion. Significant positive changes were found in the total score and five of six sub-scales of the SCAS and in the total difficulties score of the SDQ. However, no significant changes were seen in the Physical Injury Fear sub-scale of the SCAS, the DSRS, or the Prosocial sub-scale of the SDQ. The effect sizes are mostly between small and medium. The teacher commented that the program was easy to use and observed positive changes in classroom activities and personal relationships. This preliminary research showed the feasibility of BCBT in a school setting. The results suggest that BCBT offers effective treatment for reducing anxiety and difficult behaviors, and that BCBT may be easier than full CBT for Japanese teachers to deliver in promoting child mental health. The study has limitations, including the lack of a control group, the small sample size, and the short teacher training. Future research should address these limitations.
Keywords: brief cognitive behavior therapy, cognitive behavior therapy, mental health services in schools, teacher training workshop
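For a single-arm pre/post design like this one, the core analysis is a paired comparison with a within-subject effect size. The sketch below shows one standard way to compute it; the scores are made up for illustration, and the Cohen's d convention (mean difference over the SD of the differences) is an assumption, since the abstract does not state which variant was used.

```python
import numpy as np
from scipy import stats

def pre_post_summary(pre, post):
    """Paired t-test and Cohen's d for single-arm pre/post scores.

    Cohen's d is computed on the change scores (mean difference divided
    by the SD of the differences), one common within-subject convention.
    """
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    t, p = stats.ttest_rel(pre, post)
    diff = pre - post  # positive = symptom reduction on SCAS/SDQ-type scales
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d

# Hypothetical SCAS totals for a handful of students (not the study's data)
t, p, d = pre_post_summary(pre=[32, 28, 41, 25, 30], post=[27, 26, 35, 24, 28])
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```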
291 Training for Safe Tree Felling in the Forest with Symmetrical Collaborative Virtual Reality
Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti
Abstract:
The chainsaw is one of the most common pieces of forestry equipment still used today for pruning, felling, and processing trees. However, chainsaw use is dangerous and has one of the highest accident rates in both professional and non-professional work. Felling is proportionally the most dangerous phase, in both severity and frequency, because of the risk of being struck by the tree the operator wants to cut down. To avoid this, a correct sequence of chainsaw cuts must be taught for the different conditions of the tree. Virtual reality (VR) makes it possible to simulate chainsaw use without danger of injury. The limitations of existing applications are as follows. First, existing platforms are not symmetrically collaborative: the trainee is alone in virtual reality, and the trainer can only watch the virtual environment on a laptop or PC, which results in an inefficient teacher-learner relationship. Second, most applications only involve a virtual chainsaw, so the trainee cannot feel the real weight and inertia of an actual chainsaw. Finally, existing applications simulate only a few cases of tree felling. The objectives of this research were to implement and test a symmetrical collaborative training application based on VR and mixed reality (MR), with an overlap between a real and a virtual chainsaw in MR. The research and training platform was developed for the Meta Quest 2 head-mounted display. The application is based on the Unity 3D engine and the Presence Platform Interaction SDK (PPI-SDK) developed by Meta. The PPI-SDK avoids the use of controllers and enables hand tracking and MR. With the combination of these two technologies, it was possible to overlay a virtual chainsaw on a real chainsaw in MR and synchronize their movements in VR. This ensures that the user feels the weight of the actual chainsaw, tightens the right muscles, and performs the appropriate movements during the exercise, allowing the user to learn the correct body posture. The chainsaw works only if the right sequence of cuts is made to fell the tree. Contact detection is done by Unity's physics system, which allows the interaction of objects to simulate real-world behavior. Each cut of the chainsaw is defined by a so-called collider, and the felling of the tree can only occur if the colliders are activated in the right order, simulating a safe felling technique; a sketch of this check is given below. In this way, the user can learn how to use the chainsaw safely. The system is also multiplayer, so the student and the instructor can experience VR together in a symmetrical and collaborative way. The platform simulates the following tree-felling situations with safe techniques: cutting a tree tilted forward, cutting a medium-sized tree tilted backward, cutting a large tree tilted backward, sectioning a trunk on the ground, and cutting branches. The application is being evaluated on a sample of university students through a dedicated questionnaire. The results are expected to assess both the increase in learning compared to a theoretical lecture and the immersion and telepresence of the platform.
Keywords: chainsaw, collaborative symmetric virtual reality, mixed reality, operator training
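The ordered-collider logic is essentially a small state machine. The following Python sketch mirrors it outside Unity (the actual implementation would be C# collider callbacks); the cut names and the reset-on-wrong-cut behavior are illustrative assumptions.

```python
class FellingSequence:
    """Minimal state machine mirroring the ordered-collider logic: each cut
    (collider) must be triggered in the prescribed order, otherwise the
    attempt is rejected and the exercise restarts."""

    def __init__(self, required_cuts):
        self.required_cuts = required_cuts  # safe sequence for one scenario
        self.progress = 0

    def on_collider_triggered(self, cut):
        if cut == self.required_cuts[self.progress]:
            self.progress += 1
            if self.progress == len(self.required_cuts):
                return "tree_felled"
            return "ok"
        self.progress = 0  # wrong cut: unsafe technique, reset the exercise
        return "unsafe_cut"

# Hypothetical sequence for a forward-leaning tree (names are illustrative)
seq = FellingSequence(["top_notch_cut", "bottom_notch_cut", "back_cut"])
for cut in ["top_notch_cut", "bottom_notch_cut", "back_cut"]:
    print(cut, "->", seq.on_collider_triggered(cut))
```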
290 Analysis of Engagement Methods in the College Classroom Post Pandemic
Authors: Marsha D. Loda
Abstract:
College enrollment is declining, and Generation Z, today's college students, are struggling. Before the pandemic, researchers characterized this generational cohort as unique. Gen Z has been called the most achievement-oriented generation, as they enjoy greater economic status and are more racially and ethnically diverse and better educated than any other generation. However, they are also the generation most likely to suffer from depression and anxiety. Gen Z has grown up largely with usually well-intentioned but overprotective parents who inadvertently kept them from learning life skills, likely impacting their ability to cope with and effectively manage challenges. The unprecedented challenges resulting from the pandemic upended their world and left them emotionally reeling. One of the ramifications of this for higher education is how to re-engage current Gen Z students in the classroom. This research presents qualitative findings from 24 single-spaced pages of verbatim comments from college students. Research questions concerned what helps them learn and what they abhor, as well as how to engage them with the university outside of the classroom to aid retention. Students leave little doubt about what they want to experience in the classroom. In order of mention, students want discussion, to engage with questions, to hear how a topic relates to real life and the real world, to feel connections with the professor and fellow students, and to have an opportunity to give their opinions. They prefer a classroom that involves conversation, with interesting topics and active learning: "professor talks instead of lecturing"; "professor builds a connection with the classroom"; "I am engaged because it feels like a respectful conversation". Similarly, students are direct about what they dislike in a classroom. In order of frequency, students dislike teachers unenthusiastically reading word for word from notes or presentations, repeating the text without adding examples, or failing to address how to apply the information: "All lecture. I can read the book myself"; "Not taught how to apply the skill or lesson"; "Lectures the entire time. Lesson goes in one ear and out the other." Pertaining to engagement outside the classroom, Gen Z challenges higher education to step outside the box. They don't want to just hear from professionals in their field; they want to meet and interact with them. Perhaps because of their dependence on technology and pandemic isolation, they seem to reach out for assistance in forming social bonds: "I believe fun and social events are the best way to connect with students and get them involved. Cookouts, raffles, socials, or networking events would all most likely appeal to many students"; "Events… even if they aren't directly related to learning. Maybe like movie nights… doing meet ups at restaurants". Qualitative research suggests strategy, and this research is rife with strategic implications to improve learning, increase engagement, and reduce drop-out rates among Generation Z higher education students. It also complements existing research on student engagement. With college enrollment declining by some 1.3 million students over the last two years, this research is both timely and important.
Keywords: college enrollment, generation Z, higher education, pandemic, student engagement
289 Professional Learning, Professional Development and Academic Identity of Sessional Teachers: Underpinning Theoretical Frameworks
Authors: Aparna Datey
Abstract:
This paper explores the theoretical frameworks underpinning professional learning, professional development, and academic identity. The focus is on sessional teachers (also called tutors or adjuncts) in architectural design studios, who may be practitioners, masters or doctoral students, or academics hired 'as needed'. Drawing on Schön's work on reflective practice, Vygotsky's learning and developmental theories (social constructivism and zones of proximal development), and theories of informal and workplace learning, this research proposes that sessional teachers not only develop their teaching skills but also shape their identities through their 'everyday' work. Continuing academic staff develop their teaching through a combination of active teaching and self-reflection on teaching, as well as by learning to teach from others, via formalized programs and informally in the workplace. They are provided professional development and recognized for their teaching efforts through promotion, student citations, and awards for teaching excellence. The teaching experiences of sessional staff, by comparison, may be discontinuous, and they generally have fewer opportunities and incentives for teaching development. In the absence of access to formalized programs, sessional teachers develop their teaching informally in workplace settings that may be supportive or unhelpful. Their learning as teachers is embedded in everyday practice, applying problem-solving skills in ambiguous and uncertain settings. Depending on their level of expertise, they understand how to teach a subject such that students are stimulated to learn. Adult learning theories posit that adults have different motivations for learning and fall into a matrix of readiness; that an adult's ability to make sense of their learning is shaped by their values, expectations, beliefs, feelings, attitudes, and judgements; and that they are self-directed. The level of expertise of sessional teachers depends on their individual attributes and motivations, as well as on their work environment, the good practices they acquire and enhance through their practice, career training and development, the clarity of their role in the delivery of teaching, and other factors. The architectural design studio is ideal for this study due to the historical persistence of the vocational learning or apprenticeship model (learning under the guidance of experts) and a pedagogical format using two key approaches: project-based problem solving and collaborative learning. Hence, investigating the theoretical frameworks underlying academic roles and informal professional learning in the workplace would deepen understanding of how sessional teachers develop professionally and shape their academic identities. This qualitative research is ongoing at a major university in Australia, but the growing trend towards hiring sessional staff to teach core courses in many disciplines is a global one. This research will contribute to including transient sessional teachers in the discourse on institutional quality, effectiveness, and student learning.
Keywords: academic identity, architectural design learning, pedagogy, teaching and learning, sessional teachers
288 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements
Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez-Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga
Abstract:
Logging-While-Drilling (LWD) is a technique to record down-hole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geo-steering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers. Thus, it is common practice to approximate the Earth's subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by the industry due to their high performance. However, they have major limitations. First, the analytical solution of the aforementioned system of ODEs exists only for piecewise-constant resistivity distributions; for arbitrary resistivity distributions, the solution of the system of ODEs is unknown to date. Second, in geo-steering we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and the bed boundary positions) using a gradient-based inversion method, and thus we need to compute the corresponding derivatives; however, to the best of our knowledge, the analytical derivatives for cross-bedded formations and with respect to the bed boundary positions have not been published. The main contribution of this work is to overcome these limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method. The main idea is to divide the computations into two parts: (a) offline computations, which are independent of the tool positions, are precomputed only once, and are reused for all logging positions; and (b) online computations, which depend upon the logging position. With the above method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at a negligible additional cost by using an adjoint-state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method by performing a wide range of numerical experiments and comparing the numerical solutions to semi-analytic ones where the latter are available.
Keywords: logging-while-drilling, resistivity measurements, multi-scale finite elements, Hankel transform
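The offline/online split can be illustrated generically: factorize the (tool-position-independent) finite element system once, then reuse the factorization for cheap per-position solves in which only the source term changes. This is a minimal sketch of that idea with a stand-in 1D stiffness matrix, not the paper's actual multi-scale basis construction.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# Offline stage: assemble and factorize the 1D FE system for one Hankel mode.
# This does not depend on the logging position, so it is done only once.
n = 2000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = diags([off, main, off], [-1, 0, 1], format="csc")  # stand-in stiffness matrix
lu = splu(K)  # LU factorization reused for every tool position

# Online stage: for each tool position only the right-hand side changes,
# so each solve is a cheap pair of triangular back-substitutions.
for tool_position in range(0, n, 200):
    b = np.zeros(n)
    b[tool_position] = 1.0  # point source at the current logging position
    u = lu.solve(b)
```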
287 A Case Study on Problems Originated from Critical Path Method Application in a Governmental Construction Project
Authors: Mohammad Lemar Zalmai, Osman Hurol Turkakin, Cemil Akcay, Ekrem Manisali
Abstract:
In public construction projects, determining the contract period in the award phase is one of the most important factors. The contract period establishes the baseline for creating the cash flow curve and progress payment planning in the post-award phase. If overestimated, the project duration causes losses for both the owner and the contractor. Therefore, it is essential to base construction project duration on reliable forecasting. In Turkey, schedules are usually built as bar-chart (Gantt) schedules, especially by governmental construction agencies, and their use is largely limited to bidding purposes. Although the bar-chart schedule is useful in some cases, it lacks logical connections between activities, making it harder to identify the activities that have the greatest effect on the project's total duration, especially in large, complex projects. In this study, a construction schedule is prepared with the Critical Path Method (CPM), which addresses the above-mentioned shortcomings. CPM is a simple and effective method that displays the project duration and critical paths, showing the results of forward and backward calculations while considering the logical relationships between activities; it is a powerful tool for planning and managing all kinds of construction projects and a very convenient method for the construction industry. CPM provides a much more useful and precise approach than the traditional bar-chart diagrams that form the basis of construction planning and control. CPM has two main applications in the construction field. The first is obtaining the project duration via the so-called as-planned schedule, which includes as-planned activity durations and the relationships between subsequent activities. The second applies during project execution: each activity is tracked and its duration recorded in order to obtain the as-built schedule, which is known as the black box of the project. The latter is more useful for delay analysis and conflict resolution. These features have made CPM popular around the world; however, it has not yet been extensively used in Turkey. In this study, a real construction project is investigated as a case study, and CPM-based scheduling is used to establish both the as-planned and as-built schedules. Problems that emerged during the construction phase are identified and categorized, and solutions are suggested. Two scenarios were considered. In the first scenario, CPM was used to track and manage project progress based on real-time data. In the second scenario, project progress was assumed to be tracked with a Gantt chart. The S-curves of the two scenarios are plotted and interpreted. Comparing the results, possible faults of the latter scenario are highlighted, and solutions are suggested. The importance of CPM implementation is emphasized, and it is proposed that CPM-based construction schedules be made mandatory for public construction project contracts.
Keywords: as-built, case-study, critical path method, Turkish government sector projects
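The forward and backward passes at the heart of CPM are compact enough to show directly. The sketch below computes early/late dates and the zero-float (critical) activities for a toy network; the activity names and durations are invented for illustration.

```python
def cpm(activities):
    """Forward/backward pass of the Critical Path Method.
    activities: {name: (duration, [predecessors])}, assumed listed in
    topological order (each predecessor appears before its successors)."""
    es, ef = {}, {}
    for a, (d, preds) in activities.items():              # forward pass
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + d
    project_end = max(ef.values())
    ls, lf = {}, {}
    for a, (d, _) in reversed(list(activities.items())):  # backward pass
        succs = [s for s, (_, ps) in activities.items() if a in ps]
        lf[a] = min((ls[s] for s in succs), default=project_end)
        ls[a] = lf[a] - d
    critical = [a for a in activities if ls[a] - es[a] == 0]  # zero total float
    return project_end, critical

# Hypothetical mini-network (durations in days)
acts = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]), "D": (1, ["B", "C"])}
print(cpm(acts))  # -> (8, ['A', 'C', 'D'])
```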
286 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies
Authors: Roberta Martino, Viviana Ventre
Abstract:
Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, calculated by multiplying the cardinal utility of the outcome, as if its receipt were instantaneous, by a discount function that decreases the utility value according to how far the actual receipt of the outcome lies from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the profile of a rational investor described by classical economics. Instead, empirical evidence called for the formulation of alternative, hyperbolic models that better represent the actual actions of investors. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult and information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of alternatives, and the collection and processing of information, are conditioned by systematic distortions of the decision-making process: the behavioral biases involving the individual's emotional and cognitive systems. In this paper, we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting. The curve can be decomposed into two components: the first component is responsible for the smaller decrease in the outcome's value as time increases and is related to the individual's impatience; the second component relates to the change in the direction of the tangent vector to the curve and indicates how strongly the individual perceives the indeterminacy of the future, reflecting his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it would allow the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.
Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty
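To make the two ingredients of the decomposition concrete, the sketch below contrasts the standard exponential and hyperbolic discount functions and computes, for the hyperbolic curve, a slope term (a proxy for impatience) and the change in tangent direction via curvature (the uncertainty-related term). The functional forms and parameter values are standard textbook choices, not the paper's own decomposition.

```python
import numpy as np

# Standard exponential and hyperbolic discount functions (illustrative forms)
t = np.linspace(0, 10, 1001)
r, k = 0.2, 0.2
D_exp = np.exp(-r * t)        # constant rate of decrease: pure impatience
D_hyp = 1.0 / (1.0 + k * t)   # decreasing rate: impatience plus uncertainty

# First derivative ~ strength of the decrease (impatience component);
# change in the tangent direction ~ curvature (uncertainty-related component)
slope_hyp = np.gradient(D_hyp, t)
curvature_hyp = np.gradient(slope_hyp, t) / (1 + slope_hyp**2) ** 1.5

print(f"hyperbolic slope at t=0: {slope_hyp[0]:.3f}")       # ~ -k
print(f"hyperbolic curvature at t=0: {curvature_hyp[0]:.3f}")  # ~ 2k^2 scaled
```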
285 Precise Determination of the Residual Stress Gradient in Composite Laminates Using a Configurable Numerical-Experimental Coupling Based on the Incremental Hole Drilling Method
Authors: A. S. Ibrahim Mamane, S. Giljean, M.-J. Pac, G. L’Hostis
Abstract:
Fiber-reinforced composite laminates are particularly subject to residual stresses due to their heterogeneity and the complex chemical, mechanical, and thermal mechanisms that occur during their processing. Residual stresses are now well known to cause damage accumulation, shape instability, and behavior disturbance in composite parts. Many works exist in the literature on techniques for minimizing residual stresses, mainly in thermosetting and thermoplastic composites. To study in depth the influence of processing mechanisms on the formation of residual stresses, and to minimize them by establishing a reliable correlation, it is essential to be able to measure the residual stress profile in the composite very precisely. Residual stresses are important data to consider when sizing composite parts and predicting their behavior. The incremental hole drilling method is very effective for measuring the residual stress gradient in composite laminates. This semi-destructive method consists of incrementally drilling a hole through the thickness of the material and measuring the relaxation strains around the hole for each increment using three strain gauges. These strains are then converted into residual stresses using a matrix of coefficients (see the sketch below). These coefficients, called calibration coefficients, depend on the diameter of the hole and the dimensions of the gauges used. The reliability of incremental hole drilling depends on the accuracy with which the calibration coefficients are determined. These coefficients are calculated using a finite element model. The samples' features and the experimental conditions must be considered in the simulation; any mismatch can lead to inadequate calibration coefficients, thus introducing errors in the residual stresses. Several calibration coefficient correction methods exist for isotropic materials, but there is a lack of information on this subject concerning composite laminates. In this work, a Python program was developed to automatically generate the appropriate finite element model. This model allowed us to perform a parametric study to assess the influence of experimental errors on the calibration coefficients. The results highlighted the sensitivity of the calibration coefficients to the considered errors and gave an order of magnitude of the precision required of the experimental device for reliable measurements. On the basis of these results, improvements to the experimental device were proposed. Furthermore, a numerical method was proposed to correct the calibration coefficients for different types of materials, including thick composite parts for which the analytical approach is too complex. This method consists of taking the experimental errors into account in the simulation. Accurate measurement of the experimental errors (such as the eccentricity of the hole, the angular deviation of the gauges from their theoretical position, or errors in increment depth) is therefore necessary. The aim is to determine the residual stresses more precisely and to expand the validity domain of the incremental hole drilling technique.
Keywords: fiber reinforced composites, finite element simulation, incremental hole drilling method, numerical correction of the calibration coefficients, residual stresses
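The strain-to-stress conversion is, in its simplest form, a small linear inversion per increment. The sketch below shows that step with a least-squares solve; the calibration matrix and strain values are invented placeholders, since the real coefficients come from the finite element model described above.

```python
import numpy as np

def stresses_from_strains(C, eps):
    """Convert measured relaxation strains into residual stresses.
    C   : calibration-coefficient matrix from the FE model
          (rows: strain gauges/increments, cols: stress components)
    eps : measured relaxation strains for the increments.
    Solved in the least-squares sense, as is common when the
    system is overdetermined."""
    sigma, *_ = np.linalg.lstsq(C, eps, rcond=None)
    return sigma

# Hypothetical 3-gauge, 2-stress-component example (values are illustrative)
C = np.array([[-0.12, -0.03],
              [-0.05, -0.05],
              [-0.03, -0.12]])            # calibration coefficients
eps = np.array([8.4e-6, 3.9e-6, 2.1e-6])  # measured relaxation strains
print(stresses_from_strains(C, eps))
```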
284 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂
Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine
Abstract:
Rubber waste disposal is an environmental problem. In particular, much research is centered on the management of discarded tires. Despite the different ways of handling used tires, the most common is to deposit them in a landfill, creating stockpiles of tires. These stockpiles can pose a fire danger and provide a habitat for rodents, mosquitoes, and other pests, causing health hazards and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a current technological challenge. The technique that can break down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process in which the poly-, di-, and mono-sulfidic bonds formed during vulcanization are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) has been proposed as a green devulcanization atmosphere, because it is chemically inactive, nontoxic, nonflammable, and inexpensive. Its critical point can be easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing the pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin-screw extruder under various operating conditions. Supercritical CO₂ was added in different quantities to promote the devulcanization. Temperature, screw speed, and quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percentage and by its crosslink density, determined by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase, and, as expected, the soluble fraction increases with both parameters. The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases; the values reached were in good correlation (R = 0.96) with the soluble fraction. In order to analyze whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. The results showed that all tests fall on the curve corresponding to sulfur-bond scission, which indicates that the devulcanization happened successfully, without degradation of the rubber. In the FTIR spectra, none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected: due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate the devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage us to perform further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer (EPDM) rubber and natural rubber (NR).
Keywords: devulcanization, recycling, rubber, waste
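Crosslink density from equilibrium swelling is conventionally obtained with the Flory-Rehner equation. The sketch below applies it for a rubber-toluene system; the interaction parameter χ and the rubber volume fractions are assumed illustrative values, not the study's measurements.

```python
import numpy as np

def flory_rehner(v_r, chi=0.39, v_s=106.3):
    """Crosslink density (mol/cm^3) from equilibrium swelling in toluene
    via the Flory-Rehner equation.
    v_r : volume fraction of rubber in the swollen gel
    chi : Flory-Huggins rubber-toluene interaction parameter (assumed value)
    v_s : molar volume of toluene in cm^3/mol"""
    return -(np.log(1 - v_r) + v_r + chi * v_r**2) / (
        v_s * (v_r ** (1 / 3) - v_r / 2)
    )

# Example: a drop in v_r after devulcanization maps to a lower crosslink density
for v_r in (0.30, 0.20):
    print(f"v_r = {v_r:.2f} -> nu = {flory_rehner(v_r):.2e} mol/cm^3")
```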
283 Modeling the International Economic Relations Development: The Prospects for Regional and Global Economic Integration
Authors: M. G. Shilina
Abstract:
The phenomenon of interstate economic interaction is complex. 'Economic integration', as one of its types, can be explored through the prism of international law, theories of the world economy, politics, and international relations. The most objective study of the phenomenon requires a comprehensive, multifactorial approach. In the new geopolitical realities, the problems of coexistence and the possible interconnection of various mechanisms of interstate economic interaction are actively discussed. Currently, the states of the Eurasian continent support the movement toward economic integration. At the same time, the existing fragmentation of international economic law in Eurasia is seen as an important problem. The Eurasian space is characterized by various types of interstate relations: international agreements (multilateral and bilateral) and a large number of cooperation formats (from discussion platforms to organizations aimed at deep integration). For their harmonization, it is necessary to have a clear vision of the options for the phased regulation of international economic relations. Given the rapid development of international economic relations, modeling (including prognostic modeling) can be used as the main scientific method for presenting the phenomenon. On the basis of this method, it is possible to form a vision of the current situation and identify the best options for further action. In order to determine the most objective version of integration development, a combination of several approaches was used: the normative legal approach, i.e., the descriptive method of legal modeling, was taken as the basis for the analysis, and this set of legal methods was supplemented by prognostic methods from international relations science. The key elements of the model are the international economic organizations and associations of states existing in the Eurasian space (the Eurasian Economic Union (EAEU), the European Union (EU), the Shanghai Cooperation Organization (SCO), the Chinese 'One Belt, One Road' project (OBOR), the Commonwealth of Independent States (CIS), BRICS, etc.). A general term is proposed for the elements of the model: interstate interaction mechanisms (IIMs). The aim of building a model of current and future Eurasian economic integration is to show the optimal options for the joint economic development of the states and IIMs. The long-term goal of this development is a new economic and political space, the so-called 'Great Eurasian Community'. The achievement of this long-term goal consists of successive steps. Modeling the integration architecture and dividing the interaction into stages led us to the following conclusion: the SCO is able to transform Eurasia into a single economic space. Gradual implementation of the complex phased model, in which the SCO+ plays a key role, will make it possible to build effective economic integration for all its participants and to create an economically strong community. The model can have practical value for politicians, lawyers, economists, and other participants involved in the economic integration process. A clear, systematic structure can serve as a basis for further governmental action.
Keywords: economic integration, The Eurasian Economic Union, The European Union, The Shanghai Cooperation Organization, The Silk Road Economic Belt
282 Cultural Adaptation of an Appropriate Intervention Tool for Mental Health among the Mohawk in Quebec
Authors: Liliana Gomez Cardona, Mary McComber, Kristyn Brown, Arlene Laliberté, Outi Linnaranta
Abstract:
The history of colonialism and more contemporary political issues have exposed the Kanien'kehá:ka of Kahnawake to challenging and even traumatic experiences. Colonization, religious missions, and residential schools, as well as economic and political marginalization, are the factors that have challenged the wellbeing and mental health of this population. In psychiatry, screening for mental illness is often done using questionnaires in which the patient is asked to report how often he/she has certain symptoms. However, the Indigenous view of mental wellbeing may not fit well with this approach. Moreover, biomedical treatments do not always meet the needs of Indigenous people because they do not take into account the culture and traditional healing methods that persist in many communities. The objectives were to assess whether the symptom questionnaires commonly used in psychiatry are appropriate and culturally safe for the Mohawk in Quebec, and to identify the most appropriate tool to assess and promote wellbeing, following the process necessary to improve its cultural sensitivity and safety for the Mohawk population. This was a qualitative, collaborative, participatory action research project, respecting First Nations protocols and the principles of ownership, control, access, and possession (OCAP). Data collection was based on five focus groups with stakeholders working with these populations and members of Indigenous communities. Thematic analysis of the data was conducted, guided by an advisory group that led a revision of the content, use, and cultural and conceptual relevance of the instruments. The questionnaires measuring psychiatric symptoms face significant limitations in the local Indigenous context; we present the factors that make these tools less relevant among Mohawks. Although the Growth and Empowerment Measure (GEM) was originally developed among Indigenous peoples in Australia, the Mohawk in Quebec found that this tool captures critical aspects of their mental health and wellbeing more respectfully and accurately than questionnaires focused on measuring symptoms. We document the process of cultural adaptation of this tool, which was supported by community members, to create a culturally safe instrument that helps in growth and empowerment. The cultural adaptation of the GEM provides valuable information about the factors affecting wellbeing and contributes to mental health promotion. This process improves mental health services by giving health care providers useful information about the Mohawk population and their clients. We believe that integrating this tool into interventions can help create a bridge to improve communication between the Indigenous cultural perspective of the patient and the biomedical view of health care providers. Further work is needed to confirm the clinical utility of this tool in psychological and psychiatric intervention, along with social and community services.
Keywords: cultural adaptation, cultural safety, empowerment, Mohawks, mental health, Quebec
281 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, the use of Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in overcoming the communication barrier faced by deaf-mute people. Conventional research on this subject has been concerned with training a network to recognize the fingerspelling gestures of a given language and produce the corresponding alphanumerics. One problem with the current developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, an AI system is viewed as an operator mapping an input from the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, encoding the alphanumeric that the gesture represents and the language it comes from. These inputs and outputs, along with internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
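A stripped-down version of such a corrector is sketched below: center the measurements, reduce with PCA using the Kaiser rule, whiten, and fit a separating hyperplane between correct and erroneous measurements. This is a simplification under stated assumptions: it uses a single Fisher-style hyperplane where the described method builds one hyperplane per pairwise-correlated error cluster, and all data are synthetic.

```python
import numpy as np

def fit_corrector(S, Y):
    """Sketch of a linear corrector: centre the data, reduce with PCA
    (Kaiser rule: keep components whose eigenvalue exceeds the mean),
    whiten, then build one separating hyperplane between correct (S)
    and erroneous (Y) measurements."""
    X = np.vstack([S, Y])
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > evals.mean()                      # Kaiser rule
    W = evecs[:, keep] / np.sqrt(evals[keep])        # project + whiten
    Sw, Yw = (S - mu) @ W, (Y - mu) @ W
    w = Yw.mean(axis=0) - Sw.mean(axis=0)            # hyperplane normal
    b = (Yw.mean(axis=0) + Sw.mean(axis=0)) @ w / 2  # midpoint offset
    return lambda x: ((x - mu) @ W @ w - b) > 0      # True = flag as error

# Hypothetical 5-feature measurement vectors: correct vs. wrong predictions
rng = np.random.default_rng(0)
S = rng.normal(0.0, 1.0, size=(200, 5))
Y = rng.normal(2.5, 1.0, size=(40, 5))
flag_error = fit_corrector(S, Y)
print(flag_error(Y[0]), flag_error(S[0]))  # likely True, False
```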
280 Development of an Interface between BIM-model and an AI-based Control System for Building Facades with Integrated PV Technology
Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert
Abstract:
Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. In particular, to achieve the positive energy balances required for Positive Energy Districts (PEDs), the use of roofs alone is not sufficient in dense urban areas, and the increasing share of windows significantly reduces the facade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionalities of this product can be usefully extended. While offering advantages in terms of infrastructure, sustainable use of resources, and efficiency, these systems require increased optimization in the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet the required demands, such as maximum power generation, glare prevention, high daylight autonomy, avoidance of summer overheating, and the use of passive solar gains in wintertime. Today, three-dimensional geometric information on outdoor spaces and at the building level is available for planning with Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to extract the data required for simulation from BIM models and to make it usable for calculations and coupled simulations. The investigated object is uploaded as an IFC file to this web application and includes the object as well as the neighboring buildings and possible remote shading. The tool uses a ray-tracing method to determine possible glare from solar reflections off neighboring buildings, as well as near and far shadows per window on the object (a simplified single-ray version of this test is sketched below). Subsequently, an annual estimate of the sunlight per window is calculated, taking weather data into account. This optimized daylight assessment per window makes it possible to estimate the potential power generation of the PV integrated in the venetian blinds, as well as the daylight and solar entry. As a next step, these calculation results, together with all the parameters necessary for the thermal simulation, can be provided. The overall aim of this workflow is to advance the coordination between the BIM model and the coupled building simulation, linking the resulting shading and daylighting system with the artificial lighting system and maximum power generation in one control system. In the research project Powershade, an AI-based control concept for PV-integrated facade elements with coupled simulation results is investigated. The automated workflow concept developed in this paper is tested using an office living lab at the HELLA company.
Keywords: BIPV, building simulation, optimized control strategy, planning tool
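A single ray of the glare test reduces to reflecting the sun direction about the facade normal and checking whether the reflected ray points at a window. This is a minimal geometric sketch, not the HELLA DECART implementation; all vectors, the tolerance angle, and the scene are illustrative.

```python
import numpy as np

def specular_glare(to_sun, normal, surface_point, window_point, tol_deg=5.0):
    """Test whether sunlight specularly reflected off a glossy facade is
    aimed at a given window: one ray of a simplified glare check.
    to_sun: vector from the facade point toward the sun."""
    s = np.asarray(to_sun, float); s = s / np.linalg.norm(s)
    n = np.asarray(normal, float); n = n / np.linalg.norm(n)
    i = -s                               # direction of the incoming sun ray
    r = i - 2.0 * np.dot(i, n) * n       # mirror-reflected ray direction
    to_win = np.asarray(window_point, float) - np.asarray(surface_point, float)
    to_win = to_win / np.linalg.norm(to_win)
    cos_a = np.clip(np.dot(r, to_win), -1.0, 1.0)
    return np.degrees(np.arccos(cos_a)) < tol_deg

# Reflection off an east-facing facade point at 30 m height toward a window
# 20 m east, 20 m south, at 10 m height (all coordinates illustrative)
print(specular_glare(to_sun=[1, 1, 1], normal=[1, 0, 0],
                     surface_point=[0, 0, 30], window_point=[20, -20, 10]))
```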
Procedia PDF Downloads 108
279 Recovering Trust in Institutions through Networked Governance: An Analytical Approach via the Study of the Provincial Government of Gipuzkoa
Authors: Xabier Barandiaran, Igone Guerra
Abstract:
The economic and financial crisis that hit European countries in 2008 revealed the inability of governments to respond unilaterally to the so-called "wicked" problems that affect our societies. Closely linked to this, the increasing disaffection of citizens towards politics has resulted in growing distrust of the citizenry in institutions in general and in the political system in particular. Precisely these two factors prompted the provincial government of Gipuzkoa (Basque Country) to move from old ways of "doing politics" to a new way of "thinking politics" based on a collaborative approach, in which innovative modes of public decision-making are prominent. In this context, in 2015, the initiative Etorkizuna Eraikiz (Building the Future), a contemporary form of networked governance, was launched by the Provincial Government. The paper focuses on the Etorkizuna Eraikiz initiative, a sound commitment by a local government to build the future of the territory jointly with its citizens. This paper presents preliminary results obtained from three different co-creation experiences developed within Etorkizuna Eraikiz, in which the formulation of networked governance is a mandatory prerequisite. These experiences show how the network-building approach among the different agents of the territory, as well as the co-creation of public policies, is the cornerstone of this challenging mission. Through the analysis of the information and documentation gathered during the four years of Etorkizuna Eraikiz, and specifically by delving into the strategy promoted by the initiative, some emerging analytical conclusions resulting from the promotion of this collaborative culture are presented. For example, preliminary results have shown a significant positive relationship between shared leadership and the formulation of the public good. In the period 2016-2018, a total of 73 projects were launched and funded by the Provincial Government of Gipuzkoa within the Etorkizuna Eraikiz initiative, which indicates greater engagement of the citizenry in the process of policy-making and, therefore, some improvement in the quality of public policies. These statements are supported by the latest survey on citizens' attitudes toward politics and policies. Some of the most prominent results show that there is still a high level of distrust in politics (78.9% of respondents) but greater trust in institutions such as the Provincial Government of Gipuzkoa (40.8% of respondents rated the performance of this provincial institution as "good"). The Etorkizuna Eraikiz initiative has become more widely recognized by citizens over this period (25.4% of respondents in June 2018 reported knowing about the initiative, giving it a mark of 5.89), and it thus builds trust and a sense of ownership. Although there is a clear need for further research on the linkages between collaborative governance and levels of trust, the paper, based on these findings, provides some managerial and theoretical implications for collaborative governance in the territory.
Keywords: network governance, collaborative governance, public sector innovation, citizen participation, trust
Procedia PDF Downloads 122
278 Electro-Hydrodynamic Effects Due to Plasma Bullet Propagation
Authors: Panagiotis Svarnas, Polykarpos Papadopoulos
Abstract:
Atmospheric-pressure cold plasmas continue to gain interest for various applications due to their unique properties, such as cost-efficient production, high chemical reactivity, low gas temperature, and adaptability. Numerous designs have been proposed for the production of these plasmas in terms of electrode configuration, driving voltage waveform, and working gas(es). However, in order to exploit most of the advantages of these systems, the majority of designs are based on dielectric-barrier discharges (DBDs), in either filamentary or glow regimes. A special category of DBD-based atmospheric-pressure cold plasmas comprises the so-called plasma jets, in which a carrier noble gas is guided by the dielectric barrier (usually a hollow cylinder) and left to flow into the atmospheric air, where a complicated hydrodynamic interplay takes place. Although it is now well established that these plasmas are generated by ionizing waves that resemble streamer propagation in many ways, they exhibit distinct characteristics better captured by the terms "guided streamers" or "plasma bullets". These "bullets" travel with supersonic velocities, both inside the dielectric barrier and in the channel formed by the noble gas during its penetration into the air. The present work is devoted to the interpretation of the electro-hydrodynamic effects that take place downstream of the dielectric barrier opening, i.e., in the noble gas-air mixing area, where plasma bullets propagate under the influence of local electric fields in regions of variable noble gas concentration. Herein, we focus on the role of the local space charge, and of the residual ionic charge left behind after bullet propagation, in modifying the gas flow field. The study communicates both experimental and numerical results, coupled in a comprehensive manner. The plasma bullets are produced by a custom device with a quartz tube as the dielectric barrier and two external ring-type electrodes driven by a sinusoidal high voltage at 10 kHz. Helium gas is fed to the tube, and schlieren photography is employed to map the flow field downstream of the tube orifice. The mixture mass conservation equation, the momentum conservation equation, the energy conservation equation in terms of temperature, and a helium transfer equation are solved simultaneously, revealing the physical mechanisms that govern the experimental results. Namely, we deal with electro-hydrodynamic effects arising mainly from momentum transfer from atomic ions to neutrals. The atomic ions are left behind as residual charge after bullet propagation and gain energy from the locally created electric field. The electro-hydrodynamic force is eventually evaluated.
Keywords: atmospheric-pressure plasmas, dielectric-barrier discharges, schlieren photography, electro-hydrodynamic force
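In LaTeX form, the coupled system described above can be sketched as follows; the symbols and the simple drift-force closure are our assumptions, not the authors' notation:

\begin{align}
  \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) &= 0
    && \text{(mixture mass)} \\
  \rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right) &= -\nabla p + \mu \nabla^{2} \mathbf{u} + \mathbf{f}_{\mathrm{EHD}}
    && \text{(momentum)} \\
  \rho c_{p} \left( \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T \right) &= \nabla \cdot (k \nabla T)
    && \text{(energy, in terms of temperature)} \\
  \frac{\partial c_{\mathrm{He}}}{\partial t} + \mathbf{u} \cdot \nabla c_{\mathrm{He}} &= D \nabla^{2} c_{\mathrm{He}}
    && \text{(helium transfer)}
\end{align}

with the electro-hydrodynamic body force modeled as \mathbf{f}_{\mathrm{EHD}} = e\, n_{i}\, \mathbf{E}, i.e., the momentum transferred from the residual atomic ions (density n_i) to the neutrals under the local electric field \mathbf{E}.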
Procedia PDF Downloads 137
277 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)
Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen
Abstract:
Objective: Patients with Attention Deficit/Hyperactivity Disorder (ADHD) often report being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity to sensory inputs across the senses in children and adults with ADHD. The auditory modality in particular is affected by deficient acoustic inhibition and modulation of signals. While studying unimodal signal processing is relevant and well suited to a controlled laboratory environment, everyday situations are multimodal. A complex interplay of the senses is necessary to form a unified percept. To achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study, we investigate MI in an adult ADHD sample using the McGurk effect, a well-known illusion in which incongruent speech-like phonemes, in the case of successful integration, lead to the perception of a new phoneme via late top-down attentional allocation. In ADHD, neuronal dysregulation at rest, e.g., aberrant within- or between-network functional connectivity, may also account for difficulties in integrating across the senses. Therefore, the current study includes resting-state functional connectivity to investigate a possible relation between deficient network connectivity and the ability to integrate stimuli. Method: Twenty-five ADHD patients (6 females, age: 30.08 (SD: 9.3) years) and twenty-four healthy controls (9 females; age: 26.88 (SD: 6.3) years) were recruited. MI was examined using the McGurk effect, in which, in the case of successful MI, incongruent speech-like phonemes between the visual and auditory modalities lead to the perception of a new phoneme. The Mann-Whitney U test was applied to assess statistical differences between groups. Echo-planar resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner. A seed-to-voxel analysis was realized using the CONN toolbox. Results: Susceptibility to the McGurk effect was significantly lower for ADHD patients (ADHD Mdn: 5.83%, Controls Mdn: 44.2%, U = 160.5, p = 0.022, r = -0.34). When ADHD patients did integrate phonemes, reaction times were significantly longer (ADHD Mdn: 1260 ms, Controls Mdn: 582 ms, U = 41.0, p < .001, r = -0.56). In functional connectivity, the medio-temporal gyrus (seed) was negatively associated with the primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI seems to be deficient in ADHD patients for stimuli that need top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal MI convergence zones for complex stimuli in ADHD patients.
Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk-effect, resting-state functional connectivity
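For illustration, the reported group comparison can be reproduced with SciPy as in the sketch below; the per-subject values are hypothetical (the abstract gives only medians), and the rank-biserial formula is one common convention for the effect size r, not necessarily the authors' (its sign depends on the group ordering convention):

import numpy as np
from scipy import stats

# Hypothetical McGurk susceptibility (% of illusion trials) per subject
adhd = np.array([2.1, 5.8, 0.0, 11.7, 5.9, 8.3, 4.2])
controls = np.array([44.2, 38.0, 51.5, 46.7, 40.9, 49.1, 43.8])

u, p = stats.mannwhitneyu(adhd, controls, alternative="two-sided")

# Rank-biserial correlation as the effect size
r = 1 - 2 * u / (len(adhd) * len(controls))
print(f"U = {u:.1f}, p = {p:.4f}, r = {r:.2f}")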
Procedia PDF Downloads 124
276 The Construction of the Women's Self in Law: A Case of Medico-Legal Jurisprudence Textbooks in Rape Cases
Authors: Rahul Ranjan
Abstract:
Using gender as a category of historical analysis, feminist scholars have produced a plethora of literature on the sexual symbolics and carnal practices of modern European empires. At a symbolic level, the penetration and conquest of faraway lands was charged with sexual significance and intrigue. The white male's domination and possession of dark and fertile lands in Africa, Asia, and the Americas offered, in Anne McClintock's words, "a fantastic magic lantern of the mind onto which Europe projected its forbidden sexual desires and fears". The politics of rape were also symbolically significant to the politics of empire. To the colonized subject, rape was a fearsome factor, a language that spoke of the violent and voracious nature of imperial exploitation; the colonized often looked at rape as an act that colonizers used as a tool of oppression. Rape as an act of violence was encoded into the legal structure under the helm of Lord Macaulay in the so-called "Age of Reform", in 1860, under the Indian Penal Code (IPC). Lord Macaulay had earlier formed the Indian Law Commission in 1837, in which he drafted a bill defining the "crime of rape as sexual intercourse by a man with a woman against her will and without her consent, except in cases involving girls under nine years of age, where consent was immaterial". The modern English law of rape formulated in the colonial era brought two issues to the forefront. On the one hand, it deployed "technical experts" who wrote textbooks of medical jurisprudence that were cited as credentials to make cases more "objective"; on the other hand, presumptions about barbaric subjects and about the colonized woman's body, seen as docile and prone to adultery, were reflected in cases. The presumed untrustworthiness of the native witness also made it imperative for British jurists to place extra emphasis on being "objective". This formulation put women at a disadvantage before justice, doubly so, through British legality and its thinking about rape. The imperial morality that acted as a vanguard of women's chastity coincided with the language of science propagated in the post-Enlightenment period, which not only annulled non-conformist ideas but also made itself a hegemonic language, and was often used as a tool in the encoding of law. The medico-legal understanding of rape in colonial India has left clear imprints on post-colonial legality. The onus placed on the rape victim was long dictated, and still is, by the widely cited idea that "there should be signs and marks of resistance on the body of the victim"; otherwise the act is likely to be considered consensual. With this in view, this paper looks at the textual continuity that has prolonged the colonial construct of the woman's body and self.
Keywords: body, politics, textual construct, phallocentric
Procedia PDF Downloads 375
275 Influence of Torrefied Biomass on Co-Combustion Behaviors of Biomass/Lignite Blends
Authors: Aysen Caliskan, Hanzade Haykiri-Acma, Serdar Yaman
Abstract:
Co-firing coal and biomass blends is an effective method of reducing the carbon dioxide emissions released by burning coal, thanks to the carbon-neutral nature of biomass. Moreover, the use of biomass, a renewable and sustainable energy resource, mitigates the dependency on fossil fuels for power generation. However, most biomass species have drawbacks such as low calorific value and high moisture and volatile matter contents compared to coal. Torrefaction is a promising technique for upgrading the fuel properties of biomass through thermal treatment: it improves the calorific value of biomass while seriously reducing its moisture and volatile matter contents. In this context, several woody biomass materials, including Rhododendron, hybrid poplar, and ash-tree, were subjected to torrefaction in a horizontal tube furnace at 200°C under nitrogen flow. In this way, the solid residue of torrefaction, also called "biochar", was obtained and analyzed to monitor the variations in biomass properties. On the other hand, Turkish lignites from the Elbistan, Adıyaman-Gölbaşı, and Çorum-Dodurga deposits were chosen as coal samples, since these lignites are of great importance to lignite-fired power stations in Turkey. These lignites were blended with the obtained biochars, with the blending ratio of biochar kept at 10 wt% so that the lignites were the dominant constituents of the fuel blends. Burning tests of the lignites, biomasses, biochars, and blends were performed using a thermogravimetric analyzer up to 900°C, with a heating rate of 40°C/min, under a dry air atmosphere. Based on these burning tests, properties relevant to burning characteristics, such as burning reactivity and burnout yields, could be compared to assess the effects of torrefaction and blending. In addition, characterization techniques including X-Ray Diffraction (XRD), Fourier Transform Infrared (FTIR) spectroscopy, and Scanning Electron Microscopy (SEM) were applied to the untreated biomass and torrefied biomass (biochar) samples, the lignites, and their blends to examine the co-combustion characteristics in detail. The results of this study revealed that blending lignite with 10 wt% biochar created synergistic behaviors during co-combustion compared to the individual burning of the constituent fuels. The burnout and ignition performances of each blend were compared, taking the lignite and biomass structures and characteristics into account, and the blend with the best co-combustion profile and ignition properties was selected. Even though the final burnouts of the lignites decreased with the addition of biomass, co-combustion is a reasonable and sustainable solution due to its environmentally friendly benefits, such as reductions in net carbon dioxide (CO2), SOx, and hazardous organic chemicals derived from volatiles.
Keywords: burnout performance, co-combustion, thermal analysis, torrefaction pretreatment
Procedia PDF Downloads 338
274 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used in several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that remains a real challenge for many scientists. Support vector machines (SVMs) are among the most used kernel-based statistical learning algorithms. The SVM is a non-parametric method, recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs robust non-linear classification of samples. In practice, data are rarely linearly separable; SVMs map the data into a higher-dimensional space where they become linearly separable, while the kernel trick allows all computations to be performed in the original space. This is one of the main reasons that SVMs are well suited to high-dimensional classification problems. Only a few training samples, called support vectors, are required. The SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain its recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. First, connected component analysis is applied to cluster the point cloud. Second, the resulting clusters are fed into the SVM classifier. The radial basis function (RBF) kernel is used because only a small number of parameters (C and γ) need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrated that parameter selection can direct the search within a restricted interval of (C, γ) that can be explored further, but does not systematically lead to the optimal rates. The SVM classifier with hyper-parameter selection is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision trees. The comparison showed the superiority of the SVM classifier with parameter selection for LiDAR data over the other classifiers.
Keywords: classification, airborne LiDAR, parameters selection, support vector machine
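As an illustration of the described grid search, a sketch with scikit-learn follows; the feature matrix is a random stand-in, since the abstract does not list the per-cluster features actually used:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Stand-in per-cluster features and labels (ground + 3 roof classes)
rng = np.random.default_rng(0)
X_train = rng.random((500, 8))
y_train = rng.integers(0, 4, size=500)

param_grid = {"C": [0.1, 1, 10, 100, 1000],
              "gamma": [1e-4, 1e-3, 1e-2, 1e-1, 1.0]}

# RBF-kernel SVM with 5-fold cross-validated grid search over (C, gamma)
grid = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
grid.fit(X_train, y_train)

print("best (C, gamma):", grid.best_params_)
print("best cross-validated overall accuracy:", grid.best_score_)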
Procedia PDF Downloads 146
273 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, defines well-specified collections of constraints as RDF graphs, named "shape graphs". These shape graphs validate other resource description framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string-matching patterns, value types, and other constraints. Moreover, the SHACL framework supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL comprises two parts: SHACL Core, which includes the shapes covering the most frequent constraint components, and SHACL-SPARQL, an extension that allows SHACL to express more complex, customized constraints. Validating the efficacy of dataset mappings is an essential component of data reconciliation mechanisms, as enhancing the linking of different datasets is an ongoing process. The conventional validation methods are the semantic reasoner and SPARQL queries: the former checks formalization errors and data type inconsistencies, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects for linking and mapping diverse datasets. Our goal is to devise a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we ran SHACL over a CIDOC-CRM dataset after applying a Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation, which checks whether the source data are mapped to the correct data type, for instance, whether a birthdate is assigned to xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; and b) data integrity validation, which detects inconsistent data, for instance, inspecting whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
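A minimal sketch of the data-type check described in category (a), run from Python with the pyshacl library; the shape, the example namespace, the dataset file name, and the CRM namespace URI are assumptions for illustration, not the authors' artifacts:

from rdflib import Graph
from pyshacl import validate

SHAPES_TTL = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/shapes#> .

ex:BirthDateShape a sh:NodeShape ;
    sh:targetSubjectsOf crm:P82a_begin_of_the_begin ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;
    ] .
"""

shapes = Graph().parse(data=SHAPES_TTL, format="turtle")
data = Graph().parse("cidoc_dataset.ttl", format="turtle")  # hypothetical file

conforms, _, report_text = validate(data, shacl_graph=shapes, inference="rdfs")
print("conforms:", conforms)
print(report_text)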
Procedia PDF Downloads 251
272 The Expression of the Social Experience in Film Narration: Cinematic ‘Free Indirect Discourse’ in the Dancing Hawk (1977) by Grzegorz Krolikiewicz
Authors: Robert Birkholc
Abstract:
One of the basic issues in the creation of characters in media such as literature and film is the representation of the characters' thoughts, emotions, and perceptions. This paper is devoted to the social perspective (or focalization) expressed in film narration. The aim of the paper is to show how the social point of view of the hero, conditioned by his origin and the environment from which he comes, can be created using non-verbal, purely audiovisual means of expression. The issue is considered through the example of the little-known Polish movie The Dancing Hawk (1977) by Grzegorz Królikiewicz, based on the novel by Julian Kawalec. The thesis of the paper is that the Polish director uses a narrative figure somewhat analogous to the literary form of free indirect discourse. In literature, free indirect discourse is formally 'spoken' by the external narrator, but the narration is clearly filtered through the language and thoughts of the character. According to some scholars (such as Roy Pascal), the narrator in this form of speech does not cite the character's words but uses his way of thinking and imitates his perspective, sometimes with deep irony. Free indirect discourse is frequently used in Julian Kawalec's novel: through linguistic stylization, the author tries to convey the socially determined perspective of a peasant who migrates to the big city after the Second World War. Grzegorz Królikiewicz expresses the same social experience through pure cinematic form in his adaptation of the book. Both Kawalec and Królikiewicz show the consequences of so-called 'social advancement' in Poland after 1945, when the communist party took over political power. Through the fate of the main character, Michał Toporny, the director presents the experience of peasants who left their villages and had to adapt to a new, urban space. However, the paper is not focused on the historical topic itself, but on the audiovisual form of the movie. Although Królikiewicz does not frequently use POV shots, the narration of The Dancing Hawk is filtered through the sensations of the main character, who feels uprooted and alienated in the new social space. The director captures the hero's feelings through very complex audiovisual procedures: high or low points of view (representing the 'social position'), a grotesque soundtrack, expressionist scenery, and associative editing. In this way, he manages to create the world from the perspective of a socially maladjusted and internally split subject. The Dancing Hawk is a successful attempt to adapt the subjective narration of the book to the 'language' of the cinema. Mieke Bal's notion of focalization helps to describe 'free indirect discourse' as a transmedial figure for representing characters' perceptions. However, the polysemiotic medium of film also significantly transforms this figure of representation. The paper shows both the similarities and the differences between literary and cinematic 'free indirect discourse.'
Keywords: film and literature, free indirect discourse, social experience, subjective narration
Procedia PDF Downloads 130
271 Elaboration and Characterization of In-Situ CrC-Ni(Al,Cr) Composites Elaborated from Ni and Cr₂AlC Precursors
Authors: A. Chiker, A. Benamor, A. Haddad, Y. Hadji, M. Hadji
Abstract:
Metal matrix composites (MMCs) have attracted great interest for a few decades. Their major draw lies in their enhanced mechanical performance over unreinforced alloys. They have found ground in many engineering fields, such as aeronautics, aerospace, automotive, and other structural applications. Nickel alloys are among the most used matrix alloys, as they meet the need for high-temperature mechanical properties; some attempts have been made to develop nickel-based composites reinforced by high-melting-point, high-modulus particulates. Among the carbides used as reinforcing particulates, chromium carbide is interesting for wear applications; it is widely used as a tribological coating material in high-temperature applications requiring high wear resistance and hardness. Moreover, a set of properties makes it suitable for use in MMCs, such as its toughness, the good corrosion and oxidation resistance of its three polymorphs (the cubic Cr23C6, the hexagonal Cr7C3, and the orthorhombic Cr3C2), and its coefficient of thermal expansion, which is almost equal to that of metals. The in-situ synthesis of CrC-reinforced Ni matrix composites can be achieved by the powder metallurgy route. To ensure the in-situ reactions during sintering, the use of phase precursors is necessary. Recently, new precursor materials called MAX phases have been proposed. The MAX phases are thermodynamically stable nano-laminated materials displaying unusual and sometimes unique properties. These novel phases possess Mn+1AXn chemistry, where n is 1, 2, or 3, M is an early transition metal element, A is an A-group element, and X is C or N. Herein, the pressureless sintering method is used to elaborate Ni/Cr2AlC composites. Four composites were elaborated from 5, 10, 15, and 20 wt% of the Cr2AlC MAX phase precursor, which fully reacted with the Ni matrix at a sintering temperature of 1100 °C for 4 h in an argon atmosphere. XRD results showed that the Cr2AlC MAX phase was totally decomposed, forming the chromium carbide Cr7C3, while the released Al and Cr atoms diffused into the Ni matrix, giving rise to a γ-Ni(Al,Cr) solid solution and the γ'-Ni3(Al,Cr) intermetallic. Scanning Electron Microscopy (SEM) of the elaborated samples showed nanosized Cr7C3 reinforcing particles embedded in the Ni metal matrix, which have a direct impact on the tribological properties and hardness of the composites. All the composites exhibited higher hardness than pure Ni, with the 15 wt% Cr2AlC addition giving the highest hardness (1.85 GPa). Using a ball-on-disc tribometer, dry sliding tests of the elaborated composites against a 100Cr6 steel ball were performed under different applied loads. The microstructures and worn surface characteristics were then analyzed using SEM and Raman spectroscopy. The results show that all the composites exhibited better wear resistance than pure Ni, which could be explained by the formation of a lubricious tribo-layer during sliding and the good bonding between the Ni matrix and the reinforcing phases.
Keywords: composites, microscopy, sintering, wear
Procedia PDF Downloads 68
270 Treatment and Diagnostic Imaging Methods of Fetal Heart Function in Radiology
Authors: Mahdi Farajzadeh Ajirlou
Abstract:
Prior evidence of normal cardiac anatomy is desirable to relieve the anxiety of patients with a family history of congenital heart disease, or to offer the option of early termination of gestation or close follow-up should a cardiac anomaly be proved. Fetal heart screening plays an important part in the assessment of the fetus, and it can reflect fetal heart function, which is regulated by the central nervous system. Acquisition of ventricular volume and inflow data would be useful to quantify valve regurgitation and ventricular function, in order to determine the degree of cardiovascular compromise in fetal conditions at risk of hydrops fetalis. This study discusses imaging the fetal heart with transvaginal ultrasound, Doppler ultrasound, three-dimensional ultrasound (3DUS) and four-dimensional (4D) ultrasound, spatiotemporal image correlation (STIC), magnetic resonance imaging, and cardiac catheterization. Doppler ultrasound (DUS) is a kind of real-time imaging that depicts blood vessels and soft tissues well. DUS imaging can show the shape of the fetus, but it cannot show whether the fetus is hypoxic or distressed. Spatiotemporal image correlation (STIC) enables the acquisition of a volume of data concomitant with the beating heart. Automated volume acquisition is made possible by the array in the transducer performing a slow single sweep, recording a single 3D data set comprising numerous 2D frames one behind the other. The volume acquisition can be done as static 3D, as online 4D (direct volume scan, live 3D ultrasound, or so-called 4D (3D/4D)), or as spatiotemporal image correlation (STIC, offline 4D, which is a circular volume sweep). Fetal cardiovascular MRI would appear to be an ideal approach for the noninvasive investigation of the impact of abnormal cardiovascular hemodynamics on antenatal brain growth and development. Still, there are practical limitations to the use of conventional MRI for fetal cardiovascular assessment, including the small size and high heart rate of the human fetus, the lack of conventional cardiac gating methods to synchronize data acquisition, and the potential corruption of MRI data due to maternal respiration and unpredictable fetal movements. Fetal cardiac MRI has the potential to complement ultrasound in detecting cardiovascular malformations and extracardiac lesions. Fetal cardiac intervention (FCI), i.e., minimally invasive catheter intervention, is a new and evolving technique that allows in-utero treatment of a subset of severe forms of congenital heart disease. In special cases, it may be possible to modify the natural history of congenital heart disorders. It is entirely possible that future generations will 'repair' congenital heart disease in utero using nanotechnologies or remote computer-guided micro-robots that work at the cellular level.
Keywords: fetal, cardiac MRI, ultrasound, 3D, 4D, heart disease, invasive, noninvasive, catheter
Procedia PDF Downloads 37
269 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction
Authors: Bruce Wrightsman
Abstract:
Construction and design are inextricably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: the building envelope (skin) and the structure. From a material-performance standpoint, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known, yet its enormous effectiveness within wood-framed construction has seldom led to serious questioning of, or challenges to, what it means to build. There are several downsides to this method that are less widely discussed. The first, and perhaps biggest, is waste. The second is that its reliance on wood assemblies forming walls, floors, and roofs, conventionally nailed together through simple plate surfaces, is structurally inefficient; it requires additional material in the form of plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, looking back at the history of wood construction in the airplane- and boat-manufacturing industries reveals a significant transformation in the relationship of structure to skin. Boat construction moved from the indigenous wood practices of birch bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the evolution of the merged assemblies that drive the industry today. In 1911, the Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of thin, lightweight, and often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material. The monocoque, which translates to 'single shell', is a structural system that supports loads and transfers them through an external enclosure system. Monocoques have largely existed outside the domain of architecture, yet this uniting of divergent systems has been demonstrated to be lighter, using less material than traditional wood building practices. This paper examines the role monocoque systems have played in the history of wood construction through the lineage of the boat- and airplane-building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system comprised of interlocking small wood members to create thin shell assemblies for the walls, roof, and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster a deeper, more honest discourse regarding the limitations and impact of traditional wood framing.
Keywords: wood building systems, material histories, monocoque systems, construction waste
Procedia PDF Downloads 77
268 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance
Authors: Xiaoyong He
Abstract:
The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact strongly with them. Metamaterials (MMs), designed by tailoring the characteristics of unit cells (meta-molecules), may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices is subject to dissipation and a low quality factor (Q-factor). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). Unlike the symmetric Lorentzian spectral curve, a Fano resonance exhibits a distinctly asymmetric line shape, an ultrahigh quality factor, and steep variations in the spectral curves. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (a bright mode and a narrowband dark mode) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from a practical viewpoint, conveniently modulating the Fano spectral curves is a highly desirable requirement and an important and interesting research topic. For current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or by magnetic fields biasing a ferrite-based structure. But due to the limited dispersion properties of active materials, it is still hard to tailor the Fano resonance conveniently once the structural parameters are fixed. With the favorable properties of extreme confinement and high tunability, graphene is a strong candidate to achieve this goal. The DR structure supports the excitation of so-called 'trapped modes', with the merits of structural simplicity and high-quality resonances in thin structures. By depositing a graphene circular DR on a SiO2/Si/polymer substrate, tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of the graphene Fermi level, the structural parameters, and the operating frequency. The results show that the Fano peak can be efficiently modulated because of the strong coupling between the incident waves and the graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% as the Fermi level changes over the range 0.1-1.0 eV. The optimum gap distance between the DR is about 8-12 μm, where the figure of merit peaks. As the graphene ribbon width increases, the Fano spectral curves broaden, and the resonant peak blue-shifts. These results are very helpful for developing novel graphene plasmonic devices, e.g., sensors and modulators.
Keywords: graphene, metamaterials, terahertz, tunable
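For reference, the asymmetric line shape discussed above is the standard Fano profile (a textbook form, not taken from the paper itself):

\begin{equation}
  \sigma(\omega) \propto \frac{(\epsilon + q)^{2}}{1 + \epsilon^{2}},
  \qquad \epsilon = \frac{2(\omega - \omega_{0})}{\Gamma},
\end{equation}

where \omega_{0} is the resonance frequency, \Gamma the linewidth, and q the asymmetry parameter set by the coupling between the bright continuum and the dark discrete mode; as |q| \to \infty the profile reduces to a symmetric Lorentzian.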
Procedia PDF Downloads 343
267 Internal Concept of Integrated Health by Agrarian Society in Malagasy Highlands for the Last Century
Authors: O. R. Razanakoto, L. Temple
Abstract:
Living in a least developed country, Malagasy society has a weak capacity to internalize progress, including health concerns. From the arrival in the fifteenth century of the Arabic script called Sorabe, which was mainly reserved for the aristocracy, until the colonial era beginning at the end of the nineteenth century, which popularized the Western script in use today, manuscripts dealing with ostensibly scientific, or at least academic, issues have been slow to emerge. As a result, the way of life of Malagasy communities is not yet documented well enough to allow a precise understanding of the major concerns, reasons, and purposes of existence of the farmers who compose them. A question arises from the literature: how does a Malagasy community dominated by agrarian society conceive the conservation of its wellbeing? This study aims to assess the scope and limits of the 'One Health' concept, or Health Integrated Approach (HIA), which is evolving at a global scale, with regard to the specific context of local Malagasy smallholder farms. It is expected to identify how this society has represented the linked risks, and the mechanisms between human health, animal health, plant health, and ecosystem health, over the last 100 years. To do so, a framework for conducting systematic reviews in agricultural research was deployed to access the available literature. This task was coupled with the reading of articles that are not indexed by online scientific search engines but that mention parts of the history of agriculture and farmers in Madagascar. This literature review has documented the interactions between human illnesses, those affecting animals and plants (bred or wild), and any unexpected event (ecological or economic) that has modified the equilibrium of the ecosystem or disturbed the livelihoods of agrarian communities. In addition, drivers that may either accentuate or attenuate the devastating effects of these illnesses and changes were revealed. The study has established that the causes of human worries are not only physiological. Among the factors that regulate global health, the food system and contemporary medicine have helped improve life expectancy in Madagascar from 55 to 63 years over the last 50 years. However, threats to global health still occur. New human or animal illnesses and livestock or plant pathologies or pests may also appear, while ancient illnesses that are supposed to have disappeared may return. This study has highlighted how important the risks associated with unmanaged externalities that weaken community life are. Many risks, and also solutions, come from abroad and have long-term effects even when they occur as one-off events. Thus, a constructivist strategy is suggested for the 'One Health' global concept through the recording of local facts. This approach should facilitate the exploration of methodological pathways and the identification of relevant indicators for research related to HIA.
Keywords: agrarian system, health integrated approach, history, Madagascar, resilience, risk
Procedia PDF Downloads 108