1456 Occupational Heat Stress Related Adverse Pregnancy Outcome: A Pilot Study in South India Workplaces
Authors: Rekha S., S. J. Nalini, S. Bhuvana, S. Kanmani, Vidhya Venugopal
Abstract:
Introduction: Pregnant women's occupational heat exposure has been linked to foetal abnormalities and pregnancy complications. The presence of heat in the workplace is expected to lead to Adverse Pregnancy Outcomes (APO), especially in tropical countries where temperatures are rising and workplace cooling interventions are minimal. For effective interventions, in-depth understanding and evidence about occupational heat stress and APO are required. Methodology: Approximately 800 pregnant women in and around Chennai who were employed in jobs requiring moderate to hard labour participated in the cohort research. During the study period (2014-2019), environmental heat exposures were measured using a Questemp WBGT monitor, and heat strain markers, such as Core Body Temperature (CBT) and Urine Specific Gravity (USG), were evaluated using an Infrared Thermometer and a refractometer, respectively. Using a valid HOTHAPS questionnaire, self-reported health symptoms were collected. In addition, a postpartum follow-up with the mothers was done to collect APO-related data. Major findings of the study: Approximately 47.3% of pregnant workers have workplace WBGTs over the safe manual work threshold value for moderate/heavy employment (Average WBGT of 26.6°C±1.0°C). About 12.5% of the workers had CBT levels above the usual range, and 24.8% had USG levels above 1.020, both of which suggested mild dehydration. Miscarriages (3%), stillbirths/preterm births (3.5%), and low birth weights (8.8%) were the most common unfavorable outcomes among pregnant employees. In addition, WBGT exposures above TLVs during all trimesters were associated with a 2.3-fold increased risk of adverse fetal/maternal outcomes (95% CI: 1.4-3.8), after adjusting for potential confounding variables including age, education, socioeconomic status, abortion history, stillbirth, preterm, LBW, and BMI. The study determined that WBGTs in the workplace had direct short- and long-term effects on the health of both the mother and the foetus. Despite the study's limited scope, the findings provided valuable insights and highlighted the need for future comprehensive cohort studies and extensive data in order to establish effective policies to protect vulnerable pregnant women from the dangers of heat stress and to promote reproductive health.Keywords: adverse outcome, heat stress, interventions, physiological strain, pregnant women
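As an illustration of how a risk estimate of this kind is typically derived, the sketch below computes an unadjusted risk ratio with a 95% confidence interval from a 2x2 exposure-outcome table in Python; the counts are invented placeholders, and the study's 2.3-fold estimate comes from a model adjusted for confounders rather than from this simple calculation.
```python
import math

# Hypothetical 2x2 table (NOT the study's data): exposed = WBGT above TLV
a, b = 45, 155   # exposed: with / without adverse pregnancy outcome
c, d = 20, 180   # unexposed: with / without adverse pregnancy outcome

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# 95% CI on the log scale (standard Katz method)
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```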
Procedia PDF Downloads 73
1455 Clustering-Based Computational Workload Minimization in Ontology Matching
Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris
Abstract:
In order to build a matching pattern for each class correspondences of ontology, it is required to specify a set of attribute correspondences across two corresponding classes by clustering. Clustering reduces the size of potential attribute correspondences considered in the matching activity, which will significantly reduce the computation workload; otherwise, all attributes of a class should be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attributes discovery methods, such as cluster-based attribute searching. This problem makes ontology matching activity computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that would reduce the size of potential elements correspondences during mapping thereby reduce the computational workload in a matching process as a whole. The objective of this work is 1) to design a clustering method for discovering similar attributes correspondences and relationships between ontologies, 2) to discover element correspondences by classifying elements of each class based on element’s value features using K-medoids clustering technique. Discovering attribute correspondence is highly required for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared to their attribute values, so that they can be regarded to be the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Besides, any two instances that hold more similar attribute values are more likely to be matched than the ones with less similar attribute values. Most of the time, similar attribute values exist in the two instances across which there is an attribute correspondence. This work will present how to classify attributes of each class with K-medoids clustering, then, clustered groups to be mapped by their statistical value features. We will also show how to map attributes of a clustered group to attributes of the mapped clustered group, generating a set of potential attribute correspondences that would be applied to generate a matching pattern. The K-medoids clustering phase would largely reduce the number of attribute pairs that are not corresponding for comparing instances as only the coverage probability of attributes pairs that reaches 100% and attributes above the specified threshold can be considered as potential attributes for a matching. Using clustering will reduce the size of potential elements correspondences to be considered during mapping activity, which will in turn reduce the computational workload significantly. Otherwise, all element of the class in source ontology have to be compared with all elements of the corresponding classes in target ontology. K-medoids can ably cluster attributes of each class, so that a proportion of attribute pairs that are not corresponding would not be considered when constructing the matching pattern.Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching
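A minimal sketch of the K-medoids grouping step described above, written in Python with NumPy; the attribute names and the value features (mean value length, numeric ratio, distinct-value ratio) are illustrative assumptions, not the authors' implementation.
```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Plain PAM-style K-medoids on a Euclidean distance matrix."""
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            # medoid = the member minimising total distance to the other members
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    labels = np.argmin(dist[:, medoids], axis=1)
    return medoids, labels

# Illustrative attribute features: [mean value length, numeric ratio, distinct-value ratio]
attrs = ["name", "title", "price", "cost", "zip", "phone"]
X = np.array([[12, 0.0, 0.9], [14, 0.0, 0.8], [5, 1.0, 0.6],
              [5, 1.0, 0.5], [5, 1.0, 0.1], [10, 0.8, 0.9]], dtype=float)

medoids, labels = k_medoids(X, k=2)
for a, l in zip(attrs, labels):
    print(a, "-> cluster", l)
```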
Procedia PDF Downloads 250
1454 A User-Directed Approach to Optimization via Metaprogramming
Authors: Eashan Hatti
Abstract:
In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance-high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant of an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.Keywords: optimization, metaprogramming, logic programming, abstraction
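Peridot expresses optimizations as logic metaprograms; purely as an analogy for the idea that library authors can register their own program transformations, here is a tiny Python sketch of a user-extensible rewrite-rule optimizer over a toy expression AST. It is not Peridot code and omits unification, higher-order abstract syntax, and non-determinism.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Call:
    fn: str
    args: tuple

# Library authors register their own rewrite rules: AST -> AST or None
RULES = []

def rule(f):
    RULES.append(f)
    return f

@rule
def map_fusion(e):
    # map(f, map(g, xs))  ->  map(compose(f, g), xs)
    if isinstance(e, Call) and e.fn == "map":
        f, inner = e.args
        if isinstance(inner, Call) and inner.fn == "map":
            g, xs = inner.args
            return Call("map", (Call("compose", (f, g)), xs))
    return None

def optimize(e):
    # Bottom-up rewriting to a fixed point
    if isinstance(e, Call):
        e = Call(e.fn, tuple(optimize(a) for a in e.args))
    for r in RULES:
        out = r(e)
        if out is not None:
            return optimize(out)
    return e

expr = Call("map", ("f", Call("map", ("g", "xs"))))
print(optimize(expr))  # Call(fn='map', args=(Call(fn='compose', args=('f', 'g')), 'xs'))
```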
Procedia PDF Downloads 88
1453 The Significance of Picture Mining in the Fashion and Design as a New Research Method
Authors: Katsue Edo, Yu Hiroi
Abstract:
Increasing attention has been paid to the use of pictures and photographs in social science research since the beginning of the 21st century. In this context, we have been studying the usefulness of Picture Mining, one of the newer approaches to picture-based research. Picture Mining is an exploratory analysis method that extracts useful information from pictures, photographs, and static or moving images, and it is often compared with text mining. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, both technological and social (Edo et al. 2013). Low-priced digital cameras and smartphones, high transmission speeds, low costs for transferring information, and the high performance and resolution of mobile phone cameras have changed people's photographing behavior. Consequently, most people, even in developing countries, show little resistance to taking and processing photographs. In these studies, this method of collecting data from respondents is often called ‘participant-generated photography’ or ‘respondent-generated visual imagery’, with a focus on data collection and analysis (Pauwels 2011, Snyder 2012). However, there are few systematic and conceptual studies that support the significance of these methods. In recent years we have worked to conceptualize these picture-based research methods and to formalize theoretical findings (Edo et al. 2014). Inductively and through case studies, we have identified the fields in which Picture Mining is most useful: 1) research in consumer and customer lifestyles, 2) new product development, and 3) research in fashion and design. Although we have found the method promising in these areas, these assumptions must be verified. In this study we focus on the field of fashion and design to determine whether Picture Mining methods are really reliable in this area. To do so, we conducted empirical research on respondents' attitudes and behavior concerning pictures and photographs. We compared attitudes and behavior toward pictures of fashion with those toward pictures of meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Because of this difficulty, respondents take pictures of fashion and upload them online (e.g., to Facebook and Instagram) less often than pictures of meals and food. We conclude that pictures in the fashion area should be analyzed with particular care, since some bias may still exist even though the environment for taking pictures has changed drastically in recent years.
Keywords: empirical research, fashion and design, Picture Mining, qualitative research
Procedia PDF Downloads 363
1452 The Use of Technology in Theatrical Performances as a Tool of Audience’s Engagement
Authors: Chrysoula Bousiouta
Abstract:
Throughout the history of theatre, technology has played an important role both in influencing the relationship between performance and audience and offering different kinds of experiences. The use of technology dates back in ancient times, when the introduction of artifacts, such as “Deus ex machine” in ancient Greek theatre, started. Taking into account the key techniques and experiences used throughout history, this paper investigates how technology, through new media, influences contemporary theatre. In the context of this research, technology is defined as projections, audio environments, video-projections, sensors, tele-connections, all alongside with the performance, challenging audience’s participation. The theoretical framework of the research covers, except for the history of theatre, the theory of “experience economy” that took over the service and goods economy. The research is based on the qualitative and comparative analysis of two case studies, Contact Theatre in Manchester (United Kingdom) and Bios in Athens (Greece). The data selection includes desk research and is complemented with semi structured interviews. Building on the results of the research one could claim that the intended experience of modern/contemporary theatre is that of engagement. In this context, technology -as defined above- plays a leading role in creating it. This experience passes through and exists in the middle of the realms of entertainment, education, estheticism and escapism. Furthermore, it is observed that nowadays, theatre is not only about acting but also about performing; it is that one where the performances are unfinished without the participation of the audience. Both case studies try to achieve the experience of engagement through practices that promote the attraction of attention, the increase of imagination, the interaction, the intimacy and the true activity. These practices are achieved through the script, the scenery, the language and the environment of a performance. Contact and Bios consider technology as an intimate tool in order to accomplish the above, and they make an extended use of it. The research completes a notable record of technological techniques that modern theatres use. The use of technology, inside or outside the limits of film technique’s, helps to rivet the attention of the audience, to make performances enjoyable, to give the sense of the “unfinished” or to be used for things that take place around the spectators and force them to take action, being spect-actors. The advantage of technology is that it can be used as a hook for interaction in all stages of a performance. Further research on the field could involve exploring alternative ways of binding technology and theatre or analyzing how the performance is perceived through the use of technological artifacts.Keywords: experience of engagement, interactive theatre, modern theatre, performance, technology
Procedia PDF Downloads 251
1451 The Impact of Artificial Intelligence on Autism Attitudes
Authors: Sara Asham Mahrous Kamel
Abstract:
A descriptive statistical analysis of the data showed that the most important factor evoking negative attitudes among teachers is student behavior. have been presented as useful models for understanding the risk factors and protective factors associated with the emergence of autistic traits. Although these "syndrome" forms of autism reach clinical thresholds, they appear to be distinctly different from the idiopathic or "non-syndrome" autism phenotype. Most teachers reported that kindergartens did not prepare them for the educational needs of children with autism, particularly in relation to non-verbal skills. The study is important and points the way for improving teacher inclusion education in Thailand. Inclusive education for students with autism is still in its infancy in Thailand. Although the number of autistic children in schools has increased significantly since the Thai government introduced the Education Regulations for Persons with Disabilities Act in 2008, there is a general lack of services for autistic students and their families. This quantitative study used the Teaching Skills and Readiness Scale for Students with Autism (APTSAS) to test the attitudes and readiness of 110 elementary school teachers when teaching students with autism in general education classrooms. To uncover the true nature of these co morbidities, it is necessary to expand the definition of autism to include the cognitive features of the disorder, and then apply this expanded conceptualization to examine patterns of autistic syndromes. This study used various established eye-tracking paradigms to assess the visual and attention performance of children with DS and FXS who meet the autism thresholds defined in the Social Communication Questionnaire. To study whether the autistic profiles of these children are associated with visual orientation difficulties ("sticky attention"), decreased social attention, and increased visual search performance, all of which are hallmarks of the idiopathic autistic child phenotype. Data will be collected from children with DS and FXS, aged 6 to 10 years, and two control groups matched for age and intellectual ability (i.e., children with idiopathic autism).In order to enable a comparison of visual attention profiles, cross-sectional analyzes of developmental trajectories are carried out. Significant differences in the visual-attentive processes underlying the presentation of autism in children with FXS and DS have been suggested, supporting the concept of syndrome specificity. The study provides insights into the complex heterogeneity associated with autism syndrome symptoms and autism itself, with clinical implications for the utility of autism intervention programs in DS and FXS populations.Keywords: attitude, autism, teachers, sports activities, movement skills, motor skills
Procedia PDF Downloads 58
1450 The Effect of Acute Consumption of a Nutritional Supplement Derived from Vegetable Extracts Rich in Nitrate on Athletic Performance
Authors: Giannis Arnaoutis, Dimitra Efthymiopoulou, Maria-Foivi Nikolopoulou, Yannis Manios
Abstract:
AIM: Nitrate-containing supplements have been used extensively as ergogenic aids in many sports. However, extract fractions from plant-based nutritional sources high in nitrate and their effect on athletic performance have not been systematically investigated. The purpose of the present study was to examine the possible effect of acute consumption of a “smart mixture” of beetroot and rocket extracts on exercise capacity. MATERIAL & METHODS: 12 healthy, nonsmoking, recreationally active males (age: 25±4 years, % fat: 15.5±5.7, fat-free mass: 65.8±5.6 kg, VO2 max: 45.4±6.1 mL·kg⁻¹·min⁻¹) participated in a double-blind, placebo-controlled trial, in a randomized and counterbalanced order. Eligibility criteria for participation in this study included a normal physical examination and the absence of any metabolic, cardiovascular, or renal disease. All participants completed a time-to-exhaustion cycling test at 75% of their maximum power output, twice. The subjects consumed either capsules containing 360 mg of nitrate in total or placebo capsules, in the morning, in a fasted state. After 3 h of passive recovery, the performance test followed. Blood samples were collected upon arrival of the participants and 3 hours after consumption of the corresponding capsules. Time to exhaustion, pre- and post-test lactate concentrations, and the rate of perceived exertion at the same time points were assessed. RESULTS: Paired-sample t-test analysis found a significant difference in time to exhaustion between the nitrate trial and the placebo trial [16.1±3.0 vs 13.5±2.6 min, p=0.04]. No significant differences were observed in lactic acid concentrations or in Borg scale values between the two trials (p>0.05). CONCLUSIONS: Based on the results of the present study, it appears that a nutritional supplement derived from vegetable extracts rich in nitrate improves athletic performance in recreationally active young males. However, the precise mechanism is not clear, and future studies are needed. Acknowledgment: This research has been co‐financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: T2EDK-00843).
Keywords: sports performance, ergogenic supplements, nitrate, extract fractions
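A minimal Python sketch of the paired comparison reported above (time to exhaustion, nitrate vs placebo) using SciPy; the per-subject times are invented placeholders, not the study data.
```python
import numpy as np
from scipy import stats

# Hypothetical within-subject times to exhaustion (minutes), n = 12
nitrate = np.array([16.5, 14.2, 18.0, 15.1, 17.3, 19.0, 13.8, 16.9, 15.5, 17.8, 14.9, 16.0])
placebo = np.array([13.9, 12.5, 15.2, 13.0, 14.8, 16.1, 11.9, 14.3, 13.2, 15.0, 12.6, 13.5])

t, p = stats.ttest_rel(nitrate, placebo)   # paired-sample t-test
diff = nitrate - placebo
print(f"mean difference = {diff.mean():.1f} min, t = {t:.2f}, p = {p:.3f}")
```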
Procedia PDF Downloads 67
1449 Technology for Good: Deploying Artificial Intelligence to Analyze Participant Response to Anti-Trafficking Education
Authors: Ray Bryant
Abstract:
3Strands Global Foundation (3SGF), a non-profit with a mission to mobilize communities to combat human trafficking through prevention education and reintegration programs, launched a groundbreaking study that calls out the usage and benefits of artificial intelligence in the war against human trafficking. Having gathered more than 30,000 stories from counselors and school staff who have gone through its PROTECT Prevention Education program, 3SGF sought to develop a methodology to measure the effectiveness of the training, which helps educators and school staff identify physical signs and behaviors indicating a student is being victimized. The program further illustrates how to recognize and respond to trauma and teaches the steps to take to report human trafficking, as well as how to connect victims with the proper professionals. 3SGF partnered with Levity, a leader in no-code Artificial Intelligence (AI) automation, to create the research study utilizing natural language processing, a branch of artificial intelligence, to measure the effectiveness of their prevention education program. By applying the logic created for the study, the platform analyzed and categorized each story. If the story, directly from the educator, demonstrated one or more of the desired outcomes; Increased Awareness, Increased Knowledge, or Intended Behavior Change, a label was applied. The system then added a confidence level for each identified label. The study results were generated with a 99% confidence level. Preliminary results show that of the 30,000 stories gathered, it became overwhelmingly clear that a significant majority of the participants now have increased awareness of the issue, demonstrated better knowledge of how to help prevent the crime, and expressed an intention to change how they approach what they do daily. In addition, it was observed that approximately 30% of the stories involved comments by educators expressing they wish they’d had this knowledge sooner as they can think of many students they would have been able to help. Objectives Of Research: To solve the problem of needing to analyze and accurately categorize more than 30,000 data points of participant feedback in order to evaluate the success of a human trafficking prevention program by using AI and Natural Language Processing. Methodologies Used: In conjunction with our strategic partner, Levity, we have created our own NLP analysis engine specific to our problem. Contributions To Research: The intersection of AI and human rights and how to utilize technology to combat human trafficking.Keywords: AI, technology, human trafficking, prevention
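The study itself used a no-code NLP platform; the following Python sketch only illustrates the general label-plus-confidence idea with TF-IDF features and logistic regression, and the example stories, labels, and confidence handling are invented.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: educator feedback mapped to desired outcomes
stories = [
    "I am now much more aware of the warning signs of trafficking",
    "I learned the exact steps for reporting a suspected case",
    "I will change how I check in with students who seem withdrawn",
    "I wish I had known this years ago, I can think of students I could have helped",
]
labels = ["Increased Awareness", "Increased Knowledge",
          "Intended Behavior Change", "Increased Awareness"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(stories, labels)

new_story = ["After the training I know who to call and what to document"]
pred = clf.predict(new_story)[0]
conf = clf.predict_proba(new_story).max()   # confidence level for the assigned label
print(pred, round(conf, 2))
```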
Procedia PDF Downloads 60
1448 Demographic Shrinkage and Reshaping Regional Policy of Lithuania in Economic Geographic Context
Authors: Eduardas Spiriajevas
Abstract:
Since the end of the 20th century, when Lithuania regained its independence, a process of demographic shrinkage started. Recently, it affects the efficiency of implementation of actions related to regional development policy and geographic scopes of created value added in the regions. The demographic structures of human resources reflect onto the regions and their economic geographic environment. Due to reshaping economies and state reforms on restructuration of economic branches such as agriculture and industry, it affects the economic significance of services’ sector. These processes influence the competitiveness of labor market and its demographic characteristics. Such vivid consequences are appropriate for the structures of human migrations, which affected the processes of demographic ageing of human resources in the regions, especially in peripheral ones. These phenomena of modern times induce the demographic shrinkage of society and its economic geographic characteristics in the actions of regional development and in regional policy. The internal and external migrations of population captured numerous regional economic disparities, and influenced on territorial density and concentration of population of the country and created the economies of spatial unevenness in such small geographically compact country as Lithuania. The processes of territorial reshaping of distribution of population create new regions and their economic environment, which is not corresponding to the main principles of regional policy and its power to create the well-being and to promote the attractiveness for economic development. These are the new challenges of national regional policy and it should be researched in a systematic way of taking into consideration the analytical approaches of regional economy in the context of economic geographic research methods. A comparative territorial analysis according to administrative division of Lithuania in relation to retrospective approach and introduction of method of location quotients, both give the results of economic geographic character with cartographic representations using the tools of spatial analysis provided by technologies of Geographic Information Systems. A set of these research methods provide the new spatially evidenced based results, which must be taken into consideration in reshaping of national regional policy in economic geographic context. Due to demographic shrinkage and increasing differentiation of economic developments within the regions, an input of economic geographic dimension is inevitable. In order to sustain territorial balanced economic development, there is a need to strengthen the roles of regional centers (towns) and to empower them with new economic functionalities for revitalization of peripheral regions, and to increase their economic competitiveness and social capacities on national scale.Keywords: demographic shrinkage, economic geography, Lithuania, regions
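A minimal Python sketch of the location-quotient method mentioned above, computing LQ = (regional share of sector employment) / (national share) for each region-sector pair; the employment figures are illustrative, not Lithuanian data.
```python
import pandas as pd

# Illustrative employment counts by region and sector (not real data)
emp = pd.DataFrame(
    {"agriculture": [12000, 3000, 1500],
     "industry":    [8000, 25000, 6000],
     "services":    [20000, 60000, 12000]},
    index=["Region A", "Region B", "Region C"])

regional_share = emp.div(emp.sum(axis=1), axis=0)    # sector share within each region
national_share = emp.sum(axis=0) / emp.values.sum()  # sector share in the whole country

lq = regional_share / national_share                 # LQ > 1 indicates regional specialisation
print(lq.round(2))
```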
Procedia PDF Downloads 161
1447 Purchasing Decision-Making in Supply Chain Management: A Bibliometric Analysis
Authors: Ahlem Dhahri, Waleed Omri, Audrey Becuwe, Abdelwahed Omri
Abstract:
In industrial processes, decision-making ranges across different scales, from process control to supply chain management. The purchasing decision-making process in the supply chain is presently gaining more attention as a critical contributor to the company's strategic success. Given the scarcity of thorough summaries in the prior studies, this bibliometric analysis aims to adopt a meticulous approach to achieve quantitative knowledge on the constantly evolving subject of purchasing decision-making in supply chain management. Through bibliometric analysis, we examine a sample of 358 peer-reviewed articles from the Scopus database. VOSviewer and Gephi software were employed to analyze, combine, and visualize the data. Data analytic techniques, including citation network, page-rank analysis, co-citation, and publication trends, have been used to identify influential works and outline the discipline's intellectual structure. The outcomes of this descriptive analysis highlight the most prominent articles, authors, journals, and countries based on their citations and publications. The findings from the research illustrate an increase in the number of publications, exhibiting a slightly growing trend in this field. Co-citation analysis coupled with content analysis of the most cited articles identified five research themes mentioned as follows integrating sustainability into the supplier selection process, supplier selection under disruption risks assessment and mitigation strategies, Fuzzy MCDM approaches for supplier evaluation and selection, purchasing decision in vendor problems, decision-making techniques in supplier selection and order lot sizing problems. With the help of a graphic timeline, this exhaustive map of the field illustrates a visual representation of the evolution of publications that demonstrate a gradual shift from research interest in vendor selection problems to integrating sustainability in the supplier selection process. These clusters offer insights into a wide variety of purchasing methods and conceptual frameworks that have emerged; however, they have not been validated empirically. The findings suggest that future research would emerge with a greater depth of practical and empirical analysis to enrich the theories. These outcomes provide a powerful road map for further study in this area.Keywords: bibliometric analysis, citation analysis, co-citation, Gephi, network analysis, purchasing, SCM, VOSviewer
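A minimal Python/NetworkX sketch of the citation-network, PageRank, and co-citation steps described above; the toy edge list is invented, and the actual analysis was performed with VOSviewer and Gephi.
```python
import networkx as nx

# Toy directed citation graph: an edge (a, b) means article a cites article b
edges = [("P1", "P3"), ("P2", "P3"), ("P4", "P3"), ("P4", "P5"),
         ("P5", "P3"), ("P6", "P2"), ("P6", "P3")]
G = nx.DiGraph(edges)

pr = nx.pagerank(G, alpha=0.85)                      # influence score of each article
print(sorted(pr.items(), key=lambda kv: -kv[1])[:3])

# Co-citation counts: two references cited together by the same citing article
cocite = nx.Graph()
for citing in G.nodes:
    refs = list(G.successors(citing))
    for i in range(len(refs)):
        for j in range(i + 1, len(refs)):
            u, v = refs[i], refs[j]
            w = cocite.get_edge_data(u, v, {}).get("weight", 0)
            cocite.add_edge(u, v, weight=w + 1)
print(list(cocite.edges(data=True)))
```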
Procedia PDF Downloads 86
1446 Virtual Reality and Other Real-Time Visualization Technologies for Architecture Energy Certifications
Authors: Román Rodríguez Echegoyen, Fernando Carlos López Hernández, José Manuel López Ujaque
Abstract:
Interactive management of energy certification ratings has remained on the sidelines of the evolution of virtual reality (VR) despite related advances in architecture in other areas such as BIM and real-time working programs. This research studies to what extent VR software can help the stakeholders to better understand energy efficiency parameters in order to obtain reliable ratings assigned to the parts of the building. To evaluate this hypothesis, the methodology has included the construction of a software prototype. Current energy certification systems do not follow an intuitive data entry system; neither do they provide a simple or visual verification of the technical values included in the certification by manufacturers or other users. This software, by means of real-time visualization and a graphical user interface, proposes different improvements to the current energy certification systems that ease the understanding of how the certification parameters work in a building. Furthermore, the difficulty of using current interfaces, which are not friendly or intuitive for the user, means that untrained users usually get a poor idea of the grounds for certification and how the program works. In addition, the proposed software allows users to add further information, such as financial and CO₂ savings, energy efficiency, and an explanatory analysis of results for the least efficient areas of the building through a new visual mode. The software also helps the user to evaluate whether or not an investment to improve the materials of an installation is worth the cost of the different energy certification parameters. The evaluated prototype (named VEE-IS) shows promising results when it comes to representing in a more intuitive and simple manner the energy rating of the different elements of the building. Users can also personalize all the inputs necessary to create a correct certification, such as floor materials, walls, installations, or other important parameters. Working in real-time through VR allows for efficiently comparing, analyzing, and improving the rated elements, as well as the parameters that we must enter to calculate the final certification. The prototype also allows for visualizing the building in efficiency mode, which lets us move over the building to analyze thermal bridges or other energy efficiency data. This research also finds that the visual representation of energy efficiency certifications makes it easy for the stakeholders to examine improvements progressively, which adds value to the different phases of design and sale.Keywords: energetic certification, virtual reality, augmented reality, sustainability
Procedia PDF Downloads 189
1445 Modification of Aliphatic-Aromatic Copolyesters with Polyether Block for Segmented Copolymers with Elastothemoplastic Properties
Authors: I. Irska, S. Paszkiewicz, D. Pawlikowska, E. Piesowicz, A. Linares, T. A. Ezquerra
Abstract:
Due to the number of advantages such as high tensile strength, sensitivity to hydrolytic degradation, and biocompatibility poly(lactic acid) (PLA) is one of the most common polyesters for biomedical and pharmaceutical applications. However, PLA is a rigid, brittle polymer with low heat distortion temperature and slow crystallization rate. In order to broaden the range of PLA applications, it is necessary to improve these properties. In recent years a number of new strategies have been evolved to obtain PLA-based materials with improved characteristics, including manipulation of crystallinity, plasticization, blending, and incorporation into block copolymers. Among the other methods, synthesis of aliphatic-aromatic copolyesters has been attracting considerable attention as they may combine the mechanical performance of aromatic polyesters with biodegradability known from aliphatic ones. Given the need for highly flexible biodegradable polymers, in this contribution, a series of aromatic-aliphatic based on poly(butylene terephthalate) and poly(lactic acid) (PBT-b-PLA) copolyesters exhibiting superior mechanical properties were copolymerized with an additional poly(tetramethylene oxide) (PTMO) soft block. The structure and properties of both series were characterized by means of attenuated total reflectance – Fourier transform infrared spectroscopy (ATR-FTIR), nuclear magnetic resonance spectroscopy (¹H NMR), differential scanning calorimetry (DSC), wide-angle X-ray scattering (WAXS) and dynamic mechanical, thermal analysis (DMTA). Moreover, the related changes in tensile properties have been evaluated and discussed. Lastly, the viscoelastic properties of synthesized poly(ester-ether) copolymers were investigated in detail by step cycle tensile tests. The block lengths decreased with the advance of treatment, and the block-random diblock terpolymers of (PBT-ran-PLA)-b-PTMO were obtained. DSC and DMTA analysis confirmed unambiguously that synthesized poly(ester-ether) copolymers are microphase-separated systems. The introduction of polyether co-units resulted in a decrease in crystallinity degree and melting temperature. X-ray diffraction patterns revealed that only PBT blocks are able to crystallize. The mechanical properties of (PBT-ran-PLA)-b-PTMO copolymers are a result of a unique arrangement of immiscible hard and soft blocks, providing both strength and elasticity.Keywords: aliphatic-aromatic copolymers, multiblock copolymers, phase behavior, thermoplastic elastomers
Procedia PDF Downloads 140
1444 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and low sidelobe level to form the point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on proposed power exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shape beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated by the finite-time snapshot data. When the number of snapshots is limited, the algorithm has an underestimation problem, which leads to the estimation error of the covariance matrix to cause beam distortion, so that the output pattern cannot form a dot-shape beam. And it also has main lobe deviation and high sidelobe level problems in the case of limited snapshot. Aiming at these problems, an adaptive beamforming technique based on exponential correction for multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under linear constrained minimum variance (LCMV) criteria. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace and the corresponding eigenvalues. Finally, the correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, improve the divergence of small eigenvalues in the noise subspace, and improve the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shape beam at limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and have better performance.Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust
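A simplified NumPy sketch of the correction step described above: estimate the sample covariance from a limited number of snapshots, eigendecompose it, apply a power-type correction to the small (noise-subspace) eigenvalues, and form MVDR/LCMV-type weights. The array geometry, signal scenario, threshold, and correction index are illustrative assumptions, not the authors' exact algorithm.
```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 10, 30                        # sensors, limited number of snapshots
d = 0.5                              # element spacing in wavelengths

def steer(theta_deg):
    """Narrowband steering vector of a uniform linear array."""
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta_deg)))

a0 = steer(0.0)                                       # desired look direction
x = (np.outer(steer(40.0), rng.standard_normal(K))    # strong interferer at 40 deg
     + 0.1 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))))

R = x @ x.conj().T / K                                # snapshot-limited sample covariance

# Power-type correction of the small (noise-subspace) eigenvalues -- illustrative rule
evals, V = np.linalg.eigh(R)
p = 0.5                                               # assumed correction index
thr = 0.01 * evals.max()
corrected = np.where(evals < thr, evals.max() * (evals / evals.max()) ** p, evals)
R_corr = (V * corrected) @ V.conj().T

w = np.linalg.solve(R_corr, a0)                       # MVDR/LCMV-type weights
w = w / (a0.conj() @ w)                               # unit response toward the look direction
print(abs(w.conj() @ a0), abs(w.conj() @ steer(40.0)))  # ~1 at 0 deg, small toward 40 deg
```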
Procedia PDF Downloads 131
1443 The Connection between De Minimis Rule and the Effect on Trade
Authors: Pedro Mario Gonzalez Jimenez
Abstract:
The novelties introduced by the last Notice on agreements of minor importance tighten the application of the ‘De minimis’ safe harbour in the European Union. However, the undetermined legal concept of effect on trade between the Member States becomes importance at the same time. Therefore, the current analysis that the jurist should carry out in the European Union to determine if an agreement appreciably restrict competition under Article 101 of the Treaty on the Functioning of the European Union is double. Hence, it is necessary to know how to balance the significance in competition and the significance in effect on trade between the Member States. It is a crucial issue due to the negative delimitation of restriction of competition affects the positive one. The methodology of this research is rather simple. Beginning with a historical approach to the ‘De Minimis Rule’, their main problems and uncertainties will be found. So, after the analysis of normative documents and the jurisprudence of the Court of Justice of the European Union some proposals of ‘Lege ferenda’ will be offered. These proposals try to overcome the contradictions and questions that currently exist in the European Union as a consequence of the current legal regime of agreements of minor importance. The main findings of this research are the followings: Firstly, the effect on trade is another way to analyze the importance of an agreement different from the ‘De minimis rule’. In point of fact, this concept is singularly adapted to go through agreements that have as object the prevention, restriction or distortion of competition, as it is observed in the most famous European Union case-law. Thanks to the effect on trade, as long as the proper requirements are met there is no a restriction of competition under article 101 of the Treaty on the Functioning of the European Union, even if the agreement had an anti-competitive object. These requirements are an aggregate market share lower than 5% on any of the relevant markets affected by the agreement and turnover lower than 40 million of Euros. Secondly, as the Notice itself says ‘it is also intended to give guidance to the courts and competition authorities of the Member States in their application of Article 101 of the Treaty, but it has no binding force for them’. This reality makes possible the existence of different statements among the different Member States and a confusing perception of what a restriction of competition is. Ultimately, damage on trade between the Member States could be observed for this reason. The main conclusion is that the significant effect on trade between Member States is irrelevant in agreements that restrict competition because of their effects but crucial in agreements that restrict competition because of their object. Thus, the Member States should propose the incorporation of a similar concept in their legal orders in order to apply the content of the Notice. Otherwise, the significance of the restrictive agreement on competition would not be properly assessed.Keywords: De minimis rule, effect on trade, minor importance agreements, safe harbour
Procedia PDF Downloads 183
1442 The Role of Risk Attitudes and Networks on the Migration Decision: Empirical Evidence from the United States
Authors: Tamanna Rimi
Abstract:
A large body of literature has discussed the determinants of migration decision. However, the potential role of individual risk attitudes on migration decision has so far been overlooked. The research on migration literature has studied how the expected income differential influences migration flows for a risk neutral individual. However, migration takes place when there is no expected income differential or even the variability of income appears as lower than in the current location. This migration puzzle motivates a recent trend in the literature that analyzes how attitudes towards risk influence the decision to migrate. However, the significance of risk attitudes on migration decision has been addressed mostly in a theoretical perspective in the mainstream migration literature. The efficient outcome of labor market and overall economy are largely influenced by migration in many countries. Therefore, attitudes towards risk as a determinant of migration should get more attention in empirical studies. To author’s best knowledge, this is the first study that has examined the relationship between relative risk aversion and migration decision in US market. This paper considers movement across United States as a means of migration. In addition, this paper also explores the network effect due to the increasing size of one’s own ethnic group to a source location on the migration decision and how attitudes towards risk vary with network effect. Two ethnic groups (i.e. Asian and Hispanic) have been considered in this regard. For the empirical estimation, this paper uses two sources of data: 1) U.S. census data for social, economic, and health research, 2010 (IPUMPS) and 2) University of Michigan Health and Retirement Study, 2010 (HRS). In order to measure relative risk aversion, this study uses the ‘Two Sample Two-Stage Instrumental Variable (TS2SIV)’ technique. This is a similar method of Angrist (1990) and Angrist and Kruegers’ (1992) ‘Two Sample Instrumental Variable (TSIV)’ technique. Using a probit model, the empirical investigation yields the following results: (i) risk attitude has a significantly large impact on migration decision where more risk averse people are less likely to migrate; (ii) the impact of risk attitude on migration varies by other demographic characteristics such as age and sex; (iii) people with higher concentration of same ethnic households living in a particular place are expected to migrate less from their current place; (iv) the risk attitudes on migration vary with network effect. The overall findings of this paper relating risk attitude, migration decision and network effect can be a significant contribution addressing the gap between migration theory and empirical study in migration literature.Keywords: migration, network effect, risk attitude, U.S. market
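A minimal statsmodels sketch of a probit specification relating the migration decision to risk aversion and controls; the variable names and simulated data are placeholders, and the TS2SIV step used to impute relative risk aversion is not reproduced here.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Simulated placeholder data (not the IPUMS/HRS samples)
df = pd.DataFrame({
    "risk_aversion": rng.normal(3.0, 1.0, n),        # imputed relative risk aversion
    "age": rng.integers(25, 65, n),
    "female": rng.integers(0, 2, n),
    "own_ethnic_share": rng.uniform(0, 0.5, n),       # network-effect proxy
})
latent = (0.5 - 0.4 * df["risk_aversion"] - 1.2 * df["own_ethnic_share"]
          - 0.01 * df["age"] + rng.normal(0, 1, n))
df["migrated"] = (latent > 0).astype(int)

X = sm.add_constant(df[["risk_aversion", "age", "female", "own_ethnic_share"]])
probit = sm.Probit(df["migrated"], X).fit(disp=0)
print(probit.params.round(3))
print(probit.get_margeff().summary())                 # average marginal effects
```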
Procedia PDF Downloads 164
1441 Historical Development of Negative Emotive Intensifiers in Hungarian
Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges
Abstract:
In this study, an exhaustive analysis was carried out of the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements which modify or reinforce a variable character in the lexical unit they apply to. Intensifiers therefore appear with other lexical items, such as adverbs, adjectives, and verbs, and infrequently with nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), many lexical items can operate as intensifiers. Intensifiers are admittedly one of the most rapidly changing groups of elements in the language. From a linguistic point of view, a special group of intensifiers is particularly interesting: the so-called negative emotive intensifiers, which, on their own and without context, have semantic content that can be associated with negative emotion, but which in particular cases may function as intensifiers (e.g., borzasztóan jó ’awfully good’, which means ’excellent’). Despite their special semantic features, negative emotive intensifiers have scarcely been examined in the literature using large historical corpora and NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles, produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame. Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the ‘magyarlanc’ NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research revealed in detail how these words have proceeded through grammaticalization over time, i.e., they change from lexical elements to grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). It was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion to a certain degree.
Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time
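A minimal Python sketch of the frequency-by-period step: counting occurrences of each negative emotive intensifier in time slices of a corpus. The tiny corpus and lexicon are placeholders; the real pipeline used the magyarlanc toolkit for lemmatization and POS-tagging.
```python
from collections import Counter, defaultdict

# Placeholder lexicon of negative emotive intensifiers (Hungarian examples)
intensifiers = {"borzasztóan", "rettenetesen", "szörnyen", "iszonyúan"}

# Placeholder corpus: (year, tokenised sentence) pairs
corpus = [
    (1850, ["a", "vacsora", "borzasztóan", "rossz", "volt"]),
    (1901, ["ez", "szörnyen", "unalmas"]),
    (1955, ["borzasztóan", "jó", "film"]),
    (2003, ["rettenetesen", "örülök", "neki"]),
]

per_period = defaultdict(Counter)
for year, tokens in corpus:
    period = (year // 50) * 50                # 50-year slices
    for tok in tokens:
        if tok.lower() in intensifiers:
            per_period[period][tok.lower()] += 1

for period in sorted(per_period):
    print(period, dict(per_period[period]))
```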
Procedia PDF Downloads 233
1440 Method for Controlling the Groundwater Polluted by the Surface Waters through Injection Wells
Authors: Victorita Radulescu
Abstract:
Introduction: The optimum exploitation of agricultural land in the presence of an aquifer polluted by surface sources requires close monitoring of the groundwater level both in periods of intense irrigation and in the absence of irrigation, in times of drought. Currently in Romania, in the southern part of the country (the Baragan area), many agricultural lands face the risk of groundwater pollution in the absence of systematic irrigation, correlated with climate change. Basic Methods: The non-steady flow of groundwater in an aquifer can be described by Boussinesq's partial differential equation. The finite element method was used, applied to the porous medium, for the water mass balance equation. With a proper structure of the initial and boundary conditions, the flow in drainage or injection systems of wells can be modeled, according to the period of irrigation or prolonged drought. The boundary conditions consist of the groundwater levels required at the margins of the analyzed area, in conformity with the actual pollutant emissaries, following the double-step method. Major Findings/Results: The drainage condition is equivalent to operating regimes on two or three rows of wells with negative (extraction) flows, so as to control the pollutant transport, modeled with variable flow in groups of two adjacent nodes. In order to obtain the water table level in accordance with the real constraints, it is necessary, for example, to restrict its top level below an imposed value required at each node. The objective function consists of a sum of the absolute values of the differences of the infiltration flow rates, increased by a large penalty factor when positive pollutant values occur. Under these conditions, a balanced structure of the pollutant concentration is maintained in the groundwater. The parameters modified during the optimization process are the spatial coordinates and the drainage flows through the wells. Conclusions: The presented calculation scheme was applied to an area with a cross-section of 50 km between two emissaries with different altitudes and different pollution values. The input data were correlated with measurements made in situ, such as the level of the bedrock, the grain size of the soil, the slope, etc. This calculation method can also be extended to determine the variation of groundwater in the aquifer following flood wave propagation in the emissaries.
Keywords: environmental protection, infiltrations, numerical modeling, pollutant transport through soils
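The study solves the Boussinesq equation with finite elements; as a rough illustration of the governing equation S ∂h/∂t = ∂/∂x(K h ∂h/∂x), here is a simplified one-dimensional explicit finite-difference sketch in Python with fixed heads at the two emissaries. All parameter values are illustrative.
```python
import numpy as np

K, S = 5.0, 0.15                 # hydraulic conductivity (m/day), storativity (illustrative)
L, nx = 1000.0, 101              # domain length (m), number of nodes
dx = L / (nx - 1)
dt = 0.01                        # time step (days), small enough for explicit stability

h = np.full(nx, 10.0)            # initial water-table elevation (m)
h_left, h_right = 12.0, 9.0      # fixed heads at the two emissaries (boundary condition)

for _ in range(5000):
    h[0], h[-1] = h_left, h_right
    # flux between nodes: q = -K * h_mid * dh/dx
    h_mid = 0.5 * (h[1:] + h[:-1])
    q = -K * h_mid * (h[1:] - h[:-1]) / dx
    # continuity: S * dh/dt = -dq/dx, updated only at interior nodes
    h[1:-1] -= dt / S * (q[1:] - q[:-1]) / dx

print(h[::20].round(2))          # water-table profile at a few points
```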
Procedia PDF Downloads 156
1439 Bed Evolution under One-Episode Flushing in a Truck Sewer in Paris, France
Authors: Gashin Shahsavari, Gilles Arnaud-Fassetta, Alberto Campisano, Roberto Bertilotti, Fabien Riou
Abstract:
Sewer deposits have been identified as a major cause of dysfunction in combined sewer systems, inducing negative consequences such as poor hydraulic conveyance, environmental damage, and risks to workers' health. To overcome the problem of sedimentation, flushing is considered the most practical and cost-effective way to minimize sediment impacts and prevent such problems. Flushing, by generating turbulent wave effects, can modify the bed form depending on the hydraulic properties and geometrical characteristics of the conduit. So far, the dynamics of the bed-load during high-flow events in combined sewer systems, a complex environment, are not well understood, mostly due to the lack of measuring devices capable of working correctly in the “hostile” combined sewer environment. In this regard, a one-episode flush released from an opening gate valve with a weir function was carried out in a trunk sewer in Paris to understand its cleansing efficiency on the sediments (thickness: 0-30 cm). During more than 1 h of flushing, a maximum flow rate of 4.1 m3/s and a maximum water level of 2.1 m were recorded 5 m downstream of the gate. This paper aims to evaluate the efficiency of this type of gate over about 1.1 km (from -50 m to +1050 m downstream of the gate) by (i) determining the bed grain-size distribution and the evolution of the sediments along the sewer channel, as well as their organic matter content, and (ii) identifying sections that exhibit the greatest changes in texture after the flush. For the first objective, two series of samples were taken along the sewer and analyzed in the laboratory, one before flushing and one after, at the same points along the sewer channel. A non-intrusive sampling instrument was used to extract the sediments smaller than fine gravel. The comparison between the sediment texture after the flush operation and the initial state revealed the zones most modified by the flush, with respect to the sewer invert slope and the hydraulic parameters, in the zone up to 400 m from the gate. At this distance, despite the widening of the sediment grain-size ranges, D50 (median grain size) varies between 0.6 mm and 1.1 mm before flushing compared with 0.8 mm and 10 mm after flushing. Overall, taking the sewer channel invert slope into account, the results indicate that grains smaller than sand (< 2 mm) are transported further downstream along about 400 m from the gate: on average 69% before against 38% after the flush, with greater dispersion of the grain-size distributions. Furthermore, a strong effect of channel bed irregularities on the evolution of the bed material was observed after the flush.
Keywords: bed-load evolution, combined sewer systems, flushing efficiency, sediments transport
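A minimal Python sketch of how D50 (the median grain size) and other percentiles are read off a cumulative grain-size curve by log-linear interpolation; the sieve data are invented, not the Paris trunk-sewer measurements.
```python
import numpy as np

# Invented sieve analysis: sieve opening (mm) and cumulative % passing
sizes = np.array([0.063, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
passing = np.array([4.0, 12.0, 30.0, 48.0, 63.0, 78.0, 90.0, 100.0])

def d_percentile(p):
    # interpolate on log(size) against cumulative % passing
    return 10 ** np.interp(p, passing, np.log10(sizes))

d50 = d_percentile(50.0)
d10, d90 = d_percentile(10.0), d_percentile(90.0)
print(f"D50 = {d50:.2f} mm, D10 = {d10:.3f} mm, D90 = {d90:.2f} mm")
```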
Procedia PDF Downloads 404
1438 The Impact of Technology and Artificial Intelligence on Children in Autism
Authors: Dina Moheb Rashid Michael
Abstract:
A descriptive statistical analysis of the data showed that the most important factor evoking negative attitudes among teachers is student behavior. have been presented as useful models for understanding the risk factors and protective factors associated with the emergence of autistic traits. Although these "syndrome" forms of autism reach clinical thresholds, they appear to be distinctly different from the idiopathic or "non-syndrome" autism phenotype. Most teachers reported that kindergartens did not prepare them for the educational needs of children with autism, particularly in relation to non-verbal skills. The study is important and points the way for improving teacher inclusion education in Thailand. Inclusive education for students with autism is still in its infancy in Thailand. Although the number of autistic children in schools has increased significantly since the Thai government introduced the Education Regulations for Persons with Disabilities Act in 2008, there is a general lack of services for autistic students and their families. This quantitative study used the Teaching Skills and Readiness Scale for Students with Autism (APTSAS) to test the attitudes and readiness of 110 elementary school teachers when teaching students with autism in general education classrooms. To uncover the true nature of these co morbidities, it is necessary to expand the definition of autism to include the cognitive features of the disorder, and then apply this expanded conceptualization to examine patterns of autistic syndromes. This study used various established eye-tracking paradigms to assess the visual and attention performance of children with DS and FXS who meet the autism thresholds defined in the Social Communication Questionnaire. To study whether the autistic profiles of these children are associated with visual orientation difficulties ("sticky attention"), decreased social attention, and increased visual search performance, all of which are hallmarks of the idiopathic autistic child phenotype. Data will be collected from children with DS and FXS, aged 6 to 10 years, and two control groups matched for age and intellectual ability (i.e., children with idiopathic autism).In order to enable a comparison of visual attention profiles, cross-sectional analyzes of developmental trajectories are carried out. Significant differences in the visual-attentive processes underlying the presentation of autism in children with FXS and DS have been suggested, supporting the concept of syndrome specificity. The study provides insights into the complex heterogeneity associated with autism syndrome symptoms and autism itself, with clinical implications for the utility of autism intervention programs in DS and FXS populations.Keywords: attitude, autism, teachers, sports activities, movement skills, motor skills
Procedia PDF Downloads 58
1437 Population Centralization in Urban Area and Metropolitans in Developing Countries: A Case Study of Urban Centralization in Iran
Authors: Safar Ghaedrahmati, Leila Soltani
Abstract:
Population centralization in urban areas and metropolitans, especially in developing countries such as Iran, increases metropolitan problems. For several decades, the population of cities in developing countries, including Iran, has grown faster than the countries' total population. While in developed countries the development of big cities began decades ago and generally allowed for controlled and planned urban expansion, the opposite is the case in developing countries, where a rapid urbanization process is characterized by unplanned urban expansion. Developing metropolitan cities have enormous difficulties in coping with both natural population growth and urban physical expansion. Iranian cities have been at the heart of the economic and cultural changes that occurred after the Islamic revolution in 1979, and they increasingly exert influence through political–economic arrangements and, chiefly, through urban management structures. These structural features have led to the population growth of cities and to urbanization (in number, population, and physical frame), along with its main problems. On the other hand, due to the lack of birth control policies and the deceptive attractions of cities, particularly big cities, the birth rate has shot up, mainly in rural regions and small cities. The population of Iran has increased rapidly since 1956. The 1956 and 1966 decennial censuses counted the population of Iran at 18.9 million and 25.7 million, respectively, a 3.1% annual growth rate during the 1956–1966 period. The 1976 and 1986 decennial censuses counted Iran's population at 33.7 and 49.4 million, respectively, corresponding to annual growth rates of 2.7% and 3.9% during the 1966–1976 and 1976–1986 periods. The 1996 count put Iran's population at 60 million, a 1.96% annual growth rate from 1986 to 1996, and the 2006 count put Iran's population at 72 million. A major recent policy of urban economic and industrial decentralization is a persistent program of the government. The policy is a response to the massive growth of Tehran in recent years, up to 9 million by 2010. Part of the growth of the capital resulted from the lack of economic opportunities elsewhere, and in order to redress the developing primacy of Tehran and the domestic pressures it is undergoing, the policy of decentralization is to be implemented as quickly as possible. The research is applied in type, the data collection method is documentary, and the methods of analysis combine population analysis with urban system and urban distribution analysis.
Keywords: population centralization, cities of Iran, urban centralization, urban system
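The growth rates quoted above follow from the census counts via the compound annual growth rate formula r = (P2/P1)^(1/t) - 1; the short Python sketch below reproduces them.
```python
census = {1956: 18.9, 1966: 25.7, 1976: 33.7, 1986: 49.4, 1996: 60.0, 2006: 72.0}  # millions

years = sorted(census)
for y0, y1 in zip(years, years[1:]):
    t = y1 - y0
    r = (census[y1] / census[y0]) ** (1 / t) - 1
    print(f"{y0}-{y1}: {100 * r:.1f}% per year")
# 1956-1966: 3.1%, 1966-1976: 2.7%, 1976-1986: 3.9%, 1986-1996: 2.0%, 1996-2006: 1.8%
```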
Procedia PDF Downloads 300
1436 Geochemical Study of Natural Bitumen, Condensate and Gas Seeps from Sousse Area, Central Tunisia
Authors: Belhaj Mohamed, M. Saidi, N. Boucherab, N. Ouertani, I. Bouazizi, M. Ben Jrad
Abstract:
Natural hydrocarbon seepage has helped petroleum exploration as a direct indicator of gas and/or oil subsurface accumulations. Surface macro-seeps are generally an indication of a fault in an active Petroleum Seepage System belonging to a Total Petroleum System. This paper describes a case study in which multiple analytical techniques were used to identify and characterize trace petroleum-related hydrocarbons and other volatile organic compounds in groundwater samples collected from the Sousse aquifer (Central Tunisia). The analytical techniques used for the water samples included gas chromatography-mass spectrometry (GC-MS), capillary GC with flame-ionization detection, Compound Specific Isotope Analysis, and Rock Eval Pyrolysis. The objective of the study was to confirm the presence of gasoline and other petroleum products or other volatile organic pollutants in those samples in order to assess the respective implication of each of the potentially responsible parties in the contamination of the aquifer. In addition, the degree of contamination at different depths in the aquifer was also of interest. The oil and gas seeps have been investigated using biomarker and stable carbon isotope analyses to perform oil-oil and oil-source rock correlations. The seepage gases are characterized by high CH4 content, very low δ13C-CH4 values (-71.9‰), high C1/C1–5 ratios (0.95–1.0), light deuterium-hydrogen isotope ratios (-198‰) and light δ13C-C2 and δ13C-CO2 values (-23.8‰ and -23.8‰, respectively), indicating a thermogenic origin with a contribution of biogenic gas. An organic geochemistry study was carried out on more than ten oil seep samples. This study includes light hydrocarbon and biomarker analyses (hopanes, steranes, n-alkanes, acyclic isoprenoids, and aromatic steroids) using GC and GC-MS. The studied samples show at least two distinct families, suggesting two different crude oil origins: the first family appears to be highly mature, shows evidence of chemical and/or biological degradation, and was derived from a clay-rich source rock deposited in suboxic conditions; it has been sourced mainly by the lower Fahdene (Albian) source rocks. The second family was derived from a carbonate-rich source rock deposited in anoxic conditions, well correlated with the Bahloul (Cenomanian-Turonian) source rock. Keywords: biomarkers, oil and gas seeps, organic geochemistry, source rock
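The genetic interpretation of the seep gases rests on combining gas dryness with the methane carbon-isotope signature. The sketch below is only a minimal illustration of that screening logic, not the authors' workflow; the cut-off values (roughly -60‰ separating microbial from thermogenic methane, and dryness above about 0.95 indicating dry gas) are commonly cited approximations and should be treated as assumptions.

```python
# Minimal sketch of a gas-origin screen from methane carbon isotopes and gas
# dryness. Threshold values are approximate, commonly cited cut-offs assumed
# for illustration, not values derived in this study.

def classify_seep_gas(d13c_ch4: float, dryness: float) -> str:
    """d13c_ch4 in permil vs. VPDB; dryness = C1 / (C1 + C2 + ... + C5)."""
    if d13c_ch4 < -60 and dryness > 0.95:
        return "dry gas with a clear biogenic methane contribution (possibly mixed)"
    if d13c_ch4 > -50:
        return "thermogenic"
    return "mixed thermogenic-biogenic"

# Values reported for the Sousse seepage gases in the abstract:
print(classify_seep_gas(d13c_ch4=-71.9, dryness=0.98))
```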
Procedia PDF Downloads 446
1435 An Examination of Earnings Management by Publicly Listed Targets Ahead of Mergers and Acquisitions
Authors: T. Elrazaz
Abstract:
This paper examines accrual and real earnings management by publicly listed targets around mergers and acquisitions. Prior literature shows that earnings management around mergers and acquisitions can have a significant economic impact because of the associated wealth transfers among stakeholders. More importantly, acting on behalf of their shareholders or pursuing their self-interests, managers of both targets and acquirers may be equally motivated to manipulate earnings prior to an acquisition to generate higher gains for their shareholders or themselves. Building on the grounds of information asymmetry, agency conflicts, stewardship theory, and the revelation principle, this study addresses the question of whether takeover targets employ accrual and real earnings management in the periods prior to the announcement of Mergers and Acquisitions (M&A). Additionally, this study examines whether acquirers are able to detect targets’ earnings management and, in response, adjust the acquisition premium paid in order not to face the risk of overpayment. This study uses an aggregate accruals approach in estimating accrual earnings management, proxied by estimated abnormal accruals; real earnings management is proxied for by employing models widely used in the accounting and finance literature. The results of this study indicate that takeover targets manipulate their earnings using accruals in the second year with an earnings release prior to the announcement of the M&A. Moreover, in partitioning the sample of targets according to the method of payment used in the deal, the results are restricted only to targets of stock-financed deals. These results are consistent with the argument that targets of cash-only or mixed-payment deals do not have the same strong motivations to manage their earnings as their stock-financed counterparts do, additionally supporting the findings of prior studies that the method of payment in takeovers is value-relevant. The findings of this study also indicate that takeover targets manipulate earnings upwards by cutting discretionary expenses in the year prior to the acquisition, while they do not do so by manipulating sales or production costs. Moreover, in partitioning the sample of targets according to the method of payment used in the deal, the results are again restricted only to targets of stock-financed deals, providing further robustness to the results derived under the accrual-based models. Finally, this study finds evidence suggesting that acquirers are fully aware of the accrual-based techniques employed by takeover targets and can unveil such manipulation practices. These results are robust to alternative accrual and real earnings management proxies, as well as to controlling for the method of payment in the deal. Keywords: accrual earnings management, acquisition premium, real earnings management, takeover targets
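The aggregate-accruals approach referred to above typically estimates abnormal (discretionary) accruals as the residual of a cross-sectional regression. The sketch below illustrates one widely used specification in that family, the modified Jones model, on synthetic inputs; the variable names, the synthetic data, and the choice of the modified Jones form are assumptions for illustration, since the abstract does not name the exact model used.

```python
import numpy as np

# Hedged sketch of abnormal-accrual estimation in the spirit of the modified
# Jones model: total accruals scaled by lagged assets are regressed on
# 1/lagged assets, the change in revenues net of the change in receivables,
# and gross PPE; the residual proxies for the abnormal (discretionary) accrual.

rng = np.random.default_rng(0)
n = 200                                 # firms in one industry-year (synthetic)
lag_assets = rng.uniform(50, 500, n)    # lagged total assets
d_rev = rng.normal(10, 5, n)            # change in revenues
d_rec = rng.normal(2, 1, n)             # change in receivables
ppe = rng.uniform(20, 300, n)           # gross property, plant and equipment
total_accruals = rng.normal(0, 5, n)    # total accruals (illustrative)

y = total_accruals / lag_assets
X = np.column_stack([1 / lag_assets,
                     (d_rev - d_rec) / lag_assets,
                     ppe / lag_assets])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
abnormal_accruals = y - X @ coef        # residuals = earnings-management proxy
print("mean |abnormal accrual|:", np.abs(abnormal_accruals).mean())
```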
Procedia PDF Downloads 118
1434 Modeling and Implementation of a Hierarchical Safety Controller for Human Machine Collaboration
Authors: Damtew Samson Zerihun
Abstract:
This paper primarily describes the concept of a hierarchical safety control (HSC) in discrete manufacturing to uphold productivity in the presence of human intervention and machine failures using a systematic approach, by increasing system availability and using additional knowledge of the machines so as to improve human machine collaboration (HMC). It also highlights the implemented PLC safety algorithm, applying this generic concept to a concrete production line using a lab demonstrator called FATIE (Factory Automation Test and Integration Environment). Furthermore, the paper describes a model and provides a systematic representation of human-machine collaboration in discrete manufacturing; to this end, the Hierarchical Safety Control concept is proposed. This offers a generic description of human-machine collaboration based on Finite State Machines (FSM) that can be applied to various discrete manufacturing lines instead of using ad-hoc solutions for each line. With its reusability, flexibility, and extendibility, the Hierarchical Safety Control scheme allows upholding productivity while maintaining safety with reduced engineering effort compared to existing solutions. The approach to the solution begins with a partitioning of different zones around the Integrated Manufacturing System (IMS), which are defined by operator tasks and the risk assessment and used to describe the location of the human operator, and thus to identify the related potential hazards and trigger the corresponding safety functions to mitigate them. This includes selective reduced-speed zones and stop zones; in addition, the hierarchical safety control scheme and advanced safety functions such as safe standstill and safe reduced speed are used to achieve the main goals of improving safe Human Machine Collaboration and increasing productivity. In a sample scenario, it is shown that a productivity increase in the order of 2.5% is already possible with a hierarchical safety control; consequently, under the given assumptions, a total of 213 € could be saved for each intervention compared to a protective stop reaction. Thereby the loss is reduced by 22.8% if the occasional hazard can be handled in a hierarchical way. Furthermore, production downtime due to temporary unavailability of safety devices can be avoided with a safety failover that can save millions per year. Moreover, the paper highlights the development, implementation and application of the concept on the lab demonstrator (FATIE), where it is realized on the new safety PLCs, drive units, HMI as well as safety devices in addition to the main components of the IMS. Keywords: discrete automation, hierarchical safety controller, human machine collaboration, programmable logical controller
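The zone-based behaviour described above lends itself to a finite-state-machine formulation: the operator's location selects a zone, and each zone maps to the least restrictive safety reaction that still mitigates the hazard. The Python sketch below is only an illustration of that hierarchy under assumed zone names and reactions; it is not the FATIE PLC implementation. Preferring a safe reduced speed over a protective stop where the hazard allows it is, under the paper's assumptions, what yields the quoted saving per intervention.

```python
from enum import Enum

# Hedged FSM sketch of a hierarchical safety reaction. Zone names, reactions
# and the failover rule are illustrative assumptions, not the FATIE design.

class Zone(Enum):
    OUTSIDE = 0          # no operator near the machine
    REDUCED_SPEED = 1    # operator close: safely limited speed is sufficient
    STOP = 2             # operator inside the hazardous area

class Reaction(Enum):
    FULL_PRODUCTION = 0
    SAFE_REDUCED_SPEED = 1
    SAFE_STANDSTILL = 2

REACTION_FOR_ZONE = {
    Zone.OUTSIDE: Reaction.FULL_PRODUCTION,
    Zone.REDUCED_SPEED: Reaction.SAFE_REDUCED_SPEED,
    Zone.STOP: Reaction.SAFE_STANDSTILL,
}

def safety_reaction(zone: Zone, safety_devices_ok: bool) -> Reaction:
    """Pick the least restrictive reaction; fail over to standstill on device fault."""
    if not safety_devices_ok:
        return Reaction.SAFE_STANDSTILL   # safety failover instead of full line shutdown
    return REACTION_FOR_ZONE[zone]

print(safety_reaction(Zone.REDUCED_SPEED, safety_devices_ok=True))
```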
Procedia PDF Downloads 369
1433 Graduates Construction of Knowledge and Ability to Act on Employable Opportunities
Authors: Martabolette Stecher
Abstract:
Introductory: How are knowledge and the ability to act on employable opportunities constructed among students and graduates at higher educations? This question has drawn much attention from researchers, governments and universities in Denmark, since there has been an increase in the rate of unemployment among graduates from higher education. The fact that more than ten thousand graduates from higher education are without the opportunity to get a job in these years has a tremendous impact upon the social economy in Denmark. Every time a student graduates from higher education and becomes unemployed, it is possible to trace the effect upon the person's chances of getting a job many years ahead. This means that the tremendous rate of graduate unemployment implies a decrease in employment and lost prosperity in Denmark on a billion-Danish-Kroner scale. Basic methodologies: The present study investigates the construction of knowledge about and the ability to act upon employable opportunities among students and graduates at higher educations in Denmark through a literature review as well as a preliminary study of students from Aarhus University. Fifteen students from the candidate programme in drama engaged in an introductory program at the beginning of their candidate study, which included three workshops focusing upon the more personal matters of their studies and life. They reflected upon this process during the intervention and afterwards in a semi-structured interview. Concurrently, a thorough literature review has delivered key concepts for the exploration of the research question. Major findings of the study: It is difficult to find one definition of what employability encompasses; hence the overall picture of how to incorporate the concept is difficult. The present theory of employability has been focusing upon the competencies which students and graduates are to develop in order to become employable. In recent years there has been an emphasis upon the mechanisms which support graduates to trust themselves and to develop their self-efficacy in terms of getting a sustainable job. However, there has been little or no focus in the literature upon how students and graduates from higher education construct knowledge about, and the ability to act upon, employable opportunities involving networks of actors, both material and immaterial, and meaningful relations for students and graduates in developing their enterprising behavior to achieve employment. Actor-network theory combined with theory of entrepreneurship education suggests an alternative strategy to focus upon when explaining sustainable ways of creating employability among graduates. The preliminary study also supports this theory, suggesting that it is difficult to single out one or a few factors of importance, rather highlighting the effect of a multitude of network relations. Concluding statement: This study is the first step of a PhD study investigating this problem in Denmark and the USA in the period 2015–2019. Keywords: employability, graduates, action, opportunities
Procedia PDF Downloads 199
1432 Preliminary Study of the Hydrothermal Polymetallic Ore Deposit at the Karancs Mountain, North-East Hungary
Authors: Eszter Kulcsar, Agnes Takacs, Gabriella B. Kiss, Peter Prakfalvi
Abstract:
The Karancs Mountain is part of the Miocene Inner Carpathian Volcanic Belt and is located in N-NE Hungary, along the Hungarian-Slovakian border. The 14 Ma old andesitic-dacitic units are surrounded by Oligocene sedimentary units (sandstone, siltstone). The host rocks of the mineralisation are siliceous and/or argillaceous volcanic units, quartz veins, hydrothermal breccia, and strongly silicified vuggy rocks found in the various altered volcanic units. The hydrothermal breccia consists of highly silicified vuggy quartz clasts in a quartz matrix. The hydrothermal alteration of the host units shows structural control at the deeper levels. The main ore minerals are galena, pyrite, marcasite, sphalerite, hematite, magnetite, arsenopyrite, anglesite and argentite. The mineralisation was first mentioned in 1944, and the first exploration took place between 1961 and 1962 in the area. The first ore geological studies were performed in 1984-1985. The exploration programme was limited to surface sampling; no drilling programme was performed. Petrographical and preliminary fluid inclusion studies were performed on calcite samples from a galena-bearing vein. Despite the early discovery of the mineralisation, no detailed description is available; thus its size, characteristics, and origin have remained unknown. The aim of this study is to examine the mineralisation, describe its characteristics in detail and test the possible gold content of the various quartz veins and breccias. Finally, we also investigate the potential relation of the hydrothermal mineralisation to surrounding mineralisations of similar age (e.g. W-Mátra Mountains in Hungary, Banska Bystrica, Banska Stiavnica in Slovakia) in order to place the mineralisation within the volcanic-hydrothermal evolution of the Miocene Inner Carpathian Belt. As first steps, the study includes field mapping, traditional petrological and ore microscopy, X-ray diffraction analysis, and SEM-EDS and EMPA studies on ore minerals to obtain mineral chemical information. Fluid inclusion petrography, microthermometry and micro-Raman spectroscopy studies are also planned on quartz-hosted inclusions to investigate the physical and chemical properties of the ore-forming fluid. Keywords: epithermal, Karancs Mountain, Hungary, Miocene Inner Carpathian volcanic belt, polymetallic ore deposit
Procedia PDF Downloads 133
1431 Exploring the Potential of Mobile Learning in Distance Higher Education: A Case Study of the University of Jammu, Jammu, and Kashmir
Authors: Darshana Sharma
Abstract:
Distance Education has emerged as a viable alternative to serve the higher educational needs of the socially and economically disadvantaged people of the remote, rural areas of the Jammu region. The University of Jammu is a National Assessment and Accreditation Council accredited A+ university and has been accorded graded autonomy by the University Grants Commission. It is a dual-mode university offering academic programmes through the regular departments and through the Directorate of Distance Education. The Directorate of Distance Education, University of Jammu, still uses printed study material as its mode of instructional delivery. The development of technologies has assured increased interaction and communication for distance learners throughout distance open learning institutions. Though it is tempting and convenient to adopt technology already being used by others, it may not prove effective for the simple reason that two institutions may be unlike in some respect. The use of technology must be conceived in view of the needs of the learners; the geographical, socio-economic, cultural and technological contexts; and the financial, administrative and academic resources of the institution. Mobile learning (m-learning) is a novel approach to knowledge acquisition and dissemination and is gaining global attention. It has evolved as one of the useful channels of distance learning, promoting interaction between learners and teachers. It is felt that the Directorate of Distance Education, University of Jammu also needs to adopt new technologies to provide more effective academic and information support to distance learners in order to keep them motivated and also to develop self-learning skills. The chief objective of the research on which this paper is based was to measure the opinion of the distance learners of the DDE, University of Jammu, about the merits of mobile learning. It also explores their preferences for implementing mobile learning. The survey research design of descriptive research has been used. The data were collected from 400 distance learners enrolled in undergraduate and post-graduate programmes using a self-constructed questionnaire containing five-point Likert scale items ranging from strongly agree, agree, indifferent, disagree to strongly disagree. Percentages were used to analyze the data. The findings lead to the conclusion that mobile learning has great potential for the DDE for reaching out to the rural, remotely located distance learners of the Jammu region and also for improving the teaching-learning environment. The paper also identifies the challenges in the implementation of mobile learning in the region and further makes suggestions for effective implementation of mobile learning in the DDE, University of Jammu. Keywords: directorate of distance education, mobile learning, national assessment and accreditation council, university of Jammu
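Since the analysis reduces to percentage distributions over five-point Likert items, a minimal tabulation sketch looks like the following; the item responses shown are made-up illustrative data, not the study's results.

```python
from collections import Counter

# Minimal sketch: percentage distribution of responses to one five-point
# Likert item. The response list below is illustrative, not survey data.

LIKERT = ["strongly agree", "agree", "indifferent", "disagree", "strongly disagree"]
responses = ["agree", "strongly agree", "agree", "indifferent", "agree", "disagree"]

counts = Counter(responses)
total = len(responses)
for option in LIKERT:
    print(f"{option:>17}: {100 * counts[option] / total:.1f}%")
```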
Procedia PDF Downloads 123
1430 Detection of Acrylamide Using Liquid Chromatography-Tandem Mass Spectrometry and Quantitative Risk Assessment in Selected Food from Saudi Market
Authors: Sarah A. Alotaibi, Mohammed A. Almutairi, Abdullah A. Alsayari, Adibah M. Almutairi, Somaiah K. Almubayedh
Abstract:
Concerns over the presence of acrylamide in food date back to 2002, when Swedish scientists reported that substantial amounts of acrylamide were formed in carbohydrate-rich foods cooked at high temperatures. Similar findings were reported by other researchers, which consequently prompted major international efforts to investigate dietary exposure and the subsequent health complications in order to properly manage this issue. In this work, we therefore aim to determine the acrylamide level in different foods (coffee, potato chips, biscuits, and baby food) commonly consumed by the Saudi population. In a total of forty-three samples, acrylamide was detected in twenty-three samples at levels of 12.3 to 2850 µg/kg. With reference to the food groups, the highest concentration of acrylamide was found in coffee samples (<12.3-2850 μg/kg), followed by potato chips (655-1310 μg/kg), then biscuits (23.5-449 μg/kg), whereas the lowest acrylamide level was observed in baby food (<14.75-126 μg/kg). Most coffee, biscuit and potato chip products contain high amounts of acrylamide and are also among the most commonly consumed products. Saudi adults had mean acrylamide exposures from coffee, potato chips, biscuits, and cereal of 0.07439, 0.04794, 0.01125, and 0.003371 µg/kg-b.w/day, respectively. On the other hand, the exposures of Saudi infants and children to the same types of food were 0.1701, 0.1096, 0.02572, and 0.00771 µg/kg-b.w/day, respectively. Most groups have a percentile that exceeds the tolerable daily intake (TDI) cancer value (2.6 µg/kg-b.w/day). Overall, the MOE results show that the Saudi population is at high risk of acrylamide-related disease for all food types, and there is a chance of cancer risk in all age groups (all values ˂10,000). Furthermore, it was found that for non-cancer risks, the acrylamide in all tested foods was within the safe limit (˃125), except for potato chips, for which there is a risk of disease in the population. With potato and coffee as raw materials, additional studies were conducted to assess different factors, including temperature, cooking time, and additives, affecting acrylamide formation in fried potato and roasted coffee; by systematically varying processing temperatures and times, a mitigation of acrylamide content was achieved when lowering the temperature and decreasing the cooking time. Furthermore, it was shown that the combined addition of chitosan and NaCl had a large impact on the formation. Keywords: risk assessment, dietary exposure, MOA, acrylamide, hazard
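The exposure and margin-of-exposure (MOE) figures reported above follow from the standard dietary-exposure arithmetic: exposure = concentration × daily intake / body weight, and MOE = benchmark dose / exposure. The sketch below illustrates that calculation only; the intake, body weight and BMDL10 benchmark are assumptions for illustration (a BMDL10 of 0.17 mg/kg b.w./day is often cited for neoplastic effects), not the study's inputs.

```python
# Hedged sketch of the dietary exposure and margin-of-exposure calculation.
# The concentration is taken from the abstract's range; intake, body weight
# and the BMDL10 benchmark are illustrative assumptions, not study data.

def exposure_ug_per_kg_bw_day(conc_ug_per_kg: float, intake_kg_per_day: float,
                              body_weight_kg: float) -> float:
    """Dietary exposure in µg per kg body weight per day."""
    return conc_ug_per_kg * intake_kg_per_day / body_weight_kg

def margin_of_exposure(bmdl10_ug_per_kg_bw_day: float, exposure: float) -> float:
    """MOE = benchmark dose / exposure; values below 10,000 flag a health concern."""
    return bmdl10_ug_per_kg_bw_day / exposure

exp = exposure_ug_per_kg_bw_day(conc_ug_per_kg=1310,      # upper potato-chip level from the abstract
                                intake_kg_per_day=0.004,  # assumed 4 g/day consumption
                                body_weight_kg=70)        # assumed adult body weight
moe = margin_of_exposure(bmdl10_ug_per_kg_bw_day=170, exposure=exp)
print(f"exposure = {exp:.4f} µg/kg b.w./day, MOE = {moe:.0f}")
```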
Procedia PDF Downloads 58
1429 Lessons Learned from Push-Plus Implementation in Northern Nigeria
Authors: Aisha Giwa, Mohammed-Faosy Adeniran, Olufunke Femi-Ojo
Abstract:
Four decades ago, the World Health Organization (WHO) launched the Expanded Programme on Immunization (EPI). The EPI blueprint laid out the technical and managerial functions necessary to routinely vaccinate children with a limited number of vaccines, providing protection against diphtheria, tetanus, whooping cough, measles, polio, and tuberculosis, and to prevent maternal and neonatal tetanus by vaccinating women of childbearing age with tetanus toxoid. Despite global efforts, Routine Immunization (RI) coverage in two of the World Health Organization (WHO) regions, the African Region and the South-East Asia Region, still remains short of its targets. As a result, the WHO Regional Director for Africa declared 2012 as the year for intensifying RI in these regions, and this coincided with the declaration of polio as a programmatic emergency by the WHO Executive Board. In order to intensify routine immunization, the National Routine Immunization Strategic Plan (2013-2015) stated that its core priority is to ensure 100% adequacy and availability of vaccines for safe immunization. To achieve 100% availability, the “PUSH System” and then “Push-Plus” were adopted for vaccine distribution, replacing the inefficient “PULL” method. The NPHCDA plays the key role in coordinating activities in the areas of advocacy, capacity building, and engagement of 3PLs for the states, as well as monitoring and evaluation of the vaccine delivery process. eHealth Africa (eHA) is a 3PL service provider engaged by State Primary Health Care Boards (SPHCDB) to ensure vaccine availability through the Vaccine Direct Delivery (VDD) project, which is essential to successful routine immunization services. The VDD project ensures the availability and adequate supply of high-quality vaccines and immunization-related materials to last-mile facilities. eHA's commitment to the VDD project saw the need for an assessment of the project with respect to overall project performance, an evaluation of the process for necessary improvement suggestions, as well as its general impact across Kano State (where eHA has transitioned the service to the state), Bauchi State (where eHA currently manages delivery to all LGAs except three LGAs managed by the state), Sokoto State (where eHA currently covers all LGAs) and Zamfara State (currently in-sourced and managed solely by the state). Keywords: cold chain logistics, health supply chain system strengthening, logistics management information system, vaccine delivery traceability and accountability
Procedia PDF Downloads 317
1428 Comparative Comparison (Cost-Benefit Analysis) of the Costs Caused by the Earthquake and Costs of Retrofitting Buildings in Iran
Authors: Iman Shabanzadeh
Abstract:
Earthquake is known as one of the most frequent natural hazards in Iran. Therefore, policy making to improve the strengthening of structures is one of the requirements of an approach to prevent and reduce the risk of the destructive effects of earthquakes. In order to choose the optimal policy in the face of earthquakes, this article examines the cost of financial damages caused by earthquakes in the building sector and compares it with the costs of retrofitting. In this study, the results of adopting the scenario of "action after the earthquake" and the policy scenario of "strengthening structures before the earthquake" have been collected, calculated and finally analyzed together. Methodologically, data received from governorates and building retrofitting engineering companies have been used. The scope of the study is earthquakes that occurred in the geographical area of Iran, and among them, eight earthquakes have been specifically studied: Miane, Ahar and Haris, Qator, Momor, Khorasan, Damghan and Shahroud, Gohran, Hormozgan and Ezgole. The main basis of the calculations is the data obtained from retrofitting companies regarding the cost per square meter of building retrofitting, and the data of the governorates regarding the destructive power of each earthquake and the realized costs for the reconstruction and construction of residential units. The estimated costs have been converted to 2021 values using the time value of money method to enable comparison and aggregation. The cost-benefit comparison of the two policies of action after the earthquake and retrofitting before the earthquake in the eight earthquakes investigated shows that the country has suffered five thousand billion Tomans of losses due to the lack of retrofitting of buildings against earthquakes. Based on the data of Iran's Budget Law, this figure was approximately twice the 2021 budget of the Ministry of Roads and Urban Development and five times the budget of the Islamic Revolution Housing Foundation. The results show that the policy of retrofitting structures before an earthquake is significantly more optimal than the competing scenario. The comparison of the two policy scenarios examined in this study shows that the policy of retrofitting buildings before an earthquake, on the one hand, prevents huge losses and, on the other hand, by increasing the number of earthquake-resistant houses, reduces the amount of earthquake destruction. Other positive effects of retrofitting, such as the reduction of mortality due to the earthquake resistance of buildings and the reduction of other economic and social effects caused by earthquakes, further prove the cost-effectiveness of the policy scenario of "strengthening structures before earthquakes" in Iran. Keywords: disaster economy, earthquake economy, cost-benefit analysis, resilience
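Aggregating damages and retrofit costs incurred in different years requires putting them on a common 2021 basis before comparing them. The sketch below illustrates the time-value-of-money conversion and the resulting benefit-cost ratio; the annual rate and the cost figures are illustrative assumptions, not the study's data.

```python
# Hedged sketch of the time-value-of-money adjustment used to compare
# earthquake losses with retrofitting costs on a common 2021 basis.
# The rate and the cost figures below are illustrative assumptions.

def value_in_2021(amount: float, year_incurred: int, annual_rate: float = 0.20) -> float:
    """Compound an amount incurred in an earlier year forward to 2021."""
    return amount * (1 + annual_rate) ** (2021 - year_incurred)

# Illustrative example: a reconstruction cost incurred in 2012 versus a
# hypothetical retrofit that would have been paid for in 2010 (both in Tomans).
loss_2021 = value_in_2021(amount=1_000e9, year_incurred=2012)
retrofit_2021 = value_in_2021(amount=150e9, year_incurred=2010)

print(f"loss (2021 value):     {loss_2021:,.0f} Tomans")
print(f"retrofit (2021 value): {retrofit_2021:,.0f} Tomans")
print(f"benefit/cost ratio:    {loss_2021 / retrofit_2021:.1f}")
```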
Procedia PDF Downloads 63
1427 Challenging Weak Central Coherence: An Exploration of Neurological Evidence from Visual Processing and Linguistic Studies in Autism Spectrum Disorder
Authors: Jessica Scher Lisa, Eric Shyman
Abstract:
Autism spectrum disorder (ASD) is a neuro-developmental disorder that is characterized by persistent deficits in social communication and social interaction (i.e. deficits in social-emotional reciprocity, nonverbal communicative behaviors, and establishing/maintaining social relationships), as well as by the presence of repetitive behaviors and perseverative areas of interest (i.e. stereotyped or repetitive motor movements, use of objects, or speech, rigidity, restricted interests, and hypo- or hyperactivity to sensory input or unusual interest in sensory aspects of the environment). Additionally, diagnoses of ASD require the presentation of symptoms in the early developmental period, marked impairments in adaptive functioning, and a lack of explanation by general intellectual impairment or global developmental delay (although these conditions may be co-occurring). Over the past several decades, many theories have been developed in an effort to explain the root cause of ASD in terms of atypical central cognitive processes. The field of neuroscience is increasingly finding structural and functional differences between autistic and neurotypical individuals using neuro-imaging technology. One main area this research has focused upon is visuospatial processing, with specific attention to the notion of ‘weak central coherence’ (WCC). This paper offers an analysis of findings from selected studies in order to explore research that challenges the ‘deficit’ characterization of the weak central coherence theory in favor of a ‘superiority’ characterization of strong local coherence. The weak central coherence theory has long been both supported and refuted in the ASD literature and has most recently been increasingly challenged by advances in neuroscience. The selected studies lend evidence to the notion of amplified localized perception rather than deficient global perception. In other words, WCC may represent superiority in ‘local processing’ rather than a deficit in global processing. Additionally, the right hemisphere, and specifically the extrastriate area, appears to be key in both the visual and the lexicosemantic process. Overactivity in the striate region seems to suggest inaccuracy in semantic language, which lends support to the link between the striate region and the atypical organization of the lexicosemantic system in ASD. Keywords: autism spectrum disorder, neurology, visual processing, weak coherence
Procedia PDF Downloads 130