Search results for: risk models
123 Application of a Theoretical Framework as a Context for a Travel Behavior Change Policy Intervention
Authors: F. Moghtaderi, M. Burke, J. Troelsen
Abstract:
There has been a significant decline in active travel and a massive increase in car-dependent travel in many countries during the past two decades. Evidence links this increased use of motorized travel to risks for people's physical and mental health, ranging from overweight and obesity to problems associated with increased air pollution. In response to these rising concerns, health professionals, traffic planners, local authorities and others have introduced a variety of initiatives to counterbalance the dominance of cars for daily journeys. However, travel behavior change interventions that aim to reduce car use are very complex and challenging in their interactions with human behavior. To change travel behavior, at least two aspects have to be taken into consideration: first, how to alter attitudes and perceptions toward sustainable and healthy modes of travel, in competition with experiences of private car use; and second, how to make these behavior change processes irreversible and sustainable. There are no comprehensive models available to guide policy interventions so as to increase the level of success of travel behavior change interventions across both these dimensions. A comprehensive theoretical framework is required to facilitate and guide the processes of data collection and analysis and to achieve the best possible guidelines for policy makers. Given these gaps in the travel behavior change research literature, this paper attempts to identify and suggest a multidimensional framework to facilitate the planning of travel behavior change interventions. A structured mixed-method model is suggested to improve the analytic power of the results, given the complexity of human behavior. The Theory of Planned Behavior (TPB) was operationalized to capture people's attitudes towards a specific travel mode, while the Transtheoretical Model of Behavior Change (TTM) was used to capture decision-making processes. The combination of these two theories (TTM and TPB) results in a synthesis with appropriate concepts for identifying and designing travel behavior change interventions.
Keywords: Behavior change theories, Theoretical framework, Travel behavior change interventions.
122 Suicide Conceptualization in Adolescents through Semantic Networks
Authors: K. P. Valdés García, E. I. Rodríguez Fonseca, L. G. Juárez Cantú
Abstract:
Suicide is a global, multidimensional and dynamic mental health problem that requires constant study for its understanding and prevention. When this phenomenon is researched, it is necessary to consider the different characteristics it may have due to individual and sociocultural variables; this consideration matters for the generation of effective treatments and interventions. Adolescents are a vulnerable population due to the characteristics of their developmental stage. The investigation was carried out with the objective of identifying and describing how adolescents conceptualize suicide and, in this process, finding possible differences between men and women. The study was carried out in Saltillo, Coahuila, Mexico. The sample was composed of 418 volunteer students aged between 11 and 18 years. The ethical aspects of the research were reviewed and considered in all processes of the investigation with the participants, their parents and the schools to which they belonged; psychological attention was offered to the participants and preventive workshops were carried out in the educational institutions. Natural semantic networks were the instrument used, since this hybrid method allows researchers to find and analyze the social concept of a phenomenon. In this case, the word suicide was used as an evocative stimulus and participants were asked to evoke at least five and at most 10 words they thought were related to suicide, and then to hierarchize them according to their closeness to the construct. The subsequent analysis was carried out with Excel, yielding the semantic weights, affective loads and the distances between each of the semantic fields established according to the words reported by the subjects. The results showed similarities in the conceptualization of suicide between adolescent men and women. Seven semantic fields were generated; in the discourse analysis the words were related to: 1) death, 2) possible triggering factors, 3) associated moods, 4) methods used to carry it out, 5) psychological symptomatology that could contribute, 6) words associated with a rejection of suicide, and finally, 7) specific objects used to carry it out. One of the necessary aspects to consider in the investigation of complex issues such as suicide is to have a diversity of instruments and techniques that fit the characteristics of the population and that allow the phenomena to be understood from social constructs and not only theoretically. The constant study of suicide is a pressing need; the loss of a life to emotional difficulties that can be addressed through psychiatric and psychological methods requires governments and professionals to pay attention and work with the at-risk population.
Keywords: Adolescents, semantic networks, speech analysis, suicide.
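A minimal sketch of how a natural-semantic-network tally could be computed is given below. The abstract reports that the analysis was done in Excel; the exact scoring is not stated, so the common scheme in which a word ranked 1st contributes 10 points and a word ranked 10th contributes 1 point is assumed, and all response data are invented.

```python
# Minimal sketch of a natural-semantic-network tally, assuming the common
# scoring scheme (rank 1 -> 10 points, rank 10 -> 1 point); the paper's own
# weighting, done in Excel, is not specified. Responses below are invented.
from collections import defaultdict

MAX_WORDS = 10  # participants evoke at most 10 words

# each participant: list of (word, rank) pairs, rank 1 = closest to "suicide"
responses = [
    [("death", 1), ("sadness", 2), ("depression", 3)],
    [("death", 1), ("rope", 2), ("loneliness", 3)],
    [("depression", 1), ("death", 2), ("fear", 3)],
]

semantic_weight = defaultdict(int)   # "M value" of each defining word
for participant in responses:
    for word, rank in participant:
        semantic_weight[word] += MAX_WORDS - rank + 1

# defining words sorted by semantic weight, with distance relative to the top word
ranking = sorted(semantic_weight.items(), key=lambda kv: kv[1], reverse=True)
top_weight = ranking[0][1]
for word, m in ranking:
    fmg = 100.0 * m / top_weight
    print(f"{word:12s}  M = {m:3d}   FMG = {fmg:5.1f}%")
```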
121 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms
Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano
Abstract:
In this article, we deal with a variant of the classical course timetabling problem that has a practical application in many areas of education. In particular, we are interested in remedial courses in high schools. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies. A student might be under-prepared in an entire course, or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at their disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time and resource constraints. Moreover, there are prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model. Thus, solving it through a general-purpose solver is viable for small instances only, while solving real-life-sized instances of the model requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach, in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality within the time limit, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases achieves an improvement on the objective function value obtained by the MIP model. Such an improvement ranges between 18% and 66%.
Keywords: Heuristic, MIP model, Remedial course, School, Timetabling.
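The following sketch is not the authors' MIP model, only a minimal illustration of the selection side of the problem: choosing which teaching units to activate under a budget with prerequisite constraints, here written with the PuLP library. Unit names, costs and quality scores are invented.

```python
# Minimal selection-only sketch (not the paper's model): activate teaching
# units to maximize a quality score under a budget with prerequisites.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, PULP_CBC_CMD

units = ["U1", "U2", "U3", "U4"]
cost = {"U1": 800, "U2": 500, "U3": 700, "U4": 400}      # teacher-hour cost (invented)
quality = {"U1": 9, "U2": 6, "U3": 8, "U4": 4}           # benefit if activated (invented)
prereq = [("U2", "U1"), ("U4", "U3")]                    # (unit, required unit)
budget = 1500

model = LpProblem("training_offer_selection", LpMaximize)
x = {u: LpVariable(f"x_{u}", cat=LpBinary) for u in units}   # 1 = activate unit u

model += lpSum(quality[u] * x[u] for u in units)             # objective: teaching quality
model += lpSum(cost[u] * x[u] for u in units) <= budget      # budget constraint
for u, req in prereq:                                        # prerequisite constraints
    model += x[u] <= x[req]

model.solve(PULP_CBC_CMD(msg=False))
print({u: int(x[u].value()) for u in units})
```

The real model additionally assigns the activated units to time slots and teachers, which is where most of the complexity reported in the abstract comes from.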
120 Emerging VC Industry: Do Market Expectations Play the Most Important Role in Project Selection? Evidence on Russian Data
Authors: I. Rodionov, A. Semenov, E. Gosteva, O. Sokolova
Abstract:
Venture capital is becoming an increasingly advanced and effective source of financing for innovation projects, which are associated with a high level of risk. In developed countries, it plays a key role in transforming innovation projects into successful businesses and creating the prosperity of the modern economy. In Russia, many of the necessary preconditions for the creation of an effective venture investment system are present: a network of public institutes for innovation financing operates, and there is a significant number of small and medium-sized enterprises capable of selling products with good market potential. However, the current system has not demonstrated the necessary level of efficiency in practice, which can be substantially explained by the absence of an accurate plan of action to form the national venture model and by the lack of experience of successful venture deals with profitable exits in the Russian economy. This paper studies the influence of various factors on the development of the venture industry, using the IT sector in Russia as an example. The choice of the sector is based on the fact that this segment is the main driver of venture capital market growth in Russia, and the necessary data exist. The size of the second-round investment is used as the dependent variable. To analyse the influence of the previous round, the volume of the previous (first) round investment is used as a determinant. A dummy variable is also included in the regression to examine whether the participation of an investor with a high reputation and experience in the previous round influences the size of the next investment round. The regression analysis of short-term interrelations between the studied variables reveals the prevailing influence of the volume of first-round investment on the volume of second-round venture investment. The most important determinant of the value of the second-round investment is the value of the first-round investment, which means that the most competitive start-up teams on the Russian market are those that can attract more money at the start, and target market growth is not a factor of crucial importance. This supports the view that VC in Russia is driven by endogenous factors rather than by exogenous ones based on global market growth.
Keywords: Venture industry, venture investment, determinants of the venture sector development, IT-sector.
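A regression of the type described (second-round size on first-round size plus a reputation dummy) could be estimated as in the sketch below; the data are synthetic stand-ins, so the fitted coefficients carry no meaning for the study itself.

```python
# Illustrative regression in the spirit of the abstract (synthetic data):
# second-round size on first-round size and a reputable-investor dummy.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
first_round = rng.lognormal(mean=13, sigma=0.8, size=n)     # USD, invented
reputable_investor = rng.integers(0, 2, size=n)             # dummy variable
second_round = 1.6 * first_round + 2e5 * reputable_investor + rng.normal(0, 2e5, size=n)

X = sm.add_constant(np.column_stack([first_round, reputable_investor]))
ols = sm.OLS(second_round, X).fit()
print(ols.summary(xname=["const", "first_round", "reputable_investor"]))
```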
119 Using the Monte Carlo Simulation to Predict the Assembly Yield
Authors: C. Chahin, M. C. Hsu, Y. H. Lin, C. Y. Huang
Abstract:
Electronic products that achieve high levels of integrated communications, computing and entertainment, with multimedia features in small, stylish and robust new form factors, are winning in the marketplace. Because of the high costs the industry may incur, and because a high yield is directly proportional to high profits, IC (Integrated Circuit) manufacturers struggle to maximize yield; yet today's customers demand miniaturization, low costs, high performance and excellent reliability, making yield maximization a never-ending search for an enhanced assembly process. With factors such as minimum tolerances and tighter parameter variations, a systematic approach is needed in order to predict the assembly process. In order to evaluate the quality of upcoming circuits, yield models are used which not only predict manufacturing costs but also provide vital information that eases the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors such as boards, placement, components, the material from which the components are made, and processes must be taken into consideration. Effective placement yield depends heavily on machine accuracy and on the vision system, which needs the ability to recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder ball locations, also called footprints. The only assumption that a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, which consists of a class of computational algorithms that depend on repeated random sampling in order to compute results. This method is utilized to simulate the placement and assembly processes within a production line.
Keywords: Monte Carlo simulation, placement yield, PCB characterization, electronics assembly.
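A minimal Monte Carlo placement-yield estimate, in the spirit of the abstract but not the authors' model, looks like the sketch below: placement offsets per component are sampled from normal distributions standing in for machine and vision accuracy, and a board passes if every component lands within its pad tolerance. All numbers are invented.

```python
# Minimal Monte Carlo placement-yield sketch (illustrative values only).
import numpy as np

rng = np.random.default_rng(42)
n_boards = 20_000           # Monte Carlo trials
n_components = 50           # components per board
sigma_placement = 0.02      # mm, std dev of placement offset per axis
tolerance = 0.05            # mm, maximum allowed radial offset

dx = rng.normal(0.0, sigma_placement, size=(n_boards, n_components))
dy = rng.normal(0.0, sigma_placement, size=(n_boards, n_components))
radial_offset = np.hypot(dx, dy)

board_ok = (radial_offset <= tolerance).all(axis=1)   # all components within spec
print(f"Estimated assembly yield: {board_ok.mean():.2%}")
```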
118 Predictive Factors of Exercise Behaviors of Junior High School Students in Chonburi Province
Authors: Tanida Julvanichpong
Abstract:
Exercise has been regarded as a necessary and important way to enhance physical performance and psychological health. Body weight statistics of junior high school students in Chonburi Province are beyond the standard, indicating a risk of obesity. To promote exercise among junior high school students in Chonburi Province, essential knowledge concerning the factors influencing exercise is needed. Therefore, this study aims to (1) determine the levels of exercise behavior, exercise behavior in the past, perceived barriers to exercise, perceived benefits of exercise, perceived self-efficacy to exercise, feelings associated with exercise behavior, influence of the family on exercise, influence of friends on exercise, and the perceived influence of the environment on exercise; and (2) examine the ability of each of the above factors, together with personal factors (sex, educational level), to predict exercise behavior. Pender's Health Promotion Model was used as a guide for the study. The sample included 652 students in junior high schools in Chonburi Province, selected by multi-stage random sampling. Data were collected using self-administered questionnaires and analyzed using descriptive statistics, Pearson's product-moment correlation coefficient, Eta, and stepwise multiple regression analysis. The research results showed that: 1. Perceived benefits of exercise, influence of teachers, influence of the environment, and feelings associated with exercise behavior were at a high level. Influence of the family on exercise, exercise behavior, exercise behavior in the past, perceived self-efficacy to exercise and influence of friends were at a moderate level. Perceived barriers to exercise were at a low level. 2. Exercise behavior was positively and significantly related to perceived benefits of exercise, influence of the family on exercise, exercise behavior in the past, perceived self-efficacy to exercise, influence of friends, influence of teachers, influence of the environment and feelings associated with exercise behavior (p < .01, respectively), and was negatively and significantly related to educational level and perceived barriers to exercise (p < .01, respectively). Exercise behavior was significantly related to sex (Eta = 0.243, p = .000). 3. Exercise behavior in the past and influence of the family on exercise significantly explained 60.10 percent of the variance in exercise behavior in male students (p < .01). Exercise behavior in the past, perceived self-efficacy to exercise, perceived barriers to exercise, and educational level significantly explained 52.60 percent of the variance in exercise behavior in female students (p < .01).
Keywords: Predictive factors, exercise behaviors, junior high school.
117 Automatic Distance Compensation for Robust Voice-based Human-Computer Interaction
Authors: Randy Gomez, Keisuke Nakamura, Kazuhiro Nakadai
Abstract:
Distant-talking voice-based HCI systems suffer from performance degradation due to mismatch between the acoustic speech (runtime) and the acoustic model (training). The mismatch is caused by the change in the power of the speech signal as observed at the microphones. This change is greatly influenced by the change in distance, which affects the speech dynamics inside the room before the signal reaches the microphones. Moreover, as the speech signal is reflected, its acoustic characteristics are also altered by the room properties. In general, power mismatch due to distance is a complex problem. This paper presents a novel approach to dealing with distance-induced mismatch by intelligently sensing instantaneous voice power variation and compensating the model parameters. First, the distant-talking speech signal is processed through microphone array processing, and the corresponding distance information is extracted. Distance-sensitive Gaussian Mixture Models (GMMs), pre-trained to capture both speech power and room properties, are used to predict the optimal distance of the speech source. Consequently, pre-computed statistical priors corresponding to the optimal distance are selected to correct the statistics of the generic model, which was frozen during training. Thus, the model statistics are post-conditioned to match the power of the instantaneous speech acoustics at runtime. This results in an improved likelihood of predicting the correct speech command at farther distances. We experiment using real data recorded inside two rooms. Experimental evaluation shows that voice recognition performance using our method is more robust to the change in distance than the conventional approach. In our experiment, under the most acoustically challenging environment (i.e., Room 2 at 2.5 meters), our method achieved a 24.2% improvement in recognition performance over the best-performing conventional method.
Keywords: Human Machine Interaction, Human Computer Interaction, Voice Recognition, Acoustic Model Compensation, Acoustic Speech Enhancement.
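The distance-selection step alone can be sketched as below: one GMM is trained per known source distance, and at runtime the distance whose GMM gives the highest likelihood on the incoming frames is selected, after which its pre-computed compensation statistics would be applied. The features, offsets and data here are toy stand-ins, not the authors' system.

```python
# Sketch of the distance-selection idea only (toy features and offsets).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

def fake_features(distance_m, n_frames=400):
    # 2-D toy features whose mean shifts with distance (stand-in for real
    # speech-power / room-property features)
    mean = np.array([-distance_m, 0.5 * distance_m])
    return rng.normal(mean, 0.3, size=(n_frames, 2))

distances = [0.5, 1.5, 2.5]                               # metres, training positions
gmms = {d: GaussianMixture(n_components=4, random_state=0).fit(fake_features(d))
        for d in distances}
compensation_offset = {0.5: 0.0, 1.5: 3.0, 2.5: 6.0}      # pre-computed priors (invented)

runtime_frames = fake_features(2.5, n_frames=120)          # speech at unknown distance
avg_loglik = {d: g.score(runtime_frames) for d, g in gmms.items()}
best = max(avg_loglik, key=avg_loglik.get)
print(f"Selected distance: {best} m -> apply offset {compensation_offset[best]}")
```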
116 Comparative Parametric Analysis on the Dynamic Response of Fibre Composite Beams with Debonding
Authors: Indunil Jayatilake, Warna Karunasena
Abstract:
Fiber Reinforced Polymer (FRP) composites enjoy an array of applications ranging from aerospace, marine and military to automobile, recreational and civil industries due to their outstanding properties. A structural glass fiber reinforced polymer (GFRP) composite sandwich panel made from E-glass fiber skins and a modified phenolic core has been manufactured in Australia for civil engineering applications. One of the major mechanisms of damage in FRP composites is skin-core debonding. The presence of debonding is of great concern not only because it severely affects the strength but also because it modifies the dynamic characteristics of the structure, including the natural frequencies and vibration modes. This paper deals with the investigation of the dynamic characteristics of a GFRP beam with single and multiple debonding by finite element based numerical simulations and analyses using the STRAND7 finite element (FE) software package. Three-dimensional computer models have been developed and numerical simulations were performed to assess the dynamic behavior. The FE model developed has been validated against published experimental, analytical and numerical results for fully bonded as well as debonded beams. A comparative analysis is carried out based on a comprehensive parametric investigation. It is observed that the reduction in natural frequency is more affected by a single debonding than by equally sized multiple debonding regions located symmetrically about the single debonding position. Thus it is revealed that a large single debonding area leads to more damage, in terms of natural frequency reduction, than isolated small debonding zones of equivalent total area appearing in the GFRP beam. Furthermore, the extent of the natural frequency shifts appears mode-dependent and does not show a monotonic trend of increasing with the mode number.
Keywords: Debonding, dynamic response, finite element modelling, FRP beams.
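The qualitative effect described (debonding lowers natural frequencies) can be illustrated with a toy eigenvalue problem, not the STRAND7 model: a lumped spring-mass chain stands in for the beam, the stiffness of a "debonded" span is reduced, and the natural frequencies follow from the generalized eigenproblem K x = w^2 M x. Stiffness and mass values are arbitrary.

```python
# Toy illustration (not the STRAND7 model): reducing the stiffness of a
# "debonded" span of a spring-mass chain lowers its natural frequencies.
import numpy as np
from scipy.linalg import eigh

def chain_frequencies(spring_k, mass=1.0):
    """Natural frequencies (Hz) of a fixed-free spring-mass chain."""
    n = len(spring_k)
    K = np.zeros((n, n))
    for j, k in enumerate(spring_k):
        K[j, j] += k                 # spring j links node j to node j-1 (or ground)
        if j > 0:
            K[j - 1, j - 1] += k
            K[j - 1, j] -= k
            K[j, j - 1] -= k
    M = mass * np.eye(n)
    eigvals = eigh(K, M, eigvals_only=True)        # w^2 values, ascending
    return np.sqrt(np.clip(eigvals, 0, None)) / (2 * np.pi)

n_segments = 20
k_intact = np.full(n_segments, 1.0e5)              # fully bonded beam (arbitrary stiffness)
k_debonded = k_intact.copy()
k_debonded[8:12] *= 0.2                            # one large debonded zone: stiffness loss

f_intact = chain_frequencies(k_intact)[:3]
f_debond = chain_frequencies(k_debonded)[:3]
for m, (f0, f1) in enumerate(zip(f_intact, f_debond), start=1):
    print(f"mode {m}: {f0:7.2f} Hz -> {f1:7.2f} Hz  ({100*(f1-f0)/f0:+.1f}%)")
```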
115 Discrepant Views of Social Competence and Links with Social Phobia
Authors: Pamela-Zoe Topalli, Niina Junttila, Päivi M. Niemi, Klaus Ranta
Abstract:
Adolescents’ biased perceptions of their social competence (SC), whether negative or positive, influence their socioemotional adjustment, including early feelings of social phobia (nowadays referred to as Social Anxiety Disorder, SAD). Despite the importance of biased self-perceptions for adolescents’ psychosocial adjustment, the extent to which discrepancies between self- and others’ evaluations of one’s SC are linked to social phobic symptoms remains unclear in the literature. This study examined the perceptual discrepancy profiles between self- and peers’ as well as between self- and teachers’ evaluations of adolescents’ SC, and the interrelations of these profiles with self-reported social phobic symptoms. The participants were 390 third-graders (15 years old) of Finnish lower secondary school (50.8% boys, 49.2% girls). In contrast with the variable-centered approaches that have mainly been used in previous studies on this subject, this study used latent profile analysis (LPA), a person-centered approach that can provide information on risk profiles by capturing the heterogeneity within a population and classifying individuals into groups. LPA revealed the following five classes of discrepancy profiles: i) extremely negatively biased perceptions of SC, ii) negatively biased perceptions of SC, iii) quite realistic perceptions of SC, iv) positively biased perceptions of SC, and v) extremely positively biased perceptions of SC. Adolescents with extremely negatively biased and negatively biased perceptions of their own SC reported the highest number of social phobic symptoms. Adolescents with quite realistic, positively biased and extremely positively biased perceptions reported the lowest number of social phobic symptoms. The results point to negatively and extremely negatively biased perceptions as possible contributors to social phobic symptoms. Moreover, the association of quite realistic perceptions with a low number of social phobic symptoms indicates their potential protective power against social phobia. Finally, positively and extremely positively biased perceptions of SC are negatively associated with social phobic symptoms in this study. However, the profile of extremely positively biased perceptions might also be linked with the existence of externalizing problems such as antisocial behavior (e.g., disruptive impulsivity). The current findings highlight the importance of considering discrepancies between self- and others’ perceptions of one’s SC in clinical and research efforts. Interventions designed to prevent or moderate social phobic symptoms need to take individual needs into account rather than aiming for uniform treatment. Implications and future directions are discussed.
Keywords: Adolescence, latent profile analysis, perceptual discrepancies, social competence, social phobia.
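Latent profile analysis is closely related to Gaussian mixture modeling, so a conceptually similar analysis can be sketched with scikit-learn as below; dedicated LPA software is normally used for the real analysis, and the discrepancy scores here are synthetic.

```python
# Conceptual stand-in for LPA using a Gaussian mixture on synthetic
# self-minus-other discrepancy scores; number of profiles chosen by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
n = 390
self_rating = rng.normal(3.5, 0.6, n)               # self-evaluated SC (synthetic)
peer_rating = self_rating + rng.normal(0, 0.5, n)
teacher_rating = self_rating + rng.normal(0, 0.5, n)
X = np.column_stack([self_rating - peer_rating,      # discrepancy vs. peers
                     self_rating - teacher_rating])  # discrepancy vs. teachers

models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 7)}
best_k = min(models, key=lambda k: models[k].bic(X))  # BIC-based model selection
profiles = models[best_k].predict(X)
print("profiles retained:", best_k, "| class sizes:", np.bincount(profiles))
```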
114 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Since this is impossible, points are measured at regular intervals to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have been widely used in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for DTM construction. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications: a 3D point cloud is created with LiDAR technology by acquiring numerous point measurements. More recently, however, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors such as the data collection method, the distribution of elevation points, the point density, the properties of the surface and the interpolation method. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75%, 50%, 25% and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud datasets to a 50% density level while still maintaining the quality of the DTM.
Keywords: DTM, unmanned aerial vehicle, UAV, random, Kriging.
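The reduce-and-compare workflow can be sketched as below: the point cloud is randomly thinned to 75/50/25/5%, a DTM is rebuilt from each subset, and each is compared with the full-data DTM. A generic linear interpolator is used here for brevity; the study used Kriging, for which a geostatistics library could be swapped in. The terrain data are synthetic.

```python
# Workflow sketch: random thinning of a point cloud and RMSE of the resulting
# DTMs against the full-data DTM (generic interpolation, synthetic terrain).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
n_points = 20_000
x, y = rng.uniform(0, 1000, n_points), rng.uniform(0, 1000, n_points)
z = 50 + 0.02 * x + 10 * np.sin(y / 150)              # synthetic terrain heights

gx, gy = np.meshgrid(np.linspace(50, 950, 90), np.linspace(50, 950, 90))
dtm_full = griddata((x, y), z, (gx, gy), method="linear")

for fraction in (0.75, 0.50, 0.25, 0.05):
    keep = rng.choice(n_points, size=int(fraction * n_points), replace=False)
    dtm_sub = griddata((x[keep], y[keep]), z[keep], (gx, gy), method="linear")
    rmse = np.sqrt(np.nanmean((dtm_sub - dtm_full) ** 2))
    print(f"{fraction:4.0%} of points -> RMSE vs full-data DTM: {rmse:.3f} m")
```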
113 A Coupled Model for Two-Phase Simulation of a Heavy Water Pressure Vessel Reactor
Authors: Damian Ramajo, Santiago Corzo, Norberto Nigro
Abstract:
A multi-dimensional computational fluid dynamics (CFD) two-phase model was developed with the aim of simulating the in-core coolant circuit of a pressurized heavy water reactor (PHWR) of a commercial nuclear power plant (NPP). Because this PHWR is of the Reactor Pressure Vessel (RPV) type, detailed three-dimensional (3D) modelling of the large reservoirs of the RPV (the upper and lower plenums and the downcomer) was coupled with an in-house finite volume one-dimensional (1D) code in order to model the 451 coolant channels housing the nuclear fuel. In the 1D code, suitable empirical correlations are used to take into account the in-channel distributed (friction) and concentrated (spacer grids, inlet and outlet throttles) pressure losses. A local power distribution at each coolant channel is also taken into account. The heat transfer between the coolant and the surrounding moderator is accurately calculated using a two-dimensional theoretical model. The implementation of subcooled boiling and condensation models in the 1D code, along with the use of functions representing the thermal and dynamic properties of the coolant and moderator (heavy water), allows estimation of the in-core steam generation under nominal flow conditions for a generic fission power distribution. The in-core mass flow distribution results for steady-state nominal conditions are in agreement with those expected from the design, providing a first assessment of the coupled 1D/3D model. Results for the nominal condition were compared with those obtained with a previous 1D/3D single-phase model, yielding more realistic temperature patterns and also allowing visualization of low values of void fraction inside the upper plenum. It must be mentioned that the current results were obtained by imposing prescribed fission power functions from the literature. Therefore, the results are presented with the aim of pointing out the potential of the developed model.
Keywords: CFD, PHWR, thermo-hydraulic, two-phase flow.
112 Online Think-Pair-Share in a Third-Age ICT Course
Authors: Daniele Traversaro
Abstract:
Problem: Senior citizens have been facing a challenging reality as a result of strict public health measures designed to protect people from the COVID-19 outbreak. These include the risk of social isolation due to the inability of the elderly to integrate with technology. Never before have Information and Communication Technology (ICT) skills been so essential for their everyday life. Although third-age ICT education and lifelong learning are widely supported by universities and governments, there is a lack of literature on which teaching strategy/methodology to adopt in an entirely online ICT course aimed at third-age learners. This contribution presents an application of the Think-Pair-Share (TPS) learning method in an ICT third-age virtual classroom with an intergenerational approach to conducting online group labs and review activities. Research question: Is collaborative learning suitable and effective, in terms of student engagement and learning outcomes, in an online ICT course for the elderly? Methods: In the TPS strategy, a problem is posed by the teacher, students have time to think about it individually, and then they work in pairs (or small groups) to solve the problem and share their ideas with the entire class. We performed four experiments in the ICT course of the University of the Third Age of Genova (University of Genova, Italy) on the Microsoft Teams platform. The study cohort consisted of 26 students over the age of 45. Data were collected through online questionnaires. Two were administered, one at the end of the first activity and another at the end of the course. They consisted of five and three closed-ended questions, respectively. The answers were on a Likert scale (from 1 to 4), except for two questions (which asked for the number of correct answers given individually and in groups) and a field for free comments/suggestions. Results: Groups achieved better results than individual students (with scores higher by an order of magnitude), and most students found TPS helpful for working in groups and interacting with their peers. Insights: From these early results, it appears that TPS is suitable for an online third-age ICT classroom and useful for promoting discussion and active learning. Despite this, our work has several limitations. First of all, the results highlight the need for more data to perform a statistical analysis in order to determine the effectiveness of this methodology in terms of student engagement and learning outcomes; this is a future direction.
Keywords: Collaborative learning, information technology education, lifelong learning, older adult education, think-pair-share.
111 Early-Warning Lights Classification Management System for Industrial Parks in Taiwan
Authors: Yu-Min Chang, Kuo-Sheng Tsai, Hung-Te Tsai, Chia-Hsin Li
Abstract:
This paper presents the early-warning lights classification management system for industrial parks promoted by the Taiwan Environmental Protection Administration (EPA) since 2011, including the definition of each early-warning light, objectives, the action program and accomplishments. All 151 industrial parks in Taiwan were classified into four early-warning lights, namely red, orange, yellow and green, for carrying out the respective pollution management according to the monitoring data of soil and groundwater quality, regulatory compliance, and regulatory listing as a control site or remediation site. The Taiwan EPA set up a priority list of industrial parks with high pollution potential and investigated their soil and groundwater quality based on the results of the light classification and the pollution potential assessment. In 2011-2013, 44 industrial parks were selected and subjected to different investigations, such as the establishment of early-warning groundwater well networks and pollution investigation/verification for the red- and orange-light industrial parks, and environmental background surveys for the yellow-light industrial parks. Among them, 22 industrial parks were newly or repeatedly confirmed to have pollutant concentrations exceeding the soil or groundwater pollution control standards. Thus, further investigation, groundwater use restriction, listing as a pollution control site or remediation site, and pollutant isolation measures were implemented by the local environmental protection and industry competent authorities, and the early-warning lights of those industrial parks were proposed to be adjusted up to orange or red. Up to the present, the preliminary positive effects of the soil and groundwater quality management system for industrial parks have been noticed in several aspects, such as environmental background information collection, early warning of pollution risk, pollution investigation and control, information integration and application, and inter-agency collaboration. Finally, the work and goal of self-initiated quality management of industrial parks will be carried out on the basis of inter-agency collaboration, through the classified early-warning and management lights system as well as the regular announcement of the status of each industrial park.
Keywords: Industrial park, soil and groundwater quality management, early-warning lights classification, SOP for reporting and treatment of monitored abnormal events.
110 Applying Participatory Design for the Reuse of Deserted Community Spaces
Authors: Wei-Chieh Yeh, Yung-Tang Shen
Abstract:
The concept of community building started in 1994 in Taiwan. After years of development, it fostered the notion of active local resident participation in community issues as co-operators, instead of subordinates. Participatory design gives participants more control in the decision-making process, helps to reduce the friction caused by arguments and assists in bringing different parties to consensus. This results in an increase in the efficiency of projects run in the community. Therefore, the participation of local residents is key to the success of community building. This study applied participatory design to develop plans for the reuse of deserted spaces in the community, from the first stage of brainstorming for design ideas and making creative models to be employed later, through to the final stage of construction. After conducting a series of participatory design activities, it aimed to integrate the different opinions of residents, develop a sense of belonging and reach a consensus. Besides this, it also aimed at building the residents' awareness of their responsibilities for the environment and related issues of sustainable development. By reviewing relevant literature and examining the history of related studies, the study formulated a theory. It took the "2012-2014 Changhua County Community Planner Counseling Program" as a case study to investigate the implementation process of participatory design. Research data were collected by document analysis, participant observation and in-depth interviews. After examining the three elements of "design participation", "construction participation", and "follow-up maintenance participation" in the case, the study reached a promising conclusion: maintenance works were carried out better compared to common public works. Besides this, maintenance costs were lower. Moreover, the works that residents were involved in were more creative. Most importantly, the community characteristics could be easily recognized.
Keywords: Participatory design, Deserted spaces, Community building, Reuse.
109 Computer Models of the Vestibular Head Tilt Response, and Their Relationship to EVestG and Meniere's Disease
Authors: Daniel Heibert, Brian Lithgow, Kerry Hourigan
Abstract:
This paper attempts to explain response components of Electrovestibulography (EVestG) using a computer simulation of a three-canal model of the vestibular system. EVestG is a potentially new diagnostic method for Meniere's disease. EVestG is a variant of Electrocochleography (ECOG), which has been used as a standard method for diagnosing Meniere's disease - it can be used to measure the SP/AP ratio, where an SP/AP ratio greater than 0.4-0.5 is indicative of Meniere's disease. In EVestG, an applied head tilt replaces the acoustic stimulus of ECOG. The EVestG output is also an SP/AP-type plot, where SP is the summing potential and AP is the action potential amplitude. AP is thought of as being proportional to the size of a population of afferents in an excitatory neural firing state. A simulation of the fluid volume displacement in the vestibular labyrinth in response to various types of head tilts (ipsilateral, backwards and horizontal rotation) was performed, and a simple neural model based on these simulations was developed. The simple neural model shows that the change in firing rate of the utricle is much larger in magnitude than the change in firing rates of all three semicircular canals following a head tilt (except in a horizontal rotation). The data suggest that the change in utricular firing rate is at minimum 2-3 orders of magnitude larger than the changes in firing rates of the canals during ipsilateral/backward tilts. Based on these results, the neural response recorded by the electrode in our EVestG recordings is expected to be dominated by the utricle in ipsilateral/backward tilts (it is important to note that the effects of the saccule and efferent signals were not taken into account in this model). If the utricle response dominates the EVestG recordings as the modeling results suggest, then EVestG has the potential to diagnose utricular hair cell damage due to a viral infection (which has been cited as one possible cause of Meniere's disease).
Keywords: Diagnostic, endolymph hydrops, Meniere's disease, modeling.
108 Exploring Socio-Economic Barriers of Green Entrepreneurship in Iran and Their Interactions Using Interpretive Structural Modeling
Authors: Younis Jabarzadeh, Rahim Sarvari, Negar Ahmadi Alghalandis
Abstract:
Entrepreneurship at both the individual and organizational level is one of the main driving forces in economic development and leads to growth and competition, job generation and social development. Especially in developing countries, the role of entrepreneurship in economic and social prosperity is more emphasized. But the effect of global economic development on the environment is undeniable, especially in negative ways, and there is a need to rethink current business models and the way entrepreneurs act when introducing new businesses, in order to address and embed environmental issues and achieve sustainable development. In this paper, green or sustainable entrepreneurship is addressed in Iran to identify the challenges and barriers entrepreneurs in the economic and social sectors face in developing green business solutions. Sustainable or green entrepreneurship has been gaining interest among scholars in recent years, and addressing its challenges and barriers needs much more attention to fill the gap in the literature and facilitate the path those entrepreneurs are pursuing. This research comprised two main phases: qualitative and quantitative. In the qualitative phase, after a thorough literature review, the fuzzy Delphi method was utilized to verify those challenges and barriers by gathering and surveying a panel of experts. In this phase, several other contextually related factors were added to the list of barriers and challenges identified in the literature. Then, in the quantitative phase, Interpretive Structural Modeling was applied to construct a network of interactions among the barriers identified in the previous phase. Again, a panel of subject matter experts comprising academic and industry experts was surveyed. The results of this study can be used by policymakers in both the public and industry sectors to introduce more systematic solutions to eliminate those barriers and help entrepreneurs overcome the challenges of sustainable entrepreneurship. It also contributes to the literature as the first research of this type dealing with the barriers to sustainable entrepreneurship and exploring their interaction.
Keywords: Green entrepreneurship, barriers, Fuzzy Delphi Method, interpretive structural modeling.
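The core ISM computations can be sketched in miniature as below: starting from a binary direct-influence matrix, the reachability matrix is obtained by Boolean transitive closure, and barriers are then partitioned into levels where the reachability set equals its intersection with the antecedent set. The barrier names and relations here are invented, not the study's findings.

```python
# Core ISM steps in miniature (invented barriers and relations).
import numpy as np

barriers = ["financing", "regulation", "awareness", "skills", "market demand"]
A = np.array([            # A[i, j] = 1 if barrier i influences barrier j (invented)
    [0, 0, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
])

# reachability matrix: adjacency + identity, then Boolean transitive closure
R = ((A + np.eye(len(A), dtype=int)) > 0).astype(int)
while True:
    R_next = ((R @ R) > 0).astype(int)
    if np.array_equal(R_next, R):
        break
    R = R_next

# level partitioning: an element is placed at the current level when its
# reachability set equals the intersection with its antecedent set
remaining, level = set(range(len(barriers))), 1
while remaining:
    current = []
    for i in remaining:
        reach = {j for j in remaining if R[i, j]}
        antecedent = {j for j in remaining if R[j, i]}
        if reach == reach & antecedent:
            current.append(i)
    print(f"Level {level}: {[barriers[i] for i in current]}")
    remaining -= set(current)
    level += 1
```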
107 Identifying Game Variables from Students’ Surveys for Prototyping Games for Learning
Authors: N. Ismail, O. Thammajinda, U. Thongpanya
Abstract:
Games-based learning (GBL) has become increasingly important in teaching and learning. This paper explains the first two phases (analysis and design) of a GBL development project, ending with a prototype design based on students’ and teachers’ perceptions. The two phases are part of a full-cycle GBL project aiming to help secondary school students in Thailand in their study of Comprehensive Sex Education (CSE). In the course of the study, we invited 1,152 students to complete questionnaires and interviewed 12 secondary school teachers in focus groups. This paper found that GBL can serve students in their learning about CSE, enabling them to gain an understanding of their sexuality, develop skills, including critical thinking skills, and interact with others (peers, teachers, etc.) in a safe environment. The objectives of this paper are to outline the development of GBL variables from the research question(s) into the developers’ flow chart, to be responsive to the GBL beneficiaries’ preferences and expectations, and to help in answering the research questions. This paper details the steps applied to generate GBL variables that can feed into a game flow chart to develop a GBL prototype. In our approach, we detailed two models: (1) the Game Elements Model (GEM) and (2) the Game Object Model (GOM). There are three outcomes of this research. First, to achieve the objectives and benefits of GBL in learning, game design has to start with the research question(s) and the challenges to be resolved as research outcomes. Second, aligning the educational aims with engaging GBL end users (students) within the data collection phase to inform the game prototype with the game variables is essential to address the answer/solution to the research question(s). Third, for efficient GBL to bridge the gap between pedagogy and technology, and in order to answer the research questions via technology (i.e. GBL) and to minimise the isolation between the pedagogists “P” and technologists “T”, several meetings and discussions need to take place within the team.
Keywords: Games-based learning, design, engagement, pedagogy, preferences, prototype, variables.
106 Contextual SenSe Model: Word Sense Disambiguation Using Sense and Sense Value of Context Surrounding the Target
Authors: Vishal Raj, Noorhan Abbas
Abstract:
Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguities, such as lexical, syntactic, semantic, anaphoric and referential. This study is focused mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words for training and testing. The lemma adds generality and the POS adds word properties to the token. We have designed a method to create an affinity matrix to calculate the affinity between any pair of lemma_POS (a token where the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms to predict the sense of the target word using the affinity/similarity value are devised. Each contextual token contributes to the sense of the target word with some value, and whichever sense receives the highest value becomes the sense of the target word. Thus, contextual tokens play a key role in creating sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and clarity of explanation, in contrast to contemporary deep learning models characterized by intricacy, time-intensive processes, and challenging explication. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that despite the simplicity of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
Keywords: Word Sense Disambiguation, WSD, Contextual SenSe Model, Most Frequent Sense, part of speech, POS, Natural Language Processing, NLP, OOV, out of vocabulary, ELMo, Embeddings from Language Model, BERT, Bidirectional Encoder Representations from Transformers, Word2Vec, lemma_POS, Algorithm.
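A greatly simplified sketch of the prediction idea follows: co-occurrence counts between lemma_POS tokens serve as the affinity values, each candidate sense of the target accumulates the affinity of the surrounding tokens with its cluster, and the sense with the highest total is chosen. The clustering hierarchy and the three prediction mechanisms of the paper are not reproduced; the tokens and senses below are invented.

```python
# Greatly simplified illustration of the CSM idea (not the authors' code):
# co-occurrence-based affinities and sense clusters for a single target word.
from collections import defaultdict
from itertools import product

training = [                                   # invented sense-tagged contexts
    (["deposit_NOUN", "money_NOUN"], "bank%finance"),
    (["loan_NOUN", "interest_NOUN"], "bank%finance"),
    (["river_NOUN", "fish_NOUN"], "bank%river"),
    (["water_NOUN", "erode_VERB"], "bank%river"),
]

affinity = defaultdict(int)     # how often two lemma_POS tokens co-occur around the target
cluster = defaultdict(set)      # sense -> tokens seen with that sense
for tokens, sense in training:
    cluster[sense].update(tokens)
    for a, b in product(tokens, repeat=2):
        if a != b:
            affinity[(a, b)] += 1

def predict(context_tokens):
    """Each context token contributes its affinity with a sense's cluster tokens;
    the sense with the highest accumulated value is returned."""
    scores = {s: sum(affinity[(tok, member)]
                     for tok in context_tokens for member in members)
              for s, members in cluster.items()}
    return max(scores, key=scores.get), scores

print(predict(["money_NOUN", "loan_NOUN", "interest_NOUN"]))
```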
105 A Framework for an Automated Decision Support System for Selecting Safety-Conscious Contractors
Authors: Rawan A. Abdelrazeq, Ahmed M. Khalafallah, Nabil A. Kartam
Abstract:
Selection of competent contractors for construction projects is usually accomplished through competitive bidding or negotiated contracting, in which the contract bid price is the basic criterion for selection. The evaluation of a contractor's safety performance is still not a typical criterion in the selection process, despite the existence of various safety prequalification procedures. There is a critical need for practical and automated systems that enable owners and decision makers to evaluate contractor safety performance, among other important contractor selection criteria. These systems should ultimately favor safety-conscious contractors, selected by virtue of their good past safety records and current safety programs. This paper presents an exploratory sequential mixed-methods approach to develop a framework for an automated decision support system that evaluates contractor safety performance based on a multitude of indicators and metrics identified through a comprehensive review of construction safety research and a survey distributed to domain experts. The framework is developed in three phases: (1) determining the indicators that depict a contractor's current and past safety performance; (2) soliciting input from construction safety experts regarding the identified indicators, their metrics, and relative significance; and (3) designing a decision support system using relational database models to integrate the identified indicators and metrics into a system that assesses and rates the safety performance of contractors. The proposed automated system is expected to hold several advantages, including: (1) reducing the likelihood of selecting contractors with poor safety records; (2) enhancing the odds of completing the project safely; and (3) encouraging contractors to exert more effort to improve their safety performance and practices in order to increase their bid-winning opportunities, which can lead to significant safety improvements in the construction industry. This should prove useful to decision makers and researchers alike, and should help improve the safety record of the construction industry.
Keywords: Construction safety, contractor selection, decision support system, relational database.
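A minimal relational sketch of the rating idea is given below: tables for contractors, safety indicators with relative-significance weights, and per-contractor metric scores, combined into a weighted safety rating by a single query. The schema, indicator names and numbers are illustrative only and are not the framework's actual indicators or weights.

```python
# Minimal relational sketch of a weighted contractor safety rating (SQLite).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE contractor (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE indicator  (id INTEGER PRIMARY KEY, name TEXT, weight REAL);
CREATE TABLE score      (contractor_id INTEGER REFERENCES contractor(id),
                         indicator_id  INTEGER REFERENCES indicator(id),
                         value REAL);            -- normalized 0..100
""")
con.executemany("INSERT INTO contractor VALUES (?, ?)",
                [(1, "Alpha Co."), (2, "Beta Ltd.")])
con.executemany("INSERT INTO indicator VALUES (?, ?, ?)",
                [(1, "Past incident record",  0.40),
                 (2, "Current safety program", 0.35),
                 (3, "Safety training hours",  0.25)])
con.executemany("INSERT INTO score VALUES (?, ?, ?)",
                [(1, 1, 82), (1, 2, 90), (1, 3, 70),
                 (2, 1, 60), (2, 2, 75), (2, 3, 95)])

rows = con.execute("""
SELECT c.name, SUM(s.value * i.weight) / SUM(i.weight) AS safety_rating
FROM score s
JOIN contractor c ON c.id = s.contractor_id
JOIN indicator  i ON i.id = s.indicator_id
GROUP BY c.id
ORDER BY safety_rating DESC;
""").fetchall()
for name, rating in rows:
    print(f"{name}: {rating:.1f}")
```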
104 Motor Coordination and Body Mass Index in Primary School Children
Authors: Ingrid Ruzbarska, Martin Zvonar, Piotr Oleśniewicz, Julita Markiewicz-Patkowska, Krzysztof Widawski, Daniel Puciato
Abstract:
Obese children will probably become obese adults, consequently exposed to an increased risk of comorbidity and premature mortality. Body weight may be indirectly determined by the continuous development of coordination and motor skills. The level of motor skills and abilities is an important factor that promotes physical activity from early childhood. The aim of the study is to thoroughly understand the internal relations between motor coordination abilities and the somatic development of prepubertal children and to determine the effect of excess body weight on motor coordination by comparing the motor ability levels of children with different body mass index (BMI) values. The data were collected from 436 children aged 7–10 years, without health limitations, fully participating in school physical education classes. Body height was measured with portable stadiometers (Harpenden, Holtain Ltd.), and body mass with a digital scale (HN-286, Omron). Motor coordination was evaluated with the Kiphard-Schilling body coordination test, Körperkoordinationstest für Kinder. The Shapiro-Wilk test was used to verify the normality of the data distribution. The correlation analysis revealed a statistically significant negative association between dynamic balance and BMI, as well as between the motor quotient and BMI (p<0.01), for both boys and girls. The results showed no effect of gender on the difference in the observed trends. The analysis of variance proved statistically significant differences between normal-weight children and their overweight or obese counterparts. Coordination abilities probably play an important role in preventing or moderating the negative trajectory leading to childhood overweight and obesity. At this age, the development of coordination abilities should become a key strategy, targeted at long-term prevention of obesity and the promotion of an active lifestyle in adulthood. Motor performance is essential for implementing a healthy lifestyle already in childhood. Physical inactivity apparently results in motor deficits and a sedentary lifestyle in children, which may be accompanied by excess energy intake and overweight.
Keywords: Childhood, KTK test, Physical education, Psychomotor competence.
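The two analyses named in the abstract can be sketched as below on synthetic data: a Pearson correlation between BMI and the motor quotient, and a one-way ANOVA comparing motor quotients across BMI categories. The BMI cut-offs used here are simplified stand-ins; real studies use age- and sex-specific BMI references.

```python
# Sketch of the correlation and ANOVA steps on synthetic data
# (BMI cut-offs simplified for illustration only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 436
bmi = rng.normal(17.5, 2.5, n)
motor_quotient = 100 - 2.0 * (bmi - 17.5) + rng.normal(0, 8, n)  # built-in negative link

r, p = stats.pearsonr(bmi, motor_quotient)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

groups = np.select([bmi < 18.5, bmi < 21.0], ["normal", "overweight"], default="obese")
samples = [motor_quotient[groups == g] for g in ("normal", "overweight", "obese")]
f_stat, p_anova = stats.f_oneway(*samples)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```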
103 Computer Modeling and Plant-Wide Dynamic Simulation for Industrial Flare Minimization
Authors: Sujing Wang, Song Wang, Jian Zhang, Qiang Xu
Abstract:
Flaring emissions during abnormal operating conditions, such as plant start-ups, shut-downs, and upsets, in the chemical process industries (CPI) are usually significant. Flare minimization can help CPI plants save raw material and energy and improve local environmental sustainability. In this paper, a systematic methodology based on plant-wide dynamic simulation is presented for CPI plant flare minimization under abnormal operating conditions. Since off-specification emission sources are inevitable during abnormal operating conditions, to significantly reduce flaring emissions in a CPI plant, these streams must be either recycled to the upstream process for online reuse, or stored somewhere temporarily for future reprocessing once the plant returns to stable operation. Thus, the off-spec products can be reused instead of being flared. This can be achieved through the identification of viable design and operational strategies during normal and abnormal operations through plant-wide dynamic scheduling, simulation, and optimization. The proposed study includes three stages of simulation work: (i) developing and validating a steady-state model of a CPI plant; (ii) transferring the obtained steady-state plant model to the dynamic modeling environment, and refining and validating the plant dynamic model; and (iii) developing flare minimization strategies for abnormal operating conditions of a CPI plant via the validated plant-wide dynamic model. This cost-effective methodology has two main merits: (i) employing large-scale dynamic modeling and simulations for industrial flare minimization, which involves various unit models for modeling hundreds of CPI plant facilities; and (ii) dealing with critical abnormal operating conditions of CPI plants such as plant start-up and shut-down. Two virtual case studies on flare minimization for the start-up operation (over 50% emission savings) and shut-down operation (over 70% emission savings) of an ethylene plant demonstrate the efficacy of the proposed study.
Keywords: Flare minimization, large-scale modeling and simulation, plant shut-down, plant start-up.
102 Heterogeneous-Resolution and Multi-Source Terrain Builder for CesiumJS WebGL Virtual Globe
Authors: Umberto Di Staso, Marco Soave, Alessio Giori, Federico Prandi, Raffaele De Amicis
Abstract:
The increasing availability of information about earth surface elevation (Digital Elevation Models, DEMs) generated from different sources (remote sensing, aerial images, LiDAR) poses the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to the high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the earth. Hence the need to merge large-scale height maps, which are typically made available for free at a worldwide level, with very specific high-resolution datasets. On the other hand, the third dimension increases the user experience and the data representation quality, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environment monitoring, etc. The open-source 3D virtual globes, which are a trending topic in Geovisual Analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. Typically, 3D virtual globes do not offer an open-source tool that allows the generation of a terrain elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called "Terrain Builder". This tool is able to merge heterogeneous-resolution datasets and to provide multi-resolution worldwide terrain services fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-in.
Keywords: Terrain builder, WebGL, virtual globe, CesiumJS, tiled map service, TMS, height-map, regular grid, Geovisual analytics, DTM.
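The core merge idea can be shown in miniature as below: each source DEM is resampled onto the target tile grid and, cell by cell, the value from the finest-resolution source covering that cell is kept. A real terrain builder would additionally encode the result as a CesiumJS-compatible tile pyramid; the grids and elevations here are synthetic.

```python
# Priority merge of a worldwide coarse DEM with a local high-resolution DEM
# on a single tile grid (synthetic data, illustration only).
import numpy as np

tile_size = 64
gx, gy = np.meshgrid(np.linspace(0, 1, tile_size), np.linspace(0, 1, tile_size))

# worldwide low-resolution layer: covers everything
coarse = 200 + 30 * gx + 20 * gy

# local high-resolution survey: covers only a sub-area, NaN elsewhere
fine = np.full_like(coarse, np.nan)
mask = (gx > 0.3) & (gx < 0.7) & (gy > 0.3) & (gy < 0.7)
fine[mask] = coarse[mask] + 5 * np.sin(40 * gx[mask])   # adds local detail

# finest source wins wherever it has data
merged = np.where(np.isnan(fine), coarse, fine)
print("cells taken from the high-resolution source:", int(mask.sum()), "of", merged.size)
```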
101 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis
Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone
Abstract:
The use of a radiant cooling solution would make it possible to lower cooling needs, which is of great interest when the demand is initially high (hot climates). However, radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew point temperature. A radiant cooling system combined with a dehumidification system would enable humidity to be removed from the space, thereby lowering the dew point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excessive heat and moisture. This work aims at providing an estimation of the specification requirements of such a system in terms of the cooling power and dehumidification rate required to fulfill comfort requirements and to prevent any condensation risk on the cool panel surface. The present paper develops a preliminary study on the specification requirements, performance and behavior of a combined dehumidifier/cooling ceiling panel for different operating conditions. This study has been carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of the dynamic modeling of heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification and dehumidification system so that the room temperature is always maintained between 21 °C and 25 °C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat recovery heat exchanger and another heat exchanger connected to a heat sink. Main results show that the system should be designed to meet a cooling power of 42 W·m−2 and a desiccant rate of 45 g H2O·h−1. Subsequently, a parametric study of comfort and system performance has been carried out on a more realistic system (that includes a chilled ceiling) under different operating conditions. It enables an estimation of an acceptable range of operating conditions. This preliminary study is intended to provide useful information for the system design.
Keywords: Dehumidification, nodal calculation, radiant cooling panel, system sizing.
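The condensation-risk check that drives this specification can be sketched as below: the dew point of the room air is computed from temperature and relative humidity with a Magnus-type formula and compared with the chilled-panel surface temperature. The panel temperature and air states are illustrative, not values from the study.

```python
# Condensation-risk check for a chilled ceiling panel (illustrative numbers).
import math

def dew_point_c(temp_c, rh_percent):
    """Dew point [deg C] from air temperature and relative humidity (Magnus formula)."""
    a, b = 17.625, 243.04
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

panel_surface_c = 17.0                      # assumed chilled ceiling surface temperature
for temp_c, rh in [(25.0, 40.0), (25.0, 60.0), (25.0, 75.0)]:
    td = dew_point_c(temp_c, rh)
    risk = "CONDENSATION RISK" if panel_surface_c <= td else "ok"
    print(f"air {temp_c:.0f} degC / {rh:.0f}% RH -> dew point {td:5.1f} degC  [{risk}]")
```

With these numbers, keeping the relative humidity in the 40-60% range holds the dew point below a 17 °C panel surface, which is exactly why the dehumidification rate appears in the specification alongside the cooling power.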
100 Demonstration of Land Use Changes Simulation Using Urban Climate Model
Authors: Barbara Vojvodikova, Katerina Jupova, Iva Ticha
Abstract:
Cities in their historical evolution have always adapted their internal structure to the needs of society (for example, protective city walls lost their defensive function during the classicism era, became unnecessary, were demolished and gave space to new features such as roads, museums or parks). Today it is necessary to modify the internal structure of the city in order to minimize the impact of climate change on the living environment of the population. This article discusses results obtained with the Urban Climate model owned by VITO, applied as part of a project under the European Union's Horizon grant agreement No 730004, Pan-European Urban Climate Services (Climate-Fit City). The model was used to assess changes in land use and land cover in cities in relation to urban heat islands (UHI). The task of the application was to evaluate possible land use change scenarios in connection with city requirements and ideas. Two pilot areas in the Czech Republic were selected: Ostrava and Hodonín. The paper demonstrates the application of the model for various possible future development scenarios and assesses their suitability or unsuitability depending on the resulting temperature increase. Cities preparing to reconstruct public space are interested in eliminating, as early as in the assignment phase, proposals that would lead to an increase in temperature stress. If they have an evaluation showing that some type of design is unsuitable, they can exclude it in the proposal phase. Therefore, especially in the application of the model at the local level, at 1 m spatial resolution, it was necessary to show which types of proposal would create a significant temperature island once implemented; such proposals are considered unsuitable. The model shows that a building itself can create a shaded place and thus contribute to the reduction of the UHI. If the design sensitively protects the existing greenery, new construction may not pose a significant problem. More massive interventions that reduce existing greenery, however, create new heat island areas.
Keywords: Heat islands, land use, urban climate model.
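The UrbClim-based workflow used in the project is not detailed in the abstract; assuming the model's scenario runs can be exported as air-temperature grids, a comparison of a baseline and a redesign scenario could be sketched as below. The array values and the indicator choice (change in the spatial mean and the largest local warming) are illustrative assumptions only.

```python
import numpy as np

def uhi_scenario_delta(baseline, scenario):
    """Compare two modelled air-temperature grids [degC] of identical shape
    (e.g. 1 m resolution output for a baseline and a redesign scenario).
    Returns the change in the spatial mean and the largest local warming."""
    baseline = np.asarray(baseline, dtype=float)
    scenario = np.asarray(scenario, dtype=float)
    diff = scenario - baseline
    return float(diff.mean()), float(diff.max())

# Toy example: replacing a green patch with paving warms part of the grid.
base = np.full((4, 4), 28.0)
scen = base.copy()
scen[1:3, 1:3] += 2.5                      # hypothetical local warming
mean_delta, worst_delta = uhi_scenario_delta(base, scen)
```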
99 Improved Computational Efficiency of Machine Learning Algorithms Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country. The purpose of this research is to develop a predictive machine learning (ML) model that could forecast COVID-19 cases within the UK. This study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID-19 cases registered, new cases encountered on a daily basis, total deaths registered, and daily deaths due to Coronavirus is collected from the World Health Organization (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data are split in an 8:2 ratio for training and testing purposes to forecast future new COVID-19 cases. Support Vector Machine (SVM), Random Forest (RF), and Linear Regression (LR) algorithms are chosen to study model performance in the prediction of new COVID-19 cases. The statistical performance of the models in predicting new COVID-19 cases is evaluated using metrics such as the r-squared value and the mean squared error. RF outperformed the other two ML algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n = 30. The mean squared error obtained for RF is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the RF algorithm can perform more effectively and efficiently in predicting new COVID-19 cases, which could help the health sector take relevant control measures against the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest.
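The paper's exact feature construction is not spelled out in the abstract; a hedged sketch of the RF setup it describes (a lag window of n = 30 days, an 8:2 chronological split, R² and MSE as metrics) might look like the following, run here on synthetic case counts rather than the WHO data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical daily new-case series; the study uses WHO data for the UK
# (31 Jan 2020 - 31 Mar 2021), which is not reproduced here.
rng = np.random.default_rng(0)
new_cases = np.abs(rng.normal(20000, 5000, size=400)).astype(int)

# Supervised framing: predict today's new cases from the previous n days.
n = 30                                       # lag window, as in the abstract's n = 30
X = np.array([new_cases[i - n:i] for i in range(n, len(new_cases))])
y = new_cases[n:]

split = int(0.8 * len(X))                    # 8:2 chronological train/test split
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("R^2:", r2_score(y_test, pred))
print("MSE:", mean_squared_error(y_test, pred))
```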
98 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving
Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries
Abstract:
Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for manufacturers of weaving machines. Current technological developments aim at low energy costs, low environmental impact, high productivity, and constant product quality. The high energy consumption of the method can be ascribed to its large demand for compressed air. An energy efficiency method is applied to air jet weaving technology. This method identifies and classifies the main relevant energy consumers and processes from the exergy point of view, and it leads to the identification of energy efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is considered the central requirement to be satisfied. The initial phase of the method consists of an analysis of the state of the art of the main weft insertion components in order to prioritize the most energy-demanding components and processes. The identified major components are then investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles within the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools such as FEM analysis, CFD simulation models and experimental analysis are used in order to develop a more energy-efficient design of the components involved in the filling insertion. A new concept for the metal strip of the profiled reed is developed. The developed metal strip allows a reduction of the machine's energy consumption. Based on a parametric and aerodynamic study, the designed reed transmits higher values of flow power to the filling yarn. The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.
Keywords: Air jet weaving, aerodynamic simulation, energy efficiency, experimental measurements, power costs, weft insertion.
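The paper's CFD and FEM work on the relay nozzles and profiled reed cannot be reproduced from the abstract; as a back-of-envelope companion, the sketch below estimates the lower-bound (isothermal) compressor power behind a given free-air consumption, a common first step when quantifying the cost of compressed air. The flow and pressure figures are illustrative assumptions, not measurements from the study.

```python
import math

def isothermal_compressor_power(free_air_flow_m3_min, gauge_pressure_bar,
                                p_atm_pa=101325.0):
    """Lower-bound (isothermal) power [W] needed to compress a free-air flow
    [m^3/min at atmospheric conditions] to the given gauge pressure [bar].
    Real compressors need considerably more because of non-ideal efficiency."""
    q = free_air_flow_m3_min / 60.0                  # m^3/s of free air
    p_abs = p_atm_pa + gauge_pressure_bar * 1e5      # absolute pressure [Pa]
    return p_atm_pa * q * math.log(p_abs / p_atm_pa)

# Hypothetical figures for one air-jet loom (illustrative only):
power_w = isothermal_compressor_power(free_air_flow_m3_min=3.0,
                                      gauge_pressure_bar=5.0)  # roughly 9 kW
```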
97 Material Concepts and Processing Methods for Electrical Insulation
Authors: R. Sekula
Abstract:
Epoxy composites are broadly used as electrical insulation for high-voltage applications, since only such materials can fulfill the particular mechanical, thermal, and dielectric requirements. However, the properties of the final product are strongly dependent on a proper manufacturing process with minimized material failures such as excessive shrinkage, voids and cracks. Therefore, the application of proper materials (epoxy, hardener, and filler) and process parameters (mold temperature, filling time, filling velocity, initial temperature of internal parts, gelation time), as well as design and geometric parameters, are essential for the final quality of the produced components. In this paper, an approach for three-dimensional modeling of all molding stages, namely filling, curing and post-curing, is presented. The reactive molding simulation tool is based on a commercial CFD package and includes dedicated models describing viscosity and reaction kinetics that have been successfully implemented to simulate the reactive nature of the system with its exothermic effect. A dedicated simulation procedure for stress and shrinkage calculations, as well as simulation results, is also presented in the paper. The second part of the paper is dedicated to recent developments in formulations of functional composites for electrical insulation applications, focusing on thermally conductive materials. Concepts based on filler modifications for epoxy electrical composites are presented, including the resulting properties. Finally, having in mind tough environmental regulations, in addition to current process and design aspects, an approach for product re-design is presented, focusing on the replacement of the epoxy material with a thermoplastic one. Such a “design-for-recycling” method is one of the new directions in the development of new material and processing concepts for electrical products and brings many additional research challenges. To illustrate the presented methodology, one of the successful products is described.
Keywords: Curing, epoxy insulation, numerical simulations, recycling.
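The commercial reactive-molding tool described in the abstract is not shown; as a hedged sketch of the kind of reaction-kinetics model such tools embed, the snippet below integrates a Kamal-type cure-rate expression at a constant mold temperature. The kinetic parameters are invented for illustration and do not represent any real epoxy formulation.

```python
import math

R = 8.314  # universal gas constant [J/(mol*K)]

def kamal_rate(alpha, temp_k, a1, e1, a2, e2, m, n):
    """Kamal-type cure rate d(alpha)/dt for degree of cure alpha at temp_k,
    with Arrhenius rate constants k1 and k2."""
    k1 = a1 * math.exp(-e1 / (R * temp_k))
    k2 = a2 * math.exp(-e2 / (R * temp_k))
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

def cure_profile(temp_k, t_end_s, dt=1.0, **params):
    """Explicit-Euler integration of the cure degree at constant mold temperature."""
    alpha, t, history = 1e-4, 0.0, []
    while t < t_end_s and alpha < 0.99:
        alpha += kamal_rate(alpha, temp_k, **params) * dt
        t += dt
        history.append((t, alpha))
    return history

# Illustrative kinetic parameters (not a real epoxy system), mold at 140 degC:
demo = cure_profile(temp_k=413.15, t_end_s=3600,
                    a1=1e5, e1=70e3, a2=1e6, e2=60e3, m=0.5, n=1.5)
```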
96 Fetal and Infant Mortality in Botucatu City, São Paulo State, Brazil: Evaluation of Maternal-Infant Health Care
Authors: Noda L. M., Salvador I. C., Parada C. M. L. G., Fonseca C. R. B.
Abstract:
In Brazil, the neonatal mortality rate is considered incompatible with the country's development conditions and has been a public health concern. Reduction in infant mortality rates has also been part of the Millennium Development Goals, a commitment made by member countries of the United Nations (UN), including Brazil. The fetal mortality rate is considered a highly sensitive indicator of health care quality. Suitable actions, such as good quality of and access to health services, may contribute positively to a reduction in these fetal and neonatal rates. With appropriate antenatal follow-up and health care during gestation and delivery, some causes of death could be reduced or even prevented by means of early diagnosis, intervention and changes in risk factors. Objectives: To study the quality of maternal and infant health care based on fetal and neonatal mortality, as well as the possible actions to prevent those deaths, in Botucatu (Brazil). Methods: Classification of preventability according to the International Classification of Diseases and the modified Wigglesworth classification. In order to evaluate adequacy, indicators of the quality of antenatal and delivery care were established by the authors. Results: Considering fetal deaths, 56.7% occurred before delivery, which reveals possible shortcomings in antenatal care, and 38.2% resulted from intra-labor changes, which could be prevented or reduced by adequate obstetric management. These findings differed from those in the group of early neonatal deaths, which was also studied. The evaluation of health services showed that antenatal and childbirth care was appropriate for 24% and 33.3% of pregnant women, respectively, which corroborates the preventability results. These results revealed that shortcomings in obstetric and antenatal care could be the causes of the deaths in the study. Early and late neonatal deaths have similar characteristics: 76% could be prevented or reduced mainly by adequate newborn care (52.9%) and adequate health care for pregnant women (11.7%). When the adequacy of care was evaluated, childbirth and newborn care was adequate in 25.8% of cases and antenatal care in 16.1%. In conclusion, a direct relationship was found between the adequacy and quality of care provided to pregnant women and newborns and fetal and infant mortality. Moreover, our findings highlight that deaths could be prevented by adequate obstetric and neonatal management.
Keywords: Fetal Mortality, Infant Mortality, Maternal-Child Health Services, Program Evaluation.
95 A Multi-Criteria Decision Method for the Recruitment of Academic Personnel Based on the Analytical Hierarchy Process and the Delphi Method in a Neutrosophic Environment
Authors: Antonios Paraskevas, Michael Madas
Abstract:
For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and to innovative, high-quality teaching and learning practices. In this vein, the appropriate selection of academic staff is a very important factor in the competitiveness, efficiency and reputation of an academic institution. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision makers could utilize our approach in order to reach an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable given the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision-maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model stands out within the related literature as one of the few studies to employ the N-DM in the context of academic staff selection. As a case study, we apply our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous work and thus addressing the inherent ineffectiveness that becomes apparent in traditional multi-criteria decision-making methods when dealing with such situations. As a further result, we show that our method demonstrates greater applicability and reliability when compared to other decision models.
Keywords: Analytical Hierarchy Process, Delphi Method, Multi-criteria decision making methods, neutrosophic set theory, personnel recruitment.
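The neutrosophic extensions (N-AHP and N-DM) replace crisp pairwise judgments with truth/indeterminacy/falsity triples and weight each decision-maker, which the abstract does not detail; the sketch below therefore only shows the crisp AHP core (geometric-mean priorities and a consistency ratio) on which such extensions are built, with made-up comparison values for three hypothetical selection criteria.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority vector of a crisp AHP pairwise-comparison matrix
    (geometric-mean method) and its consistency ratio."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    w = np.prod(a, axis=1) ** (1.0 / n)      # row geometric means
    w /= w.sum()
    lam_max = (a @ w / w).mean()             # principal eigenvalue estimate
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    cr = 0.0 if ri == 0 else (lam_max - n) / (n - 1) / ri
    return w, cr

# Illustrative pairwise comparisons of three selection criteria
# (e.g. research record, teaching quality, service); values are made up.
matrix = [[1,   3,   5],
          [1/3, 1,   2],
          [1/5, 1/2, 1]]
weights, consistency_ratio = ahp_priorities(matrix)  # CR well below 0.1 here
```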
94 Education Quality Development for Excellence Performance with Higher Education by Using COBIT 5
Authors: Kemkanit Sanyanunthana
Abstract:
The purpose of this research is to study the information technology management systems that support education at five private universities in Thailand, based on case studies of institutions that have been developing their quality and standards of management and education through the provision of information technology services in support of performance excellence. The concept of connecting information technology with a suitable system has been created by information technology administrators so that it can be used throughout the organization to obtain the utmost benefit from all resources. Hence, the researcher, who has been performing these duties within higher education, is interested in conducting this research by applying the Control Objectives for Information and Related Technology (COBIT 5) framework together with the Malcolm Baldrige National Quality Award (MBNQA) of the United States, a national award that applies the concept of Total Quality Management (TQM) to organizational evaluation. This evaluation, called the Education Criteria for Performance Excellence (EdPEx), focuses on studying and comparing education quality development for excellent performance using COBIT 5 in terms of information technology, in order to study the problems and obstacles of the investigation process for an information technology system, which is considered an instrument to drive organizations toward excellent information technology performance, and to serve as a model for evaluating and analyzing whether the process is in accordance with the universities' strategic plans for information technology. This research is conducted in the form of descriptive and survey research based on the case studies. Data collection was carried out by means of questionnaires administered to administrators working in the information technology field, with research documents related to change management as the main supporting material. The research concludes that performance based on the APO (Align, Plan and Organise) domain of the COBIT 5 framework, which emphasizes concordant governance and management of strategic plans for the organization, could reach only 95%. This might be because of restrictions such as organizational culture; therefore, the researcher has studied and analyzed the management of information technology in the universities as a whole, under their organizational structures, in order to reach performance in accordance with the overall APO domain. This would allow the determined strategic plans to be developed on the basis of excellent information technology performance and the risk management system to be applied at the organizational level in every performance process, improving the effectiveness of information technology resource management so as to obtain the utmost benefits.
Keywords: COBIT 5, APO, EdPEx Criteria, MBNQA.