Search results for: computer assisted classification
100 Correlation between the Levels of Some Inflammatory Cytokines/Haematological Parameters and Khorana Scores of Newly Diagnosed Ambulatory Cancer Patients
Authors: Angela O. Ugwu, Sunday Ocheni
Abstract:
Background: Cancer-associated thrombosis (CAT) is a cause of morbidity and mortality among cancer patients. Several risk factors for developing venous thromboembolism (VTE), such as chemotherapy and immobilization, coexist in cancer patients, contributing to their higher risk of VTE compared to non-cancer patients. This study aimed to determine whether there is any correlation between the levels of some inflammatory cytokines/haematological parameters and the Khorana scores of newly diagnosed chemotherapy-naïve ambulatory cancer patients (CNACP). Methods: This was a cross-sectional analytical study carried out from June 2021 to May 2022. Eligible newly diagnosed cancer patients aged 18 years and above (case group) were enrolled consecutively from the adult Oncology Clinics of the University of Nigeria Teaching Hospital, Ituku/Ozalla (UNTH). The control group comprised blood donors at the UNTH Ituku/Ozalla, Enugu blood bank and healthy members of the Medical and Dental Consultants Association of Nigeria (MDCAN), UNTH Chapter. Blood samples collected from the participants were assayed for IL-6, TNF-α, and haematological parameters such as haemoglobin, white blood cell count (WBC), and platelet count. Data were entered into an Excel worksheet and analyzed using Statistical Package for Social Sciences (SPSS) software, version 21.0 for Windows. A P value of < 0.05 was considered statistically significant. Results: A total of 200 participants (100 cases and 100 controls) were included in the study. The overall mean age of the participants was 47.42 ± 15.1 years (range 20-76). The sociodemographic characteristics of the two groups, including age, sex, educational level, body mass index (BMI), and occupation, were similar (P > 0.05). On one-way ANOVA, there were significant differences between the mean levels of interleukin-6 (IL-6) (p = 0.036) and tumor necrosis factor-α (TNF-α) (p = 0.001) across the three Khorana score groups of the case group. 
Pearson’s correlation analysis showed a significant positive correlation between the Khorana scores and IL-6 (r = 0.28, p = 0.031), TNF-α (r = 0.254, p = 0.011), and the platelet-to-lymphocyte ratio (PLR) (r = 0.240, p = 0.016). The mean serum level of IL-6 was significantly higher in CNACP than in the healthy controls [8.98 (8-12) pg/ml vs. 8.43 (2-10) pg/ml, P = 0.0005]. There were also significant differences in the mean haemoglobin (Hb) level (P < 0.001), white blood cell (WBC) count (P < 0.001), and platelet (PL) count (P = 0.005) between the two groups of participants. Conclusion: There is a significant positive correlation between the serum levels of IL-6, TNF-α, and PLR and the Khorana scores of CNACP. The mean serum levels of IL-6, TNF-α, PLR, WBC, and PL count were significantly higher in CNACP than in the healthy controls. Ambulatory cancer patients with high-risk Khorana scores may benefit from anti-inflammatory drugs because of the positive correlation with inflammatory cytokines. Recommendations: Ambulatory cancer patients with Khorana scores of 2 or more may benefit from thromboprophylaxis. A multicenter study with a heterogeneous population and a larger sample size is recommended to further elucidate the relationship between IL-6, TNF-α, PLR, and the Khorana scores among cancer patients in the Nigerian population.
Keywords: thromboprophylaxis, cancer, Khorana scores, inflammatory cytokines, haematological parameters
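The correlations reported above are Pearson product-moment coefficients. A minimal sketch of the computation, using made-up illustrative numbers rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [xi - mx for xi in x]
    dy = [yi - my for yi in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = math.sqrt(sum(a * a for a in dx) * sum(b * b for b in dy))
    return num / den

# Hypothetical illustration only -- not the study's data
khorana = [0, 1, 1, 2, 2, 3]           # Khorana risk scores
il6 = [7.9, 8.2, 8.5, 9.1, 9.4, 10.0]  # serum IL-6, pg/ml
r = pearson_r(khorana, il6)
print(round(r, 3))  # → 0.975
```

A positive r of this kind is what the abstract describes between Khorana scores and IL-6, TNF-α, and PLR, though with far smaller magnitudes (r ≈ 0.24-0.28) in the real sample.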
Procedia PDF Downloads 82
99 Technology and the Need for Integration in Public Education
Authors: Eric Morettin
Abstract:
Cybersecurity and digital literacy are pressing issues among Canadian citizens, yet formal education does not provide today’s students with the knowledge and skills needed to adapt to these challenges within the physical and digital labor market. Canada’s current education systems do not highlight the importance of these fields, aside from using technology for learning management systems and alternative methods of assignment completion. Educators are not properly trained to integrate technology into the compulsory courses of public education so as to better prepare their learners for these topics and Canada’s digital economy. ICTC addresses these gaps in education and training through cross-Canadian educational programming in digital literacy and competency, cybersecurity, and coding, which is bridged with Canada’s provincially regulated K-12 curriculum guidelines. An analysis of Canada’s provincial education reveals gaps in learning related to technology, as well as inconsistent educational outcomes that do not adequately represent the current Canadian and global economies. Presently, only New Brunswick, Nova Scotia, Ontario, and British Columbia offer curriculum guidelines for cybersecurity, computer programming, and digital literacy; the remaining provinces do not address these skills in their curriculum guidelines. Moreover, certain courses in some provinces have not been updated since the 1990s. The three territories draw curriculum strands from other provinces and use them as their foundation in education: Yukon uses the British Columbia curriculum in its entirety, while the Northwest Territories and Nunavut each use a hybrid of the Alberta and Saskatchewan curricula as their foundation of learning. 
Provincially regulated education does not allow for consistency in educational outcomes across the country, or in what Canada’s students will achieve, especially when curriculum outcomes have not been updated to reflect present-day society. ICTC has therefore aligned Canada’s provincially regulated curricula and created opportunities for focused education in technology to better serve Canada’s present learners and teachers, while addressing inequalities and applicability within curriculum strands and outcomes across the country. As a result, lessons, units, and formal assessment strategies have been created to benefit students and teachers in this interdisciplinary, cross-curricular practice, as well as meeting compulsory education requirements and developing skills and literacy in cyber education. Teachers can access these lessons and units through ICTC’s website, and can receive professional development regarding the assessment and implementation of these offerings from ICTC’s education coordinators, whose combined experience exceeds 50 years of teaching in public, private, international, and Indigenous schools. We encourage you to take this opportunity, which will benefit students and educators and will bridge the learning and curriculum gaps in Canadian education to better reflect the ever-changing public, social, and career landscape that all citizens are a part of. Students are the future, and we at ICTC strive to ensure their futures are bright and prosperous.
Keywords: cybersecurity, education, curriculum, teachers
Procedia PDF Downloads 82
98 Impact of Pharmacist-Led Care on Glycaemic Control in Patients with Type 2 Diabetes: A Randomised-Controlled Trial
Authors: Emmanuel A. David, Rebecca O. Soremekun, Roseline I. Aderemi-Williams
Abstract:
Background: The complexities involved in the management of diabetes mellitus require multi-dimensional, multi-professional, collaborative, and continuous care by health care providers, and substantial self-care by patients, in order to achieve desired treatment outcomes. The effect of pharmacists’ care in the management of diabetes in resource-endowed nations is well documented in the literature, but randomised-controlled assessment of the impact of pharmacist-led care among patients with diabetes in resource-limited settings like Nigeria and other sub-Saharan African countries is scarce. Objective: To evaluate the impact of pharmacist-led care on glycaemic control in patients with uncontrolled type 2 diabetes, using a randomised-controlled study design. Methods: This study employed a prospective randomised-controlled design to assess the impact of pharmacist-led care on the glycaemic control of 108 poorly controlled type 2 diabetes patients. A total of 200 clinically diagnosed type 2 diabetes patients were purposively selected using fasting blood glucose ≥ 7 mmol/L and tested for long-term glucose control using the glycated haemoglobin measure. One hundred and eight (108) patients with glycated haemoglobin ≥ 7% were recruited for the study and assigned unique identification numbers. They were then randomly allocated to intervention and usual-care groups using computer-generated random numbers, with each group containing 54 subjects. Patients in the intervention group received a pharmacist-structured intervention, including education, periodic phone calls, adherence counselling, referral, and 6 months of follow-up, while patients in the usual-care group only kept clinic appointments with their physicians. Data collected at baseline and at six months included socio-demographic characteristics, fasting blood glucose, glycated haemoglobin, blood pressure, and lipid profile. 
With an intention-to-treat analysis, the Mann-Whitney U test was used to compare median change from baseline in the primary outcome (glycated haemoglobin) and secondary outcome measures; effect sizes were computed, and the proportion of patients reaching target laboratory parameters was compared in both arms. Results: All enrolled participants (108) completed the study, 54 in each arm. Mean age was 51 ± 11.75 years, and the majority were female (68.5%). Intervention patients had a significant reduction in glycated haemoglobin (-0.75%; P < 0.001; η2 = 0.144), with a greater proportion attaining the target laboratory parameter after 6 months of care compared to the usual-care group (glycated haemoglobin: 42.6% vs 20.8%; P = 0.02). Furthermore, patients who received pharmacist-led care were about 3 times more likely to have better glucose control (AOR 2.718, 95% CI: 1.143-6.461) than the usual-care group. Conclusion: Pharmacist-led care significantly improved glucose control in patients with uncontrolled type 2 diabetes mellitus and should be integrated into the routine management of diabetes patients, especially in resource-limited settings.
Keywords: glycaemic control, pharmacist-led care, randomised-controlled trial, type 2 diabetes mellitus
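The Mann-Whitney U statistic used above can be sketched via rank sums. The numbers below are hypothetical HbA1c reductions for illustration, not the trial's data, and the toy implementation ignores tie correction:

```python
def mann_whitney_u(a, b):
    """U statistic for group a via rank sums (values assumed distinct; no tie correction)."""
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    rank_sum_a = sum(i + 1 for i, (_, grp) in enumerate(combined) if grp == 0)
    return rank_sum_a - len(a) * (len(a) + 1) / 2

# Hypothetical 6-month reductions in glycated haemoglobin (%), illustrative only
intervention = [1.2, 0.9, 1.5, 0.7]
usual_care = [0.3, 0.5, 0.8, 0.1]
u = mann_whitney_u(intervention, usual_care)
print(u)  # → 15.0, out of a maximum of 4 * 4 = 16, favouring the intervention arm
```

In practice a statistics package (such as the SPSS software named in the abstract) computes U together with its exact or normal-approximation p-value.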
Procedia PDF Downloads 120
97 Implementation of Deep Neural Networks for Pavement Condition Index Prediction
Authors: M. Sirhan, S. Bekhor, A. Sidess
Abstract:
In-service pavements deteriorate over time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation, recommending a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 represents a highly deteriorated pavement and 100 a newly constructed one. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANNs), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in capturing non-linear relationships and dealing with large amounts of uncertain data. 
Typical regression models, which require a pre-defined relationship, can be replaced by ANNs, which have also been found to be an appropriate tool for predicting different pavement performance indices against various factors. The objective of the presented study is therefore to develop and train an ANN model that predicts PCI values. The model’s input consists of the percentage areas of 11 different damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, at three severity levels (low, medium, high) each. The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by the National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction
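The input layout described above (11 damage types × 3 severity levels = 33 features mapped to a PCI in [0, 100]) can be sketched as a single forward pass of a small fully connected network. The weights below are random placeholders for illustration; they do not represent the authors' trained model or its architecture:

```python
import math
import random

N_INPUTS = 11 * 3   # 11 distress types x 3 severity levels, each as % of pavement area
N_HIDDEN = 8

random.seed(0)
# Placeholder weights for illustration; a real model would learn these from the samples
w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_INPUTS)] for _ in range(N_HIDDEN)]
w2 = [random.uniform(-0.5, 0.5) for _ in range(N_HIDDEN)]

def predict_pci(damage_pcts):
    """One forward pass: 33 damage densities -> hidden layer -> PCI in [0, 100]."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, damage_pcts))) for row in w1]
    out = sum(w * h for w, h in zip(w2, hidden))
    return 100.0 / (1.0 + math.exp(-out))  # logistic output scaled to the PCI range

sample = [0.0] * N_INPUTS
sample[0] = 12.5  # e.g. low-severity alligator cracking over 12.5% of the area
print(0.0 <= predict_pci(sample) <= 100.0)  # → True
```

Bounding the output with a scaled logistic is one simple way to keep predictions inside the PCI range; the paper does not specify how its model enforces this.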
Procedia PDF Downloads 137
96 Remote Radiation Mapping Based on UAV Formation
Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov
Abstract:
High-fidelity radiation monitoring is an essential component in the enhancement of the situational awareness capabilities of the Department of Energy’s Office of Environmental Management (DOE-EM) personnel. In this paper, multiple unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous multicopter-based UAV swarm in a 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is suitable for small multicopter UAVs due to its compact size and ease of interfacing with the UAV’s onboard electronics for high-resolution gamma spectroscopy, enabling the characterization of radiation hazards. The multicopter platform, with its fully autonomous flight capability, is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping along a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed, using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for heading angle calculation. Each UAV carries a CZT sensor; the real-time radiation data are used to calculate a bulk heading vector so that the swarm achieves source-seeking behavior. Also, a spinning formation is studied for both cases to improve gradient estimation near a radiation source. 
In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated collective and coordinated movement for estimating the gradient vector of the radiation field and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms in field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
Keywords: radiation, unmanned aerial vehicle (UAV), source localization, UAV swarm, tetrahedron formation
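The gradient estimation step described above can be sketched for the 2D planar three-UAV case: with three simultaneous count-rate readings at known positions, a local plane fit recovers the gradient, and the swarm heads up that gradient. The field and numbers below are synthetic illustrations, not the paper's data:

```python
def estimate_gradient(positions, counts):
    """Fit a plane c ~ c0 + g.x to three (position, count-rate) readings; return g."""
    (x1, y1), (x2, y2), (x3, y3) = positions
    c1, c2, c3 = counts
    # Two difference equations in the two unknowns (gx, gy), solved by Cramer's rule
    a11, a12, b1 = x2 - x1, y2 - y1, c2 - c1
    a21, a22, b2 = x3 - x1, y3 - y1, c3 - c1
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det

# Synthetic linear count-rate field c(x, y) = 5x + 2y + 10, for illustration
pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # three UAVs in planar formation
cts = [10.0, 15.0, 12.0]                    # their simultaneous sensor readings
gx, gy = estimate_gradient(pos, cts)
print(gx, gy)  # → 5.0 2.0; the bulk heading vector points along (gx, gy)
```

The tetrahedron case extends the same idea to three unknowns from four readings, which also explains why the spinning formation helps: it decorrelates the sampling geometry near a source, where the field is far from locally planar.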
Procedia PDF Downloads 99
95 The Role of Emotional Intelligence in the Manager's Psychophysiological Activity during a Performance-Review Discussion
Authors: Mikko Salminen, Niklas Ravaja
Abstract:
Emotional intelligence (EI) consists of skills for monitoring one’s own emotions and the emotions of others, skills for discriminating between different emotions, and skills for using this information in thinking and action. EI enhances, for example, work outcomes and organizational climate. We suggest that the role and manifestations of EI should also be studied in real leadership situations, especially during emotional, social interaction. Leadership is essentially a process of influencing others to reach a certain goal. This influencing happens through managerial processes and computer-mediated communication (e.g., e-mail), but also face-to-face, where facial expressions have a significant role in conveying emotional information. Persons with high EI are typically perceived more positively, and they have better social skills. We hypothesize that during social interaction, high EI enhances the ability to detect others’ emotional states and to control one’s own emotional expressions. We suggest that emotionally intelligent leaders experience less stress during social leadership situations, since they have better skills for dealing with the related emotional work. Thus, high-EI leaders would be more able to enjoy these situations, but also more efficient in choosing appropriate expressions for building constructive dialogue. We suggest that emotionally intelligent leaders show more positive emotional expressions than low-EI leaders. To study these hypotheses, we observed the performance review discussions of 40 leaders (24 female) with 78 (45 female) of their followers. Each leader held a discussion with two followers. Psychophysiological methods were chosen because they provide objective and continuous data over the whole duration of the discussions. We recorded sweating of the hands (electrodermal activation) with electrodes placed on the fingers of the non-dominant hand to assess the stress-related physiological arousal of the leaders. 
In addition, facial electromyography was recorded from the cheek (zygomaticus major, activated during, e.g., smiling) and periocular (orbicularis oculi, activated during smiling) muscles using electrode pairs placed on the left side of the face. Each leader’s trait EI was measured with a 360-degree questionnaire completed by the leader’s followers, peers, and managers, and by the leaders themselves. High-EI leaders had less sweating of the hands (p = .007) than low-EI leaders, suggesting that they experienced less physiological stress during the discussions. Also, high scores on the factor 'Using of emotions' were related to more facial muscle activation indicating positive emotional expressions (cheek muscle: p = .048; periocular muscle: p = .076, almost statistically significant). The results imply that emotionally intelligent managers are positively relaxed during social leadership situations such as a performance review discussion. The current study also highlights the importance of EI in face-to-face social interaction, given the central role facial expressions have in interaction situations. The study also offers new insight into the biological basis of trait EI. It is suggested that the identification, formation, and intelligent use of facial expressions are skills that could be trained during leadership development courses.
Keywords: emotional intelligence, leadership, performance review discussion, psychophysiology, social interaction
Procedia PDF Downloads 245
94 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks
Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci
Abstract:
The need to reduce fuel and noise during taxi operations at airports, amid constantly increasing air traffic, has resulted in an effort by the aerospace industry to move towards electric taxiing. This is one of the problems currently being addressed by SESAR JU, and two main solutions are being proposed. With the first solution, electric motors are installed in the main (or nose) landing gear of the aircraft. With the second solution, manned or unmanned electric tow trucks are used to tow aircraft from the gate to the runway (or vice versa). The presence of the tow trucks increases vehicle traffic inside the airport. It is therefore important to design the system so that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on the use of autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. This system consists of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool is responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. Two main optimization strategies are proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements, in order to find a global optimum. With the second strategy, called decentralized optimization or a multi-agent system, decision authority is distributed among several agents. These agents could be the aircraft, the tow trucks, and taxiway or runway intersections. 
This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI provides information on the movement and status of each aircraft and tow truck, and alerts ATC about any impending conflicts. It also enables ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual air traffic controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness.
Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization
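The path-determination step above reduces, in its simplest form, to a shortest-path search over a taxiway graph. A minimal sketch with Dijkstra's algorithm follows; the taxiway names and times are invented for illustration, and the paper's actual optimizer also handles truck allocation and conflicts, which this sketch omits:

```python
import heapq

def shortest_taxi_route(graph, start, goal):
    """Dijkstra over a taxiway graph whose edge weights are taxi times (seconds)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # reconstruct the route by walking predecessors back to the start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical fragment of a taxiway network: gate -> runway holding point
taxiways = {
    "gate_A1": [("twy_B", 60.0)],
    "twy_B": [("twy_C", 45.0), ("twy_D", 90.0)],
    "twy_C": [("hold_RWY31", 30.0)],
    "twy_D": [("hold_RWY31", 20.0)],
}
route, t = shortest_taxi_route(taxiways, "gate_A1", "hold_RWY31")
print(route, t)  # → ['gate_A1', 'twy_B', 'twy_C', 'hold_RWY31'] 135.0
```

In the hybrid centralized/decentralized scheme described above, such a search might run per aircraft (agent view) with a central layer resolving the conflicts between the resulting routes.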
Procedia PDF Downloads 130
93 Psycho-Social Associates of Deliberate Self-Harm in Rural Sri Lanka
Authors: P. H. G. J. Pushpakumara, A. M. P. Adikari, S. U. B. Tennakoon, Ranil Abeysinghe, Andrew Dawson
Abstract:
Introduction: Deliberate self-harm (DSH) is a global public health problem. Since 1950, suicide rates in Sri Lanka have been among the highest national rates in the world. DSH has become an increasingly common response to emotional distress in young adults, yet the reasons for this remain unclear. Objectives: The descriptive component of this study was conducted to identify the epidemiological pattern of DSH and suicide in Kurunegala District (KD). Assessment of the associations between DSH and socio-cultural, economic, and psychological factors was the objective of the case-control component. Methods: As the descriptive component, prospective data collection on DSH and suicide was conducted at all (46) hospitals and all (28) police stations in KD for thirty-six months, from 1st January 2011. The case-control component was conducted at T.H. Kurunegala (THK) for eighteen months, from 1st July 2011. Cases (n = 439) were randomly selected from blocks of 7 consecutively admitted consenting deliberate self-poisoning (DSP) patients using a computer program. Individuals matched one-to-one for age, sex, and residential divisional secretariat division were randomly selected as controls from patients presenting to the Out-Patient Department. The Structured Clinical Interview for DSM-IV-TR Axis I and II Disorders was used to diagnose psychiatric disorders, and validated tools were used to measure other constructs. Results: Suicide incidences in KD were 21.6, 20.7, and 24.3 per 100,000 population in 2011-2013 (male:female ratios 5.7, 4.4, and 6.4). 60% of suicides were due to poisoning. DSP incidences were 205.4, 248.3, and 202.5 per 100,000 population in 2011-2013. The highest age-standardized male DSP incidence was reported in the 20-24-year age group (769.6/100,000), and the highest female incidence in the 15-19-year group (1304.0/100,000). 
Being married (age > 25 years), monthly family income less than Rs. 30,000, not achieving G.C.E. (O/L) qualifications, being a school drop-out, not holding a permanent position in occupation, and being a manual or own-account worker were significantly associated with DSP. Perceiving the quality of the relationship with parents, spouse/girlfriend/boyfriend, or siblings as bad or very bad was associated with 8, 40, and 10.5 times higher risk, respectively. Feelings and experiences of neglect, other emotional abuse, and feeling insecure within the family in childhood, as well as having a contact history, carried an excess risk for DSP. Cases were less likely to seek help. Further, they had significantly lower scores for life skills and life-skills application ability. 25.6% of DSH patients had a DSM-IV-TR Axis I and/or Axis II disorder. The presence of a psychiatric disorder carried a 7.7 (95% CI 4.3-13.8) times higher risk for DSP. Conclusion: In general, the pattern of DSH and suicide is unique, differing from that of developed, upper-middle-income, and lower-middle-income countries. It is a learned way of expressing emotions in difficult situations among vulnerable people.
Keywords: deliberate self-harm, help-seeking, life-skills, mental health, psychological, social, suicide
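The risk multiples reported above come from case-control odds ratios. A minimal sketch of an odds ratio with its Woolf 95% confidence interval, using made-up 2x2 cell counts rather than the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% CI from a 2x2 case-control table:
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR), Woolf's method
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Made-up cell counts for illustration -- not this study's numbers
or_, (lo, hi) = odds_ratio(20, 80, 5, 95)
print(or_)  # → 4.75
```

The adjusted odds ratios in the study (e.g. 7.7 for psychiatric disorder) would come from a regression model controlling for the matching and covariates, not from a raw 2x2 table like this one.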
Procedia PDF Downloads 226
92 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming
Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter
Abstract:
High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where, e.g., scratch-resistant or high-gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and trial-and-error procedures. Repeated mold design and testing cycles are, however, both time- and cost-intensive. It is therefore desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating, or the timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors, and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but with different strain fields may be created by varying the orientation of the film with respect to the mold. 
The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part, and the effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We therefore propose a class of models for compressible hyperelasticity which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that a model is determined from a single uniaxial tensile test for isotropic materials, or two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally carried out using an orthotropic formulation of the hyperelastic model.
Keywords: hyperelastic, anisotropic, polymer film, thermoforming
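The invariant-based formulation described above can be written schematically as follows. This is the generic textbook decomposition for a transversely isotropic hyperelastic material in standard notation, not the authors' specific functional form:

```latex
% Generic invariant-based strain energy, transversely isotropic case
W(\mathbf{C}) \;=\; W_{\mathrm{iso}}(I_1) \;+\; W_{\mathrm{aniso}}(I_4) \;+\; U(J),
\qquad
I_1 = \operatorname{tr}\mathbf{C},
\quad
I_4 = \mathbf{a}_0 \cdot \mathbf{C}\,\mathbf{a}_0,
\quad
J = \sqrt{\det\mathbf{C}}
```

Here \(\mathbf{C}\) is the right Cauchy-Green deformation tensor, \(\mathbf{a}_0\) the preferred material direction, and \(U(J)\) the volumetric part for compressibility. The "semi-numerical" idea of the abstract would then amount to determining \(W_{\mathrm{iso}}\) and \(W_{\mathrm{aniso}}\) pointwise from the one or two uniaxial tensile tests rather than prescribing closed-form functions.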
Procedia PDF Downloads 617
91 Inputs and Outputs of Innovation Processes in the Colombian Services Sector
Authors: Álvaro Turriago-Hoyos
Abstract:
Most research tends to see innovation as an explanatory factor in achieving high levels of competitiveness and productivity. More recent studies have begun to analyze the determinants of innovation in the services sector, as opposed to the much-discussed industrial sector of a country’s economy. This research paper focuses on the services sector in Colombia, one of Latin America’s fastest growing and biggest economies. Over the past decade, much of Colombia’s economic expansion has relied on commodity exports (mainly oil and coffee), whilst the industrial sector has performed relatively poorly. Such developments highlight the potential innovative role of the services sector in the Colombian economy and its future growth prospects. This research paper analyzes the relationship between inputs, which are at the same time internal sources of innovation (such as R&D activities), and external sources that are improved by technology acquisition. The outputs are the four kinds of innovation that the OECD Oslo Manual recognizes: product, process, marketing, and organizational innovations. The instrument used to measure this input-output relationship is based on knowledge production function approaches. We run Probit models in order to identify the existing relationships between the above inputs and outputs, but also to identify spillovers derived from interactions among the components of the value chain of the services firms analyzed: customers, suppliers, competitors, and complementary firms. Data are obtained from the Colombian National Administrative Department of Statistics for the period 2008 to 2013, published in the II and III Colombian National Innovation Surveys. A short summary of the results obtained leads to the conclusion that firm size and a firm’s level of technological development turn out to be important discriminating factors in the description of the innovative process at the firm level. 
The model’s outcomes show that both R&D and technology acquisition investment have a positive impact on the probability of introducing any kind of innovation. Cooperation agreements with customers, research institutes, competitors, and suppliers are also significant. Belonging to a particular industrial group is an important determinant, but only for product and organizational innovation. Health Services, Education, Computer, Wholesale Trade, and Financial Intermediation are the ISIC sectors with the highest frequencies among the set of firms considered. Those five sectors, of the sixteen considered, in all cases accounted for more than half of the total of all kinds of innovations. Product innovation shows the highest results, followed by marketing innovation. Breaking down the same set of firms by size, and by membership in the high- or low-tech services sector, shows that larger firms produce a larger number of innovations, and that high-tech firms consistently show better innovation performance.
Keywords: Colombia, determinants of innovation, innovation, services sector
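The probit link that underlies models like the ones described above can be sketched in miniature. The coefficients and predictors below are hypothetical placeholders, not estimates from the survey data; a fitted model would estimate them. The sketch only shows how the standard normal CDF maps innovation inputs to a probability of introducing an innovation.

```python
import math

def norm_cdf(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical probit specification: probability that a firm introduces an
# innovation given R&D investment (x1) and technology acquisition (x2).
# The coefficients b0, b1, b2 are illustrative, not fitted values.
def p_innovate(x1, x2, b0=-1.0, b1=0.8, b2=0.5):
    return norm_cdf(b0 + b1 * x1 + b2 * x2)
```

Since both hypothetical coefficients are positive, increasing either input raises the predicted probability, which is the qualitative pattern the abstract reports for R&D and technology acquisition.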
Procedia PDF Downloads 267
90 Assessment of Psychological Needs and Characteristics of Elderly Population for Developing Information and Communication Technology Services
Authors: Seung Ah Lee, Sunghyun Cho, Kyong Mee Chung
Abstract:
Rapid population aging has become a worldwide demographic phenomenon due to rising life expectancy and declining fertility rates. Considering the current rate of population aging, it is projected that Korean society will become a ‘super-aged’ society within 10 years, in which people aged 65 years or older account for more than 20% of the entire population. In line with this trend, ICT services aimed at helping elderly people improve their quality of life have been suggested. However, existing ICT services mainly focus on supporting health or nursing care and are too limited to meet the variety of specialized needs and challenges of this population. It has been pointed out that the majority of services have been driven by technology-push policies. Given that the usage of ICT services varies greatly with individuals’ socio-economic status (SES) and physical and psychosocial needs, this study systematically categorized the elderly population into sub-groups and identified their needs and characteristics related to ICT usage in detail. First, three assessment criteria (demographic variables including SES, cognitive functioning level, and emotional functioning level) were identified based on previous literature, experts’ opinions, and a focus group interview. Second, survey questions for needs assessment were developed based on the criteria and administered to 600 respondents from a national probability sample. The questionnaire consisted of 67 items concerning demographic information, experience with ICT services and information technology (IT) devices, quality of life, cognitive functioning, etc. As a result of the survey, age (60s, 70s, 80s), education level (college graduates or more, middle and high school, less than primary school), and cognitive functioning level (above the cut-off, below the cut-off) were considered the most relevant factors for categorization, and 18 sub-groups were identified. 
Finally, the 18 sub-groups were clustered into 3 groups according to the following similarities: computer usage rate, difficulties in using ICT, and familiarity with current or previous job. Group 1 (‘active users’) included those with high cognitive function and educational level in their 60s and 70s. They showed favorable and familiar attitudes toward ICT services and used the services for ‘joyful life’, ‘intelligent living’, and ‘relationship management’. Group 2 (‘potential users’), ranging in age from their 60s to 80s with a high level of cognitive function and mostly middle to high school graduates, reported some difficulties in using ICT, and their expectations were lower than in group 1 although their areas of need were similar. Group 3 (‘limited users’) consisted of people with the lowest education level or cognitive function, and 90% of the group reported difficulties in using ICT. However, group 3 did not differ from group 2 regarding the level of expectation for ICT services, and their main purpose of using ICT was ‘safe living’. This study developed a systematic needs assessment tool and identified three sub-groups of elderly ICT users based on multiple criteria. It is implied that current cognitive function plays an important role in using ICT and in determining needs among the elderly population. Implications and limitations are further discussed.
Keywords: elderly population, ICT, needs assessment, population aging
Procedia PDF Downloads 143
89 A Multilingual App for Studying Children’s Developing Values: Developing a New Arabic Translation of the Picture-Based Values Survey and Comparison of Palestinian and Jewish Children in Israel
Authors: Aysheh Maslamani, Ella Daniel, Anna Dӧring, Iyas Nasser, Ariel Knafo-Noam
Abstract:
Over 250 million people globally speak Arabic, one of the most widespread languages in the world, as their first language. Yet only a minuscule fraction of developmental research studies children in the Middle East. As values are a core component of culture, understanding how values develop is key to understanding development across cultures. Indeed, with the advent of research on value development, significantly since the introduction of the Picture-Based Value Survey for Children (PBVS-C), interest in cross-cultural differences in children's values is increasing. As no measure existed for Arab children, an Arabic version of the PBVS-C was developed. The online application version of the PBVS-C can be administered on a computer, tablet, or even a smartphone to measure the 10 values whose presence has been repeatedly demonstrated across the world. The application has been developed simultaneously in Hebrew and Arabic and can easily be adapted to include additional languages. In this research, we describe the development of the multilingual PBVS-C application version adapted for five-year-olds. The translation process is discussed (including important decisions such as which dialect of Arabic, a diglossic language, is most suitable), as well as adaptations to subgroups (e.g., Muslim, Druze, and Christian Arab children) and the use of recorded instructions and value item captions, as well as touchscreens, to enhance applicability with young children. Four hundred Palestinian and Israeli 5-12 year old children reported their values using the app (50% in Arabic, 50% in Hebrew). Confirmatory Multidimensional Scaling (MDS) analyses revealed structural patterns that closely correspond to Schwartz's theoretical structure in both languages (e.g., universalism values correlated positively with benevolence and negatively with power, whereas tradition correlated negatively with hedonism and positively with conformity). 
Replicating past findings, power values showed lower importance than benevolence values in both cultural groups, and there were gender differences: girls were higher in self-transcendence values and lower in self-enhancement values than boys. Cultural differences in value importance were explored and revealed that Palestinian children are significantly higher in tradition and achievement values compared to Israeli children, whereas Israeli children are significantly higher in benevolence, hedonism, self-direction, and stimulation values. Age differences in value coherence across the two groups were also studied. Exploring the cultural differences opens a window to understanding the basic motivations driving populations that have hardly been studied before. This study will contribute to developmental value research since it considers the role of critical variables such as culture and religion and tests value coherence across middle childhood. Findings will be discussed, along with the potential and limitations of the computerized PBVS-C for future values research.
Keywords: Arab-children, culture, multilingual-application, value-development
Procedia PDF Downloads 115
88 Aligning Informatics Study Programs with Occupational and Qualifications Standards
Authors: Patrizia Poscic, Sanja Candrlic, Danijela Jaksic
Abstract:
The University of Rijeka, Department of Informatics participated in the Stand4Info project, co-financed by the European Union, with the main aim of aligning study programs with occupational and qualifications standards in the field of informatics. A brief overview of our research methodology, goals, and deliverables is presented. Our main research and project objectives were: a) development of occupational standards, qualification standards, and study programs based on the Croatian Qualifications Framework (CROQF), b) higher education quality improvement in the field of information and communication sciences, c) increasing the employability of students of information and communication technology (ICT) and science, and d) continuously improving competencies of teachers in accordance with the principles of CROQF. CROQF is a reform instrument in the Republic of Croatia for regulating the system of qualifications at all levels through qualifications standards based on learning outcomes and following the needs of the labor market, individuals, and society. The central elements of CROQF are learning outcomes: competences acquired by the individual through the learning process and proved afterward. The place of each acquired qualification is set by the level of the learning outcomes belonging to that qualification. The placement of qualifications at respective levels allows the comparison and linking of different qualifications, as well as linking of Croatian qualification levels to the levels of the European Qualifications Framework and the levels of the Qualifications Framework of the European Higher Education Area. This research produced three proposals for occupational standards at the undergraduate study level (System Analyst, Developer, ICT Operations Manager) and two at the graduate (master) level (System Architect, Business Architect). For each occupational standard, employers provided a list of key tasks and the associated competencies necessary to perform them. 
A set of competencies required for each particular job in the workplace was defined, and each set was described in more detail by its individual competencies. Based on the sets of competencies from the occupational standards, sets of learning outcomes were defined, and competencies from the occupational standards were linked with learning outcomes. For each learning outcome, as well as for each set of learning outcomes, it was necessary to specify the verification method, material, and human resources. The task of the project was to suggest revisions and improvements of the existing study programs. It was necessary to analyze the existing programs and determine how they meet and fulfill the defined learning outcomes. This way, one could see: a) which learning outcomes from the qualifications standards are covered by existing courses, b) which learning outcomes have yet to be covered, c) whether they are covered by mandatory or elective courses, and d) whether some courses are unnecessary or redundant. Overall, the main research results are: a) completed proposals of qualification and occupational standards in the field of ICT, b) revised curricula of undergraduate and master study programs in ICT, c) a sustainable partnership and stakeholder association network, d) a knowledge network informing the public and stakeholders (teachers, students, and employers) about the importance of CROQF establishment, and e) teachers educated in innovative methods of teaching.
Keywords: study program, qualification standard, occupational standard, higher education, informatics and computer science
Procedia PDF Downloads 143
87 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89
Authors: A. Chatel, I. S. Torreguitart, T. Verstraete
Abstract:
The paper deals with a single-point optimization of the LS89 turbine using an adjoint optimization and defining the design variables within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, which combines parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. In this manuscript, we show that by doing the parameterization at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered out and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy and avoids the limited arithmetic precision and the truncation error of finite differences. Then, the parametric effectiveness is computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed. 
However, this assumption can hinder the creation of better parameterizations that would allow producing more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed. The trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization where the parameterization is created using the optimal knot positions results in a more efficient strategy to reach a better optimum than the multilevel optimization where the position of the knots is arbitrarily assumed.
Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness
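The knot insertion step referred to above is a standard B-spline operation (Boehm's algorithm): a new knot is added and the control points are recomputed so that the curve shape is exactly preserved while a new degree of freedom appears. A minimal sketch follows, using an illustrative quadratic curve; the degree, knot vector, and control points are hypothetical, not those of the LS89 parameterization.

```python
def insert_knot(p, knots, ctrl, u):
    """Boehm's knot insertion for a degree-p B-spline: insert u once,
    preserving the curve shape (u assumed strictly inside the domain)."""
    # find the knot span k with knots[k] <= u < knots[k+1]
    k = max(i for i in range(len(knots) - 1) if knots[i] <= u < knots[i + 1])
    new_knots = knots[:k + 1] + [u] + knots[k + 1:]
    new_ctrl = []
    for i in range(len(ctrl) + 1):
        if i <= k - p:
            new_ctrl.append(ctrl[i])            # unaffected leading points
        elif i <= k:
            # blend neighbors with the standard alpha factor
            a = (u - knots[i]) / (knots[i + p] - knots[i])
            new_ctrl.append(tuple(a * ci + (1 - a) * cj
                                  for ci, cj in zip(ctrl[i], ctrl[i - 1])))
        else:
            new_ctrl.append(ctrl[i - 1])        # unaffected trailing points
    return new_knots, new_ctrl
```

Inserting u = 0.5 into a clamped quadratic with control points (0,0), (1,2), (2,0) yields four control points describing the same curve, which is the property the multilevel optimization relies on: the design space grows without perturbing the current shape.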
Procedia PDF Downloads 110
86 Use of Analytic Hierarchy Process for Plant Site Selection
Authors: Muzaffar Shaikh, Shoaib Shaikh, Mark Moyou, Gaby Hawat
Abstract:
This paper presents the use of the Analytic Hierarchy Process (AHP) in evaluating the site selection of a new plant by a corporation. Due to intense competition at a global level, multinational corporations are continuously striving to minimize the production and shipping costs of their products. One key factor that plays a significant role in cost minimization is where the production plant is located. In the U.S., for example, labor and land costs continue to be very high, while they are much cheaper in countries such as India, China, Indonesia, etc. This is why many multinational U.S. corporations (e.g., General Electric, Caterpillar Inc., Ford, General Motors, etc.) have shifted their manufacturing plants overseas. The continued expansion of the Internet and its availability, along with technological advances in computer hardware and software all around the globe, have facilitated U.S. corporations' expansion abroad as they seek to reduce production costs. In particular, management of multinational corporations is constantly engaged in concentrating on countries at a broad level, or cities within specific countries, where certain or all parts of their end products, or the end products themselves, can be manufactured more cheaply than in the U.S. AHP is based on the preference ratings of a specific decision maker, who can be the Chief Operating Officer of a company or his/her designated data analytics engineer. It serves as a tool to first evaluate the plant site selection criteria and, second, alternate plant sites themselves against these criteria in a systematic manner. Examples of site selection criteria are: Transportation Modes, Taxes, Energy Modes, Labor Force Availability, Labor Rates, Raw Material Availability, Political Stability, Land Costs, etc. As a necessary first step under AHP, evaluation criteria and alternate plant site countries are identified. Depending upon the fidelity of the analysis, specific cities within a country can also be chosen as alternative facility locations. 
AHP experience in this type of analysis indicates that the initial analysis can be performed at the country level. Once a specific country is chosen via AHP, secondary analyses can be performed by selecting specific cities or counties within that country. AHP analysis is usually based on the preference ratings of a decision-maker (e.g., 1 to 5, 1 to 7, or 1 to 9, etc., where 1 means least preferred and the top value means most preferred). The decision-maker first assigns preference ratings criterion vs. criterion, creating a Criteria Matrix. Next, he/she assigns preference ratings alternative vs. alternative against each criterion. Once this data is collected, AHP is applied to first obtain the rank ordering of the criteria. Next, rank ordering of the alternatives is done against each criterion, resulting in an Alternative Matrix. Finally, the overall rank ordering of alternative facility locations is obtained by matrix multiplication of the Alternative Matrix and the Criteria Matrix. The most practical aspect of AHP is the ‘what if’ analysis that the decision-maker can conduct after the initial results, which provides valuable sensitivity information of specific criteria relative to other criteria and alternatives.
Keywords: analytic hierarchy process, multinational corporations, plant site selection, preference ratings
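The pipeline described above, pairwise-comparison matrices reduced to priority vectors and then combined by matrix multiplication, can be sketched in a few lines. The criteria, ratings, and two candidate countries below are hypothetical, and the priority vectors use the row geometric mean method, one common AHP weighting scheme (the principal eigenvector method is the classic alternative):

```python
import math

def priorities(pairwise):
    """Priority vector of a pairwise-comparison matrix:
    normalized row geometric means."""
    gm = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Criteria Matrix: hypothetical 1-5 ratings over three criteria
# (labor cost, taxes, political stability)
criteria = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]
w = priorities(criteria)  # rank ordering of criteria

# Alternative-vs-alternative matrices, one per criterion (2 candidate countries)
alt_by_criterion = [
    [[1, 4],   [1/4, 1]],   # labor cost: country A strongly preferred
    [[1, 2],   [1/2, 1]],   # taxes: country A moderately preferred
    [[1, 1/3], [3,   1]],   # political stability: country B preferred
]
# Alternative Matrix: column j = priority vector of alternatives under criterion j
alt_scores = [priorities(m) for m in alt_by_criterion]

# Overall ranking = Alternative Matrix x criteria weight vector
overall = [sum(alt_scores[j][i] * w[j] for j in range(len(w)))
           for i in range(2)]
```

With these illustrative ratings, country A wins on the heavily weighted cost criteria and ranks first overall; changing a single pairwise rating and re-running is exactly the 'what if' sensitivity analysis mentioned above.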
Procedia PDF Downloads 288
85 Innovations and Challenges: Multimodal Learning in Cybersecurity
Authors: Tarek Saadawi, Rosario Gennaro, Jonathan Akeley
Abstract:
There is rapidly growing demand for professionals to fill positions in Cybersecurity. This is recognized as a national priority both by government agencies and the private sector. Cybersecurity is a very wide technical area which encompasses all measures that can be taken in an electronic system to prevent criminal or unauthorized use of data and resources. This requires defending computers, servers, networks, and their users from any kind of malicious attack. The need to address this challenge has been recognized globally but is particularly acute in the New York metropolitan area, home to some of the largest financial institutions in the world, which are prime targets of cyberattacks. In New York State alone, there are currently around 57,000 jobs in the Cybersecurity industry, with more than 23,000 unfilled positions. The Cybersecurity Program at City College is a collaboration between the Departments of Computer Science and Electrical Engineering. In Fall 2020, The City College of New York matriculated its first students in the Cybersecurity Master of Science program. The program was designed to fill gaps in the previous offerings and evolved out of an established partnership with Facebook on Cybersecurity Education. City College has designed a program where courses, curricula, syllabi, materials, labs, etc., are developed in cooperation and coordination with industry whenever possible, ensuring that students graduating from the program will have the necessary background to seamlessly segue into industry jobs. The Cybersecurity Program has created multiple pathways for prospective students to obtain the necessary prerequisites to apply, in order to build a more diverse student population. The program can also be pursued on a part-time basis, which makes it available to working professionals. Since City College’s Cybersecurity M.S. 
program was established to equip students with the advanced technical skills needed to thrive in a high-demand, rapidly evolving field, it incorporates a range of pedagogical formats. From its outset, the Cybersecurity program has sought to provide the theoretical foundations necessary for meaningful work in the field along with labs and applied learning projects aligned with the skillsets required by industry. These efforts have involved collaboration with outside organizations and with visiting professors designing new courses on topics such as Adversarial AI, Data Privacy, Secure Cloud Computing, and Blockchain. Although the program was initially designed with a single asynchronous course in the curriculum, with the rest of the classes designed to be offered in person, the advent of the COVID-19 pandemic necessitated a move to fully online learning. The shift to online learning has provided lessons for future development, offering examples of some inherent advantages of the medium in addition to its drawbacks. This talk will address the structure of the newly implemented Cybersecurity Master’s Program and discuss the innovations, challenges, and possible future directions.
Keywords: cybersecurity, new york, city college, graduate degree, master of science
Procedia PDF Downloads 147
84 Development of a Social Assistive Robot for Elderly Care
Authors: Edwin Foo, Woei Wen, Lui, Meijun Zhao, Shigeru Kuchii, Chin Sai Wong, Chung Sern Goh, Yi Hao He
Abstract:
This paper presents the development of a social assistive robot for elderly care. We named this robot JOS, and he is restricted to tabletop operation. JOS is designed to have a maximum volume of 3600 cm3 with his base restricted to 250 mm, and his mission is to provide companionship and to assist and help the elderly. In order for JOS to accomplish his mission, he will be equipped with perception, reaction, and cognition capabilities. His appearance will not be human-like but rather cute and approachable, and JOS will be designed to be gender-neutral. However, the robot will still have eyes, eyelids, and a mouth. The eyes and eyelids will be built entirely with Robotis Dynamixel AX18 motors. To realize this complex task, JOS will also be equipped with a microphone array, a vision camera, and an Intel i5 NUC computer, powered by a self-charging 12 V lithium battery. His face is constructed using 1 motor for each eyelid, 2 motors for the eyeballs, 3 motors for the neck mechanism, and 1 motor for the lip movement. The vision sensor will be housed on JOS's forehead and the microphone array somewhere below the mouth. For the vision system, Omron's latest OKAO vision sensor is used. It is a compact and versatile sensor that is only 60 mm by 40 mm in size and operates with only a 5 V supply. In addition, the OKAO vision sensor is capable of identifying the user and recognizing the user's expression. With these functions, JOS is able to track and identify the user. If he cannot recognize the user, JOS will ask whether the user would want him to remember them. If yes, JOS will store the user information together with the captured face image in a database. This will allow JOS to recognize the user the next time the user is with JOS. In addition, JOS is also able to interpret the mood of the user through the user's facial expression. This will allow the robot to understand the user's mood and behavior and react accordingly. 
Machine learning will later be incorporated to learn the behavior of the user so as to better understand the user's mood and requirements. For the speech system, the Microsoft speech and grammar engine is used for speech recognition. In order to use the speech engine, we need to build a speech grammar database that captures the words commonly used by the elderly. This database is built from research journals and literature on elderly speech and also from interviewing the elderly about what they want the robot to assist them with. Using the results from the interviews and the research from journals, we were able to derive a set of common words the elderly frequently use to request help. It is from this set that we build up our grammar database. In situations where there is more than one person near JOS, he is able to identify the person who is talking to him through an in-house developed microphone array structure. In order to make the robot more interactive, we have also included the capability for the robot to express his emotions to the user through facial expressions, by changing the position and movement of the eyelids and mouth. All robot emotions will be in response to the user's mood and requests. Lastly, we expect to complete this phase of the project and test it with elderly and delirium patients by February 2015.
Keywords: social robot, vision, elderly care, machine learning
Procedia PDF Downloads 441
83 Monitoring of Educational Achievements of Kazakhstani 4th and 9th Graders
Authors: Madina Tynybayeva, Sanya Zhumazhanova, Saltanat Kozhakhmetova, Merey Mussabayeva
Abstract:
One of the leading indicators of education quality is the level of students’ educational achievements. The processes of modernization of the Kazakhstani education system have predetermined the need to improve the national system of assessing the quality of education. The results of assessment greatly contribute to addressing questions about the current state of the educational system in the country. The monitoring of students’ educational achievements (MEAS) is the systematic measurement of the quality of education for compliance with the state obligatory standard of Kazakhstan. This systematic measurement is independent of educational organizations and approved by order of the Minister of Education and Science of Kazakhstan. The MEAS was conducted in the regions of Kazakhstan for the first time in 2022 by the National Testing Centre. The measurement does not have legal consequences either for students or for educational organizations. Students’ achievements were measured in three subject areas: reading, mathematics, and science literacy. MEAS was held in April of that year; 105 thousand students from 1,436 schools of Kazakhstan took part in the testing. The monitoring was accompanied by a survey of students, teachers, and school leaders, with the goal of identifying which contextual factors affect learning outcomes. The testing was carried out in a computer format. The test tasks of MEAS are ranked according to three levels of difficulty: basic, medium, and high. Fourth graders were asked to complete 30 closed-type tasks. The average score is 21 points out of 30, which means 70% of the tasks were successfully completed. The total number of test tasks for 9th grade students is 75 questions. The results of ninth graders are comparatively lower, with a success rate of 63%. MEAS participants did not reveal a statistically significant gap in results in terms of the language of instruction, territorial status, or type of school. 
The trend of a narrowing gap in these indicators has also been noted in recent international studies conducted across the country, in particular PISA for Schools in Kazakhstan. However, there is a regional gap in MEAS performance. The difference between the highest- and lowest-scoring regions was 11% in task success in the 4th grade and 14% in the 9th grade. The results of the 4th grade students in reading, mathematics, and science literacy are 71.5%, 70%, and 66.9%, respectively. The results of ninth-graders in reading, mathematics, and science literacy are 69.6%, 54%, and 60.8%, respectively. The surveys revealed that the educational achievements of students are considerably influenced by such factors as the subject competences of teachers, as well as the school climate and the motivation of students. Thus, the results of MEAS indicate the need for an integrated approach to improving the quality of education. In particular, a combination is required of improving the content of curricula and textbooks, internal and external assessment of the educational achievements of students, educational programs of pedagogical specialties, and advanced training courses.
Keywords: assessment, secondary school, monitoring, functional literacy, Kazakhstan
Procedia PDF Downloads 107
82 Assessing the Socio-Economic Problems and Environmental Implications of the Green Revolution in Uttar Pradesh, India
Authors: Naima Umar
Abstract:
The mid-1960s were a landmark in the history of Indian agriculture. It was in 1966-67 that a New Agricultural Strategy was put into practice to tide over chronic shortages of food grains in the country. The strategy adopted was the use of high-yielding varieties (HYV) of seeds (wheat and rice), which became popularly known as the Green Revolution. This phase of agricultural development saved us from hunger and starvation and made the peasants more confident than ever before, but it has also created a number of socio-economic and environmental implications, such as the reduction in area under forest, salinization, waterlogging, soil erosion, lowering of the underground water table, soil, water, and air pollution, decline in soil fertility, silting of rivers, and the emergence of several diseases and health hazards. The state of Uttar Pradesh in the north is bounded by the country of Nepal, the state of Uttarakhand on the northwest, Haryana on the west, Rajasthan on the southwest, Madhya Pradesh on the south and southwest, and Bihar on the east. It is situated between 23°52′N and 31°28′N latitudes and 77°3′E and 84°39′E longitudes. It is the fifth largest state of the country in terms of area and the first in terms of population. Forming part of the Ganga plain, the state is crossed by a number of rivers which originate from the snowy peaks of the Himalayas. The fertile plain of the Ganga has led to a high concentration of population with high density and the dominance of agriculture as an economic activity. The present paper highlights the negative impact of the new agricultural technology on the health of the people and the environment and attempts to find out which factors are responsible for these implications. Karl Pearson’s correlation coefficient technique has been applied by selecting one dependent variable (i.e., the Productivity Index) and a number of independent variables which may impact crop productivity in the districts of the state. 
These variables have been categorized as: X1 (Cropping Intensity), X2 (Net irrigated area), X3 (Canal-irrigated area), X4 (Tube-well-irrigated area), X5 (Area irrigated by other sources), X6 (Consumption of chemical fertilizers (NPK) in kg/ha), X7 (Number of wooden ploughs), X8 (Number of iron ploughs), X9 (Number of harrows and cultivators), X10 (Number of threshing machines), X11 (Number of sprayers), X12 (Number of sowing instruments), X13 (Number of tractors), and X14 (Consumption of insecticides and pesticides, in kg/'000 ha). Data for 2001-2005 and 2006-2010 were collected, and five-year average values were taken into consideration, based on secondary sources obtained from various government organizations, master plan reports, economic abstracts, district census handbooks, and village and town directories, etc. The data were processed with the standard computer program SPSS, and the results obtained have been properly tabulated.
Keywords: agricultural technology, environmental implications, health hazards, socio-economic problems
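As a sketch of the analysis step, Pearson's r between the dependent Productivity Index and any one of the variables X1-X14 can be computed directly. The district-level data values below are invented placeholders, not figures from the study:

```python
import math

def pearson_r(x, y):
    """Karl Pearson's product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical district-level values: X6 (NPK fertilizer, kg/ha)
# against the Productivity Index
fertilizer = [80, 95, 110, 60, 130, 100]
productivity = [0.72, 0.78, 0.85, 0.60, 0.90, 0.80]
r = pearson_r(fertilizer, productivity)
```

Values of r close to +1 or -1 indicate a strong positive or negative association between the variable and the Productivity Index; values near 0 indicate little linear association.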
Procedia PDF Downloads 307
81 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation
Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony
Abstract:
Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their consequent constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes of generating a population of candidate solutions to a design problem through an evolutionary-based stochastic search process are driven by both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in the application of an evolutionary process as a design tool is the ability of the simulation to maintain variation amongst design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the 'golden rule' of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency of either variation or optimization to be favored as the simulation progresses.
In such cases, the generated population of candidate solutions has either optimized very early in the simulation or has continued to maintain high levels of variation from which an optimal set could not be discerned, thus providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the 'golden rule' by incorporating a mathematical fitness criterion for the development of an urban tissue composed of the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation factor. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally employed to measure the distribution of variation within a population by calculating the degree to which the majority of the population deviates from the mean. A lower standard deviation value indicates that the majority of the population is clustered around the mean and thus limited variation within the population, while a higher standard deviation value reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented will aim to clarify the extent to which the utilization of the standard deviation factor as a fitness criterion can be advantageous for generating fitter individuals in a more efficient timeframe when compared to conventional simulations that only incorporate architectural and environmental parameters.
Keywords: architecture, computation, evolution, standard deviation, urban
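A minimal sketch of the standard deviation used as a generative fitness value, rather than a purely analytical one, might look like this; the design metric and its values are hypothetical, not taken from the experiments.

```python
import statistics

def stddev_fitness(population_metric):
    """Standard deviation of a design metric across the population.
    Used generatively: rewarding a higher value preserves variation
    amongst candidate solutions while other objectives drive fitness."""
    return statistics.pstdev(population_metric)

# Hypothetical metric (e.g. an open-space ratio) for two candidate
# populations of superblock designs.
converged = [0.40, 0.41, 0.40, 0.42, 0.41]   # clustered around the mean
varied    = [0.25, 0.55, 0.40, 0.70, 0.35]   # spread out

print(stddev_fitness(converged) < stddev_fitness(varied))  # True
```

The converged population scores a lower standard deviation than the varied one, so a selection scheme that includes this criterion counteracts premature convergence.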
Procedia PDF Downloads 133
80 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters
Authors: Dylan Santos De Pinho, Nabil Ouerhani
Abstract:
Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users because of economic, ecological and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-Type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that leads to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, i.e., objective functions that evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One fitness function uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data. Another fitness function uses Lasso regression to determine the same relation.
The goal, then, is to find out which fitness functions best predict the energy consumption of a Swiss-Type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – determining the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-Type Lathe has been used to carry out the experiments. A mechanical part including various Swiss-Type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the set of machining process parameters. The evaluation approach consists of calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The MRR-based fitness function has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization
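The MRR-based fitness function and the correlation-based evaluation described above can be sketched as follows; the parameter sets and measured spindle power values are invented for illustration and are not the experimental data from the project.

```python
import numpy as np

def material_removal_rate(depth_of_cut_mm, feed_mm_rev, cutting_speed_m_min):
    """MRR (mm^3/min) for turning: depth of cut x feed x cutting speed,
    with cutting speed converted from m/min to mm/min."""
    return depth_of_cut_mm * feed_mm_rev * cutting_speed_m_min * 1000.0

def normalized(values):
    """Min-max normalization to [0, 1]."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

# Hypothetical parameter sets (depth of cut, feed, cutting speed)
# and measured spindle power for the corresponding CNC programs.
params = [(0.5, 0.05, 60), (1.0, 0.08, 80), (1.5, 0.10, 100), (2.0, 0.12, 120)]
measured_power_w = [180, 320, 560, 900]

mrr = [material_removal_rate(*p) for p in params]
r = float(np.corrcoef(normalized(mrr), normalized(measured_power_w))[0, 1])
print(round(r, 2))
```

The same correlation computation is repeated with the predictions of each candidate fitness function in place of `mrr`, and the function with the highest coefficient is retained for optimization.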
Procedia PDF Downloads 147
79 Augmented Reality Enhanced Order Picking: The Potential for Gamification
Authors: Stavros T. Ponis, George D. Plakas-Koumadorakis, Sotiris P. Gayialis
Abstract:
Augmented Reality (AR) can be defined as a technology which takes the capabilities of computer-generated display, sound, text and effects to enhance the user's real-world experience by overlaying virtual objects onto the real world. By doing so, AR is capable of providing a vast array of work support tools, which can significantly increase employee productivity, enhance existing job training programs by making them more realistic and, in some cases, introduce completely new forms of work and task execution. One of the most promising AR industrial applications, as the literature shows, is the use of Head-Worn Displays (HWD), monocular or binocular, to support logistics and production operations, such as order picking, part assembly and maintenance. This paper presents the initial results of an ongoing research project for the introduction of a dedicated AR-HWD solution to the picking process of a Distribution Center (DC) in Greece operated by a large Telecommunication Service Provider (TSP). In that context, the proposed research aims to determine whether gamification elements should be integrated in the functional requirements of the AR solution, such as providing points for reaching objectives and creating leaderboards and awards (e.g. badges) for general achievements. Up to now, there is ambiguity about the impact of gamification in logistics operations, since the gamification literature mostly focuses on non-industrial organizational contexts such as education and customer/citizen-facing applications, such as tourism and health. On the contrary, the gamification efforts described in this study focus on one of the most labor-intensive and workflow-dependent logistics processes, i.e. Customer Order Picking (COP). Although introducing AR in COP undoubtedly creates significant opportunities for workload reduction and increased process performance, the added value of gamification is far from certain.
This paper aims to provide insights on the suitability and usefulness of AR-enhanced gamification in the hard and very demanding environment of a logistics center. In doing so, it will utilize a review of the current state-of-the-art regarding gamification of production and logistics processes, coupled with the results of questionnaire-guided interviews with industry experts, i.e. logisticians, warehouse workers (pickers) and AR software developers. The findings of the proposed research aim to contribute towards a better understanding of AR-enhanced gamification, the organizational change it entails and the consequences it potentially has for all implicated entities in the often highly standardized and structured work required in the logistics setting. The interpretation of these findings will support the decision of logisticians regarding the introduction of gamification in their logistics processes by providing them with useful insights and guidelines originating from a real-life case study of a large DC operating more than 300 retail outlets in Greece.
Keywords: augmented reality, technology acceptance, warehouse management, vision picking, new forms of work, gamification
Procedia PDF Downloads 150
78 Attention Treatment for People With Aphasia: Language-Specific vs. Domain-General Neurofeedback
Authors: Yael Neumann
Abstract:
Attention deficits are common in people with aphasia (PWA). Two treatment approaches address these deficits: domain-general methods like Play Attention, which focus on cognitive functioning, and domain-specific methods like Language-Specific Attention Treatment (L-SAT), which use linguistically based tasks. Research indicates that L-SAT can improve both attentional deficits and functional language skills, while Play Attention has shown success in enhancing attentional capabilities among school-aged children with attention issues compared to standard cognitive training. This study employed a randomized controlled cross-over single-subject design to evaluate the effectiveness of these two attention treatments over 25 weeks. Four PWA participated, undergoing a battery of eight standardized tests measuring language and cognitive skills. The treatments were counterbalanced. Play Attention used EEG sensors to detect brainwaves, enabling participants to manipulate items in a computer game while learning to suppress theta activity and increase beta activity. An algorithm tracked changes in the theta-to-beta ratio, allowing points to be earned during the games. L-SAT, on the other hand, involved hierarchical language tasks that increased in complexity, requiring greater attention from participants. Results showed that for language tests, Participant 1 (moderate aphasia) aligned with existing literature, showing L-SAT was more effective than Play Attention. However, Participants 2 (very severe) and 3 and 4 (mild) did not conform to this pattern; both treatments yielded similar outcomes. This may be due to the extremes of aphasia severity: the very severe participant faced significant overall deficits, making both approaches equally challenging, while the mild participant performed well initially, leaving limited room for improvement. In attention tests, Participants 1 and 4 exhibited results consistent with prior research, indicating Play Attention was superior to L-SAT. 
Participant 2, however, showed no significant improvement with either program, although L-SAT had a slight edge on the Visual Elevator task, which measures switching and mental flexibility. This advantage was not sustained at the one-month follow-up, likely due to the participant's struggles with complex attention tasks. Participant 3's results similarly did not align with prior studies, revealing no difference between the two treatments, possibly due to the challenging nature of the attention measures used. Regarding participation and ecological tests, all participants showed similar mild improvements with both treatments. This limited progress could stem from the short study duration, with only five weeks allocated for each treatment, which may not have been enough time to achieve meaningful changes affecting life participation. In conclusion, the performance of participants appeared influenced by their level of aphasia severity. The moderate PWA's results were most aligned with existing literature, indicating better attention improvement from the domain-general approach (Play Attention) and better language improvement from the domain-specific approach (L-SAT).
Keywords: attention, language, cognitive rehabilitation, neurofeedback
Procedia PDF Downloads 17
77 Thermoluminescence Investigations of Tl2Ga2Se3S Layered Single Crystals
Authors: Serdar Delice, Mehmet Isik, Nizami Hasanli, Kadir Goksen
Abstract:
Researchers have devoted great interest to ternary and quaternary semiconductor compounds, especially with the improvement of optoelectronic technology. The quaternary compound Tl2Ga2Se3S, which was grown by the Bridgman method, carries the properties of the ternary thallium chalcogenides group of semiconductors with layered structure. This compound can be formed from TlGaSe2 crystals by replacing one quarter of the selenium atoms with sulfur atoms. Although Tl2Ga2Se3S crystals are not intentionally doped, some unintended defect types such as point defects, dislocations and stacking faults can occur during the growth process of the crystals. These defects can cause undesirable problems in semiconductor materials, especially those produced for optoelectronic technology. Defects of various types in semiconductor devices like LEDs and field effect transistors may act as non-radiative or scattering centers in electron transport. Also, quick recombination of holes with electrons without any energy transfer between charge carriers can occur due to the existence of defects. Therefore, the characterization of defects may help researchers working in this field to produce high-quality devices. Thermoluminescence (TL) is an effective experimental method to determine the kinetic parameters of trap centers due to defects in crystals. In this method, the sample is illuminated at low temperature by light whose energy is greater than the band gap of the studied sample. Thus, charge carriers in the valence band are excited to the delocalized band. Then, the charge carriers excited into the conduction band are trapped. The trapped charge carriers are released by heating the sample gradually, and these carriers then recombine with the opposite carriers at the recombination center. In this way, luminescence is emitted from the sample. The emitted luminescence is converted to pulses by a computer-controlled experimental setup, and the TL spectrum is obtained.
Defect characterization of Tl2Ga2Se3S single crystals has been performed by TL measurements at low temperatures between 10 and 300 K with various heating rates ranging from 0.6 to 1.0 K/s. The TL signal due to the luminescence from trap centers revealed one glow peak with a maximum at 36 K. Curve fitting and various-heating-rates methods were used for the analysis of the glow curve. An activation energy of 13 meV was found by the application of the curve fitting method. This practical method also established that the trap center exhibits the characteristics of mixed (general) kinetic order. In addition, the various-heating-rates analysis gave a result (13 meV) compatible with curve fitting when the temperature lag effect was taken into consideration. Since the studied crystals were not intentionally doped, these centers are thought to originate from stacking faults, which are quite possible in Tl2Ga2Se3S due to the weakness of the van der Waals forces between the layers. The distribution of traps was also investigated using an experimental method, and a quasi-continuous distribution was attributed to the determined trap centers.
Keywords: chalcogenides, defects, thermoluminescence, trap centers
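The various-heating-rates analysis mentioned above can be sketched as a linear fit of ln(Tm^2/beta) against 1/(k*Tm), whose slope is the trap activation energy E (first-order kinetics); the frequency factor and peak temperatures below are illustrative assumptions, not the measured values.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(heating_rates, peak_temps):
    """Various-heating-rates method: least-squares slope of
    ln(Tm^2 / beta) versus 1/(k*Tm) gives the activation energy E (eV)."""
    xs = [1.0 / (K_B * tm) for tm in peak_temps]
    ys = [math.log(tm ** 2 / b) for b, tm in zip(heating_rates, peak_temps)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

# Demo: synthesize peak temperatures consistent with a 13 meV trap
# (hypothetical frequency factor s), then recover E from the fit.
E_true, s = 0.013, 1e3          # eV, 1/s (illustrative values)
temps = [34.0, 35.0, 36.0]      # peak temperatures Tm, K
betas = [s * K_B * tm ** 2 / E_true * math.exp(-E_true / (K_B * tm))
         for tm in temps]       # heating rates beta, K/s
print(round(activation_energy(betas, temps), 4))  # 0.013
```

The synthetic data satisfy the first-order TL peak condition beta*E/(k*Tm^2) = s*exp(-E/(k*Tm)) exactly, so the fitted slope returns the assumed activation energy.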
Procedia PDF Downloads 282
76 Process of Production of an Artisanal Brewery in a City in the North of the State of Mato Grosso, Brazil
Authors: Ana Paula S. Horodenski, Priscila Pelegrini, Salli Baggenstoss
Abstract:
The brewing industry with artisanal concepts seeks to serve a specific market with diversified production, and it has been gaining ground nationally, including in the Amazon region. This growth is due to a more demanding consumer with a diversified taste who wants to try new types of beer, enjoying products with new aromas and flavors, as a differential from what is so widely spread by the big industrial brands. Thus, through qualitative research methods, the study aimed to investigate how the production of a craft brewery in a city in the northern part of the State of Mato Grosso (Brazil) is managed, providing knowledge of production processes and strategies in the industry. With the efficient use of resources, it is possible to obtain the necessary quality and provide better performance and differentiation of the company, besides analyzing the best management model. The research is descriptive with a qualitative approach through a case study. For the data collection, a semi-structured interview was developed, composed of the areas: microbrewery characterization, artisan beer production process, and the company's supply chain management. In addition, production processes were observed during technical visits. The study verified that the artisanal brewery researched develops preventive maintenance strategies for its inputs, machines, and equipment, so that the quality of the product and the production process is achieved. It was observed that the distance from supply centers requires the management of processes and the supply chain to be carried out with a longer planning horizon, so that the delivery of the final product is satisfactory. The production process of the brewery is composed of machines and equipment that allow control and quality of the product; the manager states that, for the productive capacity of the industry and its consumer market, the available equipment meets the demand.
This study also highlights one of the challenges for the development of small breweries in the face of the market giants, namely the legislation, which classifies microbreweries as producers of alcoholic beverages. This causes the micro and small business segment to be taxed like a large one, which has advantages in purchasing large batches of raw materials and enjoys tax incentives because large producers are major employers and taxpayers. It was possible to observe that the supply chain management system relies on spreadsheets and notes that are kept manually, which could be simplified with a computer program to streamline procedures and reduce the risks and failures of the manual process. The control of waste and effluents generated by the industry is outsourced and meets the needs. Finally, the results showed that the industry uses preventive maintenance as a productive strategy, which allows better conditions for the production and quality of artisanal beer. Quality is directly related to the satisfaction of the final consumer, being prized and pursued throughout the production process, with the selection of better inputs, the effectiveness of the production processes and the relationship with commercial partners.
Keywords: artisanal brewery, production management, production processes, supply chain
Procedia PDF Downloads 120
75 Solutions for Food-Safe 3D Printing
Authors: Geremew Geidare Kailo, Igor Gáspár, András Koris, Ivana Pajčin, Flóra Vitális, Vanja Vlajkov
Abstract:
Three-dimensional (3D) printing, a very popular additive manufacturing technology, has recently undergone rapid growth and replaced the use of conventional technology from prototyping to producing end-user parts and products. The 3D printing technology involves a digital manufacturing machine that produces three-dimensional objects according to designs created by the user via 3D modeling or computer-aided design/manufacturing (CAD/CAM) software. The most popular 3D printing system is Fused Deposition Modeling (FDM), also called Fused Filament Fabrication (FFF). A 3D-printed object is considered food safe if it can have direct contact with food without any toxic effects, even after cleaning, storing, and reusing the object. This work analyzes the processing timeline of the filament (the material for 3D printing) from unboxing to extrusion through the nozzle. It is an important task to analyze the growth of bacteria on the 3D-printed surface and in the gaps between the layers. By default, a 3D-printed object is not food safe after prolonged use and direct contact with food (even when food-safe filaments are used), but there are solutions to this problem. The aim of this work was to evaluate 3D-printed objects from different perspectives of food safety. The first was testing antimicrobial 3D printing filaments from a food safety aspect, since a 3D-printed object in the food industry may have direct contact with food; here the main purpose is to reduce the microbial load on the surface of a 3D-printed part. Coating with epoxy resin was investigated, too, to see its effect on mechanical strength, thermal resistance, surface smoothness and food safety (cleanability). Another aim of this study was to test new temperature-resistant filaments and the effect of high temperature on 3D-printed materials, to see if they can be cleaned with boiling or similar high-temperature treatment.
This work showed that all three mentioned methods can improve the food safety of a 3D-printed object, but the size of the effect varies. The best result was obtained by coating with epoxy resin: the object became cleanable like any other injection-molded plastic object with a smooth surface. Very good results were also obtained by boiling the objects, and it is encouraging that nowadays more and more special filaments have a food-safe certificate and can withstand boiling temperatures too. Using antibacterial filaments reduced bacterial colonies to one fifth, but the biggest advantage of this method is that it does not require any post-processing; the object is ready out of the 3D printer. Acknowledgements: The research was supported by the Hungarian and Serbian bilateral scientific and technological cooperation project funded by the Hungarian National Office for Research, Development and Innovation (NKFI, 2019-2.1.11-TÉT-2020-00249) and the Ministry of Education, Science and Technological Development of the Republic of Serbia. The authors acknowledge the Hungarian University of Agriculture and Life Sciences' Doctoral School of Food Science for the support in this study.
Keywords: food safety, 3D printing, filaments, microbial, temperature
Procedia PDF Downloads 142
74 Social Media Governance in UK Higher Education Institutions
Authors: Rebecca Lees, Deborah Anderson
Abstract:
Whilst the majority of research into social media in education focuses on applications for teaching and learning environments, this study looks at how such activities can be managed by investigating the current state of social media regulation within UK higher education. Social media has pervaded almost all aspects of higher education, from marketing, recruitment and alumni relations to both distance and classroom-based learning and teaching activities. In terms of who uses it and how it is used, social media is growing at an unprecedented rate, particularly amongst the target market for higher education. Whilst the platform presents opportunities not found in more traditional methods of communication and interaction, such as speed and reach, it also carries substantial risks that come with inappropriate use, lack of control and issues of privacy. Typically, organisations rely on the concept of a social contract to guide employee behaviour to conform to the expectations of that organisation. Yet, where academia and social media intersect, applying the notion of a social contract to enforce governance may be problematic: firstly, considering the emphasis on treating students as customers, with a growing focus on the use and collection of satisfaction metrics; and secondly, regarding academics' freedom of speech, opinion and discussion, which is a long-held tradition of learning instruction. Therefore the need for sound governance procedures to support expectations over online behaviour is vital, especially when the speed and breadth of adoption of social media activities has in the past outrun organisations' abilities to manage it. An analysis of the current level of governance was conducted by gathering relevant policies, guidelines and best practice documentation available online via internet search and institutional requests.
The documents were then subjected to a content analysis in the second phase of this study to determine the approach taken by institutions to apply such governance. Documentation was separated according to audience, i.e. applicable to staff, students or all users. Given that many of these documents included guests and visitors to the institution within their scope, being easily accessible was considered important. Yet, within the UK, only about half of all education institutions had explicit social media governance documentation available online without requiring member access or considerable searching. Where they existed, the majority focused solely on employee activities and tended to be policy-based rather than rooted in guidelines or best practices, or held a fallback position of governing online behaviour via implicit instructions within IT and computer regulations. Explicit instruction on expected online behaviours is therefore lacking within UK HE. Given the number of educational practices that now include significant online components, it is imperative that education organisations keep up to date with the progress of social media use. Initial results from the second phase of this study, which analyses the content of the governance documentation, suggest that the documents require reading levels at or above those of the target audience, with considerable variability in length and layout. Further analysis will add to this growing field of investigating social media governance within higher education.
Keywords: governance, higher education, policy, social media
Procedia PDF Downloads 184
73 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts
Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig
Abstract:
This study focuses on the evaluation of snow avalanche simulations, based on a survey that has been carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing the prediction of certain quantities (e.g. pressure, velocities, flow heights, runout lengths) of the avalanche flow. Because of the highly variable regimes of the flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies to the fluid-dynamical laws of other materials are drawn. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there exist high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations into an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility; hence, model development is compelled to introduce further simplifications and the related uncertainties. In the light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, potentials for improvement, and their application in practice.
To address these questions, a survey among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries has been conducted. In the questionnaire, special attention is drawn to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty and the reliability of the results. Furthermore, it was tested to what degree a simulation result influences decision making for a hazard assessment. A discrepancy could be found between the large uncertainty of the simulation input parameters and the comparatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, where different assumptions are tested, comparing the results of different flow models along with the use of supplemental data such as chronicles, field observations and silent witnesses, inter alia, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on the manner of modeling could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.
Keywords: expert interview, hazard management, modeling, simulation, snow avalanche
Procedia PDF Downloads 326
72 A Conceptual Model of Sex Trafficking Dynamics in the Context of Pandemics and Provisioning Systems
Authors: Brian J. Biroscak
Abstract:
In the United States (US), “sex trafficking” is defined at the federal level in the Trafficking Victims Protection Act of 2000 as encompassing a number of processes such as recruitment, transportation, and provision of a person for the purpose of a commercial sex act. It involves the use of force, fraud, or coercion, or in which the person induced to perform such act has not attained 18 years of age. Accumulating evidence suggests that sex trafficking is exacerbated by social and environmental stressors (e.g., pandemics). Given that “provision” is a key part of the definition, “provisioning systems” may offer a useful lens through which to study sex trafficking dynamics. Provisioning systems are the social systems connecting individuals, small groups, entities, and embedded communities as they seek to satisfy their needs and wants for goods, services, experiences and ideas through value-based exchange in communities. This project presents a conceptual framework for understanding sex trafficking dynamics in the context of the COVID pandemic. The framework is developed as a system dynamics simulation model based on published evidence, social and behavioral science theory, and key informant interviews with stakeholders from the Protection, Prevention, Prosecution, and Partnership sectors in one US state. This “4 P Paradigm” has been described as fundamental to the US government’s anti-trafficking strategy. The present research question is: “How do sex trafficking systems (e.g., supply, demand and price) interact with other provisioning systems (e.g., networks of organizations that help sexually exploited persons) to influence trafficking over time vis-à-vis the COVID pandemic?” Semi-structured interviews with stakeholders (n = 19) were analyzed based on grounded theory and combined for computer simulation. The first step (Problem Definition) was completed by open coding video-recorded interviews, supplemented by a literature review. 
The model depicts the provision of services for sex trafficking victims and survivors as declining in March 2020, coincident with COVID, but eventually rebounding. The second modeling step (Dynamic Hypothesis Formulation) was completed by open and axial coding of interview segments, as well as consulting peer-reviewed literature. Part of the hypothesized explanation for changes over time is that the sex trafficking system behaves somewhat like a commodities market, with each of the other subsystems exhibiting delayed responses but collectively keeping trafficking levels below what they would be otherwise. The next steps (Model Building & Testing) led to a ‘proof of concept’ model that can be used to conduct simulation experiments and test various action ideas by letting model users step outside the entire system and see it whole. If sex trafficking dynamics unfold as hypothesized, e.g., oscillate post-COVID, then one potential leverage point is to address the lack of information feedback loops between the actual occurrence and consequences of sex trafficking and those who seek to prevent its occurrence, prosecute the traffickers, protect the victims and survivors, and partner with other anti-trafficking advocates. Implications for researchers, administrators, and other stakeholders are discussed.
Keywords: pandemics, provisioning systems, sex trafficking, system dynamics modeling
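The commodity-market analogy with delayed subsystem responses can be illustrated with a minimal stock-and-flow simulation. The two-stock structure, the shock window, and every parameter value below are illustrative assumptions for the sketch, not the authors' calibrated model.

```python
# Minimal stock-and-flow sketch: trafficking activity and a victim-services
# provisioning system coupled through a delayed negative feedback loop, with
# a COVID-like disruption of services. All values are illustrative only.

def simulate(months=60.0, dt=0.25):
    trafficking = 100.0   # stock: index of trafficking activity
    services = 50.0       # stock: capacity of victim-services provisioning
    pressure = 15.0       # constant inflow pressure on trafficking
    suppression = 0.3     # damping effect of services on trafficking
    coverage = 0.8        # service capacity sought per unit of trafficking
    delay = 6.0           # months for provisioning to adjust (first-order)
    history = []
    for step in range(int(months / dt)):
        t = step * dt
        shock = 0.4 if 3.0 <= t < 9.0 else 1.0  # pandemic disruption window
        target = coverage * trafficking * shock
        d_services = (target - services) / delay
        d_trafficking = pressure - suppression * services
        services += d_services * dt
        trafficking += d_trafficking * dt
        history.append((t, trafficking, services))
    return history

hist = simulate()
```

With these assumed values, service capacity declines during the shock window and rebounds afterward, while the adjustment delay produces a damped oscillation around equilibrium, the qualitative pattern hypothesized in the abstract.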
Procedia PDF Downloads 7971
Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico
Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos
Abstract:
Bridges are among the most seismically vulnerable structures in highway transportation systems. The general process for assessing the seismic vulnerability of a bridge involves the evaluation of its overall capacity and demand. One of the most common procedures for obtaining this capacity is a pushover analysis of the structure. Typically, the bridge capacity is assessed using non-linear static methods or non-linear dynamic analyses. The non-linear dynamic approaches use step-by-step numerical solutions to assess the capacity, with the inconvenience of high computational cost. In this study, a non-linear static analysis (‘pushover analysis’) was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The bridge superstructure consists of three simply supported spans with a total length of 76 m: 22 m for each of the extreme spans and 32 m for the central span. The deck width is 14 m, and the concrete slab depth is 18 cm. The bridge is supported by frames of five piers with hollow box-shaped sections. Each pier is 7.05 m high with a diameter of 1.20 m. The numerical model was created using commercial software, considering linear and non-linear elements. In all cases, the piers were represented by frame-type elements with geometrical properties obtained from the structural project and construction drawings of the bridge. The deck was modeled with a mesh of rectangular thin-shell (plate bending and stretching) finite elements. A moment-curvature analysis was performed for the pier sections, considering in each pier the effect of confined concrete and its reinforcing steel. In this way, plastic hinges were defined at the base of the piers to carry out the pushover analysis. In addition, time-history analyses were performed using 19 accelerograms of real earthquakes registered in Guanajuato. 
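The moment-curvature step that defines the plastic hinges can be sketched with a simple fiber discretization of a circular pier section. The material laws here are deliberately crude (a parabolic concrete curve without confinement enhancement, and elastic-perfectly-plastic steel), and every numerical value is a placeholder assumption, not a property of the bridge studied.

```python
import math

# Fiber-section sketch of a moment-curvature analysis for a circular pier.
# All section and material values are illustrative placeholders.
D = 1.20                  # section diameter (m)
R = D / 2.0
FC, EPS_CO = 35e3, 0.002  # concrete strength (kPa) and strain at peak stress
FY, ES = 420e3, 200e6     # steel yield stress (kPa) and modulus (kPa)
AS_TOTAL = 0.02 * math.pi * R * R   # assumed 2% longitudinal steel ratio
N_BARS = 16

def conc_stress(eps):
    if eps <= 0.0:
        return 0.0        # concrete tension ignored
    e = min(eps, EPS_CO)  # parabola up to peak stress, then a flat plateau
    return FC * (2.0 * e / EPS_CO - (e / EPS_CO) ** 2)

def steel_stress(eps):
    return max(-FY, min(FY, ES * eps))   # elastic-perfectly-plastic

def section_forces(phi, c, n_strips=200):
    """Axial force and moment for curvature phi and neutral-axis depth c,
    measured from the extreme compression fiber at y = +R."""
    N = M = 0.0
    for i in range(n_strips):
        y = -R + (i + 0.5) * D / n_strips        # strip centroid
        width = 2.0 * math.sqrt(max(R * R - y * y, 0.0))
        eps = phi * (y - (R - c))                # compression positive
        force = conc_stress(eps) * width * D / n_strips
        N += force
        M += force * y
    for j in range(N_BARS):                      # bar ring at 0.9 R
        y = 0.9 * R * math.cos(2.0 * math.pi * j / N_BARS)
        force = steel_stress(phi * (y - (R - c))) * AS_TOTAL / N_BARS
        N += force
        M += force * y
    return N, M

def moment(phi, axial=0.0):
    """Bisect on the neutral-axis depth until axial equilibrium holds."""
    lo, hi = 1e-6, 2.0 * D
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        N, _ = section_forces(phi, mid)
        if N < axial:
            lo = mid     # deeper neutral axis -> more compression
        else:
            hi = mid
    return section_forces(phi, 0.5 * (lo + hi))[1]

curve = [(phi, moment(phi)) for phi in (0.001 * k for k in range(1, 11))]
```

Sampling moments at increasing curvatures in this way yields the yield and ultimate points from which a bilinear hinge for the pushover model is typically idealized.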
In this way, the displacements produced in the bridge were determined. Finally, a pushover analysis was applied through displacement control of the piers to obtain the overall capacity of the bridge before failure occurs. It was concluded that the lateral deformation of the piers under a critical earthquake in this zone is almost imperceptible, because the geometry and reinforcement demanded by current design standards are excessive compared with the displacement demand. According to the analysis, it was found that the frames built with five piers increase the rigidity in the transverse direction of the bridge. Hence, it is proposed to reduce these frames from five piers to three, maintaining the same geometrical characteristics and the same reinforcement in each pier. The mechanical properties of the materials (concrete and reinforcing steel) were also maintained. Once a pushover analysis was performed considering this configuration, it was concluded that the bridge would continue to exhibit adequate seismic behavior, at least for the 19 accelerograms considered in this study. In this way, material, construction, time, and labor costs would be reduced in this case study.
Keywords: collapse mechanism, moment-curvature analysis, overall capacity, pushover analysis
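The displacement-controlled pushover itself can be sketched for a single cantilever pier whose base hinge follows an idealized bilinear moment-curvature law. The hinge properties and the plastic-hinge length below are hypothetical values chosen for illustration, not results from the bridge studied.

```python
# Pushover sketch of one cantilever pier with a bilinear plastic hinge at
# its base. All hinge properties are hypothetical placeholder values.
H = 7.05           # pier height (m)
D = 1.20           # pier diameter (m)
LP = 0.5 * D       # assumed plastic-hinge length (a common rule of thumb)
MY, PHI_Y = 4500.0, 0.0035   # assumed yield moment (kN*m) and curvature (1/m)
MU, PHI_U = 5800.0, 0.0450   # assumed ultimate moment and curvature

def moment_at(phi):
    """Bilinear moment-curvature law for the base hinge."""
    if phi <= PHI_Y:
        return MY * phi / PHI_Y
    return MY + (MU - MY) * (phi - PHI_Y) / (PHI_U - PHI_Y)

def tip_displacement(phi):
    """Cantilever tip displacement via the plastic-hinge method:
    elastic bending plus plastic rotation concentrated over LP."""
    elastic = min(phi, PHI_Y) * H * H / 3.0
    plastic = max(phi - PHI_Y, 0.0) * LP * (H - 0.5 * LP)
    return elastic + plastic

# Displacement-controlled sweep: step the base curvature up to ultimate and
# record (tip displacement, base shear) pairs -> the capacity curve.
n = 100
curve = [(tip_displacement(PHI_U * i / n), moment_at(PHI_U * i / n) / H)
         for i in range(1, n + 1)]
yield_disp = PHI_Y * H * H / 3.0   # displacement at first yield
ult_disp, ult_shear = curve[-1]    # displacement capacity and peak shear
```

The ratio ult_disp / yield_disp is the displacement ductility that such an analysis compares against the (much smaller) demands obtained from the recorded accelerograms.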
Procedia PDF Downloads 151