Search results for: instrument validation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2387

197 The Community Stakeholders’ Perspectives on Sexual Health Education for Young Adolescents in Western New York, USA: A Qualitative Descriptive Study

Authors: Sadandaula Rose Muheriwa Matemba, Alexander Glazier, Natalie M. LeBlanc

Abstract:

In the United States, up to 10% of girls and 22% of boys aged 10-14 years have had sex, 5% of them had their first sex before the age of 11, and first sexual encounters are reported at ages as young as 8 years. Over 4,000 adolescent girls aged 10-14 become pregnant every year, and 2.6% of the abortions in 2019 were among adolescents below 15 years. Despite these negative outcomes, little research has been conducted to understand the sexual health education offered to young adolescents ages 10-14. Early sexual health education is one of the most effective strategies for lowering the rates of early pregnancies, HIV infections, and other sexually transmitted infections. Such knowledge is necessary to inform best practices for supporting the healthy sexual development of young adolescents and preventing adverse outcomes. This qualitative descriptive study was conducted to explore community stakeholders' experiences in sexual health education for young adolescents ages 10-14 and to ascertain the young adolescents' sexual health support needs. Maximum variation purposive sampling was used to recruit a total sample of 13 community stakeholders, including health education teachers, members of youth-based organizations, and Adolescent Clinic providers, in Rochester, New York State, United States of America, from April to June 2022. Data were collected through semi-structured individual in-depth interviews and were analyzed using MAXQDA following a conventional content analysis approach. Triangulation, team analysis, and respondent validation were also employed to enhance study rigor. The participants were predominantly female (92.3%) and comprised Caucasians (53.8%), Black/African Americans (38.5%), and Indian-Americans (7.7%), with ages ranging from 23 to 59. Four themes emerged: the perceived need for early sexual health education, preferred timing to initiate sexual health conversations, perceived age-appropriate content for young adolescents, and initiating sexual health conversations with young adolescents. The participants described both encouraging and concerning experiences. Most participants were concerned that young adolescents are living in a sexually driven environment and are not given the sexual health education they need, even though they are open to learning about sexual health. There was consensus on the need to initiate sexual health conversations early, at 4 years of age or younger, to standardize sexual health education in schools, and to make age-appropriate sexual health education progressive. These results show that early sexual health education is essential if young adolescents are to delay sexual debut and prevent early pregnancies, and if the goal of ending the HIV epidemic is to be achieved. However, research is needed on a larger scale to understand how best to implement sexual health education among young adolescents and to inform interventions for implementing contextually relevant sexuality education for this population. These findings call for increased multidisciplinary efforts in promoting early sexual health education for young adolescents.

Keywords: community stakeholders’ perspectives, sexual development, sexual health education, young adolescents

Procedia PDF Downloads 52
196 A Design Methodology and Tool to Support Ecodesign Implementation in Induction Hobs

Authors: Anna Costanza Russo, Daniele Landi, Michele Germani

Abstract:

Nowadays, the European Ecodesign Directive has emerged as a new approach to integrating environmental concerns into product design and related processes. Ecodesign aims to minimize environmental impacts throughout the product life cycle without compromising performance and cost. In addition, the recent Ecodesign Directives require products that are increasingly eco-friendly and eco-efficient while preserving high performance. Measuring performance is very important for producers of electric cooking ranges, hobs, ovens, and grills for household use, and low power consumption represents a powerful selling point, also in terms of ecodesign requirements. The Ecodesign Directive provides a clear framework for the sustainable design of products, and it was extended in 2009 to all energy-related products, i.e., products with an impact on energy consumption during use. The European Regulation establishes ecodesign measures for ovens, hobs, and kitchen hoods for domestic use; the energy efficiency of such products is a significant environmental aspect of the use phase, which is the most impactful stage of the life cycle. It is important that product parameters and performance are not affected by ecodesign requirements from a user's point of view, and the benefits of reducing energy consumption in the use phase should offset any possible environmental impact in the production stage. Accurate measurements of cooking appliance performance are essential to help the industry produce more energy-efficient appliances. The development of eco-driven products requires eco-innovation and ecodesign tools to support sustainability improvement. Ecodesign tools should be practical and focused on specific eco-objectives in order to be widely adopted. The main scope of this paper is the development, implementation, and testing of an innovative tool that could improve the sustainable design of induction hobs. In particular, a prototypical software tool is developed to simulate the energy performance of induction hobs. The tool is built around a multiphysics model that simulates the energy performance and efficiency of induction hobs starting from the design data. The multiphysics model is composed of an electromagnetic simulation and a thermal simulation. The electromagnetic simulation calculates the eddy currents induced in the pot, which lead to Joule heating of the material. The thermal simulation then estimates the energy consumption during the operational phase. The Joule heating caused by the eddy currents is the output of the electromagnetic simulation and the input of the thermal one. The aims of the paper are the development of integrated tools and methodologies for virtual prototyping in the context of ecodesign. This tool could be a revolutionary instrument in the field of industrial engineering: it gives consideration to the environmental aspects of product design and focuses on the ecodesign of energy-related products in order to achieve a reduced environmental impact.
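
To make the electromagnetic-thermal coupling concrete, below is a minimal Python sketch of the one-way coupling described above: an electromagnetic step produces a Joule heating term that feeds a thermal step. All solver internals, parameter names, and numeric values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def electromagnetic_step(coil_current, frequency, pot_conductivity):
    """Toy stand-in for the electromagnetic solver: returns a Joule heating
    field (W/m^3) induced by eddy currents in the pot base."""
    # 2.5e-16 is an arbitrary scaling constant chosen only to keep the toy
    # numbers plausible; the real model solves the electromagnetic problem.
    return pot_conductivity * (coil_current * frequency) ** 2 * 2.5e-16 * np.ones(100)

def thermal_step(temperature, joule_heating, dt, heat_capacity=450.0, loss_coeff=0.5):
    """Toy lumped thermal update: Joule heating raises the pot temperature,
    convective losses pull it back toward ambient (20 C)."""
    ambient = 20.0
    dT = (joule_heating.mean() - loss_coeff * (temperature - ambient)) / heat_capacity
    return temperature + dT * dt

# One-way coupling loop: the EM output (Joule heating) is the thermal input.
temperature, energy_index = 20.0, 0.0
for _ in range(600):  # 10 minutes with dt = 1 s
    q = electromagnetic_step(coil_current=20.0, frequency=25e3, pot_conductivity=1.4e6)
    temperature = thermal_step(temperature, q, dt=1.0)
    energy_index += q.mean() * 1.0  # accumulated heating per unit volume (J/m^3)
print(f"final pot temperature ~ {temperature:.0f} C, energy index = {energy_index:.0f} J/m^3")
```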

Keywords: ecodesign, energy efficiency, induction hobs, virtual prototyping

Procedia PDF Downloads 235
195 Inputs and Outputs of Innovation Processes in the Colombian Services Sector

Authors: Álvaro Turriago-Hoyos

Abstract:

Most research tends to see innovation as an explanatory factor in achieving high levels of competitiveness and productivity. More recent studies have begun to analyze the determinants of innovation in the services sector, as opposed to the much-discussed industrial sector of a country's economy. This research paper focuses on the services sector in Colombia, one of Latin America's fastest-growing and biggest economies. Over the past decade, much of Colombia's economic expansion has relied on commodity exports (mainly oil and coffee), whilst the industrial sector has performed relatively poorly. Such developments highlight the potential of the innovative role played by the services sector of the Colombian economy and its future growth prospects. This research paper analyzes the relationship between inputs, which are at the same time internal sources of innovation (such as R&D activities), and external sources that are improved by technology acquisition. The outputs are basically the four kinds of innovation that the OECD Oslo Manual recognizes: product, process, marketing, and organizational innovations. The instrument used to measure this input-output relationship is based on Knowledge Production Function approaches. We run Probit models in order to identify the existing relationships between the above inputs and outputs, but also to identify spill-overs derived from interactions among the components of the value chain of the services firms analyzed: customers, suppliers, competitors, and complementary firms. Data are obtained from the Colombian National Administrative Department of Statistics for the period 2008 to 2013, as published in the II and III Colombian National Innovation Surveys. A short summary of the results leads to the conclusion that firm size and a firm's level of technological development turn out to be important discriminating factors in the description of the innovative process at the firm level. The model's outcomes show a positive impact of both R&D and technology acquisition investment on the probability of introducing any kind of innovation. Cooperation agreements with customers, research institutes, competitors, and suppliers are also significant. Belonging to a particular industrial group is an important determinant, but only for product and organizational innovations. It is possible to establish that Health Services, Education, Computer, Wholesale Trade, and Financial Intermediation are the ISIC sectors that report the highest frequencies in the considered set of firms. Those five sectors, of the sixteen considered, explained in all cases more than half of the total of all kinds of innovations. Product innovation, followed by marketing innovation, shows the highest results. Breaking the same set of firms down by size and by membership in the high- and low-tech services sectors shows that the larger the firm, the larger the number of innovations, and also that high-tech firms consistently show better innovation performance.
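
As context for the method, the following is a minimal Python sketch (synthetic data, hypothetical column names; not the authors' code or dataset) of a Knowledge Production Function-style Probit: a binary innovation outcome regressed on R&D and technology-acquisition inputs, with marginal effects on the probability of innovating.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "rd_invest": rng.gamma(2.0, 1.0, n),      # internal R&D spending (input)
    "tech_acq": rng.gamma(1.5, 1.0, n),       # external technology acquisition
    "coop_customers": rng.integers(0, 2, n),  # cooperation agreement dummy
    "firm_size": rng.normal(0.0, 1.0, n),
})
# Simulated latent propensity, only so the example is self-contained.
latent = (0.5 * df["rd_invest"] + 0.4 * df["tech_acq"]
          + 0.3 * df["coop_customers"] + rng.normal(0.0, 1.0, n))
df["product_innovation"] = (latent > latent.mean()).astype(int)

X = sm.add_constant(df[["rd_invest", "tech_acq", "coop_customers", "firm_size"]])
model = sm.Probit(df["product_innovation"], X).fit(disp=False)
print(model.summary())
print(model.get_margeff().summary())  # marginal effects on P(innovation)
```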

Keywords: Colombia, determinants of innovation, innovation, services sector

Procedia PDF Downloads 239
194 Time-Domain Nuclear Magnetic Resonance as a Potential Analytical Tool to Assess Thermisation in Ewe's Milk

Authors: Alessandra Pardu, Elena Curti, Marco Caredda, Alessio Dedola, Margherita Addis, Massimo Pes, Antonio Pirisi, Tonina Roggio, Sergio Uzzau, Roberto Anedda

Abstract:

Some of the artisanal cheese products of European countries certified as PDO (Protected Designation of Origin) are made from raw milk. To recognise potential frauds (e.g. pasteurisation or thermisation of milk intended for raw milk cheese production), the alkaline phosphatase (ALP) assay is currently applied only for pasteurisation, although it is known to have notable limitations for the validation of the ALP enzymatic state in non-bovine milk. It is known that frauds considerably impact customers and certificating institutions, sometimes resulting in damage to the product image and potential economic losses for cheesemaking producers. Robust, validated, and univocal analytical methods are therefore needed to allow food control and security organisms to recognise a potential fraud. In an attempt to develop a new reliable method to overcome this issue, Time-Domain Nuclear Magnetic Resonance (TD-NMR) spectroscopy has been applied in the work described here. Daily fresh milk was analysed raw (680 µL in each 10-mm NMR glass tube) at least in triplicate. Thermally treated samples were also produced by placing each NMR tube of fresh raw milk in water pre-heated at temperatures from 68°C up to 72°C for up to 3 min, with continuous agitation, and quench-cooling it to 25°C in a water and ice bath. Raw and thermally treated samples were analysed in terms of 1H T2 transverse relaxation times with a CPMG sequence (recycle delay: 6 s, interpulse spacing: 0.05 ms, 8000 data points), and quasi-continuous distributions of T2 relaxation times were obtained by CONTIN analysis. In line with previous data collected by high-field NMR techniques, a decrease in the spin-spin relaxation constant T2 of the predominant 1H population was detected in heat-treated milk as compared to raw milk. The decrease of the T2 parameter is consistent with changes in chemical exchange and diffusive phenomena, likely associated with changes in milk protein (i.e. whey protein and casein) arrangement promoted by heat treatment. Furthermore, experimental data suggest that the molecular alterations are strictly dependent on the specific heat treatment conditions (temperature/time). Such molecular variations in milk, which are likely transferred to cheese during cheesemaking, highlight the possibility of extending the TD-NMR technique directly to cheese in order to develop a method for assessing fraud related to the use of a milk thermal treatment in PDO raw milk cheese. The results suggest that TD-NMR assays might pave a new way to the detailed characterisation of heat treatments of milk.
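
For readers unfamiliar with CONTIN-style processing, here is a minimal Python sketch (synthetic decay, assumed echo spacing; not the study's code) of extracting a quasi-continuous T2 distribution from a CPMG echo train via Tikhonov-regularised non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic CPMG echo train: 8000 echoes; echo spacing assumed to be twice
# the 0.05 ms interpulse spacing quoted in the abstract.
t = np.arange(1, 8001) * 0.1e-3                       # echo times (s)
signal = 0.7 * np.exp(-t / 0.1) + 0.3 * np.exp(-t / 0.02)
signal += np.random.default_rng(1).normal(0, 1e-3, t.size)

# Kernel: each column is a decaying exponential for one candidate T2.
T2_grid = np.logspace(-3, 0, 100)                     # 1 ms to 1 s
K = np.exp(-t[:, None] / T2_grid[None, :])

# Tikhonov regularisation: append lam*I rows, solve with non-negativity.
lam = 0.1
K_aug = np.vstack([K, lam * np.eye(T2_grid.size)])
s_aug = np.concatenate([signal, np.zeros(T2_grid.size)])
amplitudes, _ = nnls(K_aug, s_aug)

peak = T2_grid[np.argmax(amplitudes)]
print(f"dominant T2 component ~ {peak * 1e3:.0f} ms")  # near the 100 ms input
```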

Keywords: cheese fraud, milk, pasteurisation, TD-NMR

Procedia PDF Downloads 212
193 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks, or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, a FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell, and surface elements; this choice of building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set, a limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can also be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure combining thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the ability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
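
A minimal Python sketch (hypothetical data layout, made-up coordinates) of the validation metric described above: per-ring distances between experimental and simulated stent positions, split into longitudinal and transverse components and checked against the 5 mm bound.

```python
import numpy as np

# Ring centroids (mm) from the overlapped frontal/sagittal images: rows are
# rings, columns are (x transverse, y transverse, z longitudinal).
experiment = np.array([[0.0, 0.0, 0.0], [1.2, 0.5, 15.0], [2.0, 1.1, 30.0]])
simulation = np.array([[0.3, 0.2, 0.5], [1.0, 0.9, 15.8], [2.6, 1.4, 31.1]])

delta = simulation - experiment
longitudinal = np.abs(delta[:, 2])                  # along the vessel axis
transverse = np.linalg.norm(delta[:, :2], axis=1)   # in-plane offset

for name, d in [("longitudinal", longitudinal), ("transverse", transverse)]:
    print(f"{name}: mean = {d.mean():.2f} mm, max = {d.max():.2f} mm, "
          f"within 5 mm bound: {bool((d <= 5.0).all())}")
```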

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 167
192 Comparison of Two Home Sleep Monitors Designed for Self-Use

Authors: Emily Wood, James K. Westphal, Itamar Lerner

Abstract:

Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders like sleep apnea and insomnia. Consequently, there is a growing need for cheaper yet reliable means of measuring sleep, preferably autonomously by subjects in their own homes. Over the last decade, a variety of devices for self-monitoring of sleep became available on the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems that have published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared to each other by independent researchers. The current study aimed to compare the performance of the DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: 26 participants completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological, or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, the time they woke up, and the number of awakenings occurring during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating Wake, combined N1/N2, N3, and Rapid Eye Movement sleep), were extracted and aligned upon retrieval. For analysis, the number of awakenings each night was defined as the number of runs of four or more consecutive wake epochs between sleep onset and termination. Total sleep time (TST) and the number of awakenings were compared to subjects' sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from each device were compared epoch by epoch to calculate the agreement between the two devices using Cohen's kappa. All analyses were performed using Matlab 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times spent asleep than the times reported by each device (M = 448 minutes for sleep logs compared to M = 406 and M = 345 minutes for the DREEM and Z-Machine, respectively; both ps < 0.05). Linear correlations between the sleep log and each device were higher for the DREEM than for the Z-Machine for both TST and the number of awakenings; likewise, the mean absolute bias between the sleep logs and each device was higher for the Z-Machine for both TST (p < 0.001) and awakenings (p < 0.04). There was some indication that these effects were stronger for the second night than for the first. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were in detecting N2 and REM sleep, while N3 showed high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home.
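
A minimal Python sketch (not the study's Matlab code; stage codes are hypothetical: 0 = Wake, 1 = N1/N2, 2 = N3, 3 = REM) of the two analysis steps defined above: counting awakenings as runs of four or more consecutive wake epochs between sleep onset and termination, and epoch-by-epoch agreement via Cohen's kappa.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def count_awakenings(hypnogram, min_run=4):
    """Count runs of >= min_run consecutive wake epochs between sleep
    onset (first non-wake epoch) and termination (last non-wake epoch)."""
    sleep_idx = np.flatnonzero(np.asarray(hypnogram) != 0)
    if sleep_idx.size == 0:
        return 0
    onset, termination = sleep_idx[0], sleep_idx[-1]
    awakenings, run = 0, 0
    for stage in hypnogram[onset:termination + 1]:
        run = run + 1 if stage == 0 else 0
        if run == min_run:   # count each run once, when it reaches min_run
            awakenings += 1
    return awakenings

# Toy 30-s-epoch hypnograms from the two devices for the same night.
dreem    = [0, 0, 1, 1, 2, 0, 0, 0, 0, 1, 3, 3, 1, 0]
zmachine = [0, 0, 1, 2, 2, 0, 0, 0, 1, 1, 3, 1, 1, 0]
print("awakenings (DREEM):", count_awakenings(dreem))
print("epoch-by-epoch kappa:", round(cohen_kappa_score(dreem, zmachine), 2))
```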

Keywords: DREEM, EEG, sleep monitoring, Z-Machine

Procedia PDF Downloads 80
191 Real-Time Monitoring of Complex Multiphase Behavior in a High Pressure and High Temperature Microfluidic Chip

Authors: Renée M. Ripken, Johannes G. E. Gardeniers, Séverine Le Gac

Abstract:

Controlling the multiphase behavior of aqueous biomass mixtures is essential when working in the biomass conversion industry. Here, the vapor/liquid equilibria (VLE) of ethylene glycol, glycerol, and xylitol were studied for temperatures between 25 and 200 °C and pressures of 1 to 10 bar. These experiments were performed in a microfluidic platform, which exhibits excellent heat transfer properties, so that equilibrium is reached quickly. Firstly, the saturated vapor pressure as a function of temperature and substrate mole fraction was calculated using AspenPlus with a Redlich-Kwong-Soave Boston-Mathias (RKS-BM) model. Secondly, we developed a high-pressure and high-temperature microfluidic set-up for experimental validation. Furthermore, we studied the multiphase flow pattern that occurs after the saturation temperature is reached. A glass-silicon microfluidic device containing a 0.4 or 0.2 m long meandering channel with a depth of 250 μm and a width of 250 or 500 μm was fabricated using standard microfabrication techniques. This device was placed in a dedicated chip-holder, which includes a ceramic heater on the silicon side. The temperature was controlled and monitored by three K-type thermocouples: two were located between the heater and the silicon substrate, one to set the temperature and one to measure it, and the third was placed in a 300 μm wide and 450 μm deep groove on the glass side to determine the heat loss over the silicon. An adjustable back-pressure regulator and a pressure meter were added to control and evaluate the pressure during the experiment. Aqueous biomass solutions (10 wt%) were pumped at a flow rate of 10 μL/min using a syringe pump, and the temperature was slowly increased until the theoretical saturation temperature for the pre-set pressure was reached. First, and surprisingly, a significant difference was observed between our theoretical saturation temperatures and the experimental results. The experimental values were tens of degrees higher than the calculated ones, and in some cases saturation could not be achieved. This discrepancy can be explained in different ways. Firstly, the pressure in the microchannel is locally higher due to both the thermal expansion of the liquid and the Laplace pressure that has to be overcome before a gas bubble can be formed. Secondly, superheating effects are likely to be present. Next, once saturation was reached, the flow pattern of the gas/liquid multiphase system was recorded. In our device, the point of nucleation can be controlled by taking advantage of the pressure drop across the channel and the accurate control of the temperature. Specifically, a higher temperature resulted in nucleation further upstream in the channel. As the void fraction increases downstream, the flow regime changes along the channel from bubbly flow to Taylor flow and later to annular flow. All three flow regimes were observed simultaneously. The findings of this study are key for the development and optimization of a microreactor for hydrogen production from biomass.
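
The Laplace-pressure argument can be made quantitative with the standard relation for a spherical nucleus; the numbers below are assumed values for illustration, not measurements from the paper.

```latex
% Laplace pressure for nucleating a spherical bubble of radius r in a liquid
% with surface tension sigma (standard relation, not taken from the paper):
\[
  \Delta P = P_{\text{inside}} - P_{\text{outside}} = \frac{2\sigma}{r}
\]
% Worked example with assumed values: for water near 150 C
% (sigma ~ 0.048 N/m) and a 1-micron nucleus,
\[
  \Delta P = \frac{2 \times 0.048\,\mathrm{N/m}}{10^{-6}\,\mathrm{m}} \approx 0.96\,\mathrm{bar},
\]
% a pressure excess of order 1 bar that raises the local saturation
% temperature, consistent with the delayed boiling observed.
```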

Keywords: biomass conversion, high pressure and high temperature microfluidics, multiphase, phase diagrams, superheating

Procedia PDF Downloads 193
190 Aligning Informatics Study Programs with Occupational and Qualifications Standards

Authors: Patrizia Poscic, Sanja Candrlic, Danijela Jaksic

Abstract:

The University of Rijeka, Department of Informatics, participated in the Stand4Info project, co-financed by the European Union, with the main aim of aligning study programs with occupational and qualifications standards in the field of informatics. A brief overview of our research methodology, goals, and deliverables is given. Our main research and project objectives were: a) development of occupational standards, qualification standards, and study programs based on the Croatian Qualifications Framework (CROQF), b) higher education quality improvement in the field of information and communication sciences, c) increasing the employability of students of information and communication technology (ICT) and science, and d) continuously improving the competencies of teachers in accordance with the principles of CROQF. CROQF is a reform instrument in the Republic of Croatia for regulating the system of qualifications at all levels through qualifications standards based on learning outcomes and following the needs of the labor market, individuals, and society. The central elements of CROQF are learning outcomes: competencies acquired by the individual through the learning process and proved afterward. The place of each acquired qualification is set by the level of the learning outcomes belonging to that qualification. The placement of qualifications at their respective levels allows the comparison and linking of different qualifications, as well as the linking of Croatian qualification levels to the levels of the European Qualifications Framework and the Qualifications Framework of the European Higher Education Area. This research produced three proposals of occupational standards at the undergraduate study level (System Analyst, Developer, ICT Operations Manager) and two at the graduate (master) level (System Architect, Business Architect). For each occupational standard, employers provided a list of key tasks and the associated competencies necessary to perform them. A set of competencies required for each particular job in the workplace was defined, and each set was described in more detail by its individual competencies. Based on the sets of competencies from the occupational standards, sets of learning outcomes were defined, and competencies from the occupational standards were linked with learning outcomes. For each learning outcome, as well as for each set of learning outcomes, it was necessary to specify the verification method, material, and human resources. The task of the project was to suggest revisions and improvements of the existing study programs. It was necessary to analyze the existing programs and determine how they meet and fulfill the defined learning outcomes. This way, one could see: a) which learning outcomes from the qualifications standards are covered by existing courses, b) which learning outcomes have yet to be covered, c) whether they are covered by mandatory or elective courses, and d) whether some courses are unnecessary or redundant. Overall, the main research results are: a) completed proposals of qualification and occupational standards in the field of ICT, b) revised curricula of undergraduate and master study programs in ICT, c) a sustainable partnership and stakeholder association network, d) a knowledge network informing the public and stakeholders (teachers, students, and employers) about the importance of CROQF establishment, and e) teachers educated in innovative methods of teaching.
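
The coverage analysis in points a) through d) amounts to set operations over outcome mappings; here is a minimal Python sketch with hypothetical identifiers, offered only to make the bookkeeping concrete.

```python
# Hypothetical qualification-standard outcomes and course mappings.
standard_outcomes = {"LO1", "LO2", "LO3", "LO4", "LO5"}
courses = {
    "Databases":        {"outcomes": {"LO1", "LO2"}, "mandatory": True},
    "Systems Analysis": {"outcomes": {"LO3"},        "mandatory": False},
    "Elective X":       {"outcomes": set(),          "mandatory": False},
}

covered = set().union(*(c["outcomes"] for c in courses.values()))
print("covered outcomes:", sorted(covered & standard_outcomes))    # a)
print("still uncovered: ", sorted(standard_outcomes - covered))    # b)
for name, c in courses.items():                                    # c), d)
    kind = "mandatory" if c["mandatory"] else "elective"
    status = "contributes" if c["outcomes"] & standard_outcomes else "redundant?"
    print(f"{name} ({kind}): {status}")
```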

Keywords: study program, qualification standard, occupational standard, higher education, informatics and computer science

Procedia PDF Downloads 116
189 Caring for Children with Intellectual Disabilities in Malawi: Parental Psychological Experiences and Needs

Authors: Charles Masulani Mwale

Abstract:

Background: It is argued that 85% of children with disabilities live in resource-poor countries where few disability services are available. The majority of these children, and their parents, suffer greatly as a result of the disability and its associated stigmatization, leading to marginalized lives. These parents also experience more stress and mental health problems, such as depression, compared with families of typically developing children. There is little research from Africa addressing these issues, especially among parents of intellectually disabled children. WHO encourages research on the impact that children with disabilities have on their families and on appropriate training and support for the families so that they can promote the child's development and well-being. This study investigated parenting experiences, mechanisms of coping with these challenges, and psychosocial needs while caring for children with intellectual disabilities in both rural and urban settings of Lilongwe and Mzuzu. Methods: This is part of a larger mixed-methods study aimed at developing a contextualized psychosocial intervention for parents of intellectually disabled children. 16 focus group discussions and four in-depth interviews were conducted with parents in the catchment areas of St John of God and Children of Blessings in the cities of Mzuzu and Lilongwe, respectively. Ethical clearance was obtained from COMREC. Data were stored in NVivo software for easy retrieval and management. All interviews were tape-recorded, transcribed, and translated into English. Note-taking was performed during all observations. Data from the interviews, the notes, and the observations were triangulated for validation and reliability. Results: Caring for intellectually disabled children comes with a number of challenges. Parents experience stigma and discrimination; fear for the child's future; carry self-blame and guilt; are coerced by neighbors to kill the disabled child; and fear violence by and to the child. Their needs include respite relief, improved access to disability services, education on disability management, and financial support. For their emotional stability, parents cope by sharing with others and turning to God, while others use poor coping mechanisms such as alcohol use. Discussion and Recommendation: Apart from neighbors' coercion to end the child's life, the findings of this study are similar to those of studies done in other countries such as Kenya and Pakistan. It is recommended that parents be educated on disability, its causes, and its management to allay fears of the unknown. Community education is also crucial to promote community inclusiveness and correct prevailing myths associated with disability. Disability institutions ought to intensify individual as well as group counseling services for these parents. Further studies need to be done to design culturally appropriate and specific psychosocial interventions for the parents to promote their psychological resilience.

Keywords: psychological distress, intellectual disability, psychosocial interventions, mental health, psychological resilience, children

Procedia PDF Downloads 418
188 Expressing Locality in Learning English: A Study of English Textbooks for Junior High School Year VII-IX in Indonesia Context

Authors: Agnes Siwi Purwaning Tyas, Dewi Cahya Ambarwati

Abstract:

This paper concerns language learning as it develops through habit formation and a constructive process, while also exercising an oppressive power that constructs the learners. As a locus of discussion, the investigation problematizes the transfer of the English language to Indonesian students of junior high school through the use of the English textbooks 'Real Time: An Interactive English Course for Junior High School Students Year VII-IX'. English has long performed as a global language, and there is a demand upon non-native English speakers to master the language if they desire to become internationally recognized individuals. Generally, English teachers teach the language in accordance with the nature of language learning, in which they are trained and expected to teach the language within the culture of the target language. This provides a potential soft cultural penetration of a foreign ideology through language transmission. In the context of Indonesia, learning English as an international language is considered dilemmatic. Most English textbooks in Indonesia incorporate cultural elements of the target language, which to some extent may challenge sensitivity towards local cultural values. On the other hand, local teachers demand more English textbooks for junior high school students that can facilitate the cultural dissemination of both local and global values and promote learners' cultural traits of both cultures, to avoid misunderstanding and confusion. This also aims to support language learning as a bidirectional process instead of an instrument of oppression. However, sensitizing and localizing this foreign language is not sufficient to restrain its soft infiltration. In due course, domination persists, making English an authoritative language and positioning the locality as 'the other'. Such a critical premise has led to a discursive analysis of how the cultural elements of the target language are presented in the textbooks and whether the local characteristics of Indonesia are able to gradually reduce the degree of the foreign oppressive ideology. The three textbooks researched were written by a non-Indonesian author, edited by two Indonesian editors, and published by a local commercial publishing company, PT Erlangga. The analytical elaboration examines the cultural characteristics in the forms of names, terminologies, places, objects, and imagery (not the linguistic aspect) of both cultural domains: English and Indonesian. Comparisons as well as categorizations were made to identify the cultural traits of each language and scrutinize the contextual analysis. In the analysis, 128 foreign elements and 27 local elements were found in the textbook for grade VII, 132 foreign elements and 23 local elements in the textbook for grade VIII, and 144 foreign elements and 35 local elements in the grade IX textbook, demonstrating the unequal distribution of the two cultures. Even though the ideal pedagogical approach to English learning moves in a different direction by means of inserting local elements, the learners are continuously exposed to the culture of the target language and forced to internalize concepts of value under the influence of the target language, which tends to marginalize their native culture.

Keywords: bidirectional process, English, local culture, oppression

Procedia PDF Downloads 242
187 Analytical, Numerical, and Experimental Research Approaches to Influence of Vibrations on Hydroelastic Processes in Centrifugal Pumps

Authors: Dinara F. Gaynutdinova, Vladimir Ya Modorsky, Nikolay A. Shevelev

Abstract:

The problem under research is that of unpredictable modes occurring in a two-stage centrifugal hydraulic pump as a result of hydraulic processes caused by vibrations of structural components. Numerical, analytical, and experimental approaches are considered. A hypothesis was developed that the problem of unpredictable pressure decrease at the second stage of centrifugal pumps is caused by cavitation effects occurring upon vibration. To date, the problem has been studied both experimentally and theoretically. The theoretical study was conducted numerically and analytically. Hydroelastic processes in the dynamic "liquid-deformed structure" system were numerically modelled and analysed. Using the ANSYS CFX engineering analysis package and the computing capacity of a supercomputer, the cavitation parameters were established to depend on the vibration parameters. A domain of influence of vibration amplitudes and frequencies on the concentration of cavitation bubbles was formulated. The obtained numerical solution was verified using the CFM program package developed at PNRPU. The package is based on a system of hyperbolic and elliptic partial differential equations. The system is solved using one of the finite-difference method options, the particle-in-cell method, which defines the problem solution algorithm. The obtained numerical solution was also verified analytically by model problem calculations using known analytical solutions for in-pipe piston movement and cantilever rod end-face impact. An infrastructure consisting of an experimental installation for research on fast hydrodynamic processes and a supercomputer connected by a high-speed network was created to verify the obtained numerical solutions. Physical experiments included the measurement, recording, processing, and analysis of data for fast-process research using a National Instruments signal measurement system and LabVIEW software. The end face of the model chamber oscillated during the physical experiments and thus loaded the hydraulic volume. The loading frequency varied from 0 to 5 kHz. The length of the operating chamber varied from 0.4 to 1.0 m. Additional loads weighed from 2 to 10 kg. The liquid column varied from 0.4 to 1 m in height. The liquid pressure history was registered. The experiment showed the dependence of the forced system oscillation amplitude on the loading frequency at various values of the operating chamber's geometrical dimensions, the liquid column height, and the structure weight. Maximum pressure oscillation amplitudes (in the basic variant) were observed at loading frequencies of approximately 1.5 kHz. These results match the analytical and numerical solutions in ANSYS and CFM.
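
The resonance peak near 1.5 kHz can be read against the textbook amplitude response of a driven, damped oscillator; the relation below is standard background, not a formula taken from the paper.

```latex
% Amplitude response of a driven, damped oscillator (textbook relation,
% offered only as context for the ~1.5 kHz resonance peak):
\[
  A(\omega) = \frac{F_0/m}{\sqrt{\bigl(\omega_0^2 - \omega^2\bigr)^2 + \bigl(2\zeta\omega_0\omega\bigr)^2}}
\]
% A(omega) peaks near the natural frequency omega_0, which for a liquid
% column shifts with chamber length, column height, and attached mass,
% consistent with the reported dependence on those parameters.
```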

Keywords: computing experiment, hydroelasticity, physical experiment, vibration

Procedia PDF Downloads 226
186 Local Community's Response on Post-Disaster and Role of Social Capital towards Recovery Process: A Case Study of Kaminani Community in Bhaktapur Municipality after 2015 Gorkha Nepal Earthquake

Authors: Lata Shakya, Toshio Otsuki, Saori Imoto, Bijaya Krishna Shrestha, Umesh Bahadur Malla

Abstract:

The 2015 Gorkha Nepal earthquake damaged human settlements in 14 districts of Nepal. The historic core areas of three principal cities, namely Kathmandu, Lalitpur, and Bhaktapur, including numerous traditional 'Newari' settlements in the peripheral areas, either collapsed or were severely damaged. Despite the attempts of the Government of Nepal and (international) non-government organisations towards disaster risk management, through the preparation of policies and guidelines and the implementation of community-based activities, the recent Gorkha earthquake demonstrated inadequate preparedness, poor implementation of legal instruments, resource constraints, and managerial weakness. However, social capital, through community-based institutions, self-help attitudes, and community bonds, helped greatly not only in rescue and relief operations but also in post-disaster temporary shelter living, thereby exhibiting the resilient power of the local community. Conducting a detailed case study of the 'Kaminani' community, with 42 houses in ward no. 16 of Bhaktapur municipality, this paper analyses the local community's response and activities during the Gorkha earthquake in rescue and relief operations as well as in post-disaster work. Leadership, the existence of internal/external aid, and physical and human support are also analyzed. Social resources and networking are also explained through a critical review of the existing community organisations and their activities. The research methodology includes a literature review, a field survey, and interviews with community leaders and residents based on a semi-structured questionnaire. The study reveals that the community carried out its recovery process in four phases: (i) management of the emergency evacuation, (ii) construction of community-owned temporary shelters for individuals, (iii) demolition of the upper floors of the damaged houses, and (iv) planning for collaborative housing reconstruction. As territory-based organisations, religion-based agencies, and aim-based institutions have existed in the survey area since pre-disaster times, it can be assumed that the community activists, including the leaders, are well experienced in creating aim-based groups and managing teamwork to deal with various issues and problems collaboratively. Physical and human support, including partial financial aid from external sources as a result of community leaders' personal networking, was extended to the community members. Thus, human/social resources and personal/social networks play a crucial role in the recovery process; to build such social capital, a community should develop this potential from pre-disaster times.

Keywords: Gorkha Nepal earthquake, local community, recovery process, social resource, social network

Procedia PDF Downloads 229
185 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness

Authors: Keda Li, Hong Hu

Abstract:

Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson's ratio behaviors and tunable properties. Conventional auxetic lattice structures, in which the deformation process is governed by a bending-dominated mechanism, have faced the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structure design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame, and the stretching-dominated deformation mechanism was determined according to Maxwell's stability criterion. Finite element (FE) models of 2D lattice structures in stainless steel were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, the failure process, and the compressive elastic properties. Based on the computational simulation, a parametric analysis was performed to investigate the effect of the structural parameters on Poisson's ratio and the mechanical properties. A geometrical optimization was then implemented to achieve the optimal Poisson's ratio for maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a 3D geometric configuration using the orthogonal splicing method. The numerical results for the 2D and 3D structures under quasi-static compressive loading were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios, and specific energy absorption. As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson's ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behaviors, energy absorption capability, and yield strength of the proposed structure are adjustable through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical prediction in this study suggests that the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a simultaneous demand for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.
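
For reference, Maxwell's stability criterion invoked above has the standard pin-jointed-frame form shown below; this is textbook background, not an equation quoted from the paper.

```latex
% Maxwell's stability criterion for pin-jointed frames (standard form;
% b = number of struts, j = number of joints):
\[
  M = b - 2j + 3 \ \ \text{(2D)}, \qquad M = b - 3j + 6 \ \ \text{(3D)}
\]
% M >= 0 indicates a stretching-dominated (stiffer, stronger) lattice,
% whereas M < 0 indicates a bending-dominated one.
```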

Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb

Procedia PDF Downloads 64
184 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design

Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez

Abstract:

Coffee is the second most consumed commodity worldwide, yet it also generates colossal waste. Proper management of coffee waste is proposed by converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and protect the environment. On this basis, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste produced in the coffee industry. The fact that SCGs have no economic value, are abundant in nature and industry, do not compete with agriculture, and especially have a high oil content (between 7 and 15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourages their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, the conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scale up the process and to ensure more environmentally friendly production. From this perspective, the aim of this work was a statistical study to identify an efficient strategy for oil extraction by n-hexane using indirect sonication. The coffee waste used in this work was a mixture of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated with a 2³ Central Composite Rotatable Design (CCRD). The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. The validation of the model by analysis of variance (ANOVA) showed a good fit of the results obtained for a 95% confidence interval, and the plot of predicted vs. experimental values confirmed the satisfactory correlation of the model results. Furthermore, the identification of the optimum experimental conditions was based on the study of the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 ºC, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of the ultrasound bath for extracting oil as a more economical, green, and efficient alternative to the Soxhlet method.
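
A minimal Python sketch (synthetic data, coded units; not the study's STATISTICA analysis) of fitting the second-order response surface behind a 2³ CCRD: factorial core, axial points at the rotatable alpha, and center points, followed by an ordinary-least-squares quadratic model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from itertools import product

alpha = 8 ** 0.25                               # rotatable alpha for 3 factors
core = np.array(list(product([-1.0, 1.0], repeat=3)))       # 8 factorial runs
axial = np.vstack([s * alpha * np.eye(3)[i]                  # 6 axial runs
                   for i in range(3) for s in (-1, 1)])
center = np.zeros((4, 3))                                    # 4 center runs
X = np.vstack([core, axial, center])

df = pd.DataFrame(X, columns=["temp", "time", "ratio"])      # coded factors
rng = np.random.default_rng(2)
# Synthetic yield with an interior optimum, only to make this runnable.
df["oil_yield"] = (9.5 - 0.8 * df["temp"]**2 - 0.5 * df["time"]**2
                   - 0.3 * df["ratio"]**2 + 0.2 * df["temp"] * df["time"]
                   + rng.normal(0, 0.1, len(df)))

model = smf.ols("oil_yield ~ temp + time + ratio + I(temp**2) + I(time**2)"
                " + I(ratio**2) + temp:time + temp:ratio + time:ratio",
                data=df).fit()
print(model.summary())   # term-by-term significance, as in the ANOVA check
```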

Keywords: coffee waste, optimization, oil yield, statistical planning

Procedia PDF Downloads 91
183 Surface Roughness in the Incremental Forming of Drawing Quality Cold Rolled CR2 Steel Sheet

Authors: Zeradam Yeshiwas, A. Krishnaia

Abstract:

The aim of this study is to verify the resulting surface roughness of parts formed by the Single-Point Incremental Forming (SPIF) process for an ISO 3574 Drawing Quality Cold Rolled CR2 steel. The chemical composition of drawing quality Cold Rolled CR2 steel comprises 0.12 percent carbon, 0.5 percent manganese, 0.035 percent sulfur, and 0.04 percent phosphorus, with the remaining percentage iron with negligible impurities. The experiments were performed on a 3-axis vertical CNC milling machining center equipped with a tool setup comprising a fixture and forming tools specifically designed and fabricated for the process. The CNC milling machine was used to transfer the tool path code generated in the Mastercam 2017 environment into three-dimensional motions by the linear incremental progress of the spindle. Blanks of Drawing Quality Cold Rolled CR2 steel sheets of 1 mm thickness were fixed along their periphery by a fixture, and hardened high-speed steel (HSS) tools with hemispherical tips of 8, 10, and 12 mm diameter were employed to fabricate the sample parts. To investigate the surface roughness, hyperbolic-cone shape specimens were fabricated based on the chosen experimental design. The effect of process parameters on the surface roughness was studied using three important process parameters, i.e., tool diameter, feed rate, and step depth. The Taylor-Hobson Surtronic 3+ profilometer, in which a small tip is dragged across a surface while its deflection is recorded, was used to determine the surface roughness of the fabricated parts in terms of the arithmetic mean deviation (Rₐ). Finally, the optimum process parameters and the main factor affecting surface roughness were found using the Taguchi design of experiments and ANOVA. A Taguchi experimental design with three factors and three levels for each factor, the standard orthogonal array L9 (3³), was selected for the study using the array selection table. Rₐ was measured for each combination of the control factors; four roughness measurements were taken on each component and averaged. Since the lowest value of Rₐ is what matters for surface roughness improvement, the 'smaller-the-better' equation was used for the calculation of the S/N ratio, and the effect of each control factor on the surface roughness was analysed with an S/N response table. Optimum surface roughness was obtained at a feed rate of 1500 mm/min, a tool diameter of 12 mm, and a step depth of 0.5 mm. The ANOVA result shows that step depth is the essential factor affecting surface roughness (91.1%).
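
The 'smaller-the-better' relation referred to above is the standard Taguchi definition, reproduced here for clarity (the n = 4 readings per trial follow from the abstract; the formula itself is textbook, not quoted from the paper).

```latex
% "Smaller-the-better" signal-to-noise ratio used in Taguchi analysis
% (standard definition; y_i are the n = 4 roughness readings per trial):
\[
  S/N = -10 \, \log_{10}\!\left( \frac{1}{n} \sum_{i=1}^{n} y_i^{2} \right)
\]
% A larger S/N corresponds to a smaller average Ra, so the level of each
% factor with the highest mean S/N in the response table is selected.
```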

Keywords: incremental forming, SPIF, drawing quality steel, surface roughness, roughness behavior

Procedia PDF Downloads 39
182 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is registered with the Therapeutic Goods Administration (TGA) and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) and a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem: an estimated 69 million people globally experience a TBI annually [1]. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study, and testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, the measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of these 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1 when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting the test's utility in assisting in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
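
The reported metrics follow directly from the 2x2 table implied by the counts in the abstract; the short Python check below reconstructs sensitivity, specificity, and NPV from those counts alone.

```python
# Counts reported in the abstract.
tp, fn = 116, 4      # CT-positive subjects: TBI test positive / negative
tn = 713             # CT-negative subjects with a negative TBI interpretation
fp = 1779 - tn       # remaining CT-negative subjects

sensitivity = tp / (tp + fn)   # 116/120
specificity = tn / (tn + fp)   # 713/1779
npv = tn / (tn + fn)           # 713/717
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}, NPV = {npv:.1%}")
# -> sensitivity = 96.7%, specificity = 40.1%, NPV = 99.4% (matches the abstract)
```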

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 36
181 Purpose-Driven Collaborative Strategic Learning

Authors: Mingyan Hong, Shuozhao Hou

Abstract:

Collaborative Strategic Learning (CSL) teaches students to use learning strategies while working cooperatively. Student strategies include the following steps: defining the learning task and purpose; conducting ongoing negotiation of the learning materials by deciding 'click' (I get it and I can teach it: green card; I get it: yellow card) or 'clunk' (I don't get it: red card) at the end of each learning unit; 'getting the gist' of the most important parts of the learning materials; and 'wrapping up' key ideas. The study shows how to help students of mixed achievement levels apply learning strategies while learning content-area materials in small groups. The design of CSL is based on social constructivism and Vygotsky's best-known concept, the Zone of Proximal Development (ZPD). The ZPD is defined as the distance between the actual acquisition level, as determined by independent problem solving, and the potential acquisition level, similar to Krashen's (1980) i+1, as determined through problem solving under a facilitator's guidance or in group work with other more capable members (Vygotsky, 1978). Vygotsky claimed that learners' ideal learning environment is in the ZPD. An ideal teacher or more-knowledgeable-other (MKO) should be able to recognize a learner's ZPD and facilitate development beyond it; the MKO can then withdraw support step by step until the learner can perform the task without aid. Stephen Krashen (1980) proposed the input hypothesis, including the i+1 hypothesis. The input hypothesis models are the application of the ZPD in second language acquisition and have been widely recognized to this day. Krashen's (2019) optimal language learning environment model further developed the application of the ZPD and added the component of strategic group learning. Strategic group learning is composed of desirable learning materials that learners are motivated to learn and desirable group members who are more capable and are therefore able to offer meaningful input to the learners. The Purpose-Driven Collaborative Strategic Learning Model is a strategic integration of the ZPD, the i+1 hypothesis model, and the Optimal Language Learning Environment Model. It is purpose-driven to ensure that group members are motivated. It is collaborative so that an optimal learning environment is created in which meaningful input can be generated from meaningful conversation. It is strategic because facilitators in the model strategically assign each member a meaningful and collaborative role (e.g., team leader, technician, problem solver, appraiser), offer a group learning instrument so that the learning process is structured, and integrate group learning with team building, ensuring the holistic development of each participant. Using data collected from college year-one and year-two students' English courses, this presentation will demonstrate how the purpose-driven collaborative strategic learning model is implemented in the second/foreign language classroom, drawing on qualitative data from questionnaires and interviews. In particular, this presentation will show how second/foreign language learners grow from functioning with the aid of a facilitator or more capable peer to performing without aid. The implication of this research is that the purpose-driven collaborative strategic learning model can be used not only in language learning but also in any subject area.

Keywords: collaborative, strategic, optimal input, second language acquisition

Procedia PDF Downloads 103
180 Integrations of the Instructional System Design for Students Learning Achievement Motives and Science Attitudes with Stem Educational Model on Stoichiometry Issue in Chemistry Classes with Different Genders

Authors: Tiptunya Duangsri, Panwilai Chomchid, Natchanok Jansawang

Abstract:

This research study investigated the educational decisions about which parts of chemistry should be passed on to future generations as obligatory for all members of a chemistry class, and for students who will prepare themselves for specialized positions. Descriptions of instructional design are provided, and recent criticisms are discussed. The study outlines an integrative framework in which the description of information and the instructional design model give structure to the negotiation of a shared, conscious understanding. The aims of this study were to describe the instructional design model and to compare, between genders, its effects on students' STEM educational learning achievement motives, their science attitudes, and their logical thinking abilities, with a sample of 18 students at the 11th-grade level selected by the cluster random sampling technique at Mahawichanukul School. The chemistry learning environment was administered with the STEM education method. Five lesson instructional plans were built as the instructional innovation, and the 30-item Logical Thinking Test (LTT) on five scales, namely the Inference, Recognition of Assumptions, Deduction, Interpretation, and Evaluation scales, was used. Students' perceptions were assessed with the Test of Chemistry-Related Attitudes (TOCRA) to measure their science attitudes toward chemistry. Validity was checked through the Index of Item-Objective Congruence (IOC) by five expert specialist educators in the two target chemistry classrooms in STEM education; the E1/E2 process efficiency was 84.05/81.42, which is higher than the 80/80 standard criterion. Students' learning achievement motives with the STEM educational model on the stoichiometry issue differed between genders at the .05 level of significance. For associations between students' learning achievement motives on their post-test outcomes and their logical thinking abilities, the predictive efficiency (R²) values indicate that 69% and 70% of the variance is accounted for in the male and female student groups, respectively. The predictive efficiency (R²) values likewise indicate that 73% and 74% of the variance in the male and female student groups, respectively, is associated with their science attitudes toward chemistry. Students' perceptions of their chemistry learning classroom environment and their science attitudes toward chemistry, measured with the MCI and TOCRA, were statistically significantly associated; the predictive efficiency (R²) values indicate that 72% and 74% of the variance in the chemistry classroom climate is accounted for in the male and female student groups, respectively. It is suggested that supporting chemistry and science teachers from science, technology, engineering, and mathematics (STEM) in addressing complex teaching and learning issues related to instructional design, so as to develop, teach, and assess, is an important strategy with a focus on the STEM education instructional method.

Keywords: development, the instructional design model, students learning achievement motives, science attitudes with STEM educational model, stoichiometry issue, chemistry classes, genders

Procedia PDF Downloads 253
179 Communicating Safety: A Digital Ethnography Investigating Social Media Use for Workplace Safety

Authors: Kelly Jaunzems

Abstract:

Social media is a powerful instrument of communication, enabling the presentation of information in multiple forms and modes, amplifying the interactions between people, organisations, and stakeholders, and increasing the range of communication channels available. Younger generations are highly engaged with social media and more likely to use this channel than any other to seek information. Given this, it may appear extraordinary that occupational safety and health (OSH) professionals have yet to seriously engage with social media for communicating safety messages to younger audiences who, in many industries, are statistically more likely to encounter workplace harm or injury. Millennials, defined as those born between 1981 and 2000, have distinctive characteristics that also shape their interaction patterns, rendering many traditional occupational safety and health communication channels sub-optimal or near obsolete. Accustomed to immediate responses, 280-character communication, shares, likes, and visual imagery, millennials struggle to take seriously the low-tech, top-down communication channels such as safety noticeboards, toolbox meetings, and passive tick-box online inductions favoured by traditional OSH professionals. This paper draws upon well-established communication findings, which argue that it is important to know a target audience and to reach them through their preferred communication pathways, particularly if the aim is to influence attitudes and behaviours. Health practitioners have adopted social media as a communication channel with great success, yet safety practitioners have failed to follow this lead. Using a digital ethnography approach, this paper examines seven organisations' Facebook posts from two one-month periods one year apart, one in 2018 and one in 2019. Each year informs organisation-based case studies. Comparing, contrasting, and drawing upon these case studies, the paper discusses and evaluates the (non) use of social media for communicating safety information in terms of user engagement, shareability, and overall appeal. The success of health practitioners' use of social media provides a compelling template for incorporating social media into organisations' safety communication strategies. Highly visible content such as that found on social media allows an organisation to become more responsive and to engage in two-way conversations with its audience, creating more engaged and participatory conversations around safety. Further, addressing younger audiences with a range of tonal qualities (for example, the use of humour) can achieve cut-through in a way that grim statistics fail to do. On the basis of 18 months of interviews, field work, and data analysis, the paper concludes with recommendations for communicating safety information via social media. It proposes exploration of a social media communication formula that, when utilised by safety practitioners, may create an effective social media presence. It is anticipated that such social media use will increase engagement, expand the number of followers, and reduce the likelihood and severity of safety-related incidents. The tools offered may provide a path for safety practitioners to reach a disengaged generation of workers and build a cohesive and inclusive conversation around ways to keep people safe at work.

Keywords: social media, workplace safety, communication strategies, young workers

Procedia PDF Downloads 94
178 Linking Information Systems Capabilities for Service Quality: The Role of Customer Connection and Environmental Dynamism

Authors: Teng Teng, Christos Tsinopoulos

Abstract:

The purpose of this research is to explore the link between IS capabilities, customer connection, and quality performance in the service context, investigating the impact of the firm's stable and dynamic environments. The application of Information Systems (IS) has had a significant effect on contemporary service operations. Firms invest in IS with the presumption that it will facilitate operations processes and thereby improve performance. Yet IS resources by themselves are not sufficiently 'unique'; it is therefore more useful and theoretically relevant to focus on the processes they affect. One such organisational process, which has attracted considerable research attention from supply chain management scholars, is the integration of customer connection: IS-enabled customer connection enhances communication and contact processes, and with such integration of customer resources comes greater success for the firm in developing a good understanding of customer needs and setting accurate customer expectations. Nevertheless, prior studies on IS capabilities have either focused on one specific type of technology or operationalised IS capabilities as a highly aggregated concept. Moreover, although conceptual frameworks have shown that customer integration is valuable in service provision, much remains to be learned about the practices of integrating customer resources. In this research, IS capabilities are broken down into three dimensions based on the framework of Wade and Hulland, namely IT for supply chain activities (ITSCA), flexible IT infrastructure (ITINF), and IT operations shared knowledge (ITOSK), and the focus is on their impact on the operational performance of service firms. Against this background, this paper addresses the following questions: How do IS capabilities affect the integration of customer connection and service quality? What is the relationship between environmental dynamism and the link between customer connection and service quality? A survey of 156 service establishments was conducted, and the data were analysed to determine the role of customer connection in mediating the effects of IS capabilities on firms' service quality. Confirmatory factor analysis was used to check convergent validity, and the structural model showed a good fit. The moderating effect of environmental dynamism on the relationship between customer connection and service quality was then analysed. Results show that ITSCA, ITINF, and ITOSK positively influence the degree of integration of customer connection. In addition, customer connection is positively related to service quality, and this relationship is further strengthened when firms operate in a dynamic environment. This research takes a step towards quelling concerns about the business value of IS, contributing to the development and validation of measures of IS capabilities in the service operations context. It also adds to the emerging body of literature linking customer connection to the operational performance of service firms. Managers of service firms should consider the strength of the mediating role of customer connection when investing in IT-related technologies and policies. In particular, service firms developing IS capabilities should simultaneously implement processes that encourage supply chain integration.
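
As a minimal sketch of the moderation logic described above (service quality regressed on customer connection, environmental dynamism, and their interaction), the code below uses synthetic data, since the survey responses are not reproduced here; the study itself used confirmatory factor analysis and structural modelling, so this illustrates only the interaction-term idea.

```python
# Hedged sketch of a moderation test: service quality regressed on customer
# connection, environmental dynamism, and their interaction.
# Synthetic data; the study's actual survey responses are not available.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 156  # matches the reported sample of service establishments
df = pd.DataFrame({
    "connection": rng.normal(size=n),
    "dynamism": rng.normal(size=n),
})
# Simulate a positive main effect strengthened in dynamic environments.
df["quality"] = (0.5 * df["connection"] + 0.2 * df["dynamism"]
                 + 0.3 * df["connection"] * df["dynamism"]
                 + rng.normal(scale=0.5, size=n))

# 'connection:dynamism' is the moderation (interaction) term of interest.
model = smf.ols("quality ~ connection * dynamism", data=df).fit()
print(model.summary().tables[1])
```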

Keywords: customer connection, environmental dynamism, information systems capabilities, service quality, service supply chain

Procedia PDF Downloads 116
177 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro, Ralpho R. Reis

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was the most accurate overall, while configuration 10, with 4 input variables, was the most effective given the smallest number of variables. The main conclusions show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀, and validation of the models against the producer's data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions. The statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and the Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE values as low as 0.01 mm/day and 0.03 mm/day, and an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates how well the predictions match the statistical behavior of the real data and yielded values between 0.02 and 0.04 for the producer data. The results further suggest that the developed technique can be applied to other locations by using site-specific data to improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. In summary, this study advances research in irrigation management by providing an accessible and effective approach to ET₀ estimation, with the potential to significantly improve water use efficiency and promote agricultural sustainability in different contexts.
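
A minimal sketch of an MLP ET₀ estimator in the spirit of the four-variable configuration 10 follows; synthetic data stand in for the INMET and producer records, and the network size and target function are assumptions for illustration only.

```python
# Hedged sketch of an MLP estimator for ET0 with four meteorological inputs.
# Synthetic data stand in for the INMET and producer-station records.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.uniform(10, 35, n),   # air temperature (deg C)
    rng.uniform(30, 95, n),   # relative humidity (%)
    rng.uniform(5, 30, n),    # solar radiation (MJ m-2 day-1)
    rng.uniform(0.5, 6, n),   # wind speed (m s-1)
])
# Crude synthetic ET0 target (mm/day); a real study would use measured values.
y = (0.1 * X[:, 0] + 0.08 * X[:, 2] + 0.3 * X[:, 3] - 0.01 * X[:, 1]
     + rng.normal(scale=0.1, size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), solver="adam",
                 max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("R2  :", r2_score(y_te, pred))
```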

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 17
176 Coping Strategies and Characterization of Vulnerability in the Perspective of Climate Change

Authors: Muhammad Umer Mehmood, Muhammad Luqman, Muhammad Yaseen, Imtiaz Hussain

Abstract:

Climate change is a hard fact that cannot easily be ignored, a phenomenon that has brought a collection of challenges for mankind. Scientists have documented many of its negative impacts on human life and on the resources on which humanity depends. Whenever changes happen in nature, they affect the whole globe, but their effects vary from region to region: the climate of every region of the globe differs from the others, and even within a state, country, or province, climatic conditions differ. It is therefore essential that the response and coping strategy of a specific region match the prevailing risk. The objective of the present study was to assess the coping strategies and vulnerability of small landholders, so that professional suggestions could be made for addressing the vulnerability of small farmers. A cross-sectional research design with a quantitative approach was used. The study was conducted in the Khanewal district of Punjab, Pakistan, where 120 small farmers, all above the age of 15 years, were interviewed after random sampling from the population of the respective area. A questionnaire was developed after careful observation of conditions in the study area; the content and face validity of the instrument were assessed with SPSS and with experts in the field, and the data were analyzed through SPSS using descriptive statistics. Of the sample of 120, 81.67% of the respondents claimed that the environment is getting warmer and is no longer fit for their present agricultural practices, and 84.17% expressed serious concern about changes in rainfall patterns and their vulnerability to climatic effects. At the same time, respondents reported that they are not well equipped to tackle the effects of climate change: the adoption of coping strategies such as changes in cropping pattern, use of resistant varieties, varieties with minimum water requirements, intercropping, and tree planting was low among more than half of the sample, and 63.33% of the small farmers said that the coping strategies they do adopt are not effective enough. The study showed that subsistence farming, lack of marketing and overall infrastructure, lack of access to social security networks, limited access to agricultural extension services, inadequate access to agrometeorological systems, unawareness of and lack of access to scientific developments, and low crop yields are the prominent factors responsible for the vulnerability of small farmers. A comprehensive study should be conducted at the national level so that a national policy can be formulated to cope with this dilemma in the future. Mainstreaming and collaboration among researchers and academicians could prove beneficial in this regard, and the interest of national leaders also matters: proper policies to avoid these vulnerability factors should be a top priority. The world is taking up this issue with full responsibility, as should we, keeping the local situation in view.

Keywords: adaptation, coping strategies, climate change, Pakistan, small farmers, vulnerability

Procedia PDF Downloads 107
175 Spectral Responses of the Laser Generated Coal Aerosol

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki

Abstract:

Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. The residential or domestic combustion of coal is one of the dominant LAC sources; according to related assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol is very limited in the literature. There are many reasons for this, from the difficulties associated with controlling the burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analysis and the difficulty of interpreting the measured optical data, to the analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, which allows one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and also makes investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based either on transmission measurements of filter-accumulated aerosol or on indirect deduction from parallel measurements of the scattering and extinction coefficients using free-floating sampling; accuracy limits the applicability of the former approach, while sensitivity limits the latter. Although the scientific community agrees that aerosol-phase PhotoAcoustic Spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies (the absorption and scattering Ångström exponents, AAE and SAE) are deduced from the measured data. Finally, correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
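
The Ångström exponents (AAE, SAE) mentioned above are conventionally obtained from a power-law fit of the measured coefficients against wavelength; a minimal sketch follows, with illustrative wavelengths and coefficients rather than measured values.

```python
# Hedged sketch of how AAE and SAE are conventionally derived from
# multi-wavelength absorption/scattering coefficients (power-law fit).
# Wavelengths and coefficients below are illustrative, not measured values.
import numpy as np

def angstrom_exponent(wavelengths_nm, coefficients):
    """Fit coeff = K * lambda**(-AE) by linear regression in log-log space;
    the Angstrom exponent is the negative of the fitted slope."""
    slope, _ = np.polyfit(np.log(wavelengths_nm), np.log(coefficients), 1)
    return -slope

wl = np.array([355, 405, 532, 1064])              # nm, an assumed 4-wavelength set
absorption = np.array([42.0, 33.0, 21.0, 8.0])    # Mm^-1, illustrative
scattering = np.array([120.0, 95.0, 60.0, 22.0])  # Mm^-1, illustrative

print("AAE:", angstrom_exponent(wl, absorption))
print("SAE:", angstrom_exponent(wl, scattering))
```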

Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation

Procedia PDF Downloads 323
174 Cyber-Med: Practical Detection Methodology of Cyber-Attacks Aimed at Medical Devices Eco-Systems

Authors: Nir Nissim, Erez Shalom, Tomer Lancewiki, Yuval Elovici, Yuval Shahar

Abstract:

Background: A Medical Device (MD) is an instrument, machine, implant, or similar device that includes a component intended for the diagnosis, cure, treatment, or prevention of disease in humans or animals. Medical devices play increasingly important roles in health service eco-systems, including (1) patient diagnostics and monitoring, (2) medical treatment and surgery, and (3) patient life support devices and stabilizers. MDs are part of the medical device eco-system and are connected to the network, sending vital information to the internal medical information systems of the medical centers that manage these data. Wireless components (e.g., Wi-Fi) are often embedded within medical devices, enabling doctors and technicians to control and configure them remotely. All these functionalities, roles, and uses of MDs make them attractive targets of cyber-attacks launched for many malicious goals; this trend is likely to increase significantly over the next several years, with growing awareness of MD vulnerabilities, the enhancement of potential attackers' skills, and the expanded use of medical devices. Significance: We propose to develop and implement Cyber-Med, a unique collaborative project of Ben-Gurion University of the Negev and the Clalit Health Services Health Maintenance Organization. Cyber-Med focuses on the development of a comprehensive detection framework that relies on a critical attack repository that we aim to create; it will allow researchers and companies to better understand the vulnerabilities of, and attacks on, medical devices, and will provide a comprehensive platform for developing detection solutions. Methodology: The Cyber-Med detection framework will consist of two independent but complementary detection approaches: one for known attacks and one for unknown attacks. These modules incorporate novel ideas and algorithms inspired by our team's domains of expertise, including cyber security, biomedical informatics, advanced machine learning, and temporal data mining techniques. The establishment and maintenance of Cyber-Med's up-to-date attack repository will strengthen the capabilities of its detection framework. Major Findings: Based on our initial survey, we have already found more than 15 types of vulnerabilities and possible attacks aimed at MDs and their eco-system. Many of these attacks target individual patients who use devices such as pacemakers and insulin pumps. Such attacks are also aimed at MDs widely used by medical centers, such as MRIs, CTs, and dialysis machines; at the information systems that store patient information; at protocols such as DICOM; at standards such as HL7; and at medical information systems such as PACS. However, current detection tools, techniques, and solutions generally fail to detect both the known and the unknown attacks launched against MDs. Very little research has been conducted to protect these devices from cyber-attacks, since most development and engineering efforts are aimed at the devices' core medical functionality, their contribution to patient healthcare, and the business aspects associated with the medical device.
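
The abstract does not disclose Cyber-Med's algorithms; purely as a hypothetical illustration of one plausible ingredient of an unknown-attack module, the sketch below applies anomaly detection to invented device-traffic features. Both the feature set and the model choice are assumptions, not Cyber-Med's design.

```python
# Hedged sketch of anomaly detection over medical-device network-traffic
# features. The features and the model are assumptions for illustration;
# they are not taken from the Cyber-Med project.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical per-session features: packets/s, mean payload size (bytes),
# distinct destination hosts, fraction of traffic on DICOM/HL7 ports.
normal = rng.normal(loc=[50, 400, 3, 0.9],
                    scale=[10, 60, 1, 0.05], size=(500, 4))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A session flooding many hosts with small packets should score as anomalous.
suspect = np.array([[900, 64, 40, 0.1]])
print(detector.predict(suspect))   # -1 flags an anomaly, +1 looks normal
```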

Keywords: medical device, cyber security, attack, detection, machine learning

Procedia PDF Downloads 329
173 Probing Scientific Literature Metadata in Search for Climate Services in African Cities

Authors: Zohra Mhedhbi, Meheret Gaston, Sinda Haoues-Jouve, Julia Hidalgo, Pierre Mazzega

Abstract:

In the current context of climate change, supporting national and local stakeholders in making climate-smart decisions is necessary but still underdeveloped in many countries. To overcome this problem, the Global Framework for Climate Services (GFCS), implemented under the aegis of the United Nations in 2012, has initiated many programs in different countries. The GFCS contributes to the development of Climate Services, an instrument based on the production and transfer of scientific climate knowledge to specific users such as citizens, urban planning actors, or agricultural professionals. As cities concentrate economic, social, and environmental issues that make them more vulnerable to climate change, the New Urban Agenda (NUA), adopted at Habitat III in October 2016, highlights the importance of paying particular attention to disaster risk management, climate and environmental sustainability, and urban resilience. To support the implementation of the NUA, the World Meteorological Organization (WMO) has identified the urban dimension as one of its priorities and has proposed a new tool, Integrated Urban Services (IUS), for more sustainable and resilient cities. In southern countries, climate services remain underdeveloped, which can be partially explained by problems of economic financing. In addition, it is often difficult to make climate change a priority in urban planning, given the more traditional urban challenges these countries face, such as massive poverty and high population growth. Climate services and Integrated Urban Services, particularly in African cities, are expected to contribute to the sustainable development of cities: they should promote the acquisition of meteorological and socio-ecological data on urban transformations, encourage coordination between the national and local institutions providing the various sectoral urban services, and contribute to the achievement of the objectives defined by the United Nations Framework Convention on Climate Change (UNFCCC), the Paris Agreement, and the Sustainable Development Goals. To assess the state of the art on these various points, the Web of Science metadatabase was queried. A query combining the keywords "climate*" and "urban*" identifies more than 24,000 articles, the source of more than 40,000 distinct keywords (including synonyms and acronyms) that finely mesh the conceptual field of research. Requiring the occurrence of one or more names of the 514 African cities of more than 100,000 inhabitants, or of African countries, reduces this base to a smaller corpus of about 1,410 articles (2,990 keywords), citing 41 countries and 136 African cities. Lexicometric analysis of the article metadata, together with analysis of the structural indicators (various centralities) of the networks induced by the co-occurrence of expressions related specifically to climate services, shows the development potential of these services, identifies the gaps that remain to be filled for their implementation, and allows comparison of the diversity of national and regional situations with regard to these services.
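
A minimal sketch of the keyword co-occurrence network construction underlying the centrality analysis described above follows; the three article keyword sets are invented examples standing in for the Web of Science corpus.

```python
# Hedged sketch of a keyword co-occurrence network: keywords sharing an
# article are linked, edge weights count shared articles, and centralities
# summarise the network structure. The records below are invented examples.
import itertools
import networkx as nx

articles = [
    {"climate services", "urban planning", "adaptation"},
    {"climate services", "heat island", "urban planning"},
    {"adaptation", "vulnerability", "urban planning"},
]

G = nx.Graph()
for keywords in articles:
    for a, b in itertools.combinations(sorted(keywords), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1   # one more article links this pair
        else:
            G.add_edge(a, b, weight=1)

print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G))
```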

Keywords: African cities, climate change, climate services, integrated urban services, lexicometry, networks, urban planning, web of science

Procedia PDF Downloads 167
172 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure customer safety but give little information about the critical behaviours that could help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity, so an analysis combining Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, into the FEA solver in a series of iterations. Our recent work with Tata Steel UK, using a two-way coupling methodology to determine fire performance, has shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel UK's Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main one being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which involves a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and to better accommodate Tata Steel UK's PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin relative to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM), which allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies are made between the new implementations and the previous study completed with the original FDS-2-Abaqus program. Validation of the study comes from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1; the experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
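
The FDS-2-Abaqus internals are not reproduced in this abstract; the skeleton below only illustrates the general shape of a two-way coupling iteration, with placeholder functions standing in for the FDS and Abaqus solver calls.

```python
# Hedged skeleton of a generic two-way CFD-FEA coupling loop. This is NOT
# the FDS-2-Abaqus implementation; run_cfd_step and run_fea_step are
# placeholders for calls to the fire (CFD) and structural (FEA) solvers.

def run_cfd_step(geometry, t_start, t_end):
    """Placeholder: advance the fire/CFD model over [t_start, t_end] and
    return boundary thermal data (e.g., surface temperatures or heat fluxes)."""
    raise NotImplementedError

def run_fea_step(thermal_loads, t_start, t_end):
    """Placeholder: apply the thermal loads in the FEA model and return the
    deformed geometry used by the next CFD iteration."""
    raise NotImplementedError

def two_way_coupling(geometry, t_end, dt):
    """Alternate fire -> structure and structure -> fire handoffs in time."""
    t = 0.0
    while t < t_end:
        thermal = run_cfd_step(geometry, t, t + dt)   # fire -> structure
        geometry = run_fea_step(thermal, t, t + dt)   # structure -> fire
        t += dt
    return geometry
```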

Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 44
171 Empirical Study on Causes of Project Delays

Authors: Khan Farhan Rafat, Riaz Ahmed

Abstract:

Renowned offshore organizations are drifting towards collaborative exertion to win and implement international projects for business gains. However, even without financial constraints, with skilled professionals available, and despite improved project management practices through state-of-the-art tools and techniques, project delays have become the norm these days. This situation calls for exploring the factor(s) affecting the link between project management performance and project success. In the context of the well-known 3M's of project management (manpower, machinery, and materials), machinery and materials depend on manpower. Because the body of knowledge has long established the influence of national culture on people, its impact on the link between project management performance and project success needs to be investigated in detail to arrive at the possible cause(s) of project delays. This research initiative was therefore undertaken to fill the research gap. The unit of analysis for the proposed research exercise was the individuals who had worked on skyscraper construction projects. In relevant studies, project management is best described using construction examples, and it is for this reason that the project-oriented city of Dubai was chosen for investigating the causes of project delays. A structured questionnaire survey was disseminated online, with the courtesy of the Project Management Institute local chapter, to carry out the cross-sectional study. The Construction Industry Institute, Austin, of the United States of America, along with 23 high-rise builders in Dubai, were also contacted by email, requesting their contribution to the study and providing them with the online link to the survey questionnaire. The reliability of the instrument was confirmed with a Cronbach's alpha coefficient of 0.70. Sampling adequacy and homogeneity of variance were ensured by keeping the Kaiser-Meyer-Olkin (KMO) statistic ≥ 0.60 and Bartlett's test of sphericity < 0.05, respectively. Factor analysis was used to verify construct validity; during exploratory factor analysis, items were retained using a loading threshold of 0.4. Four hundred and seventeen respondents, including members of top management, project managers, and project staff, contributed to the study. The link between project management performance and project success was significant at the 0.01 level (2-tailed) and the 0.05 level (2-tailed) for Pearson's correlation. Before the moderator analysis, tests for linearity, multicollinearity, outliers, leverage points, influential cases, homoscedasticity, and normality were carried out, as these are prerequisites for conducting a moderation analysis. The moderator analysis, using a macro named PROCESS, was performed to verify the hypothesis that national culture influences the said link. The empirical findings, when compared with Hofstede's results, identified high power distance as a cause of construction project delays in Dubai. The research outcome calls for project sponsors and top management to reshape their project management strategy and allow for low power distance between management and project personnel for the timely completion of projects.
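
A minimal sketch of the Cronbach's alpha reliability check reported above follows; the Likert responses are invented, since the survey data are not public.

```python
# Hedged sketch of the Cronbach's alpha reliability check named above.
# The item responses below are invented placeholders.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x questionnaire-items matrix of scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Five hypothetical respondents answering four Likert items.
responses = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 5, 4]]
print(round(cronbach_alpha(responses), 2))   # ~0.92 for these made-up data
```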

Keywords: causes of construction project delays, construction industry, construction management, power distance

Procedia PDF Downloads 184
170 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web, where the value of K can be calibrated to trade off time against accuracy. This is followed by a passage-ranking process, trained on the 500K queries of the MS MARCO dataset, to extract the most relevant text passages and thereby shorten the lengthy documents. A QA system then extracts the answers from the shortened documents based on the query and returns the top 3 answers. In evaluating such systems, accuracy is judged by the exact match between predicted answers and gold answers. However, automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions; moreover, reference answers are often not exhaustive, or are out of date, so correct answers predicted by the system are often judged incorrect by the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Any such dataset proves inadequate for questions that have time-varying answers. For illustration, consider the query "Where will the next Olympics be held?" The gold answer given in the GNQ dataset is "Tokyo"; since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Games in Tokyo, this was correct at the time. But if the same question is asked in 2022, the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect, and such erroneous predictions are usually passed to human evaluators for further validation, which is expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs: a metric that uses the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test set of 100 QA pairs, automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears capable of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system on a larger set of time-dependent QA pairs.
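
A minimal sketch of the fetch, clean, and answer pipeline outlined above follows; Beautiful Soup is named in the abstract, while the extractive QA model here is a generic Hugging Face stand-in rather than the system's actual MS MARCO-trained ranker and reader.

```python
# Hedged sketch of a fetch -> clean -> answer pipeline. Beautiful Soup is
# named in the abstract; the QA model below is a generic stand-in, not the
# described system's components.
import requests
from bs4 import BeautifulSoup
from transformers import pipeline

def fetch_text(url):
    """Download a page and strip it down to visible text."""
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

qa = pipeline("question-answering")  # default extractive QA model

def answer(question, urls, top_k=3):
    """Pool top_k candidate answers across the K fetched documents."""
    candidates = []
    for url in urls:
        context = fetch_text(url)[:10000]  # crude per-document length cap
        candidates.extend(qa(question=question, context=context, top_k=top_k))
    candidates.sort(key=lambda c: c["score"], reverse=True)
    return candidates[:top_k]

# Usage (placeholder URL):
# print(answer("Where will the next Olympics be held?",
#              ["https://example.org/olympics"]))
```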

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 75
169 Vibration and Freeze-Thaw Cycling Tests on Fuel Cells for Automotive Applications

Authors: Gema M. Rodado, Jose M. Olavarrieta

Abstract:

Hydrogen fuel cell technologies have experienced a great boost in the last decades, with the production of these devices increasing significantly for both stationary and portable (mainly automotive) applications, driven by two main factors: environmental pollution and energy shortage. A fuel cell is an electrochemical device that converts chemical energy directly into electricity, using hydrogen and oxygen gases as reactants and producing water and heat as byproducts of the chemical reaction. Fuel cells, specifically those based on Proton Exchange Membrane (PEM) technology, are considered an alternative to internal combustion engines, mainly because of their very low emissions (almost zero), high efficiency, and low operating temperatures (< 373 K). The introduction and use of fuel cells in the automotive market requires the development of standardized and validated procedures to test and evaluate their performance under different environmental conditions, including vibrations and freeze-thaw cycles. Vibration and extremely low or high temperatures can affect the physical integrity, or even the proper operation and performance, of a fuel cell stack placed in a vehicle in circulation or exposed to different climatic conditions. The main objective of this work is the development and validation of vibration and freeze-thaw cycling test procedures for fuel cell stacks intended for vehicles, in order to consolidate their safety, performance, and durability. In this context, different experimental tests were carried out at the facilities of the National Hydrogen Centre (CNH2). The experimental equipment comprised: a vibration platform (shaker) for vibration test analysis of fuel cells in the three axis directions with different vibration profiles; a walk-in climatic chamber to test the starting, operating, and stopping behavior of fuel cells under defined extreme conditions; and a test station, designed and developed by the CNH2, to test and characterize PEM fuel cell stacks up to 10 kWe. A 5 kWe PEM fuel cell stack in off-operation mode was used to carry out two independent experimental procedures. On the one hand, the fuel cell was subjected to a sinusoidal vibration test on the shaker in the three axis directions, defined by acceleration and amplitude over the frequency range of 7 to 200 Hz, for a total of three hours in each direction. On the other hand, the climatic chamber was used to simulate freeze-thaw cycles over a temperature range between 313 K and 243 K, with an average relative humidity of 50% and a recommended ramp-up and ramp-down rate of 1 K/min. The polarization curve and the gas leakage rate were determined at the fuel cell stack test station before and after the vibration and freeze-thaw tests to evaluate the robustness of the stack. The results were very similar before and after testing, which indicates that the tests did not affect the fuel cell stack structure or performance. The proposed procedures were verified and can be used as a starting point for other tests with different fuel cells.
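
A minimal sketch of the commanded chamber set-point profile for one freeze-thaw cycle follows, assuming 1 K/min ramps between 313 K and 243 K; the dwell time at each temperature extreme is an assumption, as the abstract does not specify it.

```python
# Hedged sketch of a commanded chamber profile for one freeze-thaw cycle:
# 1 K/min ramps between 313 K and 243 K with a dwell at each extreme.
# The dwell length is an assumption; the abstract does not specify it.
import numpy as np

def freeze_thaw_cycle(t_high=313.0, t_low=243.0, ramp_k_per_min=1.0,
                      dwell_min=60, step_min=1):
    ramp_steps = int(abs(t_high - t_low) / ramp_k_per_min / step_min)
    cool = np.linspace(t_high, t_low, ramp_steps)       # ramp down at 1 K/min
    warm = np.linspace(t_low, t_high, ramp_steps)       # ramp up at 1 K/min
    hold_low = np.full(dwell_min // step_min, t_low)    # dwell at the cold end
    hold_high = np.full(dwell_min // step_min, t_high)  # dwell at the warm end
    return np.concatenate([cool, hold_low, warm, hold_high])  # K, 1 sample/min

profile = freeze_thaw_cycle()
print(len(profile), "minutes;", profile.min(), "K to", profile.max(), "K")
```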

Keywords: climatic chamber, freeze-thaw cycles, PEM fuel cell, shaker, vibration tests

Procedia PDF Downloads 89
168 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens

Authors: R. Tamborrino, F. Rinaudo

Abstract:

Digital technologies offer possibilities to effectively convert historical archives into instruments of knowledge that can guide the interpretation of historical phenomena. Digital conversion and management of those documents make it possible to combine other sources in a unique, coherent model that permits the intersection of different data and opens new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g., cadastres, censuses, etc.). The geographic localisation of that information on cartographic supports allows the comprehension and visualisation of specific relationships between different historical realities, registering both the urban space and the people living there. These links, which merge data and documentation of different natures through a new organisation of the information, can suggest new interpretations of other related events. For all these kinds of analysis, GIS platforms today represent the most appropriate answer, and the design of the related databases is the key to building the ad hoc instrument that facilitates the analysis and intersection of data of different origins. Moreover, GIS has become the digital platform on which other kinds of data visualisation can be added. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories carried out just prior to WWI provides the opportunity to test the potential of GIS platforms for analysing the modifications of the urban landscape during the town's first industrial development. The inventory includes data about location, activities, and people. The GIS is shaped in a creative way, linking different sources and digital systems, with the aim of creating a new type of platform conceived as an interface integrating different kinds of data visualisation. The data processing allows this information to be linked to the urban space and also visualises the growth of the city at that time. The sources related to the urban landscape development of that period are of different natures. The emerging necessity to build, enlarge, modify, and join different buildings to boost industrial activities, in step with their fast development, is recorded in the official permissions delivered by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved, such as the owner, the investor, and the engineer or architect designing the industrial building. All these collected data make it possible, first, to reconstruct the process of change of the urban landscape using GIS and 3D modelling technologies, thanks to the drawings (2D plans, sections, and elevations) that show the previous and the planned situations. Furthermore, they provide access to information for different queries of the linked dataset, useful for research targets as varied as economic, biographical, architectural, or demographic studies. By superimposing a layer of the present city, the past meets the present and its industrial heritage, and people meet urban history.
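
A minimal sketch of the record-to-map linkage described above follows, joining digitised census rows to block polygons; the file and column names are hypothetical, not the actual Turin archive schema.

```python
# Hedged sketch of joining digitised census records to block polygons and
# mapping factory density. File names and columns (block_id, n_factories)
# are hypothetical, not the Turin archive schema.
import geopandas as gpd
import pandas as pd

blocks = gpd.read_file("turin_blocks_1911.shp")       # block polygons (assumed)
census = pd.read_csv("factory_census_pre_wwi.csv")    # one row per factory

# Count factories per block and attach the counts to the geometry.
counts = (census.groupby("block_id").size()
          .rename("n_factories").reset_index())
blocks = blocks.merge(counts, on="block_id", how="left")
blocks["n_factories"] = blocks["n_factories"].fillna(0)

# A choropleth of industrial density across the early-20th-century city.
ax = blocks.plot(column="n_factories", legend=True)
ax.set_title("Factories per block, pre-WWI census (illustrative)")
```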

Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities

Procedia PDF Downloads 170