Search results for: computer tasks
82 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation
Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony
Abstract:
Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science and their consequent constructive influence on the architectural discourse have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes of generating a population of candidate solutions to a design problem through an evolutionary based stochastic search process are often driven through the application of both environmental and architectural parameters. These methods allow for conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems with a final product that must address the demand of a multitude of individuals with various requirements. However, one of the main challenges encountered through the application of an evolutionary process as a design tool is the ability for the simulation to maintain variation amongst design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the ‘golden rule’ of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency of either variation or optimization being favored as the simulation progresses. In such cases, the generated population of candidate solutions has either optimized very early in the simulation, or has continued to maintain high levels of variation to which an optimal set could not be discerned; thus, providing the user with a solution set that has not evolved efficiently to the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the ‘golden rule’ by incorporating a mathematical fitness criterion for the development of an urban tissue composed of the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation factor. Traditionally, the standard deviation factor has been used as an analytical value rather than a generative one, conventionally used to measure the distribution of variation within a population by calculating the degree by which the majority of the population deviates from the mean. A lower standard deviation value indicates that a higher number of the population is clustered around the mean and thus limited variation within the population, while a higher standard deviation value reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented will aim to clarify the extent to which the utilization of the standard deviation factor as a fitness criterion can be advantageous to generating fitter individuals in a more efficient timeframe when compared to conventional simulations that only incorporate architectural and environmental parameters.
Keywords: architecture, computation, evolution, standard deviation, urban
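As an illustration of the generative use described above, the following sketch combines an individual's objective score with a population-level term that rewards keeping the generation's spread close to a target. This is a minimal sketch under assumed parameters (scalar scores, an invented target spread and weighting), not the authors' implementation.

```python
import numpy as np

def deviation_regulated_fitness(objective_score, population_scores,
                                target_sd=1.0, weight=0.5):
    """Blend an individual's objective score with a population-level term
    that penalizes drifting away from a target standard deviation, so the
    search neither collapses early nor fails to converge."""
    sd = np.std(population_scores)
    sd_term = -abs(sd - target_sd)          # 0 when the spread is on target
    return (1 - weight) * objective_score + weight * sd_term

# Toy generation: 20 candidate solutions with random objective scores.
rng = np.random.default_rng(0)
scores = rng.normal(loc=5.0, scale=2.0, size=20)
fitnesses = [deviation_regulated_fitness(s, scores) for s in scores]
print(f"best regulated fitness: {max(fitnesses):.2f}")
```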
Procedia PDF Downloads 133
81 Development of an Instrument Assessing Participants’ Motivation on Assigning Monetary Value to Quality of Life
Authors: Afentoula Mavrodi, Andreas Georgiou, Georgios Tsiotras, Vassilis Aletras
Abstract:
Placing a monetary value on a quality-adjusted-life-year (QALY) is of utmost importance in economic evaluation. Identifying the population’s preferences is critical in order to understand some of the reasons driving variations in the assigned monetary value. Yet, evidence of the motives behind value assignment to a QALY by the general public is limited. Developing an instrument that would capture the population’s motives could prove valuable to policy-makers, guiding them in allocating different values to a QALY based on users’ motivations. The aim of this study was to identify the most relevant motives and develop an appropriate instrument to assess them. To design the instrument, we employed: a) the EQ-5D-3L tool to assess participants’ current health status, and b) the Willingness-to-Pay (WTP) approach, within the Contingent Valuation (CV) Method framework, to elicit the monetary value. Advancing the open-ended approach adopted to assess solely protest bidders’ motives, a variety of follow-up item-specific statements were designed (deductive approach), aiming to evaluate motives of both protest bidders and participants willing to pay for the hypothetical treatment under consideration. The initial design of the survey instrument was the outcome of an extensive literature review. This instrument was revised based on 15 semi-structured interviews that took place in September 2018 and a pilot study held over two months (October-November 2018). Individuals with different educational, occupational and economic backgrounds and adequate verbal skills were recruited to complete the semi-structured interviews. The follow-up motivation statements of both protest bidders and those willing to pay were revised and rephrased after the semi-structured interviews. In total, 4 statements for protest bidders and 3 statements for those willing to pay for the treatment were chosen to be included in the survey tool. Using the CATI (Computer Assisted Telephone Interview) method, a randomly selected sample of 97 persons living in Thessaloniki, Greece, completed the questionnaire on two occasions over a period of 4 weeks. Based on pilot study results, a test-retest reliability assessment was performed using the intra-class correlation coefficient (ICC). All statements formulated for protest bidders showed acceptable reliability (ICC values of 0.84 (95% CI: 0.67, 0.92) and above). Similarly, all statements for those willing to pay for the treatment showed high reliability (ICC values of 0.86 (95% CI: 0.78, 0.91) and above). Overall, the instrument designed in this study was reliable with regard to the item-specific statements assessing participants’ motivation. Validation of the instrument will take place in a future study. For a holistic WTP per QALY instrument, participants’ motivation must be addressed broadly. The instrument developed in this study captured a variety of motives and provided insight with regard to the method through which the latter are evaluated. Last but not least, it extended motive assessment to all study participants and not only protest bidders.
Keywords: contingent valuation method, instrument, motives, quality-adjusted life-year, willingness-to-pay
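The test-retest reliability figures above rest on the intra-class correlation coefficient. The sketch below computes a two-way random-effects, absolute-agreement, single-rating ICC(2,1) from first principles; the respondent scores are hypothetical, and the abstract does not state which ICC variant the study used.

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-rating
    intra-class correlation for an (n subjects x k occasions) matrix."""
    n, k = X.shape
    grand = X.mean()
    ssr = k * ((X.mean(axis=1) - grand) ** 2).sum()  # between-subjects
    ssc = n * ((X.mean(axis=0) - grand) ** 2).sum()  # between-occasions
    sse = ((X - grand) ** 2).sum() - ssr - ssc       # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical test-retest data: 10 respondents scoring one motivation
# statement on two occasions four weeks apart (all values are made up).
t1 = np.array([5, 4, 3, 5, 2, 4, 5, 3, 4, 2], dtype=float)
t2 = np.array([5, 4, 3, 4, 2, 4, 5, 3, 5, 2], dtype=float)
print(f"ICC(2,1) = {icc_2_1(np.column_stack([t1, t2])):.2f}")
```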
Procedia PDF Downloads 136
80 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters
Authors: Dylan Santos De Pinho, Nabil Ouerhani
Abstract:
Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users because of economic, ecological and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-Type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that lead to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, which are objective functions that permit the evaluation of a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy consumption related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One fitness function uses a simple Neural Network to learn the relation between the process parameters and the energy consumption from experimental data. Another fitness function uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness function best predicts the energy consumption of a Swiss-Type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – determining the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-Type Lathe has been used to carry out the experiments. A mechanical part including various Swiss-Type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the set of machining process parameters. The evaluation approach consists of calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and Neural Network fitness functions have the highest correlation coefficient with 97%.
The fitness function “Material Removal Rate” (MRR) has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization
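The Material Removal Rate fitness function lends itself to a compact sketch. Below, the cutting speed for turning is derived from the spindle speed under an assumed workpiece diameter, and prediction quality is scored by the same normalized-correlation approach the abstract describes; all parameter sets and power readings are invented for illustration.

```python
import numpy as np

def mrr(depth_of_cut_mm, feed_mm_per_rev, spindle_speed_rpm, diameter_mm=10.0):
    """Material Removal Rate for turning, MRR = v_c * f * a_p (mm^3/min).
    Cutting speed v_c is derived from the spindle speed and an assumed
    workpiece diameter (10 mm here, purely for illustration)."""
    cutting_speed = np.pi * diameter_mm * spindle_speed_rpm  # mm/min
    return cutting_speed * feed_mm_per_rev * depth_of_cut_mm

# Invented parameter sets (a_p, f, n) and normalized measured spindle power.
params = [(0.5, 0.05, 4000), (1.0, 0.08, 5000),
          (1.5, 0.10, 6000), (2.0, 0.12, 6500)]
measured_power = np.array([0.21, 0.45, 0.74, 1.00])

predicted = np.array([mrr(*p) for p in params])
predicted /= predicted.max()              # normalize, as in the evaluation
print(f"correlation: {np.corrcoef(predicted, measured_power)[0, 1]:.2f}")
```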
Procedia PDF Downloads 147
79 Augmented Reality Enhanced Order Picking: The Potential for Gamification
Authors: Stavros T. Ponis, George D. Plakas-Koumadorakis, Sotiris P. Gayialis
Abstract:
Augmented Reality (AR) can be defined as a technology which takes the capabilities of computer-generated display, sound, text and effects to enhance the user's real-world experience by overlaying virtual objects onto the real world. By doing so, AR is capable of providing a vast array of work support tools, which can significantly increase employee productivity, enhance existing job training programs by making them more realistic, and in some cases introduce completely new forms of work and task execution. One of the most promising AR industrial applications, as the literature shows, is the use of Head Worn, monocular or binocular Displays (HWD) to support logistics and production operations, such as order picking, part assembly and maintenance. This paper presents the initial results of an ongoing research project for the introduction of a dedicated AR-HWD solution to the picking process of a Distribution Center (DC) in Greece operated by a large Telecommunication Service Provider (TSP). In that context, the proposed research aims to determine whether gamification elements should be integrated in the functional requirements of the AR solution, such as providing points for reaching objectives and creating leaderboards and awards (e.g. badges) for general achievements. Up to now, there is ambiguity about the impact of gamification in logistics operations, since gamification literature mostly focuses on non-industrial organizational contexts such as education and customer/citizen-facing applications, such as tourism and health. On the contrary, the gamification efforts described in this study focus on one of the most labor-intensive and workflow-dependent logistics processes, i.e. Customer Order Picking (COP). Although introducing AR in COP undoubtedly creates significant opportunities for workload reduction and increased process performance, the added value of gamification is far from certain. This paper aims to provide insights on the suitability and usefulness of AR-enhanced gamification in the hard and very demanding environment of a logistics center. In doing so, it will utilize a review of the current state-of-the-art regarding gamification of production and logistics processes, coupled with the results of questionnaire-guided interviews with industry experts, i.e. logisticians, warehouse workers (pickers) and AR software developers. The findings of the proposed research aim to contribute towards a better understanding of AR-enhanced gamification, the organizational change it entails and the consequences it potentially has for all implicated entities in the often highly standardized and structured work required in the logistics setting. The interpretation of these findings will support the decision of logisticians regarding the introduction of gamification in their logistics processes by providing them with useful insights and guidelines originating from a real-life case study of a large DC operating more than 300 retail outlets in Greece.
Keywords: augmented reality, technology acceptance, warehouse management, vision picking, new forms of work, gamification
Procedia PDF Downloads 150
78 Thermoluminescence Investigations of Tl2Ga2Se3S Layered Single Crystals
Authors: Serdar Delice, Mehmet Isik, Nizami Hasanli, Kadir Goksen
Abstract:
Researchers have devoted great interest to ternary and quaternary semiconductor compounds, especially with the improvement of optoelectronic technology. The quaternary compound Tl2Ga2Se3S, which was grown by the Bridgman method, carries the properties of the ternary thallium chalcogenides group of semiconductors with layered structure. This compound can be formed from TlGaSe2 crystals by replacing one quarter of the selenium atoms with sulfur atoms. Although Tl2Ga2Se3S crystals are not intentionally doped, some unintended defect types such as point defects, dislocations and stacking faults can occur during the growth processes of crystals. These defects can cause undesirable problems in semiconductor materials, especially those produced for optoelectronic technology. Defects of various types in semiconductor devices like LEDs and field effect transistors may act as non-radiative or scattering centers in electron transport. Also, quick recombination of holes with electrons without any energy transfer between charge carriers can occur due to the existence of defects. Therefore, the characterization of defects may help researchers working in this field to produce high quality devices. Thermoluminescence (TL) is an effective experimental method to determine the kinetic parameters of trap centers due to defects in crystals. In this method, the sample is illuminated at low temperature by a light whose energy is greater than the band gap of the studied sample. Thus, charge carriers in the valence band are excited to the delocalized band. Then, the charge carriers excited into the conduction band are trapped. The trapped charge carriers are released by heating the sample gradually, and these carriers then recombine with the opposite carriers at the recombination center. In this way, some luminescence is emitted from the samples. The emitted luminescence is converted to pulses by using an experimental setup controlled by a computer program, and the TL spectrum is obtained. Defect characterization of Tl2Ga2Se3S single crystals has been performed by TL measurements at low temperatures between 10 and 300 K with various heating rates ranging from 0.6 to 1.0 K/s. The TL signal due to the luminescence from trap centers revealed one glow peak having a maximum temperature of 36 K. Curve fitting and various heating rates methods were used for the analysis of the glow curve. The activation energy of 13 meV was found by the application of the curve fitting method. This practical method also established that the trap center exhibits the characteristics of mixed (general) kinetic order. In addition, the various heating rates analysis gave a result (13 meV) compatible with curve fitting when the temperature lag effect was taken into consideration. Since the studied crystals were not intentionally doped, these centers are thought to originate from stacking faults, which are quite possible in Tl2Ga2Se3S due to the weakness of the van der Waals forces between the layers. The distribution of traps was also investigated using an experimental method. A quasi-continuous distribution was attributed to the determined trap centers.
Keywords: chalcogenides, defects, thermoluminescence, trap centers
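The "various heating rates" analysis mentioned above can be illustrated with the classical two-heating-rates relation for first-order TL kinetics, in which the glow-peak maximum shifts with heating rate. The peak temperatures below are hypothetical values chosen near the reported 36 K peak; this is a sketch of the method, not the study's data.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy_two_rates(tm1, beta1, tm2, beta2):
    """Two-heating-rates estimate of the trap activation energy E (eV),
    from the first-order peak condition beta*E/(k*Tm^2) = s*exp(-E/(k*Tm)):
    E = k * Tm1*Tm2/(Tm1 - Tm2) * ln((beta1/beta2) * (Tm2/Tm1)^2)."""
    return (K_B * tm1 * tm2 / (tm1 - tm2)) * math.log(
        (beta1 / beta2) * (tm2 / tm1) ** 2)

# Hypothetical peak maxima near the reported 36 K glow peak at the two
# extreme heating rates used in the study (0.6 and 1.0 K/s).
e_trap = activation_energy_two_rates(tm1=39.0, beta1=1.0, tm2=36.0, beta2=0.6)
print(f"E = {e_trap * 1000:.0f} meV")  # ~14 meV, near the reported 13 meV
```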
Procedia PDF Downloads 282
77 Process of Production of an Artisanal Brewery in a City in the North of the State of Mato Grosso, Brazil
Authors: Ana Paula S. Horodenski, Priscila Pelegrini, Salli Baggenstoss
Abstract:
The brewing industry with artisanal concepts seeks to serve a specific market, with diversified production that has been gaining ground in the national environment, including the Amazon region. This growth is due to a more demanding consumer, with a diversified taste, who wants to try new types of beer, enjoying products with new aromas and flavors, as a differential from what is so widely spread by the big industrial brands. Thus, through qualitative research methods, the study aimed to investigate the production management process of a craft brewery in a city in the northern State of Mato Grosso (BRAZIL), providing knowledge of production processes and strategies in the industry. With the efficient use of resources, it is possible to obtain the necessary quality and provide better performance and differentiation of the company, besides analyzing the best management model. The research is descriptive with a qualitative approach through a case study. For the data collection, a semi-structured interview was elaborated, composed of the areas: microbrewery characterization, artisan beer production process, and the company's supply chain management. Also, production processes were observed during technical visits. With the study, it was verified that the artisan brewery researched develops preventive maintenance strategies for the inputs, machines, and equipment, so that the quality of the product and the production process is achieved. It was observed that the distance from the supplying centers requires the management of processes and the supply chain to be carried out with a longer planning time, so that the delivery of the final product is satisfactory. The production process of the brewery is composed of machines and equipment that allow the control and quality of the product; the manager states that the available equipment meets the demand for the productive capacity of the industry and its consumer market. This study also contributes to highlighting one of the challenges for the development of small breweries in front of the market giants, that is, the legislation, which classifies the microbreweries as producers of alcoholic beverages. This causes the micro and small business segment to be taxed like a major one, which has advantages in purchasing large batches of raw materials and tax incentives because they are large employers and taxpayers. It was possible to observe that the supply chain management system relies on spreadsheets and notes that are done manually, which could be simplified with a computer program to streamline procedures and reduce the risks and failures of the manual process. The control of waste and effluents generated by the industry is outsourced and meets the needs. Finally, the results showed that the industry uses preventive maintenance as a productive strategy, which allows better conditions for the production and quality of artisanal beer. The quality is directly related to the satisfaction of the final consumer, being prized and pursued throughout the production process, with the selection of better inputs, the effectiveness of the production processes and the relationship with the commercial partners.
Keywords: artisanal brewery, production management, production processes, supply chain
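As a hint of how the manual spreadsheet workflow noted above might be simplified, the sketch below models input lots and flags stock expiring before a cutoff date, which matters when supply lead times are long. The data structures and example lots are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InputLot:
    """One lot of brewing input (malt, hops, yeast...), like a spreadsheet row."""
    name: str
    quantity_kg: float
    received: date
    expiry: date

@dataclass
class Inventory:
    lots: list = field(default_factory=list)

    def receive(self, lot: InputLot):
        self.lots.append(lot)

    def expiring_before(self, cutoff: date):
        """Flag lots to use first -- relevant when supply lead times are long."""
        return [lot for lot in self.lots if lot.expiry < cutoff]

inv = Inventory()
inv.receive(InputLot("Pilsner malt", 500.0, date(2019, 3, 1), date(2019, 9, 1)))
inv.receive(InputLot("Cascade hops", 20.0, date(2019, 3, 10), date(2019, 6, 1)))
print([lot.name for lot in inv.expiring_before(date(2019, 7, 1))])
```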
Procedia PDF Downloads 120
76 Solutions for Food-Safe 3D Printing
Authors: Geremew Geidare Kailo, Igor Gáspár, András Koris, Ivana Pajčin, Flóra Vitális, Vanja Vlajkov
Abstract:
Three-dimensional (3D) printing, a very popular additive manufacturing technology, has recently undergone rapid growth and replaced the use of conventional technology from prototyping to producing end-user parts and products. The 3D printing technology involves a digital manufacturing machine that produces three-dimensional objects according to designs created by the user via 3D modeling or computer-aided design/manufacturing (CAD/CAM) software. The most popular 3D printing system is Fused Deposition Modeling (FDM), also called Fused Filament Fabrication (FFF). A 3D-printed object is considered food safe if it can have direct contact with the food without any toxic effects, even after cleaning, storing, and reusing the object. This work analyzes the processing timeline of the filament (material for 3D printing) from unboxing to the extrusion through the nozzle. It is an important task to analyze the growth of bacteria on the 3D-printed surface and in gaps between the layers. By default, the 3D-printed object is not food safe after longer usage and direct contact with food (even when food-safe filaments are used), but there are solutions for this problem. The aim of this work was to evaluate the 3D-printed object from different perspectives of food safety. The first was testing antimicrobial 3D printing filaments from a food safety aspect, since a 3D-printed object in the food industry may have direct contact with the food; therefore, the main purpose of the work is to reduce the microbial load on the surface of a 3D-printed part. Coating with epoxy resin was investigated, too, to see its effect on mechanical strength, thermal resistance, surface smoothness and food safety (cleanability). Another aim of this study was to test new temperature-resistant filaments and the effect of high temperature on 3D-printed materials to see if they can be cleaned with boiling or similar high-temperature treatment. This work proved that all three mentioned methods can improve the food safety of the 3D-printed object, but the size of this effect varies. The best result was obtained by coating with epoxy resin: the object was cleanable like any other injection-molded plastic object with a smooth surface. Very good results were obtained by boiling the objects, and it is good to see that nowadays more and more special filaments have a food-safe certificate and can withstand boiling temperatures too. Using antibacterial filaments reduced bacterial colonies to 1/5, but the biggest advantage of this method is that it doesn’t require any post-processing; the object is ready out of the 3D printer. Acknowledgements: The research was supported by the Hungarian and Serbian bilateral scientific and technological cooperation project funded by the Hungarian National Office for Research, Development and Innovation (NKFI, 2019-2.1.11-TÉT-2020-00249) and the Ministry of Education, Science and Technological Development of the Republic of Serbia. The authors acknowledge the Hungarian University of Agriculture and Life Sciences’ Doctoral School of Food Science for the support in this study.
Keywords: food safety, 3D printing, filaments, microbial, temperature
Procedia PDF Downloads 142
75 Social Media Governance in UK Higher Education Institutions
Authors: Rebecca Lees, Deborah Anderson
Abstract:
Whilst the majority of research into social media in education focuses on the applications for teaching and learning environments, this study looks at how such activities can be managed by investigating the current state of social media regulation within UK higher education. Social media has pervaded almost all aspects of higher education, from marketing, recruitment and alumni relations to both distance and classroom-based learning and teaching activities. In terms of who uses it and how it is used, social media is growing at an unprecedented rate, particularly amongst the target market for higher education. Whilst the platform presents opportunities not found in more traditional methods of communication and interaction, such as speed and reach, it also carries substantial risks that come with inappropriate use, lack of control and issues of privacy. Typically, organisations rely on the concept of a social contract to guide employee behaviour to conform to the expectations of that organisation. Yet, where academia and social media intersect, applying the notion of a social contract to enforce governance may be problematic: firstly, considering the emphasis on treating students as customers with a growing focus on the use and collection of satisfaction metrics; and secondly, regarding the notion of academics’ freedom of speech, opinion and discussion, which is a long-held tradition of learning instruction. Therefore the need for sound governance procedures to support expectations over online behaviour is vital, especially when the speed and breadth of adoption of social media activities has in the past outrun organisations’ abilities to manage it. An analysis of the current level of governance was conducted by gathering relevant policies, guidelines and best practice documentation available online via internet search and institutional requests. The documents were then subjected to a content analysis in the second phase of this study to determine the approach taken by institutions to apply such governance. Documentation was separated according to audience, i.e., applicable to staff, students or all users. Given that many of these documents included guests and visitors to the institution within their scope, being easily accessible was considered important. Yet, within the UK only about half of all education institutions had explicit social media governance documentation available online without requiring member access or considerable searching. Where they existed, the majority focused solely on employee activities and tended to be policy-based rather than rooted in guidelines or best practices, or held a fallback position of governing online behaviour via implicit instructions within IT and computer regulations. Explicit instruction over expected online behaviours is therefore lacking within UK HE. Given the number of educational practices that now include significant online components, it is imperative that education organisations keep up to date with the progress of social media use. Initial results from the second phase of this study, which analyses the content of the governance documentation, suggest the documents require reading levels at or above the target audience, with some considerable variability in length and layout. Further analysis will add to this growing field of investigating social media governance within higher education.
Keywords: governance, higher education, policy, social media
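The reading-level finding above can be reproduced in principle with a standard readability index. The sketch below implements a rough Flesch-Kincaid grade estimate with a naive syllable counter; it is a screening aid under crude assumptions, not the instrument used in the study, and the policy excerpt is invented.

```python
import re

def syllables(word):
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text) or ["a"]
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59

policy_excerpt = ("Staff must not disclose confidential institutional "
                  "information through personal social media accounts.")
print(f"grade level: {flesch_kincaid_grade(policy_excerpt):.1f}")
```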
Procedia PDF Downloads 184
74 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage
Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng
Abstract:
Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archeology fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and the limitation of using feature design and manual extraction methods is loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in imaging learning and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age=45.19 years±14.20 [SD]; test set, mean age=46.57±9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. Based on the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results showed that the DL of the ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning
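A minimal sketch of the modeling setup described above: a ResNeXt-50 backbone with its classification head replaced by a single regression output, trained with L1 loss (which equals the MAE metric). The weights, optimizer settings, and dummy batch below are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNeXt-50 backbone with the classification head swapped for a single
# regression output (predicted age in years).
model = models.resnext50_32x4d(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.L1Loss()  # L1 loss equals the mean absolute error metric
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One dummy training step on a fake batch of 224x224 VR reconstructions.
images = torch.randn(8, 3, 224, 224)
ages = torch.rand(8, 1) * 50 + 20        # targets in the 20-70 year range

optimizer.zero_grad()
loss = criterion(model(images), ages)
loss.backward()
optimizer.step()
print(f"batch MAE: {loss.item():.2f} years")
```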
Procedia PDF Downloads 73
73 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts
Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig
Abstract:
This study focuses on the evaluation of snow avalanche simulations, based on a survey that has been carried out among avalanche experts. In the last decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing the prediction of certain quantities (e.g. pressure, velocities, flow heights, runout lengths etc.) of the avalanche flow. Because of the highly variable regimes of the flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies to the fluid-dynamical laws of other materials are drawn. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there exist high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations into an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, the model development is compelled to introduce further simplifications and the related uncertainties. In the light of these issues many questions arise on avalanche simulations, on their assets and drawbacks, on potentials for improvements as well as their application in practice. To address these questions a survey among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries has been conducted. In the questionnaire, special attention is drawn to the experts’ opinions regarding the influence of certain variables on the simulation result, their uncertainty and the reliability of the results. Furthermore, it was tested to which degree a simulation result influences the decision making for a hazard assessment. A discrepancy could be found between a large uncertainty of the simulation input parameters as compared to a relatively high reliability of the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, where different assumptions are tested, comparing the results of different flow models along with the use of supplemental data such as chronicles, field observations and silent witnesses, inter alia, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on the modeling fashion could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.
Keywords: expert interview, hazard management, modeling, simulation, snow avalanche
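One widely used family of the deterministic flow models discussed above rests on the Voellmy friction relation. The point-mass runout sketch below shows how strongly the result depends on uncertain inputs such as the friction parameters and flow height — exactly the sensitivity the survey asks experts about. All values are illustrative.

```python
import math

def voellmy_runout(slope_deg, mu=0.2, xi=1500.0, h=1.0, v0=30.0, dt=0.01):
    """Point-mass runout on a uniform slope under Voellmy friction:
    dv/dt = g*(sin(a) - mu*cos(a)) - g*v^2/(xi*h).
    mu, xi, flow height h and release speed v0 are illustrative values."""
    g, a = 9.81, math.radians(slope_deg)
    v, x, t = v0, 0.0, 0.0
    while v > 0 and t < 600.0:   # time guard: stops only below the friction angle
        v += (g * (math.sin(a) - mu * math.cos(a))
              - g * v * v / (xi * h)) * dt
        x += max(v, 0.0) * dt
        t += dt
    return x

for mu in (0.15, 0.20, 0.25):    # small input change, large runout change
    print(f"mu = {mu:.2f}: runout = {voellmy_runout(5.0, mu=mu):.0f} m")
```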
Procedia PDF Downloads 326
72 Hear Me: The Learning Experience on “Zoom” of Students With Deafness or Hard of Hearing Impairments
Authors: H. Weigelt-Marom
Abstract:
Over the years and up to the outbreak of the COVID-19 pandemic, deaf or hard of hearing students studying in higher education institutions participated in lectures on campus using hearing aids and strategies adapted for frontal learning in a classroom. Usually, these aids were well known to them from their earlier study experience in school. However, the transition to online lessons, due to the latest pandemic, led deaf or hard of hearing students to study outside of their physical, well-known learning environment. The change of learning environment and structure raised new challenges for these students. The present study examined the learning experience, limitations, challenges and benefits of learning online with lecturers and classmates via the “Zoom” video conference program among deaf or hard of hearing students in academic settings. In addition, emotional and social aspects related to learning in general versus on “Zoom” were examined. The study included 18 students diagnosed as deaf or hard of hearing, studying in various higher education institutions in Israel. All students had experienced lessons on “Zoom”. Following recruitment of the study group by the deaf and hard of hearing non-profit organization “Ma’agalei Shema” and receiving the participants’ informed consent, students were requested to answer a Google Forms questionnaire and participate in an interview. The questionnaire included background information (e.g., age, year of studies, faculty etc.), level of computer literacy, and level of hearing and forms of communication (e.g., lip reading, sign language etc.). The interviews were one-on-one, semi-structured, in-depth interviews conducted by the main researcher of the study (interview duration: up to 60 minutes). The interviews were held on “Zoom” using specific adaptations for each interviewee: a clear face screen of the interviewer for lip and face reading, and/or professional sign language or a live text transcript of the conversation. Additionally, interviewees used their audio devices if needed. Questions regarded: the learning experience, difficulties and advantages of studying using “Zoom”, learning in a classroom versus on “Zoom”, and emotional and social aspects related to learning. Thematic analysis of the interviews revealed severe difficulties regarding the ability of deaf or hard of hearing students to comprehend during “Zoom” lessons without adaptive aids. For example, interviewees indicated difficulties understanding “Zoom” lessons due to their inability to use hearing devices commonly used by them in the classroom (e.g., FM systems). 80% indicated that they could not comprehend “Zoom” lessons since they could not see the lecturer’s face, either because lecturers did not agree to open their cameras or because they did not keep a straightforward, clear face appearance while teaching. However, not all descriptions of learning via “Zoom” were negative. For example, 20% reported the recording of “Zoom” lessons as a main advantage, enabling them to repeatedly watch the lessons at their own pace, mostly assisted by friends and family to translate the audio output into an accessible input. These findings and others regarding the learning experience of the study group on “Zoom”, as well as their recommendations for enabling deaf or hard of hearing students to study inclusively online, will be presented at the conference.
Keywords: deaf or hard of hearing, learning experience, Zoom, qualitative research
Procedia PDF Downloads 116
71 A Conceptual Model of Sex Trafficking Dynamics in the Context of Pandemics and Provisioning Systems
Authors: Brian J. Biroscak
Abstract:
In the United States (US), “sex trafficking” is defined at the federal level in the Trafficking Victims Protection Act of 2000 as encompassing a number of processes such as recruitment, transportation, and provision of a person for the purpose of a commercial sex act. It involves the use of force, fraud, or coercion, or cases in which the person induced to perform such an act has not attained 18 years of age. Accumulating evidence suggests that sex trafficking is exacerbated by social and environmental stressors (e.g., pandemics). Given that “provision” is a key part of the definition, “provisioning systems” may offer a useful lens through which to study sex trafficking dynamics. Provisioning systems are the social systems connecting individuals, small groups, entities, and embedded communities as they seek to satisfy their needs and wants for goods, services, experiences and ideas through value-based exchange in communities. This project presents a conceptual framework for understanding sex trafficking dynamics in the context of the COVID pandemic. The framework is developed as a system dynamics simulation model based on published evidence, social and behavioral science theory, and key informant interviews with stakeholders from the Protection, Prevention, Prosecution, and Partnership sectors in one US state. This “4 P Paradigm” has been described as fundamental to the US government’s anti-trafficking strategy. The present research question is: “How do sex trafficking systems (e.g., supply, demand and price) interact with other provisioning systems (e.g., networks of organizations that help sexually exploited persons) to influence trafficking over time vis-à-vis the COVID pandemic?” Semi-structured interviews with stakeholders (n = 19) were analyzed based on grounded theory and combined for computer simulation. The first step (Problem Definition) was completed by open coding video-recorded interviews, supplemented by a literature review. The model depicts provision of sex trafficking services for victims and survivors as declining in March 2020, coincidental with COVID, but eventually rebounding. The second modeling step (Dynamic Hypothesis Formulation) was completed by open and axial coding of interview segments, as well as consulting peer-reviewed literature. Part of the hypothesized explanation for changes over time is that the sex trafficking system behaves somewhat like a commodities market, with each of the other subsystems exhibiting delayed responses but collectively keeping trafficking levels below what they would be otherwise. The next steps (Model Building & Testing) led to a ‘proof of concept’ model that can be used to conduct simulation experiments and test various action ideas, by taking model users outside the entire system and seeing it whole. If sex trafficking dynamics unfold as hypothesized, e.g., oscillate post-COVID, then one potential leverage point is to address the lack of information feedback loops between the actual occurrence and consequences of sex trafficking and those who seek to prevent its occurrence, prosecute the traffickers, protect the victims and survivors, and partner with the other anti-trafficking advocates. Implications for researchers, administrators, and other stakeholders are discussed.
Keywords: pandemics, provisioning systems, sex trafficking, system dynamics modeling
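A toy stock-and-flow sketch of the dynamic hypothesis above: trafficking activity responds to a pandemic-like shock, while the responding subsystems see the change only after an information delay, producing the delayed, oscillating rebound described. Every parameter here is invented for illustration; this is not the calibrated study model.

```python
# Toy stock-and-flow structure for the "commodities market with delayed
# responses" hypothesis: a pandemic-like shock perturbs trafficking
# activity, but response capacity reacts only to a delayed perception
# of it, producing overshoot and a gradual rebound.
dt, horizon = 0.25, 60.0                  # time step and horizon, months
trafficking, capacity = 100.0, 20.0       # initial stock levels
perceived, delay = trafficking, 6.0       # information delay, months

history = []
for step in range(int(horizon / dt)):
    t = step * dt
    shock = 30.0 if 12.0 <= t < 18.0 else 0.0             # disturbance inflow
    perceived += (trafficking - perceived) / delay * dt   # smoothing delay
    capacity += (0.3 * perceived - capacity) / 3.0 * dt   # delayed response
    trafficking += (5.0 + shock - 0.2 * capacity) * dt    # inflow - suppression
    history.append(trafficking)

print(f"peak: {max(history):.0f}, final: {history[-1]:.0f}")
```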
Procedia PDF Downloads 79
70 Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico
Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos
Abstract:
Bridges are one of the most seismically vulnerable structures in highway transportation systems. The general process for assessing the seismic vulnerability of a bridge involves the evaluation of its overall capacity and demand. One of the most common procedures to obtain this capacity is by means of pushover analysis of the structure. Typically, the bridge capacity is assessed using non-linear static methods or non-linear dynamic analyses. The non-linear dynamic approaches use step-by-step numerical solutions for assessing the capacity, with the inconvenience of high computing time. In this study, a non-linear static analysis (‘pushover analysis’) was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The bridge superstructure consists of three simply supported spans with a total length of 76 m: 22 m for each of the extreme spans and 32 m for the central span. The deck width is 14 m and the concrete slab depth is 18 cm. The bridge is built by means of frames of five piers with hollow box-shaped sections. The dimensions of these piers are 7.05 m height and 1.20 m diameter. The numerical model was created using commercial software considering linear and non-linear elements. In all cases, the piers were represented by frame-type elements with geometrical properties obtained from the structural project and construction drawings of the bridge. The deck was modeled with a mesh of rectangular thin shell (plate bending and stretching) finite elements. The moment-curvature analysis was performed for the sections of the piers of the bridge, considering in each pier the effect of confined concrete and its reinforcing steel. In this way, plastic hinges were defined at the base of the piers to carry out the pushover analysis. In addition, time history analyses were performed using 19 accelerograms of real earthquakes that have been registered in Guanajuato. In this way, the displacements produced in the bridge were determined. Finally, pushover analysis was applied through the control of displacements in the piers to obtain the overall capacity of the bridge before failure occurs. It was concluded that the lateral deformation of the piers due to a critical earthquake occurring in this zone is almost imperceptible, since the geometry and reinforcement demanded by the current design standards provide a displacement capacity that is excessive compared to the demand. According to the analysis, it was found that the frames built with five piers increase the rigidity in the transverse direction of the bridge. Hence it is proposed to reduce these frames from five piers to three piers, maintaining the same geometrical characteristics and the same reinforcement in each pier. Also, the mechanical properties of the materials (concrete and reinforcing steel) were maintained. Once a pushover analysis was performed considering this configuration, it was concluded that the bridge would continue having a “correct” seismic behavior, at least for the 19 accelerograms considered in this study. In this way, costs in material, construction, time and labor would be reduced in this case study.
Keywords: collapse mechanism, moment-curvature analysis, overall capacity, push-over analysis
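The displacement-controlled pushover described above produces a capacity curve (base shear versus displacement). The sketch below idealizes a single pier with a bilinear force-displacement law; the stiffness, yield shear, and hardening ratio are assumed values, not the bridge's actual properties.

```python
import numpy as np

def pushover_curve(k_elastic, v_yield, hardening_ratio, d_max, n=200):
    """Displacement-controlled pushover of an idealized pier: bilinear
    base-shear vs. displacement law (elastic branch, then post-yield
    hardening once the plastic hinge at the base forms)."""
    d = np.linspace(0.0, d_max, n)
    d_y = v_yield / k_elastic
    v = np.where(d <= d_y,
                 k_elastic * d,
                 v_yield + hardening_ratio * k_elastic * (d - d_y))
    return d, v

# Assumed values for a 7.05 m hollow pier: 80 kN/mm elastic stiffness,
# 2000 kN yield base shear, 2% post-yield hardening, pushed to 150 mm.
disp, shear = pushover_curve(k_elastic=80.0, v_yield=2000.0,
                             hardening_ratio=0.02, d_max=150.0)
print(f"capacity at {disp[-1]:.0f} mm: {shear[-1]:.0f} kN")
```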
Procedia PDF Downloads 151
69 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach
Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft
Abstract:
Chronic hepatitis B virus (HBV) infection can be treated with nucleot(s)ide analogs (NA), for example, which inhibit HBV replication. However, they have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NA needs to be taken life-long, which is not feasible for all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires seamless integration of an enormous amount of diverse and fine-grained data from viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA-typing) rather than only one. However, the values of these variables are collected independently. They are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently in these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling the human immune systems. The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell response, and HLA-typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This new technique enables us to harmonize and standardize heterogeneous datasets in the defined modeling of the data integration system, which will be evaluated in the knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, the analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles among various patients and to evaluate factors playing a role in a holistic profile of patients with HBsAg level loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, which is run by a multidisciplinary team composed of computer scientists, infection biologists, and immunologists.
Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology
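A minimal sketch of the knowledge-graph construction described above, using RDF triples to give heterogeneous patient records a unified, queryable view. The namespace, property names, and patient values are invented for illustration; the project's actual ontology terms would replace them.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/hbv/")   # invented namespace
g = Graph()
g.bind("ex", EX)

# One patient's heterogeneous records unified as triples.
patient = EX["patient_001"]
g.add((patient, RDF.type, EX.Patient))
g.add((patient, EX.treatment, EX.NucleosideAnalog))
g.add((patient, EX.hbsagLevel, Literal(250.0, datatype=XSD.float)))  # IU/mL
g.add((patient, EX.hlaType, Literal("HLA-A*02:01")))

# A unified view: query every NA-treated patient's HBsAg level.
q = """SELECT ?p ?level WHERE {
         ?p ex:treatment ex:NucleosideAnalog ; ex:hbsagLevel ?level . }"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.p, row.level)
```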
Procedia PDF Downloads 108
68 A Mathematical Model for Studying Landing Dynamics of a Typical Lunar Soft Lander
Authors: Johns Paul, Santhosh J. Nalluveettil, P. Purushothaman, M. Premdas
Abstract:
Lunar landing is one of the most critical phases of a lunar mission. The lander is provided with a soft landing system to prevent structural damage to the lunar module by absorbing the landing shock and also to assure stability during landing. Presently available software is not capable of simulating rigid body dynamics coupled with contact simulation and elastic/plastic deformation analysis. Hence a separate mathematical model has been generated for studying the dynamics of a typical lunar soft lander. Parameters used in the analysis include lunar surface slope, coefficient of friction, initial touchdown velocity (vertical and horizontal), mass and moment of inertia of the lander, crushing force due to the energy absorbing material in the legs, number of legs and geometry of the lander. The mathematical model is capable of simulating plastic and elastic deformation of honeycomb, frictional force between landing leg and lunar soil, surface contact simulation, lunar gravitational force, rigid body dynamics and linkage dynamics of the inverted tripod landing gear. The nonlinear differential equations generated for studying the dynamics of the lunar lander are solved by a numerical method. A Matlab program has been used as a computer tool for solving the numerical equations. The position of each kinematic joint is defined by mathematical equations for the generation of the equations of motion. All hinged locations are defined by position vectors with respect to body-fixed coordinates. The vehicle rigid body rotations and motions about body coordinates are only due to the external forces and moments arising from the footpad reaction force due to impact, footpad frictional force and the weight of the vehicle. All these forces are mathematically simulated for the generation of the equations of motion. The validation of the mathematical model is done in two different phases. The first phase is the validation of the plastic deformation of crushable elements by employing the conservation of energy principle. The second phase is the validation of the rigid body dynamics of the model by simulating a lander model in ADAMS software after replacing the crushable elements with elastic spring elements. Simulation of plastic deformation along with rigid body dynamics and contact force cannot be modeled in ADAMS. Hence the plastic element of the primary strut is replaced with a spring element and the analysis is carried out in ADAMS software. The same analysis is also carried out using the mathematical model, where the simulation of honeycomb crushing is replaced by elastic spring deformation, and the results are compared with the ADAMS analysis. The rotational motion of the linkages and the 6-degree-of-freedom motion of the lunar lander about its CG can be validated in ADAMS software by replacing the crushing element with a spring element. The model is also validated by the drop test results of a 4-leg lunar lander. This paper presents the details of the mathematical model generated and its validation.
Keywords: honeycomb, landing leg tripod, lunar lander, primary link, secondary link
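A one-dimensional sketch of the touchdown dynamics described above: the honeycomb cartridge crushes at a nearly constant force while compressing, and the energy balance mirrors the paper's first validation phase. The mass, crushing force, and touchdown speed are assumed values, and the sketch uses a generic ODE solver rather than the authors' Matlab code.

```python
import numpy as np
from scipy.integrate import solve_ivp

G_MOON = 1.62        # lunar gravity, m/s^2
MASS = 750.0         # lander mass, kg (assumed)
F_CRUSH = 12000.0    # constant plastic crushing force of honeycomb, N (assumed)

def dynamics(t, y):
    z, v = y                       # vertical position and velocity
    crushing = v < 0.0             # honeycomb resists only while compressing
    a = -G_MOON + (F_CRUSH / MASS if crushing else 0.0)
    return [v, a]

def arrested(t, y):                # stop integrating when descent is arrested
    return y[1]
arrested.terminal, arrested.direction = True, 1

sol = solve_ivp(dynamics, [0.0, 5.0], [0.0, -3.0], events=arrested, max_step=1e-3)
stroke = -sol.y[0, -1]
# Energy check mirroring the first validation phase: for constant-force
# crushing, 0.5*m*v0^2 + m*g*stroke should equal F_CRUSH*stroke.
print(f"stroke used: {stroke:.3f} m, arrested at t = {sol.t[-1]:.3f} s")
```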
Procedia PDF Downloads 351
67 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling
Authors: Justyna P. Majewska, Szymon M. Truskolaski
Abstract:
The growing importance of digitalized services in the so-called new economy, including the e-sports industry, can be observed recently. Various demographic or technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating activities, there is a growing need to participate in events whose content is not sports competitions but computer game challenges – e-sport. The literature in this area so far focuses on determining the number of e-sport fans with elements of a simple statistical description (mainly concerning demographic characteristics such as age, gender, place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for a deeper recognition of the determinants of the behavioral patterns upon selecting digitalized services by customers, which, in the absence of available large data sets, can be achieved by using econometric simulations – multi-agent modeling. The cognitive aim of the study is to reveal internal and external determinants of behavioral patterns of customers, taking into account various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of customers themselves and their environment) was developed, which allowed identifying a three-stage development scenario: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities regarding the transition process were estimated using the Method of Simulated Moments. The estimation of the agent-based model parameters and sensitivity analysis reveal crucial factors that have driven a rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarization with the rules of games as well as sports disciplines, active and passive participation history and individual perception of challenging activities. Environmental factors include the general reception of games, the number and level of recognition of community builders and the level of technological development of streaming as well as community building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high. They decrease during the phase of standardization and increase again in the phase of full professionalization, when new customers perceive the participation history as inaccessible. In this case, they are prone to switch to new methods of service application – in the case of e-sport vs. sports, to new content and more modern methods of its delivery. In a wider context, the findings in the paper support the idea of a life cycle of services regarding methods of their application, from “traditional” to digitalized.
Keywords: agent-based modeling, digitalized services, e-sport, spectators motives
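A toy agent-based sketch of the three-stage scenario identified above: heterogeneous agents adopt e-sport spectating when their appeal score clears a phase-dependent entry barrier that is high initially, low under standardization, and high again under full professionalization. All probabilities and weights are invented; the study's calibrated transition probabilities are not reproduced.

```python
import random

random.seed(1)

# Phase-dependent entry barriers: high -> low -> high again.
BARRIER = {"initial": 0.8, "standardization": 0.3, "professionalization": 0.6}

class Agent:
    def __init__(self):
        self.familiarity = random.random()   # with game/sport rules
        self.history = random.random()       # active/passive participation
        self.adopted = False

    def step(self, phase, platform_level):
        if self.adopted:
            return
        appeal = (0.5 * self.familiarity + 0.3 * self.history
                  + 0.2 * platform_level)
        if appeal > BARRIER[phase] and random.random() < appeal:
            self.adopted = True

agents = [Agent() for _ in range(10_000)]
for phase, platform in [("initial", 0.2), ("standardization", 0.6),
                        ("professionalization", 0.9)]:
    for _ in range(5):                       # five model periods per phase
        for agent in agents:
            agent.step(phase, platform)
    share = sum(agent.adopted for agent in agents) / len(agents)
    print(f"after {phase}: {share:.0%} adopters")
```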
Procedia PDF Downloads 172
66 Prevalence and Factors Associated With Concurrent Use of Herbal Medicine and Anti-retroviral Therapy Among HIV/Aids Patients Attending Selected HIV Clinics in Wakiso District
Authors: Nanteza Rachel
Abstract:
Background: Worldwide, there were 36.7 million people living with Human Immunodeficiency Virus (HIV) in 2015, up from 35 million at the end of 2013. Wakiso district is one of the hotspots for HIV/Acquired Immune Deficiency Syndrome (AIDS) infection in Uganda, with a prevalence of 8.1%. Herbal medicine has gained popularity among HIV/AIDS patients as adjuvant therapy to reduce the adverse effects of ART. Regardless of the subsidized and physical availability of Anti-Retroviral Therapy (ART), the majority of Africans living with HIV/AIDS resort to adding traditional medicine to their ART. Results from a pilot observation made by the PI indicated that 13 out of 30 People Living With AIDS (PLWA) attending HIV clinics in Wakiso district reported using herbal preparations despite the fact that they were taking Anti-Retrovirals (ARVs); this prompted the present study. Purpose of the study: To determine the prevalence and factors associated with concurrent use of herbal medicine and anti-retroviral therapy among HIV/AIDS patients attending selected HIV clinics in Wakiso district. Methodology: This was a cross-sectional study with both quantitative data collection (use of a questionnaire) and qualitative data collection (key informants’ interviews). A mixed method of sampling was used, that is, purposive and random sampling. Purposive sampling was based on the location in the district and was used to select 7 health facilities based on the 7 health sub-districts of Wakiso. Simple random sampling was used to select one HIV clinic from each of the 7 health sub-districts. Furthermore, the study units were enrolled into the study as they entered the HIV clinics, and 105 respondents were interviewed. Both manual methods and computer packages (SPSS) were used to analyze the data. Results: The prevalence of concurrent use of herbal medicine and ART was 38 (36.2%). The HIV symptoms commonly treated with herbs were fever 27 (71.1%), diarrhea 3 (7.9%) and cough 2 (5.3%). Commonly used herbs for fever were Omululuza (Vernonia amygdalina), Ekigagi (Aloe sp.) and Nalongo (Justicia betonica Linn), while for diarrhea it was Ntwatwa. The side effects treated also included: too much pain, itchy pain of HIV, anaemia, feeling sick, loss/gain of appetite, joint pain and bad dreams. Among the herbs used to soothe the side effects, avocado leaves (Persea americana Mill.) were used for anaemia. The significant factors associated with concurrent use of herbal medicine were familiarity with herbs and the expense of conventional medicine for managing HIV symptoms. The other significant factor was hostility exhibited towards patients by health personnel providing HIV care. Conclusion: Herbal medicine is widely used by clients in HIV/AIDS care. Patients’ familiarity with herbs and the expense of conventional medicine were associated with concurrent use of herbal medicine and ART. The exhibition of hostility towards HIV/AIDS patients by health care providers was also associated with concurrent use of herbal medicine and ART among HIV/AIDS patients.
Keywords: HIV patients, herbal medicine, antiretroviral therapy, factors associated
Procedia PDF Downloads 97
65 Assessing Sustainability of Bike Sharing Projects Using Envision™ Rating System
Authors: Tamar Trop
Abstract:
Bike sharing systems can be important elements of smart cities, as they have the potential for impact on multiple levels. These systems can add a significant alternative to other modes of mass transit in cities that are continuously looking for measures to become more livable and maintain their attractiveness for citizens, businesses, and tourism. Bike sharing began in Europe in 1965, and a viable format emerged in the mid-2000s thanks to the introduction of information technology. The rate of growth in bike-sharing schemes and fleets has been very rapid since 2008 and has probably outstripped growth in every other form of urban transport. Today, public bike-sharing systems are available on five continents, in over 700 cities, operating more than 800,000 bicycles at approximately 40,000 docking stations. Since modern bike sharing systems have become prevalent only in the last decade, the existing literature analyzing these systems and their sustainability is relatively new. The purpose of the presented study is to assess the sustainability of these newly emerging transportation systems, using the Envision™ rating system as a methodological framework and the Israeli 'Tel-O-Fun' bike sharing project as a case study. The assessment was conducted by project team members. Envision™ is a new guidance and rating system used to assess and improve the sustainability of all types and sizes of infrastructure projects. This tool provides a holistic framework for evaluating and rating the community, environmental, and economic benefits of infrastructure projects over the course of their life cycle. The evaluation method has 60 sustainability criteria divided into five categories: quality of life, leadership, resource allocation, natural world, and climate and risk. The 'Tel-O-Fun' project was launched in Tel Aviv-Yafo in 2011 and today provides about 1,800 bikes for rent at 180 rental stations across the city. The system is operated through computer terminals located at the docking stations. The highest-rated sustainable features of the project include: (a) improving quality of life by offering a low-cost and efficient form of public transit, improving community mobility and access, enabling flexible travel within a multimodal transportation system, saving commuters time and money, enhancing public health, and reducing air and noise pollution; (b) improving resource allocation by offering inexpensive and flexible last-mile connectivity, reducing space, material, and energy consumption, reducing wear and tear on public roads, and maximizing the utility of existing infrastructure; and (c) reducing greenhouse gas emissions from transportation. Overall, the 'Tel-O-Fun' project was highly scored as an environmentally sustainable and socially equitable infrastructure. The use of this practical framework for evaluation also yielded various interesting insights into the shortcomings of the system and the characteristics of good solutions. This can contribute to the improvement of the project and may assist planners and operators of bike sharing systems in developing sustainable, efficient, and reliable transportation infrastructure within smart cities.
Keywords: bike sharing, Envision™, sustainability rating system, sustainable infrastructure
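To give a flavor of such a criteria-based assessment, the sketch below aggregates per-criterion scores into the five Envision™ categories named in the abstract. The scoring scale, criterion labels, and values are hypothetical placeholders for illustration only; they are not Envision's actual credits or achievement levels.

```python
# Hypothetical per-criterion scores on an assumed 0-5 scale;
# Envision's real credit structure and weighting differ.
scores = {
    "quality_of_life":     {"QL1": 4, "QL2": 5, "QL3": 3},
    "leadership":          {"LD1": 3, "LD2": 4},
    "resource_allocation": {"RA1": 5, "RA2": 4, "RA3": 4},
    "natural_world":       {"NW1": 2, "NW2": 3},
    "climate_and_risk":    {"CR1": 4, "CR2": 3},
}

for category, criteria in scores.items():
    earned, possible = sum(criteria.values()), 5 * len(criteria)
    print(f"{category}: {earned}/{possible} ({earned / possible:.0%})")
```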
Procedia PDF Downloads 340
64 Rotational and Linear Accelerations of an Anthropometric Test Dummy Head from Taekwondo Kicks among Amateur Practitioners
Authors: Gabriel P. Fife, Saeyong Lee, David M. O'Sullivan
Abstract:
Introduction: Although investigations into injury characteristics are well represented in the literature, few have investigated the biomechanical characteristics associated with head impacts in Taekwondo. Therefore, the purpose of this study was to identify the kinematic characteristics of head impacts due to taekwondo kicks among non-elite practitioners. Participants: Male participants (n = 11, 175 ± 5.3 cm, 71 ± 8.3 kg) with 7.5 ± 3.6 years of taekwondo training volunteered for this study. Methods: Participants were asked to perform five repetitions of each technique (i.e., turning kick, spinning hook kick, spinning back kick, front axe kick, and clench axe kick) aimed at the Hybrid III head with their dominant kicking leg. All participants wore a protective foot pad (thickness = 12 mm) that is commonly used in competition and training. To simulate head impact in taekwondo, the target consisted of a Hybrid III 50th Percentile Crash Test Dummy (Hybrid III) head (mass = 5.1 kg) and neck (fitted with taekwondo headgear) secured to an aluminum support frame and positioned at each athlete's standing height. The Hybrid III head form was instrumented with a 500 g tri-axial accelerometer (PCB Piezotronics) mounted at the head center of gravity to obtain resultant linear accelerations (RLA). Rotational accelerations (ROTA) were collected using three angular rate sensors mounted orthogonally to each other (Diversified Technical Systems ARS-12K Angular Rate Sensor). The accelerometers were interfaced via a 3-channel, battery-powered integrated circuit piezoelectric sensor signal conditioner (PCB Piezotronics) and connected to a desktop computer for analysis. Acceleration data were captured using LabVIEW SignalExpress and processed in accordance with SAE J211-1 channel frequency class 1000. Head injury criterion (HIC) values were calculated using the VSR software. A one-way analysis of variance was used to determine differences between kicks, while the Tukey HSD test was employed for pairwise comparisons. The threshold for a meaningful difference was set at an effect size of 0.20. All statistical analyses were done using R 3.1.0. Results: A statistically significant difference was observed in RLA (p = 0.00075); however, these differences were not clinically meaningful (η² = 0.04, 95% CI: -0.94 to 1.03). No differences were identified for ROTA (p = 0.734, η² = 0.0004, 95% CI: -0.98 to 0.98). A statistically significant difference (p < 0.001) between kicks in HIC was observed, with a medium effect (η² = 0.08, 95% CI: -0.98 to 1.07); however, the confidence interval of this difference indicates uncertainty. The Tukey HSD test identified differences (p < 0.001) between kicking techniques in RLA and HIC. Conclusion: This study observed head impact levels comparable to previous studies of similar objectives and methodology. These data are important, as impact measures from this study may be more representative of the impact levels experienced by non-elite competitors. Although the clench axe kick elicited a lower RLA, its ROTA was higher than the levels from other techniques (although the differences were not large in terms of effect sizes). As the axe kick has been reported to cause severe head injury, future studies of this kick may be important.
Keywords: Taekwondo, head injury, biomechanics, kicking
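For readers unfamiliar with the HIC metric used above, the sketch below is a minimal numerical implementation of the standard HIC definition: the maximum, over all time windows (here capped at 15 ms), of the window length times the windowed mean acceleration raised to the 2.5 power. It illustrates the published formula only, not the VSR software's implementation, and the pulse data are synthetic.

```python
import numpy as np

def hic(t, a, max_window=0.015):
    """Head Injury Criterion: max over windows (t2 - t1) <= max_window of
    (t2 - t1) * (mean acceleration over the window) ** 2.5.
    t: time [s], a: resultant linear acceleration [g]."""
    # cumulative trapezoidal integral of a(t)
    ca = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
    best = 0.0
    for i in range(len(t) - 1):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            mean_a = (ca[j] - ca[i]) / dt
            best = max(best, dt * mean_a ** 2.5)
    return best

# Example: a synthetic 10 ms half-sine pulse peaking at 80 g (hypothetical data)
t = np.linspace(0.0, 0.010, 201)
a = 80.0 * np.sin(np.pi * t / 0.010)
print(f"HIC15 = {hic(t, a):.0f}")
```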
Procedia PDF Downloads 26
63 Fort Conger: A Virtual Museum and Virtual Interactive World for Exploring Science in the 19th Century
Authors: Richard Levy, Peter Dawson
Abstract:
Ft. Conger, located in the Canadian Arctic, was one of the most remote 19th-century scientific stations. Established in 1881 on Ellesmere Island, its wood-framed structure provided a permanent base from which to conduct scientific research. Under the charge of Lt. Greely, the expedition based at Ft. Conger was one of 14 conducted during the First International Polar Year (FIPY). Our research project, “From Science to Survival: Using Virtual Exhibits to Communicate the Significance of Polar Heritage Sites in the Canadian Arctic,” focused on the creation of a virtual museum website dedicated to one of the most important polar heritage sites in the Canadian Arctic. This website was developed under a grant from the Virtual Museum of Canada and enables visitors to explore the fort’s site from 1875 to the present: http://fortconger.org. Heritage sites are often viewed as static places. A goal of this project was to present the change that occurred over time as each new group of explorers adapted the site to their needs. The site was first visited by the British explorer George Nares in 1875–76. Only later did the United States government select this site for the Lady Franklin Bay Expedition (1881–84), with research to be conducted under the FIPY (1882–83). Still later, Robert Peary and Matthew Henson attempted to reach the North Pole from Ft. Conger in 1899, 1905, and 1908. A central focus of this research is the virtual reconstruction of Ft. Conger. In the summer of 2010, a Zoller+Fröhlich Imager 5006i and a Minolta Vivid 910 laser scanner were used to scan the terrain and artifacts. Once the scanning was completed, the point clouds were registered and edited to form the basis of a virtual reconstruction. A goal of this project has been to allow visitors to step back in time and explore the interiors of these buildings with all of their artifacts. Links to text, historic documents, animations, panoramic images, computer games, and virtual labs provide explanations of how science was conducted during the 19th century. A major feature of this virtual world is the timeline. Visitors to the website can begin to explore the site as George Nares, in his ship the HMS Discovery, appeared in the harbor in 1875. With the arrival of Lt. Greely’s expedition in 1881, visitors can track the progress made in establishing a scientific outpost. Still later, in 1901, with Peary’s presence, the site is transformed again, with the huts having been built from materials salvaged from Greely’s main building. Finally, in 2010, visitors can view the site in its present state of deterioration and learn about the laser scanning technology that was used to document the site. The Science and Survival at Fort Conger project represents one of the first attempts to use virtual worlds to communicate the historical and scientific significance of polar heritage sites where first-hand visitor experiences are not possible because of the remote location.
Keywords: 3D imaging, multimedia, virtual reality, arctic
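To make the point-cloud registration step concrete, here is a minimal sketch of pairwise ICP alignment using the open-source Open3D library. This is an illustration under assumptions: the file names are hypothetical, and the project itself used Zoller+Fröhlich and Minolta scanners with their own (unspecified) registration workflow, not Open3D.

```python
import numpy as np
import open3d as o3d

# Two overlapping scans of the site (hypothetical file names)
source = o3d.io.read_point_cloud("scan_hut_east.ply")
target = o3d.io.read_point_cloud("scan_hut_west.ply")

threshold = 0.05          # max correspondence distance [m], assumed
trans_init = np.eye(4)    # initial guess: scans roughly pre-aligned

# Point-to-point ICP refines the rigid transform aligning source to target
reg = o3d.pipelines.registration.registration_icp(
    source, target, threshold, trans_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
source.transform(reg.transformation)  # merge-ready, registered cloud
print(reg.fitness, reg.inlier_rmse)
```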
Procedia PDF Downloads 420
62 Modern Technology-Based Methods in Neurorehabilitation for Social Competence Deficit in Children with Acquired Brain Injury
Authors: M. Saard, A. Kolk, K. Sepp, L. Pertens, L. Reinart, C. Kööp
Abstract:
Introduction: Social competence is often impaired in children with acquired brain injury (ABI), but evidence-based rehabilitation for social skills has remained undeveloped. Modern technology-based methods create effective and safe learning environments for pediatric social skills remediation. The aim of the study was to implement our structured model of neurorehabilitation for socio-cognitive deficit using multitouch-multiuser tabletop (MMT) computer-based platforms and virtual reality (VR) technology. Methods: Forty children aged 8–13 years (yrs) participated in the pilot study: 30 with ABI (epilepsy, traumatic brain injury, and/or tic disorder) and 10 healthy age-matched controls. Of the patients, 12 have completed the training (M = 11.10 yrs, SD = 1.543) and 20 are still in training or in the waiting-list group (M = 10.69 yrs, SD = 1.704). All children performed the first individual and paired assessments. For patients, second evaluations were performed after the intervention period. Two interactive applications were incorporated into the rehabilitation design: Snowflake software on the MMT tabletop and NoProblem on the DiamondTouch Table (DTT), which allowed paired training (2 children at once). In individual training sessions, the HTC Vive VR device was also used with VR metaphors of difficult social situations to treat social anxiety and train social skills. Results: At baseline (B) evaluations, patients had greater deficits in executive functions on the BRIEF parents' questionnaire (M = 117, SD = 23.594) compared to healthy controls (M = 22, SD = 18.385). The most impaired components of social competence were emotion recognition, Theory of Mind (ToM) skills, cooperation, verbal/non-verbal communication, and pragmatics (Friendship Observation Scale scores of only 25–50% out of 100% for patients). In the Sentence Completion Task and the Spence Anxiety Scale, patients reported a lack of friends, behavioral problems, bullying in school, and social anxiety. Outcome (O) evaluations: Snowflake on the MMT improved executive and cooperation skills, and the DTT developed communication skills, metacognitive skills, and coping. VR, video modelling, and role-plays improved social attention, emotional attitude, and gestural behaviors, and decreased social anxiety. NEPSY-II showed improvement in Affect Recognition (B = 7, SD = 5.01 vs. O = 10, SD = 5.85), Verbal ToM (B = 8, SD = 3.06 vs. O = 10, SD = 4.08), and Contextual ToM (B = 8, SD = 3.15 vs. O = 11, SD = 2.87). The ToM Stories test showed an improved understanding of Intentional Lying (B = 7, SD = 2.20 vs. O = 10, SD = 0.50) and Sarcasm (B = 6, SD = 2.20 vs. O = 7, SD = 2.50). Conclusion: Neurorehabilitation based on the structured model of neurorehabilitation for socio-cognitive deficit in children with ABI was effective in social skills remediation. The model helps to clarify the theoretical connections between components of social competence and modern interactive computerized platforms. We encourage therapists to implement these next-generation devices in the rehabilitation process, as MMT and VR interfaces are motivating for children, thus ensuring good compliance. Improving children's social skills is important for their and their families' quality of life and social capital.
Keywords: acquired brain injury, children, social skills deficit, technology-based neurorehabilitation
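As a rough sense of the magnitude of the reported gains, the sketch below computes a pooled-SD Cohen's d from the Affect Recognition means and SDs quoted above. The use of Cohen's d here is our illustration, not the authors' analysis, and because the paired correlation is unknown, this estimate likely understates the within-subject effect.

```python
import math

# Reported NEPSY-II Affect Recognition scores: baseline (B) vs outcome (O)
mb, sb = 7.0, 5.01   # baseline mean, SD
mo, so = 10.0, 5.85  # outcome mean, SD

pooled = math.sqrt((sb**2 + so**2) / 2)  # pooled SD of the two time points
d = (mo - mb) / pooled
print(f"Cohen's d = {d:.2f}")  # ~0.55, a medium-sized improvement
```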
Procedia PDF Downloads 120
61 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak
Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi
Abstract:
This paper investigates the software capability and computer-aided engineering (CAE) method of modelling the transient heat transfer process that occurs in the vehicle underhood region during the vehicle thermal soak phase. Heat retained from the soak period benefits the subsequent cold start through reduced friction loss over the second 14°C Worldwide Harmonized Light-Duty Vehicle Test Procedure (WLTP) cycle, and therefore provides benefits for both CO₂ emission reduction and fuel economy. When the vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is the key factor in the thermal simulation of the engine bay for obtaining accurate fluid and metal temperature cool-down trajectories and predicting the temperatures at the end of the soak period. In this study, the method was developed on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer transient modelling approach for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved inherently transiently by the Lattice-Boltzmann Method (LBM) using the PowerFLOW software. This was further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behavior on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with vehicle testing data for the key fluid (coolant, oil) and metal temperatures. The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data, and it also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and further embedded in the holistic study of vehicle energy modelling and thermal management. It was also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and the flow development during this stage is vital to accurately predicting the heat transfer coefficients for the heat retention modelling. The developed method has demonstrated software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfactory accuracy and efficient computing time. The CAE method developed will allow the design of engine encapsulations to be integrated for improving fuel consumption and reducing CO₂ emissions in a timely and robust manner, aiding the development of low-carbon transport technologies.
Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak
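To give a flavor of the reduced-order modelling mentioned for future work, the sketch below implements the simplest possible cool-down model: a single-node lumped capacitance whose temperature decays exponentially toward ambient with time constant τ = mc/(hA). All parameter values are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical lumped parameters for one component (e.g., the coolant mass)
m, c = 8.0, 3800.0      # mass [kg], specific heat [J/(kg K)]
h, A = 6.0, 1.2         # convective coefficient [W/(m^2 K)], area [m^2]
T0, T_amb = 90.0, 14.0  # initial and ambient temperatures [deg C]

tau = m * c / (h * A)                        # time constant [s]
t = np.linspace(0.0, 9 * 3600.0, 200)        # 9-hour soak period
T = T_amb + (T0 - T_amb) * np.exp(-t / tau)  # cool-down trajectory
print(f"tau = {tau / 3600:.1f} h, T after 9 h = {T[-1]:.1f} deg C")
```

In practice, the heat transfer coefficient h would be calibrated against the coupled LBM/PowerTHERM results, which is exactly why the paper stresses capturing the buoyancy-driven flow in the first stage of the soak.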
Procedia PDF Downloads 153
60 Development and Clinical Application of a Cochlear Implant Mapping Assistance System
Authors: Hong Mengdi, Li Jianan, Ji Fei, Chen Aiting, Wang Qian
Abstract:
Objective: To overcome the communication barriers that audiologists encounter during cochlear implant mapping, particularly the challenge of eliciting subjective feedback from recipients regarding electrical stimulation, and to enhance the capabilities of existing technologies, we teamed up with software engineers to design an interactive approach to patient-audiologist communication. This approach employs a tablet (PAD) as the interface for a communication and feedback system between patients and audiologists during the mapping process, known as the Cochlear Implant Mapping Assistance System. Methods: Capitalizing on the touchscreen functionality of the PAD, the recipients' subjective feedback during cochlear implant mapping is instantly transmitted to the audiologist's mapping computer. The system acts as a platform for auditory assessment instruments, facilitating immediate evaluation of recipients' post-mapping hearing and speech discrimination capabilities. Furthermore, the system is designed to augment the visual reinforcement audiometry (VRA) process. The system consists of six modules: three testing projects (loudness testing, hearing threshold testing, and loudness balance testing), two assessment projects (warble tone testing and digit speech testing), and one VRA animation project. It also incorporates speech-to-text and text input display functions tailored to accommodate speech communication difficulties in hearing-impaired individuals, with pre-installed common exchange content between audiologists and recipients; audiologists can input sentences by selecting options. The system supports switching between Chinese and English versions, making it suitable for audiologists and recipients who use English and facilitating international application. Results: The Cochlear Implant Mapping Assistance System has been in use for over a year in the Auditory Implant Center of the Department of Otology and Neurotology, Medical Center of Otology and Head & Neck Surgery, Chinese PLA General Hospital, with more than 300 recipients using this mapping system. The system operates stably, with both audiologists and recipients providing positive feedback, indicating a significant improvement over previous methods. It is particularly well received by pediatric recipients, significantly enhancing the work efficiency of audiologists and improving the feedback efficiency and accuracy of recipients. The system enhances comprehensibility for cochlear implant recipients, improves wearing comfort and user experience, facilitates cochlear implant auditory mapping, and increases the collection of data that was previously challenging to obtain during assisted mapping, such as loudness testing data, electrical stimulation testing data, warble tone testing data, loudness balance testing data, digit speech testing data, and visual reinforcement audiometry testing data. Real-time data recording improves the accuracy of assisted mapping. The interface design is crafted to accommodate patients of varying ages and cognitive abilities, featuring an intuitive design that allows effortless, guidance-free use by patients.
Keywords: audiologist, subjective feedback, mapping, cochlear implant
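To make the tablet-to-computer feedback loop concrete, here is a minimal sketch of what one feedback message from the PAD to the mapping computer might look like. The field names, value ranges, and JSON transport are entirely our assumptions for illustration; the system's actual protocol is not described in the abstract.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class LoudnessFeedback:
    """One subjective response sent from the recipient's tablet (hypothetical schema)."""
    recipient_id: str
    electrode: int        # stimulated electrode number
    stimulus_level: int   # stimulation level in clinical units
    loudness_rating: int  # e.g., 0 = inaudible ... 10 = too loud

msg = LoudnessFeedback("R-0042", electrode=12, stimulus_level=180, loudness_rating=7)
payload = json.dumps(asdict(msg))  # serialized and sent to the mapping computer
print(payload)
```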
Procedia PDF Downloads 20
59 Improved Elastoplastic Bounding Surface Model for the Mathematical Modeling of Geomaterials
Authors: Andres Nieto-Leal, Victor N. Kaliakin, Tania P. Molina
Abstract:
The nature of most engineering materials is quite complex. It is, therefore, difficult to devise a general mathematical model that will cover all possible ranges and types of excitation and behavior of a given material. As a result, the development of mathematical models is based upon simplifying assumptions regarding material behavior. Such simplifications result in some material idealization; for example, one of the simplest material idealizations is to assume that the material behavior obeys elasticity. However, soils are nonhomogeneous, anisotropic, path-dependent materials that exhibit nonlinear stress-strain relationships, changes in volume under shear, dilatancy, as well as time-, rate-, and temperature-dependent behavior. Over the years, many constitutive models, possessing different levels of sophistication, have been developed to simulate the behavior of geomaterials, particularly cohesive soils. Early in the development of constitutive models, it became evident that elastic or standard elastoplastic formulations, employing purely isotropic hardening and predicated on the existence of a yield surface surrounding a purely elastic domain, were incapable of realistically simulating the behavior of geomaterials. Accordingly, more sophisticated constitutive models have been developed, for example, bounding surface elastoplasticity. The essence of the bounding surface concept is the hypothesis that plastic deformations can occur for stress states either within or on the bounding surface; thus, unlike classical yield surface elastoplasticity, plastic states are not restricted to those lying on a surface. Elastoplastic bounding surface models have been improved over time; however, there is still a need to improve their capability to simulate the response of anisotropically consolidated cohesive soils, especially the response in extension tests. Thus, in this work, an improved constitutive model that can more accurately predict diverse stress-strain phenomena exhibited by cohesive soils was developed; in particular, an improved rotational hardening rule that better simulates the response of cohesive soils in extension. The generalized definition of the bounding surface model provides a convenient and elegant framework for unifying various previous versions of the model for anisotropically consolidated cohesive soils. The Generalized Bounding Surface Model for cohesive soils is a fully three-dimensional, time-dependent model that accounts for both inherent and stress-induced anisotropy, employing a non-associative flow rule. The model's numerical implementation in a computer code followed an adaptive multistep integration scheme in conjunction with local iteration and radial return. The one-step trapezoidal rule was used to obtain the stiffness matrix that defines the relationship between the stress increment and the strain increment. The model was tested through extensive comparisons of its simulations to experimental data and was shown to give quite good results. The new model successfully simulates the response of different cohesive soils, for example, Cardiff Kaolin, Spestone Kaolin, and Lower Cromer Till. The simulated undrained stress paths, stress-strain responses, and excess pore pressures are in very good agreement with the experimental values, especially in extension.
Keywords: bounding surface elastoplasticity, cohesive soils, constitutive model, modeling of geomaterials
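Since the abstract mentions a radial-return step in the numerical implementation, the sketch below shows the textbook radial-return (elastic predictor / plastic corrector) algorithm, here for simple J2 plasticity with linear isotropic hardening. It illustrates the general technique only; it is far simpler than the authors' generalized bounding surface model, and the material constants are hypothetical.

```python
import numpy as np

# Hypothetical material constants for illustration (not the paper's soils)
E, nu = 50.0e3, 0.3          # Young's modulus [kPa], Poisson's ratio
G = E / (2 * (1 + nu))       # shear modulus
K = E / (3 * (1 - 2 * nu))   # bulk modulus
sigma_y, H = 100.0, 500.0    # initial yield stress [kPa], hardening modulus

def return_mapping(strain, eps_p, alpha):
    """Elastic predictor / radial-return corrector for J2 plasticity
    with linear isotropic hardening. Tensors are 3x3 numpy arrays."""
    eye = np.eye(3)
    eps_e = strain - eps_p                       # trial elastic strain
    vol = np.trace(eps_e)
    s_trial = 2 * G * (eps_e - vol / 3 * eye)    # trial deviatoric stress
    q_trial = np.sqrt(1.5) * np.linalg.norm(s_trial)
    f_trial = q_trial - (sigma_y + H * alpha)    # yield function
    if f_trial <= 0.0:                           # purely elastic step
        return K * vol * eye + s_trial, eps_p, alpha
    dgamma = f_trial / (3 * G + H)               # consistency condition
    n = s_trial / np.linalg.norm(s_trial)        # radial return direction
    s = s_trial - 2 * G * dgamma * np.sqrt(1.5) * n
    return (K * vol * eye + s,
            eps_p + dgamma * np.sqrt(1.5) * n,
            alpha + dgamma)

# Example: a uniaxial strain increment large enough to cause yielding
strain = np.diag([0.004, -0.0012, -0.0012])
stress, eps_p, alpha = return_mapping(strain, np.zeros((3, 3)), 0.0)
print(np.round(stress, 1))
```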
Procedia PDF Downloads 315
58 E-Waste Generation in Bangladesh: Present and Future Estimation by Material Flow Analysis Method
Authors: Rowshan Mamtaz, Shuvo Ahmed, Imran Noor, Sumaiya Rahman, Prithvi Shams, Fahmida Gulshan
Abstract:
The last few decades have witnessed a phenomenal rise in the use of electrical and electronic equipment globally in our everyday life. As these items reach the end of their lifecycle, they turn into e-waste and contribute to the waste stream. Bangladesh, in conformity with the global trend and due to its ongoing rapid growth, is also using electronics-based appliances and equipment at an increasing rate. This has caused a corresponding increase in the generation of e-waste. Bangladesh is a developing country; its overall waste management system is not yet efficient, nor is it environmentally sustainable. Most of its solid waste is disposed of crudely at dumping sites. The addition of e-waste, which often contains toxic heavy metals, to its waste stream has made the situation more difficult and challenging. Assessment of e-waste generation is an important step towards addressing the challenges posed by e-waste, setting targets, and identifying best practices for its management. Understanding and proper management of e-waste is a stated item of the Sustainable Development Goals (SDG) campaign, and Bangladesh is committed to fulfilling it. A better understanding and the availability of reliable baseline data on e-waste will help prevent illegal dumping, promote recycling, and create jobs in the recycling sector, thus facilitating sustainable e-waste management. With this objective in mind, the present study has estimated the amount of e-waste and its future generation trend in Bangladesh. To achieve this, sales data on eight selected electrical and electronic products (TV, refrigerator, fan, mobile phone, computer, IT equipment, CFL (compact fluorescent lamp) bulbs, and air conditioner) were collected from different sources. Primary and secondary data on the collection, recycling, and disposal of e-waste were also gathered by questionnaire survey, field visits, interviews, and formal and informal meetings with stakeholders. The Material Flow Analysis (MFA) method was applied, and mathematical models were developed to estimate e-waste amounts and their future trends up to the year 2035 for the eight selected products. The end-of-life (EOL) method is adopted in the estimation. Model inputs are the products' annual sales/import data, past and future sales data, and average life span. From the model outputs, it is estimated that the generation of e-waste in Bangladesh in 2018 was 0.40 million tons, and by 2035 the amount will be 4.62 million tons, with an average annual growth rate of 20%. Among the eight selected products, the amounts of e-waste generated from seven products are increasing, whereas only one product, the CFL bulb, shows a decreasing trend of waste generation. The average growth rate of e-waste from TV sets is the highest (28%), while those from fans and IT equipment are the lowest (11%). Field surveys conducted in the e-waste recycling sector also revealed that every year around 0.0133 million tons of e-waste enters the recycling business in Bangladesh, which may increase in the near future.
Keywords: Bangladesh, end of life, e-waste, material flow analysis
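A minimal sketch of the end-of-life estimation step is shown below: units discarded in a given year are past sales weighted by a lifespan distribution, then converted to mass. The sales figures, lifespan probabilities, and unit mass here are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical annual sales of one product (thousand units), years 2000-2018
sales = np.array([120, 135, 150, 170, 195, 220, 250, 285, 320, 360,
                  410, 460, 520, 590, 660, 740, 830, 930, 1040], dtype=float)
# Hypothetical lifespan distribution: P(unit discarded after k years)
life_pmf = {5: 0.2, 6: 0.3, 7: 0.3, 8: 0.2}
unit_mass_t = 0.025  # hypothetical average unit mass [tons]

# EOL method: discards in year t are sales from year t - k, weighted by P(k)
waste_units = np.zeros_like(sales)
for lag, p in life_pmf.items():
    waste_units[lag:] += p * sales[:-lag]

waste_tons = waste_units * unit_mass_t * 1000  # thousand units -> units
print(f"estimated e-waste in final year: {waste_tons[-1]:,.0f} tons")
```

Summing such per-product streams, each with its own sales history and average life span, gives the national totals the study reports.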
Procedia PDF Downloads 198
57 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes
Authors: Igor A. Krichtafovitch
Abstract:
The evolutionary processes are not linear. Long periods of quiet and slow development give way to rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 that previously existed. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of the evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but the accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as the natural origin and development of life, is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and the biosphere's intelligence. The intelligent matter that conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. Information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolutionary theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with an accelerating evolutionary dynamic. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere's computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life's creation and evolution. It logically resolves many puzzling problems of current evolutionary theory, such as speciation (as a result of GM purposeful design), the evolutionary development vector (as a need for growing global intelligence), punctuated equilibrium (happening when the two conditions a) and b) above are met), the Cambrian explosion, and mass extinctions (happening when more intelligent species replace outdated creatures).
Keywords: supercomputer, biological evolution, Darwinism, speciation
Procedia PDF Downloads 164
56 Comparative Study of Outcome of Patients with Wilms Tumor Treated with Upfront Chemotherapy and Upfront Surgery in Alexandria University Hospitals
Authors: Golson Mohamed, Yasmine Gamasy, Khaled EL-Khatib, Anas Al-Natour, Shady Fadel, Haytham Rashwan, Haytham Badawy, Nadia Farghaly
Abstract:
Introduction: Wilms tumor is the most common malignant renal tumor in children. Much progress has been made in the management of patients with this malignancy over the last 3 decades. Today, treatments are based on several trials and studies conducted by the International Society of Pediatric Oncology (SIOP) in Europe and the National Wilms Tumor Study Group (NWTS) in the USA. It is necessary to understand why we follow either of the protocols: NWTS, which follows the upfront surgery principle, or SIOP, which follows the upfront chemotherapy principle, in all stages of the disease. Objective: The aim is to assess the outcome of patients treated with preoperative chemotherapy and patients treated with upfront surgery and to compare their effects on overall survival. Study design: To decide which protocol to follow, a retrospective survey was carried out on records from 2010 to 2015 of patients aged 1 day to 18 years with Wilms tumor who were admitted to the Alexandria University Hospital pediatric oncology, pediatric urology, and pediatric surgery departments. The transfer sheet was designed and edited with a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow study. Data were fed to the computer and analyzed using the IBM SPSS software package version 20.0. (11) Qualitative data were described using number and percent. Quantitative data were described using range (minimum and maximum), mean, standard deviation, and median. Comparisons between groups regarding categorical variables were tested using the chi-square test; when more than 20% of the cells had an expected count of less than 5, correction for chi-square was conducted using Fisher's exact test or the Monte Carlo correction. The distributions of quantitative variables were tested for normality using the Kolmogorov-Smirnov, Shapiro-Wilk, and D'Agostino tests; if these revealed a normal distribution, parametric tests were applied, and if the data were abnormally distributed, non-parametric tests were used. For normally distributed data, comparison between two independent populations was done using the independent t-test; for abnormally distributed data, the Mann-Whitney test was used. The significance of the obtained results was judged at the 5% level. Results: A statistically significant difference was observed in survival between the two studied groups, favoring upfront chemotherapy (86.4%) over the upfront surgery group (59.3%), where P = 0.009. Regarding complications, 20 cases (74.1%) out of 27 were complicated in the group treated with upfront surgery, while 30 cases (68.2%) out of 44 had complications among patients treated with upfront chemotherapy. Also, the incidence of intraoperative complication (rupture) was lower in the upfront chemotherapy group than in the upfront surgery group. Conclusion: Upfront chemotherapy has superiority over upfront surgery, as patients who started with upfront chemotherapy showed a higher survival rate, fewer complications, less need for radiotherapy, and a lower recurrence rate.
Keywords: Wilms tumor, renal tumor, chemotherapy, surgery
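To illustrate the survival comparison, the sketch below reconstructs the 2x2 table implied by the reported figures (86.4% of the 44 chemotherapy patients and 59.3% of the 27 surgery patients surviving are consistent with 38/44 and 16/27) and applies the chi-square test. The reconstruction is our assumption; the original analysis was done in SPSS.

```python
from scipy.stats import chi2_contingency

# Counts reconstructed from the reported group sizes and survival rates
table = [[38, 6],   # upfront chemotherapy: survived, died (n = 44)
         [16, 11]]  # upfront surgery:      survived, died (n = 27)

# correction=False gives the uncorrected chi-square, which matches the
# reported P = 0.009; the Yates-corrected value would be slightly larger
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```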
Procedia PDF Downloads 317
55 MEIOSIS: Museum Specimens Shed Light on Biodiversity Shrinkage
Authors: Zografou Konstantina, Anagnostellis Konstantinos, Brokaki Marina, Kaltsouni Eleftheria, Dimaki Maria, Kati Vassiliki
Abstract:
Body size is crucial to ecology, influencing everything from individual reproductive success to the dynamics of communities and ecosystems. Understanding how temperature affects variation in body size is vital for both theoretical and practical purposes, as changes in size can modify trophic interactions by altering predator-prey size ratios and changing the distribution and transfer of biomass, which ultimately impacts food web stability and ecosystem functioning. Notably, a decrease in body size is frequently cited as the third "universal" response to climate warming, alongside shifts in distribution and changes in phenology. This trend is backed by ecological theories like the temperature-size rule (TSR) and Bergmann's rule, which have been observed in numerous species, indicating that many species are likely to shrink in size as temperatures rise. However, the thermal responses related to body size are still contradictory, and further exploration is needed. To tackle this challenge, we developed the MEIOSIS project, aimed at providing valuable insights into the relationships between species' body size, species' traits, environmental factors, and their response to climate change. We combined a digitized collection of butterflies from the Swiss Federal Institute of Technology in Zürich with our newly digitized butterfly collection from the Goulandris Natural History Museum in Greece to analyse trends over time. For a total of 23,868 images, the length of the right forewing was measured using the ImageJ software. Each forewing was measured from the point at which the wing meets the thorax to the apex of the wing. The forewing length of museum specimens has been shown to correlate strongly with wing surface area and has been utilized in prior studies as a proxy for overall body size. Temperature data corresponding to the years of collection were also incorporated into the datasets. A second dataset was generated when a custom computer vision tool was implemented for automated morphological measurement of the samples in the digitized Zürich collection. Using this second dataset, we corrected the manual ImageJ measurements, and a final dataset containing 31,922 samples was used for analysis. Setting time as a smoothed variable, species identity as a random factor, and right forewing length (a proxy for body size) as the response variable, we ran a global model for a maximum period of 110 years (1900–2010). We then investigated functional variability between different terrestrial biomes in a second model. Both models confirmed our initial hypothesis, showing a decreasing trend in body size over the years. We expect that this first output can serve as basic data for the next challenge, i.e., to identify the ecological traits that influence species' temperature-size responses, enabling us to predict the direction and intensity of a species' reaction to rising temperatures more accurately.
Keywords: butterflies, shrinking body size, museum specimens, climate change
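For readers wanting to reproduce the modelling idea, here is a minimal sketch of a simplified version of the global model: a linear year effect stands in for the smoothed time term, with a random intercept per species. The column names, file name, and use of statsmodels are our assumptions, not the authors' toolchain.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per specimen: wing_mm, year, species (hypothetical column names)
df = pd.read_csv("forewing_measurements.csv")

# Simplified stand-in for the paper's model: linear trend over collection
# year with species identity as a random factor (random intercept)
model = smf.mixedlm("wing_mm ~ year", df, groups=df["species"]).fit()
print(model.summary())  # a negative 'year' coefficient indicates shrinkage
```

A full reproduction would replace the linear year term with a smooth function of time (a GAMM), which allows the nonlinear trends the abstract describes.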
Procedia PDF Downloads 10
54 Worldwide GIS Based Earthquake Information System/Alarming System for Microzonation/Liquefaction and Its Application for Infrastructure Development
Authors: Rajinder Kumar Gupta, Rajni Kant Agrawal, Jaganniwas
Abstract:
One of the most frightening natural phenomena is the earthquake, as it has terrible and disastrous effects. Many earthquakes occur every day worldwide, and there is a need for knowledge of trends in earthquake occurrence worldwide. The recording and interpretation of data obtained from the worldwide network of seismological stations have made this possible. From the analysis of recorded earthquake data, the earthquake parameters and source parameters can be computed, and earthquake catalogues can be prepared. These catalogues provide information on origin time, epicenter locations (in terms of latitude and longitude), focal depths, magnitudes, and other related details of the recorded earthquakes, and they are used for seismic hazard estimation. Manual interpretation and analysis of these data are tedious and time consuming. A geographical information system (GIS) is a computer-based system designed to store, analyze, and display geographic information. The implementation of integrated GIS technology provides an approach that permits rapid evaluation of complex inventory databases under a variety of earthquake scenarios and allows the user to interactively view results almost immediately. GIS technology provides a powerful tool for displaying outputs and permits users to see the graphical distribution of the impacts of different earthquake scenarios and assumptions. An endeavor has been made in the present study to compile the earthquake data for the whole world in Visual Basic on the ArcGIS platform so that it can easily be used for further analysis by earthquake engineers. The basic data on time of occurrence, location, and size of earthquakes have been compiled for querying based on various parameters. A preliminary analysis tool is also provided in the user interface to interpret earthquake recurrence in a region. The user interface also includes the seismic hazard information already worked out under the GSHAP program; the seismic hazard, in terms of probability of exceedance in definite return periods, is provided for the world. The seismic zones of the Indian region are included in the user interface from the IS 1893-2002 code on earthquake-resistant design of buildings. City-wise satellite images have been inserted in the map, and based on actual data, the following information can be extracted in real time:
• Analysis of soil parameters and their effects
• Microzonation information
• Seismic hazard and strong ground motion
• Soil liquefaction and its effect on the surrounding area
• Impacts of liquefaction on buildings and infrastructure
• Occurrence of future earthquakes and their effects on existing soil
• Propagation of earth vibrations due to the occurrence of an earthquake
A GIS-based earthquake information system has been prepared for the whole world in Visual Basic on the ArcGIS platform and further extended to the micro level based on actual soil parameters. Individual tools have been developed for liquefaction, earthquake frequency, etc. All of this information can be used for the development of infrastructure, i.e., multi-story structures, irrigation dams and their components, hydropower, etc., in real time, for the present and the future.
Keywords: GIS based earthquake information system, microzonation, analysis and real time information about liquefaction, infrastructure development
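The parameter-based catalogue querying described above can be illustrated with a short sketch. The file name and column names are hypothetical, and the original system was written in Visual Basic on ArcGIS rather than Python; this merely shows the shape of such a query.

```python
import pandas as pd

# Hypothetical catalogue: origin time, latitude, longitude, depth, magnitude
cat = pd.read_csv("earthquake_catalogue.csv", parse_dates=["origin_time"])

# Query: magnitude >= 6 events inside a lat/lon window since 1990 --
# the kind of parameter-based query the user interface supports
sel = cat[(cat.magnitude >= 6.0)
          & cat.latitude.between(25.0, 35.0)
          & cat.longitude.between(70.0, 80.0)
          & (cat.origin_time >= "1990-01-01")]
print(sel.sort_values("magnitude", ascending=False).head())
```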
Procedia PDF Downloads 316
53 Performance of the CALPUFF Dispersion Model for Investigating the Dispersion of Pollutants Emitted from an Industrial Complex, Daura Refinery, to an Urban Area in Baghdad
Authors: Ramiz M. Shubbar, Dong In Lee, Hatem A. Gzar, Arthur S. Rood
Abstract:
Air pollution is one of the biggest environmental problems in Baghdad, Iraq. The Daura refinery, located near the center of Baghdad, represents the largest industrial area and emits enormous amounts of pollutants; therefore, studying the gaseous pollutants and particulate matter is very important to the environment and to the health of the workers in the refinery and the people who live in the areas around it. Some studies investigated this area before, but they depended on the basic Gaussian equation implemented in simple computer programs; although that kind of work was very useful and important at the time, during the last two decades major new production units were added to the Daura refinery, such as PU_3 (Power Unit 3, Boilers 11 and 12), CDU_1 (Crude Distillation Unit 1, 70,000 barrels), and CDU_2 (Crude Distillation Unit 2, 70,000 barrels). Therefore, it is necessary to use a new, advanced model to study air pollution in the region for the current years and to calculate the monthly emission rates of pollutants from the actual amounts of fuel consumed in the production units; this may lead to accurate pollutant concentration values and a better picture of dispersion and transport in the study area. In this study, to the best of the authors' knowledge, the CALPUFF model was used and examined for the first time in Iraq. CALPUFF, an advanced non-steady-state meteorological and air quality modeling system, was applied to investigate the concentrations of SO₂, NO₂, CO, and PM (1–10 μm) in areas adjacent to the Daura refinery, which is located in the center of Baghdad, Iraq. The CALPUFF modeling system includes three main components: CALMET, a diagnostic 3-dimensional meteorological model; CALPUFF, an air quality dispersion model; and CALPOST, a post-processing package, together with an extensive set of preprocessing programs produced to interface the model to standard, routinely available meteorological and geophysical datasets. The targets of this work are the modeling and simulation of the four pollutants (SO₂, NO₂, CO, and PM 1–10 μm) emitted from the Daura refinery within one year. Emission rates of these pollutants were calculated for twelve units comprising thirty plants and 35 stacks, using the monthly average fuel consumption of these production units. The performance of the CALPUFF model was assessed to determine whether it is appropriate and produces predictions of good accuracy compared with available pollutant observations. The CALPUFF model was investigated for three stability classes (stable, neutral, and unstable) to indicate the dispersion of the pollutants under different meteorological conditions. The simulations showed that the dispersion of these pollutants in this region depends on the stability conditions and the environment of the study area; monthly and annual averages of pollutants were applied to view the dispersion of pollutants in contour maps. High values of pollutants were noticed in this area; therefore, this study recommends further investigation and analysis of the pollutants, reducing the emission rates of pollutants by using modern techniques and natural gas, increasing the stack heights of units, and increasing the exit gas velocity from stacks.
Keywords: CALPUFF, Daura refinery, Iraq, pollutants
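For context, the "basic Gaussian equation" that the earlier studies relied on is the classic steady-state Gaussian plume formula, sketched below with a ground-reflection term; CALPUFF generalizes this picture to non-steady-state puffs carried through spatially varying meteorology. The example parameter values are hypothetical, not the refinery's.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration [g/m^3].
    Q: emission rate [g/s], u: wind speed [m/s], H: effective stack height [m];
    sigma_y, sigma_z: dispersion coefficients [m] at the downwind distance,
    which depend on the atmospheric stability class."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical example: ground-level centerline concentration at a distance
# where sigma_y = 80 m and sigma_z = 40 m, for a 50 g/s source at H = 60 m
c = gaussian_plume(Q=50.0, u=4.0, y=0.0, z=0.0, H=60.0, sigma_y=80.0, sigma_z=40.0)
print(f"{c * 1e6:.1f} ug/m^3")
```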
Procedia PDF Downloads 198