Search results for: machine modelling
512 Study of Mini Steel Re-Rolling and Pickling Mills for the Reduction of Accidents and Health Hazards
Authors: S. P. Rana
Abstract:
Objectives: To manufacture a very thin strip or a strip with a high-quality finish, a stainless steel billet is re-rolled in a re-rolling mill to produce 18-gauge stainless steel sheet. The rolls of the re-rolling mill exert tremendous pressure on the sheet, and there is a considerable risk of a stainless steel strip breaking away from it. The objectives of the study were to minimise the number of accidents in steel re-rolling mills caused by the ejection of stainless steel strips and to minimise the pollution caused by the pickling process used in these units. Methods: Given the high frequency and severity of accidents, as well as the pollution hazard, in re-rolling and pickling mills, it is essential to make the necessary arrangements for the prevention of accidents in this type of industry. The author carried out surveys/inspections of a large number of re-rolling and pickling mills and allied units. During these inspections, the working of the steel re-rolling and pickling mills was closely studied and monitored. A number of accidents involving re-rolling mills were investigated, and remedial measures to prevent the recurrence of such accidents were subsequently suggested. The occupational safety and health systems of these units were assessed, and the level of compliance with statutory requirements was checked. The workers were medically examined and monitored to ascertain their health conditions. Results: Proper use of safety gadgets by workers, machine guarding, and regular training brought the risk down to an acceptable level, and discharged effluent pollution was brought within permissible limits. Conclusions: Effective enforcement and implementation of the directions/suggestions given to the managements of these units brought the number of accidents down to a rational level. The number of fatal accidents fell by 83% during the study period.
The effective implementation of pollution control devices curtailed pollution to an acceptable level.
Keywords: re-rolling mill, hazard, accident, health hazards
Procedia PDF Downloads 442
511 A Validated Estimation Method to Predict the Interior Wall of Residential Buildings Based on Easy to Collect Variables
Authors: B. Gepts, E. Meex, E. Nuyts, E. Knaepen, G. Verbeeck
Abstract:
The importance of resource efficiency and environmental impact assessment has raised the interest in knowing the amount of materials used in buildings. If no BIM model or energy performance certificate is available, material quantities can be obtained through an estimation or time-consuming calculation. For the interior wall area, no validated estimation method exists. However, in the case of environmental impact assessment or evaluating the existing building stock as future material banks, knowledge of the material quantities used in interior walls is indispensable. This paper presents a validated method for the estimation of the interior wall area for dwellings based on easy-to-collect building characteristics. A database of 4963 residential buildings spread all over Belgium is used. The data are collected through onsite measurements of the buildings during the construction phase (between mid-2010 and mid-2017). The interior wall area refers to the area of all interior walls in the building, including the inner leaf of exterior (party) walls, minus the area of windows and doors, unless mentioned otherwise. The two predictive modelling techniques used are 1) a (stepwise) linear regression and 2) a decision tree. The best estimation method is selected based on the best R² k-fold (5) fit. The research shows that the building volume is by far the most important variable to estimate the interior wall area. A stepwise regression based on building volume per building, building typology, and type of house provides the best fit, with R² k-fold (5) = 0.88. Although the best R² k-fold value is obtained when the other parameters ‘building typology’ and ‘type of house’ are included, the contribution of these variables can be seen as statistically significant but practically irrelevant. Thus, if these parameters are not available, a simplified estimation method based on only the volume of the building can also be applied (R² k-fold = 0.87). 
The robustness and precision of the method (its output) are validated three times. Firstly, the prediction of the interior wall area is checked by means of alternative calculations of the building volume and of the interior wall area; thus, other definitions are applied to the same data. Secondly, the output is tested on an extension of the database, i.e., with the same definitions but other data. Thirdly, the output is checked against an unrelated database with other definitions and other data. The validation demonstrates that the estimation methods remain accurate when the underlying data are changed. The method can support the environmental as well as the economic dimensions of impact assessment, as it can be used in early design. Because it allows the amount of interior wall material to be produced in the future, or that might become available after demolition, to be predicted, the presented estimation method can be part of material flow analyses on both the input and the output side.
Keywords: buildings as material banks, building stock, estimation method, interior wall area
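The workflow described above (fit a linear regression of interior wall area on building volume, then score it with out-of-fold R² over 5 folds) can be sketched as follows. The building data here are randomly generated for illustration only; they are not the Belgian survey database, and the slope and noise level are invented.

```python
import random
import statistics

def fit_ols(x, y):
    """Ordinary least squares for y = a + b*x."""
    mx, my = statistics.mean(x), statistics.mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def r_squared(x, y, a, b):
    """Coefficient of determination of the fitted line on (x, y)."""
    my = statistics.mean(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def kfold_r2(x, y, k=5, seed=0):
    """Average out-of-fold R² over k folds (the paper's R² k-fold (5))."""
    idx = list(range(len(x)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        fold_set = set(fold)
        train = [i for i in idx if i not in fold_set]
        a, b = fit_ols([x[i] for i in train], [y[i] for i in train])
        scores.append(r_squared([x[i] for i in fold],
                                [y[i] for i in fold], a, b))
    return statistics.mean(scores)

# Hypothetical data: building volume (m³) vs. interior wall area (m²),
# roughly linear with noise.
rng = random.Random(1)
volume = [rng.uniform(200, 1200) for _ in range(100)]
wall_area = [0.35 * v + 30 + rng.gauss(0, 20) for v in volume]

print(round(kfold_r2(volume, wall_area, k=5), 2))
```

A strong linear volume-area relationship yields a high cross-validated R², mirroring the paper's finding that volume alone already explains most of the variance.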
Procedia PDF Downloads 305
510 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches for aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This original approach had already been initiated a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments presented here was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are reviewed; if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys. These allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations applied where data are over-precise. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map.
This work is ongoing, and a new digital version is issued every two years with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of the source data, and the zonation of quality variability. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in knowledge made in the field of seabed characterization during the last decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. But much work remains to enhance some regions, which are still based on data acquired more than half a century ago.
Keywords: marine sedimentology, seabed map, sediment classification, world ocean
Procedia PDF Downloads 232
509 Designing the Management Plan for Health Care (Medical) Wastes in the Cities of Semnan, Mahdishahr and Shahmirzad
Authors: Rasouli Divkalaee Zeinab, Kalteh Safa, Roudbari Aliakbar
Abstract:
Introduction: Medical waste can lead to the generation and transmission of many infectious and contagious diseases due to the presence of pathogenic agents, necessitating special management for its collection, decontamination, and final disposal. This study aimed to design a centralized health care (medical) waste management program for the cities of Semnan, Mahdishahr, and Shahmirzad. Methods: This descriptive-analytical study was conducted over six months in the cities of Semnan, Mahdishahr, and Shahmirzad. The quantitative and qualitative characteristics of the generated waste were determined by taking samples from all medical waste production centers. The equipment, devices, and machines required for the separate collection of waste from the production centers and for its subsequent decontamination were then estimated. Next, the investment costs, current costs, and working capital required for the collection, decontamination, and final disposal of the waste were determined. Finally, the payment required for the proper waste management of each category of medical waste-producing centers was determined. Results: 1,021 kilograms of medical waste are produced daily in the cities of Semnan, Mahdishahr, and Shahmirzad. It was estimated that a 1,000-liter autoclave, a machine for collecting medical waste, four 60-liter bins, four 120-liter bins, and four 1,200-liter bins would be required to implement the study plan. The total annual medical waste management cost for Semnan City was estimated at 23,283,903,720 Iranian rials. Conclusion: The study results showed that establishing a proper management system for the medical waste generated in the three studied cities will cost the medical centers between 334,280 and 1,253,715 Iranian rials in fees.
The findings of this study provide comprehensive data on medical waste from the point of generation to the landfill site, which is vital for the government and the private sector.
Keywords: clinics, decontamination, management, medical waste
Procedia PDF Downloads 78
508 Conflict Resolution in Fuzzy Rule Base Systems Using Temporal Modalities Inference
Authors: Nasser S. Shebka
Abstract:
Fuzzy logic is used in complex adaptive systems where classical tools of knowledge representation are unproductive. Nevertheless, the incorporation of fuzzy logic, as is the case with all artificial intelligence tools, raised inconsistencies and limitations in dealing with increasingly complex systems and rules that apply to real-life situations, which hinders the inference process of such systems; it also faces inconsistencies between the inferences generated by the fuzzy rules of complex or imprecise knowledge-based systems. The use of fuzzy logic enhanced the capability of knowledge representation in applications that require fuzzy representation of truth values or similar multi-valued constant parameters derived from multi-valued logic. This set the basis for the three basic t-norms and their derived connectives, which are continuous functions; any other continuous t-norm can be described as an ordinal sum of these three basic ones. Some attempts to solve this dilemma altered fuzzy logic by means of non-monotonic logic, which is used to deal with the defeasible inference of expert systems reasoning, for example, to allow for inference retraction upon additional data. However, even the introduction of non-monotonic fuzzy reasoning faces a major issue of conflict resolution, for which many principles were introduced, such as the specificity principle and the weakest link principle. The aim of our work is to improve the logical representation and functional modelling of AI systems by presenting a method of resolving existing and potential rule conflicts by representing temporal modalities within defeasible inference rule-based systems.
Our paper investigates the possibility of resolving fuzzy rule conflicts in a non-monotonic fuzzy reasoning-based system by introducing temporal modalities and Kripke's general weak modal logic operators in order to expand its knowledge representation capabilities through flexibility in classifying newly generated rules and, hence, resolving potential conflicts between these fuzzy rules. We were able to address this problem by restructuring the inference process of the fuzzy rule-based system. This is achieved by using time-branching temporal logic in combination with restricted first-order logic quantifiers, as well as propositional logic to represent classical temporal modality operators. The resulting findings not only enhance the flexibility of the inference process of complex rule-based systems but also contribute to the fundamental methods of building rule bases in a manner that allows for a wider range of applicable real-life situations from a quantitative and qualitative knowledge representation perspective.
Keywords: fuzzy rule-based systems, fuzzy tense inference, intelligent systems, temporal modalities
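As a toy illustration only (not the paper's Kripke-style modal system), the core idea of temporal conflict resolution between defeasible fuzzy rules can be sketched as: when two fired rules support contradictory conclusions, the temporally later rule defeats the earlier one, with the fuzzy degree of support as a tie-breaker. All rule names, confidences, and time labels below are invented.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    conclusion: str      # e.g. "valve_open" or "not valve_open"
    confidence: float    # fuzzy degree of support in [0, 1]
    time: int            # temporal label: when the rule was asserted

def contradicts(a: str, b: str) -> bool:
    """Two conclusions conflict when one is the negation of the other."""
    return a == "not " + b or b == "not " + a

def resolve(fired):
    """Return the conclusions that survive temporal conflict resolution."""
    survivors = []
    for r in fired:
        defeated = any(
            contradicts(r.conclusion, other.conclusion)
            and (other.time, other.confidence) > (r.time, r.confidence)
            for other in fired
        )
        if not defeated:
            survivors.append(r.conclusion)
    return survivors

fired = [
    Rule("valve_open", 0.9, time=1),      # older evidence
    Rule("not valve_open", 0.6, time=5),  # newer evidence defeats it
    Rule("pump_on", 0.8, time=3),         # no conflict: survives
]
print(resolve(fired))  # → ['not valve_open', 'pump_on']
```

The lexicographic comparison makes temporal priority dominant and confidence only a tie-breaker; the paper's system classifies rules via branching-time modal operators rather than scalar timestamps.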
Procedia PDF Downloads 91
507 Intersection of Racial and Gender Microaggressions: Social Support as a Coping Strategy among Indigenous LGBTQ People in Taiwan
Authors: Ciwang Teyra, A. H. Y. Lai
Abstract:
Introduction: Indigenous LGBTQ individuals face significant life stressors such as racial and gender discrimination and microaggressions, which may negatively affect their mental health. Although studies relevant to Taiwanese indigenous LGBTQ people are gradually increasing, most of them are primarily conceptual or qualitative in nature. This research aims to fill the gap by offering empirical quantitative evidence, in particular by investigating the impact of racial and gender microaggressions on mental health among Taiwanese indigenous LGBTQ individuals from an intersectional perspective, as well as examining whether social support can help them cope with microaggressions. Methods: Participants (n=200; mean age=29.51; female=31%, male=61%, others=8%) were surveyed in a cross-sectional quantitative design using data collected in 2020. Standardised measures were used, including the Racial Microaggression Scale (10 items), the Gender Microaggression Scale (9 items), the Social Support Questionnaire-SF (6 items), the Patient Health Questionnaire (9 items), and the Generalised Anxiety Disorder scale (7 items). Covariates were age, gender, and perceived economic hardship. Structural equation modelling (SEM) was employed using Mplus 8.0, with the latent variables of depression and anxiety as outcomes. A main effect SEM model was first established (Model 1). To test the moderation effects of perceived social support, an interaction effect model (Model 2) was created by entering interaction terms into Model 1. Numerical integration was used with maximum likelihood estimation to estimate the interaction model. Results: Model fit statistics of Model 1: X²(df)=1308.1 (795), p<.05; CFI/TLI=0.92/0.91; RMSEA=0.06; SRMR=0.06. The AIC and BIC values of Model 2 changed only slightly compared to Model 1 (AIC=15631 (Model 1) vs. 15629 (Model 2); BIC=16098 (Model 1) vs. 16103 (Model 2)). Model 2 was adopted as the final model.
In the main effect model (Model 1), racial microaggression and perceived social support were associated with depression and anxiety, but sexual orientation microaggression was not (indigenous microaggression: b=0.27 for depression, b=0.38 for anxiety; social support: b=-0.37 for depression, b=-0.34 for anxiety). Thus, an interaction term between social support and indigenous microaggression was added in Model 2. In the final Model 2, indigenous microaggression and perceived social support continued to be statistically significant predictors of both depression and anxiety. Social support moderated the effect of indigenous microaggression on depression (b=-0.22), but not on anxiety. All covariates were statistically non-significant. Implications: The results indicate that racial microaggressions have a significant impact on indigenous LGBTQ people's mental health. Social support plays a crucial role in buffering the negative impact of racial microaggression. To promote indigenous LGBTQ people's wellbeing, it is important to consider how to support them in developing social support network systems.
Keywords: microaggressions, intersectionality, indigenous population, mental health, social support
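The buffering (moderation) logic behind the interaction model can be illustrated with a linear predictor containing an interaction term: the conditional slope of microaggression on depression is the main effect plus the interaction coefficient times the support level. The coefficients below reuse the reported signs and magnitudes purely for illustration; they are not the study's full SEM estimates, which involve latent variables and covariates.

```python
# Illustrative coefficients (standardised scores assumed):
B_MICRO = 0.27      # main effect of microaggression on depression
B_SUPPORT = -0.37   # main effect of perceived social support
B_INTERACT = -0.22  # interaction (moderation) term

def predicted_depression(micro: float, support: float) -> float:
    """Linear predictor with an interaction term."""
    return B_MICRO * micro + B_SUPPORT * support + B_INTERACT * micro * support

def micro_slope(support: float) -> float:
    """Conditional effect of microaggression at a given support level."""
    return B_MICRO + B_INTERACT * support

# The microaggression-depression slope weakens as support rises:
print(round(micro_slope(-1.0), 2))  # low support (-1 SD)  → 0.49
print(round(micro_slope(1.0), 2))   # high support (+1 SD) → 0.05
```

A negative interaction coefficient is exactly what "social support buffers the impact of microaggression" means in regression terms.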
Procedia PDF Downloads 146
506 Relationship between Institutional Perspective and Safety Performance: A Case on Ready-Made Garments Manufacturing Industry
Authors: Fahad Ibrahim, Raphaël Akamavi
Abstract:
Bangladesh has encountered several industrial disasters (e.g., fire and building collapse tragedies) leading to the loss of valuable human lives. Despite various institutions' efforts to improve the safety situation, industry compliance and safety behaviour have not yet improved. Hence, one question remains: to what extent are institutional elements effective in improving safety behaviours? This study therefore explores the relationship between the institutional perspective and safety performance. Structural equation modelling results, using survey data from 256 RMG workers in 128 garment manufacturing factories in Bangladesh, show that institutional facets strongly influence management safety commitment, which induces workers' participation in safety activities and reduces workplace accident rates. The study also found that, by upholding industrial standards and inspecting safety situations, institutional facets significantly and directly affect workers' involvement in safety participation and the rate of workplace accidents. Additionally, workers' involvement in safety practices significantly predicts the safety environment of the workplace. Subsequently, our findings demonstrate that institutional culture, norms, and enacted regulations play an important role in altering management commitment to setting up a safer workplace environment. As a result, when workers perceive their management as having a high level of commitment to safety, they are inspired to be more involved in safety practices, which significantly alters the workplace safety situation and lessens injury experiences. Given that institutions have a strong influence on management commitment, legislative members should endorse, regulate, and strictly monitor workplace safety laws to be exercised by factory owners.
Further, management should take the initiative in adopting OHS features and conceiving strategic directions (i.e., setting up safety committees, risk assessments, and innovative training) to promote a positive safety climate and provide a safe workplace environment. Arguably, an inclusive public-private partnership is recommended for ensuring a better and safer workplace for RMG workers. However, as our data were collected under a cross-sectional design, the respondents' perceptions might change over time; hence, a longitudinal study is recommended. Finally, further research is needed to determine the impact of improvement mechanisms on workplace safety performance, such as how workplace design, safety training programs, and institutional enforcement policies protect the well-being of workers.
Keywords: institutional perspective, management commitment, safety participation, work injury, safety performance, occupational health and safety
Procedia PDF Downloads 206
505 Technical Efficiency in Organic and Conventional Wheat Farms: Evidence from a Primary Survey from Two Districts of Ganga River Basin, India
Authors: S. P. Singh, Priya, Komal Sajwan
Abstract:
With the increasing spread of organic farming in India, the costs, returns, efficiency, and social and environmental sustainability of organic vis-à-vis conventional farming systems have become topics of interest among agricultural scientists, economists, and policy analysts. A study estimating technical efficiency under these farming systems, particularly in the Ganga River Basin, where the promotion of organic farming is incentivized, can help to understand whether inputs are utilized to their maximum possible level and what measures can be taken to improve efficiency. This paper therefore analyses the technical efficiency of wheat farms operating under organic and conventional farming systems. The study is based on a primary survey of 600 farms (300 organic and 300 conventional) conducted in 2021 in two districts located in the Middle Ganga River Basin, India. Technical, managerial, and scale efficiencies of individual farms are estimated by applying the data envelopment analysis (DEA) methodology. The per-hectare value of wheat production is taken as the output variable, and the values of seeds, human labour, machine cost, plant nutrients, farmyard manure (FYM), plant protection, and irrigation charges are considered input variables for estimating the farm-level efficiencies. A post-DEA analysis is conducted using the Tobit regression model to identify the factors determining efficiency. The results show that technical efficiency is significantly higher in conventional than in organic farming systems, due to a larger gap in scale efficiency than in managerial efficiency. Further, 9.8% of conventional and only 1.0% of organic farms are found to operate at the most productive scale size (MPSS), while 99% of organic and 81% of conventional farms operate at increasing returns to scale (IRS). Organic farms perform well in managerial efficiency, but their technical efficiency is lower than that of conventional farms, mainly due to their relatively smaller scale size.
The paper suggests that technical efficiency in organic wheat can be increased by upscaling farm size through incentivizing group/collective farming in clusters.
Keywords: organic, conventional, technical efficiency, determinants, DEA, Tobit regression
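The core idea of DEA technical efficiency under constant returns to scale can be sketched in the degenerate one-input, one-output case, where each farm's efficiency reduces to its productivity ratio relative to the best-performing farm. The study's actual model uses seven inputs and solves a linear program per farm; the farm figures below are invented for illustration.

```python
def dea_crs_efficiency(inputs, outputs):
    """Single-input, single-output CRS efficiency:
    each unit's output/input ratio relative to the best unit's ratio."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical farms: total input cost per hectare vs. value of wheat output.
cost = [100.0, 120.0, 90.0, 150.0]
value = [500.0, 540.0, 480.0, 600.0]

scores = dea_crs_efficiency(cost, value)
print([round(s, 2) for s in scores])  # → [0.94, 0.84, 1.0, 0.75]
```

The farm with the highest output-per-input ratio defines the frontier (score 1.0); every other farm's score measures how far it falls short of that frontier.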
Procedia PDF Downloads 99
504 Artificial Intelligence Impact on Strategic Stability
Authors: Darius Jakimavicius
Abstract:
Artificial intelligence is the subject of intense debate in the international arena, identified both as a technological breakthrough and as a factor affecting strategic stability. Both the kinetic and non-kinetic development of AI and its application in the national strategies of the great powers may trigger a change in the security situation. Artificial intelligence is generally faster, more capable, and more efficient than humans, and there is a temptation to transfer decision-making and control responsibilities to it. Artificial intelligence that, once activated, can select and act on targets without further intervention by a human operator blurs the boundary between human and robot (machine) warfare, or perhaps combines the two. Artificial intelligence acts as a force multiplier that speeds up decision-making and reaction times on the battlefield. The role of humans is increasingly moving away from direct decision-making and from command and control processes involving the use of force. It is worth noting that the autonomy and precision of AI systems make the process of strategic stability more complex. Deterrence theory is currently in a phase of development in which deterrence is undergoing further strain and crisis due to the complexity of the evolving models enabled by artificial intelligence. Based on the concept of strategic stability and deterrence theory, it is appropriate to pursue further research on the development and impact of AI in order to assess it from both a scientific and a technical perspective: to capture a new niche in the scientific literature and academic terminology, to clarify the conditions for deterrence, and to identify the potential uses, impacts, and possibly the quantities of AI. The research problem is the impact of artificial intelligence developed by great powers on strategic stability.
This thesis seeks to assess the impact of AI on strategic stability and deterrence principles, with human exclusion from the decision-making and control loop as a key axis. The interaction between AI and human actions and interests can determine fundamental changes in great powers' defense and deterrence, and the development and application of AI-based great-power strategies can lead to a change in strategic stability.
Keywords: artificial intelligence, strategic stability, deterrence theory, decision-making loop
Procedia PDF Downloads 41
503 Intelligent Indoor Localization Using WLAN Fingerprinting
Authors: Gideon C. Joseph
Abstract:
The ability to localize mobile devices is quite important, as some applications may require location information about these devices to operate or to deliver better services to users. Although there are several ways of acquiring the location data of mobile devices, the WLAN fingerprinting approach is considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative technique for describing the radio frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is a major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile's RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe its location, such as longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is to be modelled, so that subsequent locations of mobile devices can be predicted using the developed model. Describing mathematical relationships between the RSSI measurements and the localization parameters is one option for modelling the problem, but the complexity of such an approach makes it unattractive. In contrast, we propose an intelligent system that can learn the mapping from such RSSI measurements to the localization parameters to be predicted. The system is capable of improving its performance as more experiential knowledge is acquired.
The most appealing consideration in using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted belong to two different tasks: the longitude and latitude of mobile devices are real values (a regression problem), while the floor and building are integer or categorical values (a classification problem). This research work presents artificial neural network-based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to obtain the performance of the trained systems in terms of the Mean Absolute Error (MAE) and error rates achieved for the regression and classification tasks involved.
Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression
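The paper trains neural networks; as a simpler illustration of the same fingerprinting idea, a 1-nearest-neighbour lookup maps a measured RSSI vector to the location of the closest stored fingerprint (offline survey phase, then online matching phase). All RSSI values and coordinates below are invented.

```python
import math

# Offline phase: (RSSI per access point, in dBm) -> (longitude, latitude, floor)
fingerprints = [
    ((-40.0, -70.0, -80.0), (12.49, 41.89, 0)),
    ((-65.0, -45.0, -75.0), (12.50, 41.90, 1)),
    ((-80.0, -60.0, -42.0), (12.51, 41.91, 2)),
]

def locate(rssi):
    """Online phase: return the location of the nearest stored fingerprint
    (Euclidean distance in RSSI space)."""
    _, best_loc = min(fingerprints, key=lambda fp: math.dist(rssi, fp[0]))
    return best_loc

print(locate((-42.0, -68.0, -79.0)))  # close to the first fingerprint
```

A neural network replaces this table lookup with a learned, generalising mapping, which is what gives the paper's approach its ability to interpolate between surveyed points.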
Procedia PDF Downloads 347
502 Winter Wheat Yield Forecasting Using Sentinel-2 Imagery at the Early Stages
Authors: Chunhua Liao, Jinfei Wang, Bo Shan, Yang Song, Yongjun He, Taifeng Dong
Abstract:
Winter wheat is one of the main crops in Canada. Forecasting the within-field variability of winter wheat yield at the early stages is essential for precision farming. However, crop yield modelling based on high-spatial-resolution satellite data is generally affected by the lack of continuous satellite observations, which reduces the generalization ability of the models and increases the difficulty of crop yield forecasting at the early stages. In this study, the correlations between Sentinel-2 data (vegetation indices and reflectance) and yield data collected by combine harvester were investigated, and a generalized multivariate linear regression (MLR) model was built and tested with data acquired in different years. It was found that the four-band reflectance (blue, green, red, near-infrared) performed better than the corresponding vegetation indices (NDVI, EVI, WDRVI, and OSAVI) in wheat yield prediction. The optimum phenological stage for wheat yield prediction with the highest accuracy was between the end of flowering and the beginning of the grain filling stage. The best MLR model was therefore built to predict wheat yield before harvest using Sentinel-2 data acquired at the end of the flowering stage. Further, to improve yield prediction at the early stages, three simple unsupervised domain adaptation (DA) methods were adopted to transform the reflectance data from the early stages to the optimum phenological stage. Winter wheat yield prediction using multiple vegetation indices showed higher accuracy than using a single vegetation index. The optimum stage for winter wheat yield forecasting varied between fields when using vegetation indices, while it was consistent when using multispectral reflectance, for which the optimum stage was the end of the flowering stage. The average testing RMSE of the MLR model at the end of the flowering stage was 604.48 kg/ha.
Near the booting stage, the average testing RMSE of yield prediction using the best MLR was 799.18 kg/ha when the mean matching domain adaptation approach was applied to transform the data to the target domain (the end of flowering), compared to an RMSE of 1140.64 kg/ha when using the original data with models developed directly at the booting stage ("MLR at the early stage"). This study demonstrated that simple mean matching (MM) performed better than the other DA methods, and that "DA then MLR at the optimum stage" performed better than "MLR directly at the early stages" for winter wheat yield forecasting at the early stages. The results indicate that domain adaptation has great potential for near real-time crop yield forecasting at the early stages using remote sensing data.
Keywords: wheat yield prediction, domain adaptation, Sentinel-2, within-field scale
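The mean matching adaptation described above can be sketched as: shift each reflectance band acquired at the early (booting) stage so that its mean matches the mean of the same band at the optimum (end-of-flowering) stage, then apply the MLR trained at the optimum stage. All reflectance values and regression coefficients below are invented; the real model uses four Sentinel-2 bands and field-collected yield data.

```python
import statistics

def mean_match(source_band, target_band):
    """Shift source values so their mean equals the target-domain mean."""
    shift = statistics.mean(target_band) - statistics.mean(source_band)
    return [v + shift for v in source_band]

# One band for brevity: early-stage vs. flowering-stage reflectance.
early = [0.30, 0.32, 0.28, 0.35]
flowering = [0.42, 0.45, 0.40, 0.44]

adapted = mean_match(early, flowering)
print(round(statistics.mean(adapted), 4))  # now equals the target mean

# Apply the MLR trained at the flowering stage (hypothetical coefficients):
intercept, coef = 1000.0, 8000.0
yield_pred = [intercept + coef * v for v in adapted]
print([round(y) for y in yield_pred])
```

Because MM only shifts the distribution, the within-field spatial pattern of the early-stage data is preserved while its level is translated into the domain where the regression is valid.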
Procedia PDF Downloads 64
501 Development of a Framework for Assessing Public Health Risk Due to Pluvial Flooding: A Case Study of Sukhumvit, Bangkok
Authors: Pratima Pokharel
Abstract:
When sewers overflow due to rainfall in urban areas, public health risks arise when individuals are exposed to the contaminated floodwater. Nevertheless, the extent to which the resulting infections pose a risk to public health is still unclear. This study analyzed reported diarrheal cases by month and age in Bangkok, Thailand. The results showed that more cases are reported in the wet season than in the dry season. It was also found that in Bangkok, the probability of infection with diarrheal diseases in the wet season is higher for the age group between 15 and 44. The probability of infection is highest for children under 5 years, but they are not influenced by wet weather. Further, this study introduced vulnerability factors that contribute to health risks from urban flooding. For the vulnerability analysis, the study chose two variables that contribute to health risk: economic status and age. Assuming that people's economic status depends on the types of houses they live in, the study shows the spatial distribution of economic status in the vulnerability maps. The vulnerability map shows that people living in Sukhumvit have low vulnerability to health risks with respect to the types of houses they live in. In addition, the probability of infection with diarrhea was analyzed by age. Moreover, a field survey was carried out to validate the vulnerability of people; it showed that health vulnerability depends on economic status, income level, and education. The results depict that people with low income and poor living conditions are more vulnerable to health risks. Further, the study carried out 1D hydrodynamic advection-dispersion modelling with a 2-year rainfall event to simulate the dispersion of fecal coliform concentration in the drainage network, as well as 1D/2D hydrodynamic modelling to simulate the overland flow.
The 1D results show higher concentrations for dry-weather flows and a large dilution at the commencement of a rainfall event, the concentration dropping due to the runoff generated after rainfall. The model produced flood depth, flood duration, and fecal coliform concentration maps, which were transferred to ArcGIS to produce hazard and risk maps. In addition, the study simulated 5-year and 10-year rainfall events to show the variation in health hazards and risks. It was found that even though the hazard coverage is highest with the 10-year rainfall event among the three events, the risk was observed to be the same for the 5-year and 10-year rainfall events.
Keywords: urban flooding, risk, hazard, vulnerability, health risk, framework
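Infection probabilities of the kind reported above are commonly estimated with dose-response models. As an illustration only (the abstract does not specify which model the study used), here is a minimal sketch of the exponential dose-response model from quantitative microbial risk assessment; the dose and the pathogen-specific parameter `r` are hypothetical values chosen for demonstration.

```python
import math

def p_infection_exponential(dose: float, r: float) -> float:
    """Exponential dose-response model used in quantitative microbial
    risk assessment: P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

# Illustrative only: a hypothetical ingested dose (organisms) during a
# flood-exposure event and a hypothetical pathogen-specific parameter r.
dose = 100.0
r = 0.005
p = p_infection_exponential(dose, r)  # probability of infection for one exposure
```

With these made-up numbers the model returns a per-event infection probability of about 0.39; in practice `r` would be fitted to pathogen-specific challenge data.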
Procedia PDF Downloads 75
500 Heat-Induced Uncertainty of Industrial Computed Tomography Measuring a Stainless Steel Cylinder
Authors: Verena M. Moock, Darien E. Arce Chávez, Mariana M. Espejel González, Leopoldo Ruíz-Huerta, Crescencio García-Segundo
Abstract:
Uncertainty analysis in industrial computed tomography is commonly based on metrological reference tools, which offer precision measurements of external part features. Unfortunately, no such reference tool exists for internal measurements, which would profit from the unique imaging potential of X-rays. Uncertainty approximations for computed tomography are still based on general aspects of the industrial machine and do not adapt to acquisition parameters or part characteristics. The present study investigates the impact of acquisition time on the dimensional uncertainty when measuring a stainless steel cylinder with a circular tomography scan. The authors develop a figure-difference method for X-ray radiography to evaluate the volumetric differences introduced within the projected absorption maps of the metal workpiece. The dimensional uncertainty is dominantly influenced by photon energy dissipated as heat, which causes thermal expansion of the metal, as monitored by an infrared camera within the industrial tomograph. With the proposed methodology, we are able to show evolving temperature differences throughout the tomography acquisition. This is an early study showing that the number of projections in computed tomography induces dimensional error due to energy absorption. The error magnitude depends on the thermal properties of the sample and on the acquisition parameters, introducing apparent, non-uniform, unwanted volumetric expansion. We introduce infrared imaging for the experimental display of metrological uncertainty in a metal part of symmetric geometry. We consider the current results of fundamental value for balancing the number of projections against the uncertainty tolerance when performing precision measurements with industrial X-ray tomography.
Keywords: computed tomography, digital metrology, infrared imaging, thermal expansion
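The scale of the effect described above can be checked with the linear thermal expansion relation ΔL = α·L·ΔT. The sketch below uses a typical expansion coefficient for austenitic stainless steel and a hypothetical 2 K temperature rise on a 10 mm diameter (the abstract does not report these specific values); even this modest warming produces a sub-micrometre dimensional change that is significant at CT metrology resolution.

```python
# Linear thermal expansion: delta_L = alpha * L * delta_T.
ALPHA_STAINLESS = 16e-6  # 1/K, typical for austenitic stainless steel

def thermal_expansion(length_mm: float, delta_t_k: float,
                      alpha: float = ALPHA_STAINLESS) -> float:
    """Return the change (mm) in a linear dimension for a
    temperature rise delta_t_k (K)."""
    return alpha * length_mm * delta_t_k

# Hypothetical numbers: a 10 mm cylinder diameter warming by 2 K during
# a long scan grows by ~0.32 micrometres -- on the order of the voxel
# size of a high-resolution industrial CT measurement.
growth_um = thermal_expansion(10.0, 2.0) * 1000.0
```

The point of the sketch is only that heat-induced growth scales linearly with both part size and temperature rise, so longer acquisitions (more projections, more absorbed energy) directly inflate the dimensional error.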
Procedia PDF Downloads 121
499 A Meta-Analysis of School-Based Suicide Prevention for Adolescents and Meta-Regressions of Contextual and Intervention Factors
Authors: E. H. Walsh, J. McMahon, M. P. Herring
Abstract:
Post-primary school-based suicide prevention (PSSP) is a valuable avenue to reduce suicidal behaviours in adolescents. The aims of this meta-analysis and meta-regression were 1) to quantify the effect of PSSP interventions on adolescent suicide ideation (SI) and suicide attempts (SA), and 2) to explore how intervention effects may vary based on important contextual and intervention factors. This study provides further support for the benefits of PSSP by demonstrating lower suicide outcomes in over 30,000 adolescents following PSSP and mental health interventions, and it tentatively suggests that intervention effectiveness may vary based on intervention factors. The protocol for this study is registered on PROSPERO (ID = CRD42020168883). Population, intervention, comparison, outcomes, and study design (PICOS) criteria defined eligible studies as cluster randomised studies (n = 12) containing PSSP and measuring suicide outcomes. The EBSCOhost aggregate databases, Web of Science, and the Cochrane Central Register of Controlled Trials were searched. Cochrane bias tools for cluster randomised studies rated half of the studies as low risk of bias. Egger's regression test adapted for multi-level modelling indicated that publication bias was not an issue (all ps > .05). Crude and corresponding adjusted pooled log odds ratios (OR) were computed using the metafor package in R, yielding 12 SA and 19 SI effects. Multi-level random-effects models accounting for dependencies of effects from the same study revealed that, in crude models, interventions were significantly associated with 13% (OR = 0.87, 95% confidence interval (CI) [0.78, 0.96], Q18 = 15.41, p = 0.63) and 34% (OR = 0.66, 95% CI [0.47, 0.91], Q10 = 16.31, p = 0.13) lower odds of SI and SA, respectively, compared to controls.
Adjusted models showed similar odds reductions of 15% (OR = 0.85, 95% CI [0.75, 0.95], Q18 = 10.04, p = 0.93) and 28% (OR = 0.72, 95% CI [0.59, 0.87], Q10 = 10.46, p = 0.49) for SI and SA, respectively. Within-cluster heterogeneity ranged from none to low for SA across crude and adjusted models (0-9%). No heterogeneity was identified for SI across crude and adjusted models (0%). Pre-specified univariate moderator analyses were not significant for SA (all ps > 0.05). Variations in average pooled SA odds reductions across categories of various intervention characteristics were observed (all ps < 0.05), which preliminarily suggests that the effectiveness of interventions may vary across intervention factors. These findings have practical implications for researchers, clinicians, educators, and decision-makers. Further investigation of important logical, theoretical, and empirical moderators of PSSP intervention effectiveness is recommended to establish how and when PSSP interventions best reduce adolescent suicidal behaviour.
Keywords: adolescents, contextual factors, post-primary school-based suicide prevention, suicide ideation, suicide attempts
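The pooled odds ratios above come from multi-level random-effects models fitted with metafor in R. As a simpler illustration of the underlying idea (not the study's actual model), the sketch below pools hypothetical log odds ratios by fixed-effect inverse-variance weighting and back-transforms to an OR with a 95% confidence interval.

```python
import math

def pool_log_odds_ratios(log_ors, variances):
    """Fixed-effect inverse-variance pooling of log odds ratios.
    Returns the pooled OR and its 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Hypothetical effects from three studies: (log OR, sampling variance).
or_, lo, hi = pool_log_odds_ratios([-0.14, -0.22, -0.05],
                                   [0.010, 0.020, 0.015])
```

A multi-level random-effects model additionally estimates between-study and within-study variance components and accounts for multiple effects nested within the same trial, which is why the study used metafor rather than this simple pooling.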
Procedia PDF Downloads 102
498 Measuring the Impact of Implementing an Effective Practice Skills Training Model in Youth Detention
Authors: Phillipa Evans, Christopher Trotter
Abstract:
Aims: This study aims to examine the effectiveness of a practice skills framework implemented in three youth detention centres run by Juvenile Justice in New South Wales (NSW), Australia. The study is supported by a grant from the Australian Research Council and NSW Juvenile Justice. Recent years have seen a number of incidents in youth detention centres in Australia and elsewhere. These have led to inquiries and reviews, some suggesting that detention centres often fail to meet even basic human rights and do little to provide opportunities for the rehabilitation of residents. While an increasing body of research suggests that community-based supervision can be effective in reducing recidivism if appropriate skills are used by supervisors, less work has considered worker skills in youth detention settings. The research that has been done, however, suggests that teaching interpersonal skills to youth officers may be effective in enhancing the rehabilitation culture of centres. Positive outcomes have been seen, for example, in a UK detention centre after teaching staff to deliver five-minute problem-solving interventions. The aim of this project is to examine the effectiveness of training and coaching youth detention staff in three NSW detention centres in interpersonal practice skills. Effectiveness is defined in terms of reductions in the frequency of critical incidents and improvements in the well-being of staff and young people. The research is important as the results may lead to the development of more humane and rehabilitative experiences for young people. Method: The study involves training staff in core effective practice skills and supporting staff in the use of those skills through supervision and de-briefing. The core effective practice skills include role clarification, pro-social modelling, brief problem solving, and relationship skills. The training also addresses some of the background to criminal behaviour, including trauma.
Data regarding critical incidents and well-being before and after the program implementation are being collected. This involves interviews with staff and young people, the completion of well-being scales, and examination of departmental records regarding critical incidents. In addition to the before-and-after comparison, a matched control group that is not offered the intervention is also being used. The study includes more than 400 young people and 100 youth officers across six centres, including the control sites. Data collection covers critical incident records such as assaults, use of lock-ups and confinement, and school attendance, and also includes analysing video-tapes of centre activities for changes in the use of staff skills. Results: The project is currently underway with ongoing training and supervision. Early results will be available for the conference.
Keywords: custody, practice skills, training, youth workers
Procedia PDF Downloads 103
497 Motion Planning and Simulation Design of a Redundant Robot for Sheet Metal Bending Processes
Authors: Chih-Jer Lin, Jian-Hong Hou
Abstract:
Industry 4.0 is a vision of integrated industry implemented through artificial intelligence, software, and Internet technologies. The main goal of Industry 4.0 is to deal with the difficulties arising from competitive pressures in the marketplace. For today's manufacturing factories, the type of production has changed from mass production (high-quantity production with low product variety) to medium-quantity, high-variety production. To offer flexibility, better quality control, and improved productivity, robot manipulators are used to combine material processing, material handling, and part positioning into an integrated manufacturing system. To implement an automated system for sheet metal bending operations, the motion planning of a 7-degrees-of-freedom (DOF) robot is studied in this paper. A virtual reality (VR) environment of a bending cell, which consists of the robot and a bending machine, is established using the virtual robot experimentation platform (V-REP) simulator. For sheet metal bending operations, the robot needs only six DOFs for pick-and-place or tracking tasks. Because this 7-DOF robot has more DOFs than required to execute a specified task, it is called a redundant robot, and its kinematic redundancy can be exploited to deal with task-priority problems. For redundant robots, the pseudo-inverse of the Jacobian is the most popular motion planning method, but pseudo-inverse methods usually lead to chaotic motion with unpredictable arm configurations as the Jacobian matrix loses rank. To overcome this problem, we formulate the motion planning problem as an optimization problem and propose a genetic algorithm (GA) based method to deal with the motion planning of the redundant robot.
Simulation results validate the feasibility of the proposed method for motion planning of the redundant robot in automated sheet-metal bending operations.
Keywords: redundant robot, motion planning, genetic algorithm, obstacle avoidance
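The idea of resolving redundancy by optimization can be illustrated on a much smaller problem than the paper's 7-DOF robot. The sketch below (an illustration under simplified assumptions, not the authors' algorithm) uses a basic GA with one-point crossover and Gaussian mutation to find joint angles of a planar 3-link arm that reach a 2-D target; the extra DOF is resolved by a small penalty on joint magnitudes. Link lengths, target, and GA parameters are all hypothetical.

```python
import math
import random

random.seed(0)

LINKS = [1.0, 0.8, 0.6]          # link lengths of a planar 3-link arm (m)
TARGET = (1.2, 0.9)              # desired end-effector position (m)

def forward(joints):
    """Forward kinematics of the planar arm (sum of link vectors)."""
    x = y = angle = 0.0
    for l, q in zip(LINKS, joints):
        angle += q
        x += l * math.cos(angle)
        y += l * math.sin(angle)
    return x, y

def fitness(joints):
    """Position error plus a small penalty on joint magnitudes, so the
    GA resolves the redundancy toward economical postures."""
    x, y = forward(joints)
    err = math.hypot(x - TARGET[0], y - TARGET[1])
    return err + 0.01 * sum(abs(q) for q in joints)

def ga(pop_size=60, generations=200, mutation=0.3):
    pop = [[random.uniform(-math.pi, math.pi) for _ in LINKS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]           # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(LINKS))
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < mutation:     # Gaussian mutation of one gene
                i = random.randrange(len(LINKS))
                child[i] += random.gauss(0.0, 0.2)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()  # joint angles whose end effector lies near TARGET
```

Because the fitness is a scalar, extra objectives (obstacle clearance, joint limits, distance from singular configurations) can be added as further penalty terms, which is the appeal of the optimization formulation over pseudo-inverse schemes.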
Procedia PDF Downloads 146
496 Sliding Mode Power System Stabilizer for Synchronous Generator Stability Improvement
Authors: J. Ritonja, R. Brezovnik, M. Petrun, B. Polajžer
Abstract:
Many modern synchronous generators in power systems are extremely weakly damped. The reasons are cost optimization in machine construction and the introduction of additional control equipment into power systems. Oscillations of synchronous generators and the related stability problems of power systems are harmful and can lead to operating failures and damage. The only practical solution for increasing the damping of these unwanted oscillations is the implementation of power system stabilizers. A power system stabilizer generates an additional control signal that changes the synchronous generator's field excitation voltage; modern stabilizers are integrated into the static excitation systems of synchronous generators. Commercially available power system stabilizers are based on linear control theory. Due to the nonlinear dynamics of the synchronous generator, such stabilizers do not assure optimal damping of the generator's oscillations over the entire operating range. For that reason, the use of robust power system stabilizers, which are suitable for the entire operating range, is reasonable. Numerous robust techniques are applicable to power system stabilizers; in this paper, the use of sliding mode control for synchronous generator stability improvement is studied. On the basis of sliding mode theory, a robust power system stabilizer was developed. The main advantages of the sliding mode controller are the simple realization of the control algorithm, robustness to parameter variations, and elimination of disturbances. The advantage of the proposed sliding mode controller over a conventional linear controller was tested for damping of synchronous generator oscillations over the entire operating range. The obtained results show improved damping over the entire operating range of the synchronous generator and an increase in power system stability.
The proposed study contributes to progress in the development of advanced stabilizers, which will replace conventional linear stabilizers and improve the damping of synchronous generators.
Keywords: control theory, power system stabilizer, robust control, sliding mode control, stability, synchronous generator
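The core mechanism of sliding mode control can be shown on a toy plant rather than a full generator model. The sketch below (an illustration of the technique only, not the paper's stabilizer) damps a disturbed double integrator x'' = u + d by driving the state onto the sliding surface s = v + λx with a switching term; all gains, the disturbance, and the initial state are made-up values.

```python
import math

def sign(v):
    return (v > 0) - (v < 0)

# Sliding-mode control of a double integrator with a bounded unknown
# disturbance d(t): x'' = u + d.  Sliding surface: s = v + lam * x.
lam, k, dt = 2.0, 1.0, 1e-3
x, v = 1.0, 0.0              # initial state: an "oscillation" to damp out
t = 0.0
for _ in range(10000):       # 10 s of simulated time, Euler integration
    d = 0.5 * math.sin(2.0 * t)      # disturbance with |d| <= 0.5 < k
    s = v + lam * x
    u = -lam * v - k * sign(s)       # equivalent control + switching term
    v += (u + d) * dt
    x += v * dt
    t += dt
```

Because the switching gain k exceeds the disturbance bound, the state reaches s = 0 in finite time and then decays along the surface regardless of d(t); this insensitivity to matched disturbances is exactly the robustness property the abstract attributes to sliding mode stabilizers. The price is the high-frequency chattering of u visible in the sign term.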
Procedia PDF Downloads 223
495 Understanding Evidence Dispersal Caused by the Effects of Using Unmanned Aerial Vehicles in Active Indoor Crime Scenes
Authors: Elizabeth Parrott, Harry Pointon, Frederic Bezombes, Heather Panter
Abstract:
Unmanned aerial vehicles (UAVs) are having a profound effect on policing, forensic, and fire service procedures worldwide. These intelligent devices have already proven useful in photographing and recording large-scale outdoor and indoor sites using orthomosaic and three-dimensional (3D) modelling techniques, for the purpose of capturing and recording sites during and after an incident. UAVs are becoming an established tool, as they extend the reach of the photographer and offer new perspectives without the expense and restrictions of deploying full-scale aircraft. 3D reconstruction quality is directly linked to the resolution of the captured images; therefore, close-proximity flights are required for more detailed models. As the technology advances, deployment of UAVs in confined spaces is becoming more common. With this in mind, this study investigates the effects of UAV operation within active crime scenes with regard to the dispersal of particulate evidence. To date, little consideration has been given to the potential effects of using UAVs within active crime scenes beyond the legislative point of view. Although the technology can potentially reduce the likelihood of contamination by replacing some of the roles of investigating practitioners, there is a risk of evidence dispersal caused by the strong airflow beneath the UAV, generated by the downwash of the propellers. The initial results of this study are therefore presented to determine the flight height of least effect and the commercial propeller type that generates the smallest disturbance within the dataset tested. In this study, a range of commercially available 4-inch propellers was chosen as a starting point due to their common availability, and their small size makes them well suited for operation within confined spaces.
To perform the testing, a rig was configured to support a single motor and propeller, powered by a standalone mains power supply and controlled via a microcontroller, in order to mimic a complete throttle cycle and control the device to ensure repeatability. Removing the variance of battery packs and complex UAV structures allowed for a more robust setup; the only changing factors were therefore the propeller and the operating height. The results were calculated via computer-vision analysis of the recorded dispersal of the sample particles placed below the arm-mounted propeller. The aim of this initial study is to give practitioners an insight into the technology to use when operating within confined spaces, as well as to recognize some of the issues caused by UAVs within active crime scenes.
Keywords: dispersal, evidence, propeller, UAV
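A first-order feel for the downwash airflow driving the dispersal can be obtained from rotor momentum theory. The sketch below (a back-of-the-envelope estimate, not part of the study's method) computes the mean induced velocity below a hovering propeller, v = sqrt(T / (2·ρ·A)); the 2 N thrust figure is a hypothetical value for a small 4-inch propeller.

```python
import math

RHO = 1.225  # air density at sea level, kg/m^3

def induced_velocity(thrust_n: float, radius_m: float) -> float:
    """Momentum-theory estimate of the mean induced (downwash)
    velocity below a hovering rotor: v = sqrt(T / (2 * rho * A))."""
    area = math.pi * radius_m ** 2   # rotor disc area
    return math.sqrt(thrust_n / (2.0 * RHO * area))

# Hypothetical figures: a 4-inch (0.1016 m diameter) propeller
# producing 2 N of thrust.
v = induced_velocity(2.0, 0.1016 / 2.0)  # ~10 m/s mean downwash
```

Even this crude estimate (around 10 m/s directly below the disc) makes clear why particulate evidence can be displaced, and why the near-field velocity decays with distance below the rotor, motivating the "height of least effect" measured in the study.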
Procedia PDF Downloads 163
494 Human Factors Interventions for Risk and Reliability Management of Defence Systems
Authors: Chitra Rajagopal, Indra Deo Kumar, Ila Chauhan, Ruchi Joshi, Binoy Bhargavan
Abstract:
Reliability and safety are essential for the success of mission-critical and safety-critical defense systems. Humans are part of the entire life cycle of defense systems development and deployment, and the majority of industrial accidents or disasters are attributed to human errors. Therefore, considerations of human performance and human reliability are critical in all complex systems, including defense systems. Defense systems operating from ground, naval, and aerial platforms in diverse conditions impose unique physical and psychological challenges on their human operators. Safety- and mission-critical defense systems with human-machine interaction include fighter planes, submarines, warships, combat vehicles, and missiles launched from aerial and naval platforms. Human roles and responsibilities are also going through a transition due to the infusion of artificial intelligence and cyber technologies. Human operators not accustomed to such challenges are more likely to commit errors, which may lead to accidents or loss events. In such a scenario, it is imperative to understand the human factors in defense systems for better system performance, safety, and cost-effectiveness. A case study using a task analysis (TA) based methodology for the assessment and reduction of human errors in the Air and Missile Defense System, in the context of emerging technologies, is presented. Action-oriented task analysis techniques such as hierarchical task analysis (HTA) and the operator action event tree (OAET), along with the critical action and decision event tree (CADET) for cognitive task analysis, were used. Human factors assessment based on task analysis helps in realizing safe and reliable defense systems. These techniques helped identify human errors during different phases of air and missile defence operations, leading to a safe, reliable, and cost-effective mission.
Keywords: defence systems, reliability, risk, safety
Procedia PDF Downloads 135
493 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties
Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier
Abstract:
The long-term deformation rates of faults are not fully captured by probabilistic seismic hazard assessment (PSHA). PSHA that uses catalogues to develop areal or smoothed-seismicity sources is limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address long-term deformation. However, careful treatment of fault sources is required, particularly in low-strain-rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation, and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied, which is especially challenging in low-strain-rate regions where such data are scarce. Using faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes larger than the selected threshold may occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture.
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France), where the fault is assumed to have a low strain rate.
Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA
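The conversion from slip rate to earthquake rate that underlies both approaches can be sketched in its simplest form: the fault's moment rate μ·A·ṡ is divided by the moment of a characteristic event. This single-magnitude version is a strong simplification (SHERIFS instead spreads the slip-rate budget over many single-fault and fault-to-fault rupture scenarios), and all the fault parameters below are hypothetical.

```python
MU = 3.0e10  # crustal shear modulus, Pa

def seismic_moment(mw: float) -> float:
    """Scalar seismic moment (N*m) from moment magnitude:
    M0 = 10 ** (1.5 * Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def characteristic_rate(length_m, width_m, slip_rate_m_per_yr, mw):
    """Annual rate of characteristic ruptures if the entire slip-rate
    budget is spent on magnitude-mw events (a strong simplification)."""
    moment_rate = MU * length_m * width_m * slip_rate_m_per_yr  # N*m per yr
    return moment_rate / seismic_moment(mw)

# Hypothetical slow fault: 30 km long, 12 km wide, slipping 0.1 mm/yr,
# releasing its budget in Mw 6.5 events.
rate = characteristic_rate(30e3, 12e3, 1e-4, 6.5)
recurrence_yr = 1.0 / rate  # ~6,600 years between events
```

The multi-millennial recurrence interval for this made-up fault illustrates why hazard estimates in low-strain-rate regions are so sensitive to the slip rate and to how the moment budget is partitioned across rupture scenarios.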
Procedia PDF Downloads 65
492 Evaluation of Mechanical Properties and Surface Roughness of Nanofilled and Microhybrid Composites
Authors: Solmaz Eskandarion, Haniyeh Eftekhar, Amin Fallahi
Abstract:
Introduction: Nowadays, cosmetic dentistry has gained greater attention because of the changing demands of dental patients. Composite resin restorations play an important role in the field of esthetic restorations. Because resin composites vary, it is important to be aware of their mechanical properties and surface roughness. The aim of this study was therefore to compare the mechanical properties (surface hardness, compressive strength, diametral tensile strength) and surface roughness of four resin composites after a thermal aging process. Materials and Method: Ten samples of each composite resin (Gradia Direct (GC), Filtek Z250 (3M), G-ænial (GC), and Filtek Z350 (3M, Filtek Supreme)) were prepared for the evaluation of each property (120 samples in total). Thermocycling was applied (10,000 cycles between 5 °C and 55 °C). The samples were then tested for compressive strength and diametral tensile strength using a universal testing machine (UTM), surface hardness was evaluated with a microhardness testing machine, and surface roughness was evaluated with a scanning electron microscope after surface polishing. Results: Filtek Z250 showed the highest compressive strength (CS), but there were no significant differences in CS between the four groups. Filtek Z250 also showed the highest diametral tensile strength (DTS), followed, from highest to lowest, by Filtek Z350, G-ænial, and Gradia Direct; for DTS, all of the groups showed significant differences (P<0.05). The Vickers hardness number (VHN) of Filtek Z250 was the greatest, followed by Filtek Z350, G-ænial, and Gradia Direct. The surface roughness of the nanofilled composites was less than that of the microhybrid composites, although the surface roughness of G-ænial was slightly greater than that of Filtek Z250. Conclusion: This study indicates that there is no evident significant difference among the groups in their mechanical properties.
However, it seems that Filtek Z250 showed slightly better mechanical properties, and regarding surface roughness, the nanofilled composites performed better than the microhybrid ones.
Keywords: mechanical properties, surface roughness, resin composite, compressive strength, thermal aging
Procedia PDF Downloads 354
491 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. C and C++ open-source code is now available to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten the dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning models such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model that can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure the performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but they require longer execution time, as the word embedding algorithm adds complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
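One common way to build the kind of "minimal intermediate representation" mentioned above is to tokenize each function and replace user-defined identifiers with generic placeholders, so the model learns code structure rather than arbitrary names. The sketch below is an illustrative guess at such a preprocessing step (the paper does not publish its exact procedure); the keyword and API lists are deliberately tiny.

```python
import re

C_KEYWORDS = {"int", "char", "if", "else", "for", "while", "return",
              "void", "sizeof", "struct", "unsigned", "static"}
KNOWN_CALLS = {"strcpy", "memcpy", "malloc", "free", "printf", "gets"}

def minimal_representation(source: str) -> list:
    """Map user-defined identifiers to generic placeholders (VAR1,
    FUN1, ...) while keeping keywords and well-known API calls, which
    carry the signal for flaws such as buffer overflows."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\sA-Za-z_\d]", source)
    mapping, out = {}, []
    for i, tok in enumerate(tokens):
        if tok[0].isalpha() or tok[0] == "_":
            if tok in C_KEYWORDS or tok in KNOWN_CALLS:
                out.append(tok)           # keep structural vocabulary
            else:
                if tok not in mapping:    # first sight: assign a placeholder
                    is_call = i + 1 < len(tokens) and tokens[i + 1] == "("
                    kind = "FUN" if is_call else "VAR"
                    mapping[tok] = f"{kind}{len(mapping) + 1}"
                out.append(mapping[tok])
        else:
            out.append(tok)               # digits and punctuation pass through
    return out

tokens = minimal_representation("void copy(char *dst) { strcpy(dst, buf); }")
```

After this step, the retained call `strcpy` stands out in an otherwise name-agnostic token stream; the token sequence is then what gets embedded (e.g., with GloVe or fastText) and fed to a sequence model.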
Procedia PDF Downloads 89
490 Radar Fault Diagnosis Strategy Based on Deep Learning
Authors: Bin Feng, Zhulin Zong
Abstract:
Radar systems are critical to modern military, aviation, and maritime operations, and their proper functioning is essential for the success of these operations. However, due to the complexity and sensitivity of radar systems, they are susceptible to various faults that can significantly affect their performance. Traditional radar fault diagnosis strategies rely on expert knowledge and rule-based approaches, which are often limited in effectiveness and require considerable time and resources. Deep learning has recently emerged as a promising approach to fault diagnosis due to its ability to learn features and patterns from large amounts of data automatically. In this paper, we propose a radar fault diagnosis strategy based on deep learning that can accurately identify and classify faults in radar systems. Our approach uses convolutional neural networks (CNNs) to extract features from radar signals and then classifies the faults from those features. The proposed strategy is trained and validated on a dataset of measured radar signals with various types of faults, and the results show that it achieves high accuracy in fault diagnosis. To further evaluate the effectiveness of the proposed strategy, we compare it with traditional rule-based approaches and other machine-learning methods, including decision trees, support vector machines (SVMs), and random forests. The results demonstrate that our deep learning-based approach outperforms the traditional approaches in terms of accuracy and efficiency. Finally, we discuss the potential applications and limitations of the proposed strategy, as well as future research directions. Our study highlights the importance and potential of deep learning for radar fault diagnosis and suggests that it can be a valuable tool for improving the performance and reliability of radar systems.
In summary, this paper presents a radar fault diagnosis strategy based on deep learning that achieves high accuracy and efficiency in identifying and classifying faults in radar systems. The proposed strategy has significant potential for practical applications and can pave the way for further research.
Keywords: radar system, fault diagnosis, deep learning, radar fault
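The feature-extraction step of a CNN amounts to sliding small learned kernels over the signal. As a minimal illustration of that operation only (the paper's actual network architecture and data are not specified here), the sketch below applies a hand-picked edge-detecting kernel to a toy radar-like pulse; in a trained CNN the kernel values would be learned rather than chosen.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as used in CNN
    layers): slide the kernel over the signal and take dot products
    to produce a feature map."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * k for j, k in enumerate(kernel))
            for i in range(n)]

# A rectangular pulse standing in for a radar return, and an
# edge-detecting kernel that responds where the signal jumps --
# the kind of local feature a trained CNN layer learns to extract.
pulse = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]
edge_kernel = [-1.0, 0.0, 1.0]
feature_map = conv1d(pulse, edge_kernel)
```

The feature map peaks at the rising edge and dips at the falling edge of the pulse; stacking many such learned kernels, with nonlinearities and pooling between layers, is what lets a CNN turn raw radar signals into fault-discriminative features.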
Procedia PDF Downloads 90
489 Big Data Analytics and Public Policy: A Study in Rural India
Authors: Vasantha Gouri Prathapagiri
Abstract:
Innovations in the ICT sector facilitate a better quality of life for citizens across the globe. Countries that adopt new ICT techniques such as big data analytics find it easier to fulfil the needs of their citizens. Big data is characterised by its volume, variety, and velocity; analytics involves processing it in a cost-effective way in order to draw conclusions for useful application. Big data analytics also draws on machine learning and artificial intelligence, leading to accurate data presentation useful for public policy making. Hence, using data analytics in public policy making is a sound way to march towards the all-round development of any country. Data-driven insights can help a government take important strategic decisions regarding the socio-economic development of its country. Developed nations like the UK and the USA are already far ahead on the path of digitization with the support of big data analytics. India is a huge country currently on the path of massive digitization, being realised through the Digital India Mission, and Internet connections per household are rising every year. This transforms into a massive data set that has the potential to turn the public service delivery system into an effective service mechanism for Indian citizens. In fact, when compared to developed nations, this capacity is underutilized in India, particularly in the administrative system in rural areas. The present paper focuses on the need for the adoption of big data analytics in Indian rural administration and its contribution towards faster development of the country. The results of the research highlight the need for increasing awareness and serious capacity building, with regard to big data analytics and its utility, among government personnel working in rural development.
Multiple public policies are framed and implemented for rural development, yet the results are not as effective as they should be. Big data has a major role to play in this context, as it can assist in improving both policy making and implementation aimed at the all-round development of the country.
Keywords: Digital India Mission, public service delivery system, public policy, Indian administration
Procedia PDF Downloads 159
488 Microstructure Dependent Fatigue Crack Growth in Aluminum Alloy
Authors: M. S. Nandana, K. Udaya Bhat, C. M. Manjunatha
Abstract:
In this study, aluminum alloy 7010 was subjected to three different ageing treatments, i.e., peak ageing (T6), over-ageing (T7451), and retrogression and re-ageing (RRA), to study the influence of precipitate microstructure on fatigue crack growth rate behavior. The microstructural modification was studied using a transmission electron microscope (TEM) to examine the change in the size and morphology of precipitates in the matrix and on the grain boundaries. Standard compact tension (CT) specimens were fabricated and tested under constant-amplitude fatigue crack growth tests to evaluate the influence of heat treatment on fatigue crack growth rate properties. The tests were performed in a computer-controlled servo-hydraulic test machine applying a load ratio R = 0.1 at a loading frequency of 10 Hz, as per ASTM E647. The fatigue crack growth was measured by the compliance technique, using a CMOD gauge attached to the CT specimen. The average size of the matrix precipitates was 16-20 nm in the T7451 condition, 5-6 nm in RRA, and 2-3 nm in T6. The grain boundary precipitate, which was continuous in T6, was disintegrated in the RRA and T7451 conditions, and the PFZ width was lower in RRA than in T7451. The crack growth rate was highest in T7451 and lowest in the RRA-treated alloy. The RRA-treated alloy also exhibits an increase in the threshold stress intensity factor range (∆Kₜₕ): the measured ∆Kₜₕ was 11.1, 10.3, and 5.7 MPa·m¹/² in the RRA, T6, and T7451 conditions, respectively. The fatigue crack growth rate in the RRA-treated alloy was nearly 2-3 times lower than in T6 and one order of magnitude lower than in T7451. The surface roughness of the RRA-treated alloy was more pronounced than in the other conditions. The reduction in fatigue crack growth rate in the RRA alloy was mainly due to the increase in roughness and partially due to the increase in spacing between the matrix precipitates.
The reduction in crack growth rate and increase in threshold stress intensity range is expected to benefit the damage tolerant capability of aircraft structural components under service loads.Keywords: damage tolerance, fatigue, heat treatment, PFZ, RRA
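Crack growth rate comparisons like the one reported are commonly summarized with a Paris-law fit, da/dN = C·(∆K)^m, obtained by linear regression in log-log space. The sketch below shows that fitting step on synthetic, purely illustrative data; none of the values are the authors' measurements, and the constants are assumptions:

```python
import numpy as np

def fit_paris_law(delta_k, dadn):
    """Fit da/dN = C * (dK)^m by linear regression in log-log space; returns (C, m)."""
    m, log_c = np.polyfit(np.log10(delta_k), np.log10(dadn), 1)
    return 10 ** log_c, m

# Illustrative (not measured) Paris-regime data for one heat-treatment condition
delta_k = np.array([12.0, 15.0, 20.0, 25.0, 30.0])  # stress intensity range, MPa*m^0.5
dadn = 1e-11 * delta_k ** 3.0                        # crack growth rate, m/cycle (synthetic)

C, m = fit_paris_law(delta_k, dadn)
print(f"C = {C:.2e}, m = {m:.2f}")
```

Fitted this way per condition, the exponent m and coefficient C make the 2-3x and order-of-magnitude rate differences between treatments directly comparable.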
Procedia PDF Downloads 153
487 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise
Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou
Abstract:
Concern about the negative impacts of anthropogenic noise on the ocean's ecosystems has increased over recent decades, and with it a willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires not only knowledge about the noise from individual ships, but also about how ship noise is distributed in time and space within the habitats of concern. Marine mammals, as well as fish, sea turtles, larvae, and invertebrates, depend largely on sound: they use it to hunt, feed, avoid predators, socialize and communicate during reproduction, and defend a territory. In the marine environment, sight is useful only up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, known as the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain good environmental status of the marine environment. Ocean Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and other stakeholders with a decision support tool for anticipating and quantifying the effectiveness of management measures in reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 site, harbor, etc.) or global (Particularly Sensitive Sea Area) scales; seasonal (regulation over a period of time) or permanent; partial (focused on some maritime activities) or complete (all maritime activities).
Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limits are among the measures supported by the tool. Ocean Planner helps decide on the most effective measure to apply in order to maintain or restore the biodiversity and functioning of coastal seabed ecosystems, keep sensitive areas in a good state of conservation, and maintain or restore populations of marine species.
Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction
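As a toy illustration of why a speed limit is an effective noise measure, the sketch below combines a hypothetical power-law scaling of ship source level with speed and simple spherical spreading loss. The exponent, reference level, and speeds are assumptions chosen for illustration only; they are not values from Quonops or the Ocean Planner platform:

```python
import math

def received_level(source_level_db, distance_m):
    """Received level under simple spherical spreading: RL = SL - 20*log10(r)."""
    return source_level_db - 20 * math.log10(distance_m)

def source_level_at_speed(sl_ref_db, v_kn, v_ref_kn, exponent=60.0):
    """Toy speed scaling of broadband source level: SL(v) = SL_ref + exponent*log10(v/v_ref)."""
    return sl_ref_db + exponent * math.log10(v_kn / v_ref_kn)

# Hypothetical vessel: 180 dB re 1 uPa @ 1 m at 15 kn, slowed to 10 kn by a speed limit
sl_before = source_level_at_speed(180.0, 15.0, 15.0)
sl_after = source_level_at_speed(180.0, 10.0, 15.0)
reduction = sl_before - sl_after
print(f"source-level reduction from the speed limit: {reduction:.1f} dB")
```

Under these assumed numbers the speed limit lowers the source level by roughly 10 dB, which shrinks the noise footprint at every receiver distance by the same amount.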
Procedia PDF Downloads 122
486 The Role of Home Composting in Waste Management Cost Reduction
Authors: Nahid Hassanshahi, Ayoub Karimi-Jashni, Nasser Talebbeydokhti
Abstract:
Because producing less waste has economic and environmental benefits, the US Environmental Protection Agency (EPA) promotes source reduction as one of the most important means of dealing with the problems caused by growing landfills and pollution. Waste reduction involves all waste management methods, including source reduction, recycling, and composting, that reduce the flow of waste to landfills or other disposal facilities. Source reduction of waste can be studied from two perspectives: avoiding waste production, i.e., reducing per capita waste production, and waste diversion, i.e., reducing the transfer of waste to landfills. The present paper investigates home composting as a management solution for reducing the transfer of waste to landfills. Home composting has many benefits. Using household waste to produce compost means a much smaller amount of waste is sent to landfills, which in turn reduces the costs of waste collection, transportation, and burial. Reducing the volume of waste for disposal and using it to produce compost and plant fertilizer helps recycle the material in a shorter time, use it effectively, preserve the environment, and reduce contamination. Producing compost at home requires a very small piece of land for preparation and recycling compared with other methods. The final product of home-made compost is valuable: it helps to grow crops and garden plants, and it is also used to improve soil structure and retain soil moisture. Food waste transferred to landfills spoils and produces leachate after a while, and it also releases methane and other greenhouse gases. Composting these materials at home is therefore the best way to manage degradable materials, use them efficiently, and reduce environmental pollution.
Studies have shown that the proceeds from selling the produced compost, together with the reduced costs of collecting, transporting, and burying waste, can offset the costs of purchasing a home composting machine and the related training. Moreover, home composting may become profitable within 4 to 5 years and can therefore play a major role in reducing waste management costs.
Keywords: compost, home compost, reducing waste, waste management
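The 4-to-5-year profitability estimate is, in essence, a simple payback calculation: upfront costs (machine plus training) divided by annual benefits (avoided collection/transport/burial costs plus compost sales). A minimal sketch with entirely hypothetical figures, not the study's data:

```python
def payback_years(machine_cost, training_cost, annual_savings, annual_compost_revenue):
    """Simple (undiscounted) payback period for a home composting setup."""
    upfront = machine_cost + training_cost
    annual_benefit = annual_savings + annual_compost_revenue
    return upfront / annual_benefit

# Hypothetical figures for illustration only (currency units are arbitrary)
years = payback_years(machine_cost=350.0, training_cost=50.0,
                      annual_savings=60.0, annual_compost_revenue=30.0)
print(f"payback: {years:.1f} years")  # 400 / 90 ~ 4.4 years, within the 4-5 year range cited
```

A fuller analysis would discount future savings, but the undiscounted version is enough to show how the break-even horizon follows from the cost and benefit assumptions.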
Procedia PDF Downloads 426
485 Development of a Microfluidic Device for Low-Volume Sample Lysis
Authors: Abbas Ali Husseini, Ali Mohammad Yazdani, Fatemeh Ghadiri, Alper Şişman
Abstract:
We developed a microchip device that uses surface acoustic waves for rapid lysis of low-volume cell samples. The device incorporates sharp-edged glass microparticles for improved performance. We optimized the lysis conditions for high efficiency and evaluated the device's feasibility for point-of-care applications. The microchip contains a 13-finger-pair interdigital transducer with a 30-degree focus angle, which generates high-intensity acoustic beams that converge 6 mm away. The microchip operates at a frequency of 16 MHz, exciting Rayleigh waves with a 250 µm wavelength on the LiNbO3 substrate. Cell lysis occurs when Candida albicans cells and glass particles are placed within the focal area: the high-intensity surface acoustic waves induce centrifugal forces on the cells and glass particles, and lysis results from the lateral forces exerted by the sharp-edged glass particles. We conducted 42 pilot cell lysis experiments to optimize the surface acoustic wave-induced streaming, varying the electrical power, droplet volume, glass particle size, particle concentration, and lysis time. A regression machine-learning model determined the impact of each parameter on lysis efficiency. Based on these findings, we predicted the optimal conditions: electrical power of 2.5 W, sample volume of 20 µl, glass particle size below 10 µm, particle concentration of 0.2 µg, and a 5-minute lysis period. Downstream analysis successfully amplified a DNA target fragment directly from the lysate. The study presents an efficient microchip-based cell lysis method employing acoustic streaming and microparticle collisions within microdroplets. Integrating the surface acoustic wave-based lysis chip with an isothermal amplification method enables swift point-of-care applications.
Keywords: cell lysis, surface acoustic wave, micro-glass particle, droplet
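The parameter-ranking step described above, fitting a regression model over the 42 pilot experiments to gauge each factor's impact on lysis efficiency, can be sketched with ordinary least squares on standardized inputs so that coefficient magnitudes are comparable across units. All data below are synthetic and the coefficient structure is invented for illustration; this is not the authors' model or measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns: power (W), droplet volume (uL), particle size (um), concentration (ug), time (min)
X = rng.uniform([0.5, 5.0, 1.0, 0.05, 1.0],
                [3.0, 40.0, 20.0, 0.5, 10.0], size=(42, 5))

# Synthetic "lysis efficiency": power and time help, larger particles hurt (illustrative only)
y = 0.2 * X[:, 0] - 0.01 * X[:, 2] + 0.03 * X[:, 4] + rng.normal(0.0, 0.01, 42)

# Standardize predictors so fitted coefficient magnitudes reflect per-parameter impact
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
coef, *_ = np.linalg.lstsq(np.c_[np.ones(42), Xs], y, rcond=None)

names = ["power", "volume", "size", "concentration", "time"]
ranking = sorted(zip(names, np.abs(coef[1:])), key=lambda t: -t[1])
print([n for n, _ in ranking])  # parameters ordered by estimated impact
```

With the impacts ranked, the optimum can then be predicted by pushing each influential parameter in its favorable direction, which mirrors how the abstract arrives at its recommended settings.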
Procedia PDF Downloads 79
484 Gene Expressions in Left Ventricle Heart Tissue of Rat after 150 MeV Proton Irradiation
Abstract:
Introduction: In mediastinal radiotherapy, and to a lesser extent in total-body irradiation (TBI), radiation exposure may lead to the development of cardiac diseases. Radiation-induced heart disease is dose-dependent and is characterized by a loss of cardiac function associated with progressive degeneration of heart cells. We aimed to determine the in-vivo effects of radiation on fibronectin, ColaA1, ColaA2, galectin, and TGFb1 gene expression levels in the left ventricle heart tissue of rats after irradiation. Material and method: Four untreated adult Wistar rats were selected as the control group (group A). In group B, four adult Wistar rats received a single 20 Gy dose of a 150 MeV proton beam delivered locally to the heart only. In the heart-plus-lung group (group C), four adult rats received the same heart irradiation plus lateral irradiation of 50% of the lung. At 8 weeks after irradiation, the animals were sacrificed and the left ventricle was placed in liquid nitrogen for RNA extraction with the Absolutely RNA® Miniprep Kit (Stratagene, Cat. no. 400800). cDNA was synthesized using M-MLV reverse transcriptase (Life Technologies, Cat. no. 28025-013). qPCR was performed on a Bio-Rad iQ5 Real-Time PCR system using the relative standard curve method. Results: Fibronectin gene expression increased significantly in group C compared with the control group, but showed no significant change in group B compared with group A. The mRNA expression levels of ColaA1 and ColaA2 showed no significant changes between the normal and irradiated groups. Galectin expression increased significantly only in group C compared with group A. TGFb1 expression was significantly enhanced compared with group A, more so in group C than in group B.
Conclusion: In summary, a 20 Gy proton exposure of heart tissue may lead to detectable damage in heart cells and may disturb their function as components of the heart tissue structure at the molecular level.
Keywords: gene expression, heart damage, proton irradiation, radiotherapy
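In the relative standard curve method used here, each gene's Ct value is converted to a quantity via its own standard curve (Ct = slope·log10(quantity) + intercept), the target is normalized to a reference gene, and the irradiated/control ratio gives the fold change. A minimal sketch of that arithmetic; the curve parameters and Ct values are hypothetical, not the study's measurements:

```python
def quantity_from_ct(ct, slope, intercept):
    """Invert a standard curve Ct = slope * log10(quantity) + intercept."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical standard-curve parameters (a slope near -3.32 implies ~100% PCR efficiency)
target_slope, target_int = -3.32, 34.0   # target gene (e.g., fibronectin)
ref_slope, ref_int = -3.30, 33.0         # reference (housekeeping) gene

# Hypothetical Ct values; a lower target Ct in the irradiated sample means more transcript
target_ct = {"control": 26.5, "irradiated": 24.8}
ref_ct = {"control": 20.1, "irradiated": 20.2}

def relative_expression(sample):
    target_q = quantity_from_ct(target_ct[sample], target_slope, target_int)
    ref_q = quantity_from_ct(ref_ct[sample], ref_slope, ref_int)
    return target_q / ref_q  # target normalized to the reference gene

fold_change = relative_expression("irradiated") / relative_expression("control")
print(f"fold change vs. control: {fold_change:.2f}")
```

Unlike the delta-delta-Ct shortcut, the standard curve method does not assume equal amplification efficiency for target and reference, which is why separate slopes are carried through.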
Procedia PDF Downloads 489
483 Knowledge Management in the Tourism Industry in Project Management Paradigm
Authors: Olga A. Burukina
Abstract:
Tourism is a complex socio-economic phenomenon, partly regulated by national tourism industries. The sustainable development of tourism in a region, country, or tourist destination depends on a number of factors (political, economic, social, cultural, legal, and technological), the understanding and correct interpretation of which is invariably anthropocentric. It follows that the successful functioning of a tour operating company requires ensuring its sustainable development. Sustainable tourism is defined as tourism that fully considers its current and future economic, social, and environmental impacts, taking into account the needs of the industry, the environment, and the host communities. For a business enterprise, sustainable development means adopting business strategies and activities that meet the needs of the enterprise and its stakeholders today while protecting, sustaining, and enhancing the human and natural resources that will be needed in the future. In addition to a systemic approach to the analysis of tourist destinations, each tourism project can and should be considered a system characterized by a very high degree of variability, since each particular case of its implementation differs from the previous and subsequent ones, sometimes radically. At the same time, it is important to understand that this variability is predominantly anthropogenic (force majeure situations are considered separately below). Knowledge management is the process of creating, sharing, using, and managing the knowledge and information of an organization. It refers to a multidisciplinary approach to achieving organisational objectives by making the best use of knowledge. Knowledge management is seen as a key system component that allows an organisation to obtain, store, transfer, and maintain information and knowledge over the long term.
The study aims, firstly, to identify (1) the dynamic changes in the Italian travel industry in the five years before the COVID-19 pandemic (the pandemic itself can be considered a force majeure circumstance), (2) the impact of the pandemic on the industry, and (3) the efforts required to restore it; and, secondly, to determine how project management tools can help improve knowledge management in tour operating companies so as to maintain their sustainability, diminish potential risks, and restore their pre-pandemic performance level as soon as possible. The pilot research is based on a systems approach and employed a pilot survey, semi-structured interviews, prior research analysis (i.e., a literature review), comparative analysis, cross-case analysis, and modelling. The results obtained are encouraging: PM tools can improve knowledge management in tour operating companies and secure the more sustainable development of the Italian tourism industry based on proper knowledge management and risk management.
Keywords: knowledge management, project management, sustainable development, tourism industry
Procedia PDF Downloads 155