Search results for: forest cover-type dataset
283 Examination of Recreation Possibilities and Determination of Efficiency Zone in Bursa Province, Nilufer Creek
Authors: Zeynep Pirselimoglu Batman, Elvan Ender Altay, Murat Zencirkiran
Abstract:
Water and water resources are characteristic areas with special ecosystems. Their natural, cultural and economic value and their recreation opportunities are high. Recreational activities differ according to the natural, cultural and socio-economic resource values of the areas. In this sense, water and water-edge areas, which are important for their resource values, are also important landscape values for recreational activities. Among these landscape values, creeks and their surrounding areas were a major resource for daily life in the past and remain a major attraction for people's leisure time. However, their qualities and quantities must be sufficient for these areas to be used effectively in a recreational sense and to fulfill their recreational functions. The purpose of the study is to identify the recreational use of water-based activities and to identify effective service areas in dense urbanization zones along the creek and the green spaces around them. For this purpose, the study was carried out in the vicinity of Nilufer Creek in Bursa. The study area and its immediate surroundings lie within the boundaries of the Osmangazi and Nilufer districts. The study was carried out in the green spaces along a 17,930 m stretch of the creek. These areas are Hudavendigar Urban Park, Atatürk Urban Forest, Bursa Zoo, Soganli Botanical Park, Mihrapli Park and Nilufer Valley Park. In the first phase of the study, the efficiency zones of these locations were calculated according to international standards: a 3,200 m service radius for locations serving the city population and an 800 m radius for those serving the district and neighborhood populations. These calculations were processed on a map digitized from satellite imagery in AutoCAD. The efficiency zone of these green spaces in the city was calculated as 71.04 km².
In the second phase of the study, current water-based activities were determined by evaluating the recreational potential of the green spaces along the Nilufer Creek for which efficiency zones had been identified. Water-based activities were found to be used intensively in Hudavendigar Urban Park, which interacts with Nilufer Creek. Within the scope of the effective zones of the study area, appropriate recreational planning proposals have been developed and water-based activities have been suggested.
Keywords: Bursa, efficiency zone, Nilufer Creek, recreation, water-based activities
Procedia PDF Downloads 161
282 DNA Barcoding of Selected Fin Fishes from Imo River in Rivers State, South-South Nigeria Using the Cytochrome c Oxidase Subunit 1 Gene
Authors: Anukwu John Uchenn, Nwamba, Helen Ogochukwu, Achikanu Cosmas, Chiaha Emiliana
Abstract:
The continuous decline in biodiversity is worrisome, and this decline is predominant in fish populations. Although taxonomy has been an age-long field of science, there are still undiscovered members of known species, and new species are waiting to be uncovered. The failure of traditional taxonomic methods to address this issue has resulted in the adoption of a molecular approach: DNA barcoding. It has been proposed that the mitochondrial cytochrome c oxidase subunit I (COI) gene can serve as a barcode for fishes. The aim of this study was to use DNA barcoding in the identification of fish species in Imo River, Rivers State. A total of eight (8) fish samples were collected and used for this study. The Quick-DNA Miniprep Plus kit (D4068, Zymo Research) was used for DNA extraction, followed by PCR amplification and sequencing. BLAST results show the correlation between the queried sequences and the biological sequences in the NCBI database. The names of the samples, percentage identity, predicted organisms and GenBank accession numbers were clearly identified. A total of 16 sequences (all > 600 bp) belonging to 7 species, 7 genera, 7 families and 4 orders were validated and submitted to the NCBI database. Each nucleotide peak was represented by a single colour with various percentage occurrences. Four (50%) of the 8 original samples analyzed corresponded with the organisms predicted by the BLAST results. Pairwise sequence alignment showed different consensus positions and a total of 11 mutations in Chrysichthys nigrodigitatus (n=4), Oreochromis niloticus (n=2) and Clarias gariepinus (n=5). All mutations were substitutions (transitions and transversions), with no deletions or insertions; transitions (n=6) slightly outnumbered transversions (n=5). There were a total of 834 positions in the final dataset.
This work will facilitate further research in other key areas such as the identification of mislabeled fish products, illegal trading of endangered species and effective tracking of fish biodiversity.
Keywords: DNA barcoding, Imo River, phylogenetic tree, pairwise DNA alignment
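The transition/transversion tally reported above can be sketched as a simple classification over aligned sequences: a substitution between two purines (A, G) or two pyrimidines (C, T) is a transition, anything else is a transversion. The two sequences below are hypothetical, not the study's COI data.

```python
# Classify point substitutions between two aligned DNA sequences as
# transitions (purine<->purine or pyrimidine<->pyrimidine) or
# transversions (purine<->pyrimidine).

PURINES = {"A", "G"}

def count_substitutions(seq1, seq2):
    """Return (transitions, transversions) for two equal-length aligned sequences."""
    transitions = transversions = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == b or a not in "ACGT" or b not in "ACGT":
            continue  # identical sites and gaps/ambiguities are skipped
        if (a in PURINES) == (b in PURINES):
            transitions += 1   # A<->G or C<->T
        else:
            transversions += 1  # e.g. A<->C, G<->T
    return transitions, transversions

# Hypothetical aligned fragments: two transitions (A/G, G/A), one transversion (G/C).
ts, tv = count_substitutions("ACGTACGTACG", "GCGTACATACC")
print(ts, tv)  # -> 2 1
```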
Procedia PDF Downloads 72
281 Understanding Help Seeking among Black Women with Clinically Significant Posttraumatic Stress Symptoms
Authors: Glenda Wrenn, Juliet Muzere, Meldra Hall, Allyson Belton, Kisha Holden, Chanita Hughes-Halbert, Martha Kent, Bekh Bradley
Abstract:
Understanding the help-seeking decision-making process and experiences of health disparity populations with posttraumatic stress disorder (PTSD) is central to the development of trauma-informed, culturally centered, and patient-focused services. Yet, little is known about the decision-making process among adult Black women who are non-treatment seekers, as they are, by definition, not engaged in services. Methods: Audiotaped interviews were conducted with 30 African American adult women with clinically significant PTSD symptoms who were engaged in primary care but not in treatment for PTSD despite symptom burden. A qualitative interview guide was used to elucidate key themes. Independent coding of themes mapped to theory and identification of emergent themes were conducted using qualitative methods. An existing quantitative dataset was analyzed to contextualize responses and provide a descriptive summary of the sample. Results: Emergent themes revealed active mental avoidance, the intermittent nature of distress, ambivalence, and self-identified resilience as factors undermining help-seeking decisions. Participants were stuck within the help-seeking phase of ‘recognition’ of illness and retained a sense of “it is my decision” despite endorsing significant negative social and environmental influences. Participants distinguished ‘help acceptance’ from ‘help seeking’, with greater willingness to accept help and importance placed on being of help to others. Conclusions: Elucidation of the decision-making process from the perspective of non-treatment seekers has implications for outreach and treatment within models of integrated and specialty systems of care. The salience of responses to trauma symptoms and stagnation in the help-seeking recognition phase are findings relevant to integrated care service design and community engagement.
Keywords: culture, help-seeking, integrated care, PTSD
Procedia PDF Downloads 236
280 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms
Authors: Selim M. Khan
Abstract:
Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer, and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as-yet-unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights, alongside polynomial statistical models in MATLAB, to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built-environment metrics. Our initial artificial neural network with random weights, run with a sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose the artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America
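A network with fixed random hidden weights and a sigmoid activation, where only the output layer is fitted in closed form, can be sketched as below (an extreme-learning-machine-style model). The synthetic features and target are assumptions standing in for the study's radon predictors, not the authors' data or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for radon predictors (the real study used geospatial,
# behavioral, and built-environment features): 200 samples, 3 features.
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.2 * X[:, 2]  # toy target

# Hidden layer with fixed random weights (never trained).
n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = sigmoid(X @ W + b)

# Only the output weights are fitted, in closed form via least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ beta
print(round(float(np.mean(np.abs(pred - y))), 4))  # small training error
```

Because the hidden weights stay random, the only "training" is a single linear solve, which is why such models converge in very few iterations.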
Procedia PDF Downloads 96
279 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations
Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa Kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan
Abstract:
Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection that integrates alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. The pipeline was applied to a dataset of Lynch Syndrome cases and its performance was evaluated. The quality-check step ensured the integrity of the sequencing data, while the trimming process removed low-quality bases and adaptors. In the alignment step, the reads were mapped to the reference genome, and the subsequent variant-calling step identified potential genetic variants. The annotation step provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. The computational pipeline thus presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention. The modularity and flexibility of the pipeline enable customization and adaptation to various datasets and research settings. Further optimization and validation are necessary to enhance performance and applicability across diverse populations.
Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers
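The five stages named above can be sketched as command templates for the named tools. The sample names, reference genome path, adapter file, and tool options below are illustrative assumptions, not the authors' exact invocations.

```python
# Sketch of the pipeline stages as command templates (not executed here).
# File names, the reference path, and options are illustrative assumptions.
sample = "patient1"   # hypothetical sample identifier
ref = "hg38.fa"       # hypothetical reference genome

pipeline = [
    # 1. Quality check of the raw FASTQ reads
    f"fastqc {sample}.fastq.gz",
    # 2. Quality trimming and adaptor removal
    f"trimmomatic SE {sample}.fastq.gz {sample}.trimmed.fastq.gz "
    "ILLUMINACLIP:adapters.fa:2:30:10 SLIDINGWINDOW:4:20 MINLEN:36",
    # 3. Alignment of trimmed reads to the reference genome
    f"bwa mem {ref} {sample}.trimmed.fastq.gz > {sample}.sam",
    # 4. Variant calling with bcftools (after sorting/indexing the alignment)
    f"bcftools mpileup -f {ref} {sample}.sorted.bam | "
    f"bcftools call -mv -Ov -o {sample}.vcf",
    # 5. Functional annotation with ANNOVAR (protocol names are assumptions)
    f"table_annovar.pl {sample}.vcf humandb/ -buildver hg38 "
    "-protocol refGene -operation g -vcfinput",
]

for step, cmd in enumerate(pipeline, 1):
    print(step, cmd.split()[0])
```

Keeping each stage as an independent command is what gives the pipeline the modularity noted above: any stage can be swapped (e.g., a different aligner) without touching the rest.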
Procedia PDF Downloads 78
278 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction
Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach
Abstract:
X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation-based X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnoses for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible with a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less demanding experimental setup. However, previous studies used either a particular X-ray source (a liquid-metal-jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, this requires suitable algorithms for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural-network-based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT.
A Monte Carlo ray-tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require large amounts of training data to achieve high-quality reconstructions.
Keywords: micro-CT, neural networks, reconstruction, speckle-based X-ray phase contrast
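At its core, speckle tracking searches for the displacement that best re-aligns a reference speckle pattern with the distorted one. A minimal 1D sketch using a sum-of-squared-differences search is shown below; actual implementations track 2D windows with sub-pixel refinement, and the pattern here is synthetic.

```python
import random

random.seed(1)

# Reference speckle pattern (random intensities) and a copy shifted by 3 px.
reference = [random.random() for _ in range(64)]
true_shift = 3
sample = reference[true_shift:] + reference[:true_shift]  # circular shift

def find_shift(ref, img, max_shift=8):
    """Return the integer shift minimising the sum of squared differences."""
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        cost = sum((img[i] - ref[(i + s) % len(ref)]) ** 2
                   for i in range(len(ref)))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

print(find_shift(reference, sample))  # recovers the applied shift: 3
```

The recovered displacement map (here a single shift) is what a phase-retrieval step then converts into a differential phase signal.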
Procedia PDF Downloads 260
277 Identification of Cocoa-Based Agroforestry Systems in Northern Madagascar: Pillar of Sustainable Management
Authors: Marizia Roberta Rasoanandrasana, Hery Lisy Tiana Ranarijaona, Herintsitohaina Razakamanarivo, Eric Delaitre, Nandrianina Ramifehiarivo
Abstract:
Madagascar is one of the producer countries of the world's fine cocoa. Cocoa-based agroforestry systems (CBAS) play a very important economic role for over 75% of the population in the north of Madagascar, the island's main cocoa-producing area. They are also viewed as a key factor in the deforestation of local protected areas. It is therefore urgent to establish a compromise between cocoa production and forest conservation in this region, which is difficult due to a lack of accurate data on cocoa agro-systems. In order to fill these gaps and respond to these socio-economic and environmental concerns, this study aims to describe CBAS by providing precise data on their characteristics and to establish a typology. To achieve this, 150 farms were surveyed and observed to characterize CBAS based on 11 agronomic and 6 socio-economic variables. In addition, 30 representative CBAS plots among the 150 farms were inventoried to provide accurate ecological data (6 variables) as additional input for the typology. The results showed that Madagascar's CBAS are generally extensive and practiced by smallholders. Four types of cocoa-based agroforestry system were identified, with significant differences between the following variables: yield, planting age, cocoa density, density of associated trees, preceding crop, associated crops, Shannon-Wiener indices and species richness in the upper stratum. Type 1 is characterized by old systems (>45 years) with low crop density (425 cocoa trees/ha), installed after conversion of crops other than coffee (>50%) and giving low yields (427 kg/ha/year). Type 2 consists of simple agroforestry systems (no associated crop, 0%), fairly young (20 years), with a low density of associated trees (77 trees/ha) and low species diversity (H'=1.17). Type 3 is characterized by high crop density (778 trees/ha and 175 trees/ha for cocoa and associated trees, respectively) and a medium level of species diversity (H'=1.74, 8 species).
Type 4 is particularly characterized by an orchard regeneration method involving replanting and tree lopping (100%). Analysis of the potential of these four types identified Type 4 as a promising practice for sustainable agriculture.
Keywords: conservation, practices, productivity, protected areas, smallholder, trade-off, typology
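The Shannon-Wiener index H' used to compare the types is H' = -Σ pᵢ ln pᵢ over species proportions pᵢ. A minimal sketch with hypothetical stand counts:

```python
import math

def shannon_wiener(counts):
    """H' = -sum(p_i * ln p_i) over species proportions p_i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical counts for the upper stratum of a plot: four equally
# abundant species give the maximum H' for that richness, ln(4) ~= 1.386.
print(round(shannon_wiener([10, 10, 10, 10]), 3))  # -> 1.386
```

An uneven plot with the same richness, e.g. `[37, 1, 1, 1]`, scores lower, which is why H' separates Type 2 (H'=1.17) from Type 3 (H'=1.74) even when species lists overlap.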
Procedia PDF Downloads 117
276 3-Dimensional Contamination Conceptual Site Model: A Case Study Illustrating the Multiple Applications of Developing and Maintaining a 3D Contamination Model during an Active Remediation Project on a Former Urban Gasworks Site
Authors: Duncan Fraser
Abstract:
A 3-Dimensional (3D) conceptual site model was developed in the Leapfrog Works® platform utilising a comprehensive historical dataset for a large former gasworks site in Fitzroy, Melbourne. The gasworks had been constructed across two fractured geological units with varying hydraulic conductivities. A Newer Volcanics (basaltic) outcrop covered approximately half of the site, overlying a fractured Melbourne Formation (siltstone) bedrock outcropping over the remaining portion. During the investigative phase of works, a dense non-aqueous phase liquid (DNAPL) plume (coal tar) was identified within both geological units in the subsurface, originating from multiple sources, including gasholders, tar wells, condensers, and leaking pipework. The first stage of model development was undertaken to determine the horizontal and vertical extents of the coal tar in the subsurface and to assess causal links between potential sources, plume location, and site geology. Concentrations of key contaminants of interest (COIs) were also interpolated within Leapfrog to refine the distribution of contaminated soils. The model was subsequently used to develop a robust soil remediation strategy and achieve endorsement from an Environmental Auditor. A change in project scope, following the removal and validation of the three former gasholders, necessitated the additional excavation of a significant volume of residual contaminated rock to allow for the future construction of two-storey underground basements. To assess the financial liabilities associated with the offsite disposal or thermal treatment of material, the 3D model was updated with three years of additional analytical data from the active remediation phase of works.
Chemical concentrations and the residual tar plume within the rock fractures were modelled to pre-classify the in-situ material and enhance separation strategies, preventing unnecessary treatment of material and reducing costs.
Keywords: 3D model, contaminated land, Leapfrog, remediation
Procedia PDF Downloads 136
275 The Mapping of Pastoral Areas as an Ecological Basis for Beef Cattle in Pinrang Regency, South Sulawesi, Indonesia
Authors: Jasmal A. Syamsu, Muhammad Yusuf, Hikmah M. Ali, Mawardi A. Asja, Zulkharnaim
Abstract:
This study was conducted to identify and map pasture as an ecological base for beef cattle. A survey was carried out from April to June 2016 in Suppa, Mattirobulu, in the district of Pinrang, South Sulawesi province. The mapping process for the grazing area was conducted in several stages: inputting and tracking of data points in Google Earth Pro (version 7.1.4.1529); affirmation and confirmation of the tracking lines visualized by satellite, with records taken at certain points; input of point and tracking data into ArcMap (ArcGIS version 10.1); processing of DEM/SRTM data (S04E119) with respect to the location of the grazing areas; creation of a contour map (5 m interval); and production of slope and land cover maps. Analysis of land cover, particularly the state of the vegetation, was done through the NDVI (Normalized Difference Vegetation Index) procedure, making use of Landsat-8 imagery. The results showed that the topography of the grazing areas consists of hills and some sloping and flat surfaces, with elevations varying from 74 to 145 m above sea level (asl), while superior grasses and legumes require an elevation of 143-159 m asl. Slope varied between 0 and >40% and was dominated by slopes of 0-15%, consistent with the recommended maximum pasture slope of 15%. The range of NDVI values for the pasture from the image analysis was between 0.1 and 0.27. The vegetation cover of the pasture land fell into the low vegetation-density category; 70% of the land was available for cattle grazing, while the remaining approximately 30% was groves and forest, including water plants, providing shelter for the cattle during heat and a supply of drinking water. Seven graminae species and five legume species were dominant in the region.
Proportionally, the graminae class dominated at 75.6%, legume crops made up 22.1%, and the remaining 2.3% were other trees growing in the region. The dominant weed species in the region were Chromolaena odorata and Lantana camara; in addition, there were six ground-layer plant species that do not qualify as forage fodder.
Keywords: pastoral, ecology, mapping, beef cattle
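The NDVI procedure reduces to a band ratio, (NIR - Red) / (NIR + Red). A minimal sketch follows; the reflectance values are hypothetical, with Landsat-8 band numbers noted for orientation.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# For Landsat-8 OLI, NIR is band 5 and red is band 4.
# Hypothetical pasture pixel; sparse vegetation gives a low NDVI,
# matching the 0.1-0.27 range reported for this study area.
value = ndvi(nir=0.30, red=0.22)
print(round(value, 3))
```

Dense green vegetation pushes NDVI towards 1, bare soil and water towards 0 or below, which is why the 0.1-0.27 range above reads as low vegetation density.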
Procedia PDF Downloads 355
274 Comparing Quality of Care in Family Planning Services in Primary Public and Private Health Care Facilities in Ethiopia
Authors: Gizachew Assefa Tessema, Mohammad Afzal Mahmood, Judith Streak Gomersall, Caroline O. Laurence
Abstract:
Introduction: Improving access to quality family planning services is key to improving the health of women and children. However, there is currently little evidence on the quality and scope of family planning services provided by private facilities, and how this compares to the services provided in public facilities in Ethiopia. This is important, particularly in determining whether the government should further expand the role of the private sector in the delivery of family planning services. Methods: This study used the 2014 Ethiopian Services Provision Assessment Plus (ESPA+) survey dataset to compare the structural aspects of quality of care in family planning services. The present analysis used a weighted sample of 1093 primary health care facilities (955 public and 138 private). This study employed logistic regression analysis to compare key structural variables between public and private facilities. While taking the structural variables as outcomes for comparison, facility type (public vs private) was used as the key exposure of interest. Results: Comparing the availability of basic amenities (infrastructure), public facilities were less likely to have functional cell phones (AOR=0.12; 95% CI: 0.07-0.21) and water supply (AOR=0.29; 95% CI: 0.15-0.58) than private facilities. However, public facilities were more likely to have staff available 24 hours in the facility (AOR=0.12; 95% CI: 0.07-0.21), providers with family planning-related training in the past 24 months (AOR=4.4; 95% CI: 2.51, 7.64) and guidelines/protocols (AOR=3.1; 95% CI: 1.87, 5.24) than private facilities. Moreover, comparing the availability of equipment, public facilities had higher odds of having a pelvic model for IUD demonstration (AOR=2.60; 95% CI: 1.35, 5.01) and a penile model for condom demonstration (AOR=2.51; 95% CI: 1.32, 4.78) than private facilities.
Conclusion: The present study suggests that the Ethiopian government needs to place emphasis on the private sector by providing family planning guidelines and training on family planning services for their staff. It is also worthwhile for public health facilities to allocate funding for improving the availability of basic amenities. Implications for policy and/or practice: This study calls on policy makers to design appropriate strategies that provide training opportunities for health care providers working in private health facilities.
Keywords: quality of care, family planning, public-private, Ethiopia
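The adjusted odds ratios (AORs) above come from multivariable logistic regression, where AOR = exp(β) for a fitted coefficient β. The sketch below shows the simpler unadjusted odds ratio from a 2x2 table; the counts are hypothetical, not from the ESPA+ dataset.

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    return (a * d) / (b * c)

# Hypothetical counts: water supply present/absent in public (exposed)
# vs private (unexposed) facilities. OR < 1 means lower odds for public.
print(round(odds_ratio(40, 60, 70, 30), 3))

# In logistic regression the adjusted OR is exp(beta); beta = 0 means OR = 1.
print(round(math.exp(0.0), 3))
```

Adjustment matters because the AOR holds the other structural covariates fixed, whereas a raw 2x2 odds ratio does not.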
Procedia PDF Downloads 356
273 Infestation in Omani Date Palm Orchards by Dubas Bug Is Related to Tree Density
Authors: Lalit Kumar, Rashid Al Shidi
Abstract:
Phoenix dactylifera (date palm) is a major crop in many Middle Eastern countries, including Oman. The Dubas bug Ommatissus lybicus is the main pest affecting date palm crops. However, not all plantations are infested, and it is still uncertain why some plantations become infested while others do not. This research investigated whether tree density and the system of planting (random versus systematic) had any relationship with infestation and infestation levels. Remote sensing and geographic information systems were used to determine the density of trees (number of trees per unit area), while infestation levels were determined by manually counting insects on 40 leaflets from two fronds on each tree, with a total of 20-60 trees in each village. Infestation was recorded as the average number of insects per leaflet. For tree density estimation, WorldView-3 scenes, with eight bands and 2 m spatial resolution, were used. The local maxima method, which depends on locating the pixel of highest brightness inside a certain search window, was used to identify and delineate individual trees in the image. This information was then used to determine whether the plantation was random or systematic. Ordinary least squares regression (OLS) was used to test the global correlation between tree density and infestation level, and geographically weighted regression (GWR) was used to find the local spatial relationship. The accuracy of tree detection varied from 83-99% in agricultural lands with systematic planting patterns to 50-70% in natural forest areas. Results revealed that the density of trees in most of the villages was higher than the recommended planting density (120-125 trees/hectare). For infestation correlations, the GWR model showed a good positive significant relationship between infestation and tree density in the spring season, with R² = 0.60, and a medium positive significant relationship in the autumn season, with R² = 0.30.
In contrast, the OLS model results showed a weaker positive significant relationship in the spring season, with R² = 0.02, p < 0.05, and an insignificant relationship in the autumn season, with R² = 0.01, p > 0.05. The results showed a positive correlation between infestation and tree density, which suggests that infestation severity increased as the density of date palm trees increased. The correlation results indicated that density alone accounted for about 60% of the variation in infestation. This information can be used by the relevant authorities to better control infestations as well as to manage their pesticide spraying programs.
Keywords: Dubas bug, date palm, tree density, infestation levels
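The R² values reported for the OLS fit summarize how much infestation variance tree density explains. A minimal sketch of a simple one-predictor least-squares R² follows; the density and insect counts are hypothetical, not the Omani survey data.

```python
def ols_r2(x, y):
    """R-squared of a simple least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical tree densities (trees/ha) and mean insects per leaflet.
density = [100, 120, 140, 160, 180]
insects = [2.1, 2.8, 3.2, 4.1, 4.6]
print(round(ols_r2(density, insects), 3))
```

GWR differs from this global fit by re-estimating the coefficients at each location with distance-weighted neighbours, which is why it can report a much higher local R² (0.60) than the global OLS (0.02).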
Procedia PDF Downloads 193
272 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance
Authors: Rajinder Singh, Ram Valluru
Abstract:
The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of mortgage insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the various alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement/disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period).
The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during the development period spanning many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produce more stable loss forecasts for reserving purposes compared to the traditional CL and BF methods.
Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility
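A logistic (sigmoidal) development curve of the kind described can be written as L(t) = U / (1 + exp(-(t - t0)/s)), with cumulative losses approaching the ultimate U as development age t grows. The parameters below are hypothetical illustrations, not fitted values from the MI dataset.

```python
import math

def cumulative_loss(t, ultimate, midpoint, scale):
    """Logistic (sigmoidal) loss development curve:
    cumulative loss at development age t, approaching `ultimate`."""
    return ultimate / (1.0 + math.exp(-(t - midpoint) / scale))

# Hypothetical fitted parameters for one accident-year cohort.
U, t0, s = 10_000_000, 24, 6  # ultimate ($), midpoint (months), scale (months)

# Reported-to-date loss at 36 months, and the implied development factor.
etd = cumulative_loss(36, U, t0, s)
ldf = U / etd  # factor taking ETD losses to ultimate
print(round(etd), round(ldf, 3))
```

Refitting t0 and s per cohort is what lets the curve absorb the staggered economic effects the abstract describes, since a stressed cohort simply develops with a later midpoint or flatter scale.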
Procedia PDF Downloads 132
271 The Role and Effects of Communication on Occupational Safety: A Review
Authors: Pieter A. Cornelissen, Joris J. Van Hoof
Abstract:
Interest in improving occupational safety began almost simultaneously with the start of the Industrial Revolution. Yet it was not until the late 1970s that the role of communication was considered in scientific research on occupational safety. In recent years the importance of communication as a means to improve occupational safety has increased, not only because communication might have a direct effect on safety performance and safety outcomes, but also because it can be viewed as a major component of other important safety-related elements (e.g., training, safety meetings, leadership). And while safety communication is an increasingly important topic in research, its operationalization is often vague and differs among studies. This is problematic not only when comparing results, but also in applying these results to practice and the work floor. By means of an in-depth analysis building on an existing dataset, this review aims to overcome these problems. The initial database search yielded 25,527 articles, which was reduced to a research corpus of 176 articles. Focusing on the 37 articles of this corpus that addressed communication (related to safety outcomes and safety performance), the current study provides a comprehensive overview of the role and effects of safety communication and outlines the conditions under which communication contributes to a safer work environment. The study shows that in the literature a distinction is commonly made between safety communication (i.e., the exchange or dissemination of safety-related information) and feedback (i.e., a reactive form of communication). And although there is a consensus among researchers that both communication and feedback positively affect safety performance, there is a debate about the directness of this relationship. Whereas some researchers assume a direct relationship between safety communication and safety performance, others state that this relationship is mediated by safety climate.
One of the key findings is that, despite the strongly present view of safety communication as a formal, top-down safety management tool, researchers stress the importance of open communication that encourages and allows employees to express their worries, experiences, and views, and to share information. This raises questions with regard to other directions (e.g., bottom-up, horizontal) and forms of communication (e.g., informal). The current review proposes a framework to overcome the often vague and inconsistent operationalizations of safety communication. The proposed framework can be used to characterize safety communication in terms of stakeholders, direction, and characteristics of communication (e.g., medium usage).
Keywords: communication, feedback, occupational safety, review
Procedia PDF Downloads 303
270 Characterization and Modelling of Groundwater Flow towards a Public Drinking Water Well Field: A Case Study of Ter Kamerenbos Well Field
Authors: Buruk Kitachew Wossenyeleh
Abstract:
Groundwater is the largest freshwater reservoir in the world. Like the other reservoirs of the hydrologic cycle, it is a finite resource. This study focused on groundwater modeling of the Ter Kamerenbos well field to understand the groundwater flow system and the impact of different scenarios. The study area covers 68.9 km² in the Brussels Capital Region and is situated in two river catchments, i.e., the Zenne River and the Woluwe Stream. The aquifer system has three layers, but in the modeling they are treated as one layer because of their hydrogeological properties. The catchment aquifer system is replenished by direct recharge from rainfall. The groundwater recharge of the catchment is determined using the spatially distributed water balance model WetSpass, and it varies annually from zero to 340 mm. This groundwater recharge is used as the top boundary condition for the groundwater modeling of the study area. In the groundwater modeling with Processing MODFLOW, constant-head boundary conditions are used on the north and south boundaries of the study area, and head-dependent flow boundary conditions on the east and west boundaries. The groundwater model is calibrated manually and automatically using observed hydraulic heads in 12 observation wells. The model performance evaluation showed a root mean square error (RMSE) of 1.89 m and a Nash-Sutcliffe efficiency (NSE) of 0.98. The head contour map of the simulated hydraulic heads indicates the flow direction in the catchment, mainly from the Woluwe to the Zenne catchment. The simulated head in the study area varies from 13 m to 78 m. The higher hydraulic heads are found in the southwest of the study area, which has forest as its land-use type. This calibrated model was run for a climate change scenario and a well operation scenario.
Climate change may cause the groundwater recharge to increase by 43% or decrease by 30% by 2100 relative to current conditions, for the high and low climate change scenarios, respectively. The groundwater head varies from 13 m to 82 m for the high climate change scenario, and from 13 m to 76 m for the low scenario. If a doubling of the pumping discharge is assumed, the groundwater head varies from 13 m to 76.5 m; if a shutdown of the pumps is assumed, it varies from 13 m to 79 m. It is concluded that the groundwater model performs satisfactorily, with some limitations, and that the model output can be used to understand the aquifer system under steady-state conditions. Finally, some recommendations are made for the future use and improvement of the model.
Keywords: Ter Kamerenbos, groundwater modelling, WetSpass, climate change, well operation
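The two calibration statistics quoted above are simple to reproduce; the sketch below computes RMSE and the Nash-Sutcliffe efficiency (NSE) from observed and simulated heads at twelve wells, using illustrative head values rather than the study's data:

```python
import math

def rmse(obs, sim):
    # Root mean square error between observed and simulated heads
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Illustrative heads (m) at 12 observation wells, not the study's data
observed  = [13.2, 21.0, 35.5, 40.1, 52.3, 60.0, 66.8, 70.2, 74.9, 78.0, 44.4, 58.1]
simulated = [14.0, 20.1, 36.2, 41.5, 50.9, 61.2, 65.5, 71.0, 73.8, 77.1, 45.0, 57.2]

print(f"RMSE = {rmse(observed, simulated):.2f} m")
print(f"NSE  = {nse(observed, simulated):.3f}")
```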
Procedia PDF Downloads 153
269 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya
Authors: Abdel Rahman Khider Hassan
Abstract:
This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of intrinsic, location-specific parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset included lithology, slope geometry (inclination, aspect, elevation, and curvature), and land use/land cover. The landslide-triggering factors included rainfall as the climatic factor, in addition to the destructive effects reflected by the proximity of roads and the drainage network to areas susceptible to landslides. No published landslide study was found for this area. Digital datasets of the above spatial parameters were therefore acquired, stored, manipulated, and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in an ArcGIS 10.2.2 environment). Landslide hazard zonation was derived by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the total 3200 km² surface of the lake catchment, most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazards, whilst about 13% (416 km²) falls under high hazard. Only 1.0% (32 km²) of the catchment displays very high landslide hazard, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazards. This result confirms the importance of steep slope angles, lithology, vegetation land cover, and slope orientation (aspect) as the major determining factors of slope failures.
The information provided by the produced map of landslide hazard zonation (LHZ) could lay the basis for decision making as well as mitigation and applications in avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha
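The weighted grid-overlay step can be sketched as follows; the factor ratings, weights, and class breaks below are illustrative placeholders, not the values used in the study:

```python
# Each factor grid is reclassified to a common hazard rating, multiplied by
# its weight, and the weighted grids are summed cell by cell (toy 2x2 grids).
slope_rating     = [[3, 4], [1, 2]]   # e.g. steeper cells -> higher rating
lithology_rating = [[2, 3], [2, 1]]
landuse_rating   = [[1, 3], [2, 2]]

weights = {"slope": 0.5, "lithology": 0.3, "landuse": 0.2}  # assumed weights

rows, cols = 2, 2
lhz = [[weights["slope"] * slope_rating[r][c]
        + weights["lithology"] * lithology_rating[r][c]
        + weights["landuse"] * landuse_rating[r][c]
        for c in range(cols)] for r in range(rows)]

def classify(score):
    # Illustrative class breaks for the four hazard zones named in the abstract
    if score < 1.5:
        return "low"
    if score < 2.5:
        return "moderate"
    if score < 3.2:
        return "high"
    return "very high"

zones = [[classify(v) for v in row] for row in lhz]
print(zones)
```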
Procedia PDF Downloads 207
268 A Systematic Map of the Research Trends in Wildfire Management in Mediterranean-Climate Regions
Authors: Renata Martins Pacheco, João Claro
Abstract:
Wildfires are becoming an increasing concern worldwide, causing substantial social, economic, and environmental disruptions. This situation is especially relevant in Mediterranean-climate regions, found on five continents, in which fire is not only a natural component of the environment but also perhaps one of the most important evolutionary forces. The rise in wildfire occurrences and their associated impacts suggests the need to identify knowledge gaps and enhance the basis of scientific evidence on how managers and policymakers may act effectively to address them. Considering that the main goal of a systematic map is to collate and catalog a body of evidence to describe the state of knowledge on a specific topic, it is a suitable approach for this purpose. In this context, the aim of this study is to systematically map the research trends in wildfire management practices in Mediterranean-climate regions. A total of 201 wildfire management studies were analyzed and systematically mapped in terms of their: year of publication; place of study; scientific outlet; research area (Web of Science) or research field (Scopus); wildfire phase; central research topic; main objective of the study; research methods; and main conclusions or contributions. The results indicate that an increasing number of studies are being developed on the topic (most from the last 10 years), but more than half of them were conducted in a few Mediterranean countries (60% of the analyzed studies were conducted in Spain, Portugal, Greece, Italy, or France), and more than 50% focus on pre-fire issues, such as prevention and fuel management. In contrast, only 12% of the studies focused on “Economic modeling” or “Human factors and issues,” which suggests that the triple bottom line of the sustainability argument (social, environmental, and economic) is not being fully addressed by fire management research.
More than one-fourth of the studies had objectives related to testing new approaches in fire or forest management, suggesting that new knowledge is being produced in the field. Nevertheless, the results indicate that most studies (about 84%) employed quantitative research methods, and only 3% used research methods that tackled social issues or addressed expert and practitioner knowledge. Perhaps this lack of multidisciplinary studies is one of the factors hindering more progress from being made in terms of reducing wildfire occurrences and their impacts.
Keywords: wildfire, Mediterranean-climate regions, management, policy
Procedia PDF Downloads 124
267 Historical Tree Height Growth Associated with Climate Change in Western North America
Authors: Yassine Messaoud, Gordon Nigh, Faouzi Messaoud, Han Chen
Abstract:
The effect of climate change on tree growth in boreal and temperate forests has received increased interest in the context of global warming. However, most studies were conducted in small areas and with a limited number of tree species. Here, we examined the height growth responses of seventeen tree species to climate change in western North America. 37,009 stands with varying establishment dates were selected from forest inventory databases in Canada and the USA. Dominant and co-dominant trees from each stand were sampled to determine top tree height at a breast-height age of 50 years. Height was related to historical mean annual and summer temperatures, annual and summer Palmer Drought Severity Index, tree establishment date, slope, aspect, soil fertility as determined by the rate of organic carbon decomposition (carbon/nitrogen), geographic location (latitude, longitude, and elevation), species range (coastal, interior, and both), shade tolerance, and leaf form (needleleaf, deciduous needleleaf, and broadleaf). Climate change had a mostly positive effect on tree height growth. The model explained 62.4% of the height growth variance. Since 1880, the height growth increase has been greater for coastal, highly shade-tolerant, and broadleaf species. Height growth increased more on steep slopes and on highly fertile soils. Greater height growth was mostly observed at the leading edge of species ranges and upward in elevation. Conversely, some species showed the opposite pattern, probably due to increasing drought (coastal Mediterranean area), precipitation and cloudiness (Alaska and British Columbia), and the peculiar topography of western North America (higher latitudes at lower elevations and vice versa).
This study highlights the role of species ecological amplitude and traits, and of geographic location, as the main factors determining the growth response and its magnitude to recent global climate change.
Keywords: height growth, global climate change, species range, species characteristics, species ecological amplitude, geographic locations, western North America
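The variance-explained figure comes from regressing height on the listed covariates; a minimal single-predictor sketch of such a fit and its R², on simulated data (the study's model has many more covariates), is:

```python
import random

# Simulated stands: top height driven by mean annual temperature plus noise.
# Coefficients and noise level are illustrative assumptions.
random.seed(1)
temp = [random.uniform(-2.0, 12.0) for _ in range(200)]            # degrees C
height = [18.0 + 0.9 * t + random.gauss(0, 2.5) for t in temp]     # metres

n = len(temp)
mx, my = sum(temp) / n, sum(height) / n
beta = (sum((x - mx) * (y - my) for x, y in zip(temp, height))
        / sum((x - mx) ** 2 for x in temp))
alpha = my - beta * mx

ss_res = sum((y - (alpha + beta * x)) ** 2 for x, y in zip(temp, height))
ss_tot = sum((y - my) ** 2 for y in height)
r2 = 1 - ss_res / ss_tot
print(f"slope = {beta:.2f} m/degree, R² = {r2:.3f}")
```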
Procedia PDF Downloads 187
266 Private and Public Health Sector Difference on Client Satisfaction: Results from Secondary Data Analysis in Sindh, Pakistan
Authors: Wajiha Javed, Arsalan Jabbar, Nelofer Mehboob, Muhammad Tafseer, Zahid Memon
Abstract:
Introduction: Researchers globally have strived to explore the diverse factors that augment the continuation and uptake of family planning methods. Client satisfaction is one of the core determinants facilitating continuation of family planning methods. There is a major debate yet scanty evidence contrasting the public and private sectors with respect to client satisfaction. The objective of this study is to compare the quality of care provided by the public and private sectors of Pakistan through a client satisfaction lens. Methods: We used the Pakistan Demographic and Health Survey 2012-13 dataset (Sindh province) on a total of 3133 married women of reproductive age (MWRA) aged 15-49 years. Source of family planning (public/private sector) was the main exposure variable. The outcome variable was client satisfaction, judged by ten different dimensions. Means and standard deviations were calculated for continuous variables, while frequencies and percentages were computed for categorical variables. For univariate analysis, the chi-square/Fisher exact test was used to find an association between client satisfaction and the public and private sectors. Ten different multivariate models were built. Variables were checked for multicollinearity, confounding, and interaction, and advanced logistic regression was then used to explore the relationship between client satisfaction and the outcome after adjusting for all known confounding factors; results are presented as ORs and AORs (95% CI). Results: Multivariate analyses showed that clients were less satisfied with contraceptive provision from the private sector than from the public sector (AOR 0.92, 95% CI 0.63-1.68), although the result was not statistically significant.
Clients were more satisfied with the private sector than with the public sector with respect to the other determinants of quality of care: follow-up care (AOR 3.29, 95% CI 1.95-5.55), infection prevention (AOR 2.41, 95% CI 1.60-3.62), counseling services (AOR 2.01, 95% CI 1.27-3.18), timely treatment (AOR 3.37, 95% CI 2.20-5.15), attitude of staff (AOR 2.23, 95% CI 1.50-3.33), punctuality of staff (AOR 2.28, 95% CI 1.92-4.13), timely referral (AOR 2.34, 95% CI 1.63-3.35), staff cooperation (AOR 1.75, 95% CI 1.22-2.51), and complications handling (AOR 2.27, 95% CI 1.56-3.29).
Keywords: client satisfaction, family planning, public private partnership, quality of care
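A univariate odds ratio with its 95% confidence interval, the building block behind the ORs and AORs quoted above, can be computed from a 2×2 table; the counts below are illustrative, not survey data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table.
    a, b = satisfied / unsatisfied (private sector);
    c, d = satisfied / unsatisfied (public sector)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative counts only
or_, lo, hi = odds_ratio_ci(220, 80, 150, 120)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

If the interval excludes 1, the association is statistically significant at the 5% level; the adjusted AORs in the abstract additionally control for confounders via logistic regression.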
Procedia PDF Downloads 420
265 Management as a Proxy for Firm Quality
Authors: Petar Dobrev
Abstract:
There is no agreed-upon definition of firm quality. While profitability and stock performance often qualify as popular proxies of quality, in this project we aim to identify quality without relying on a firm’s financial statements or stock returns as selection criteria. Instead, we use firm-level data on management practices across small to medium-sized U.S. manufacturing firms from the World Management Survey (WMS) to measure firm quality. Each firm in the WMS dataset is assigned a mean management score from 0 to 5, with higher scores identifying better-managed firms. This management score serves as our proxy for firm quality and is the sole criterion we use to separate firms into portfolios of high-quality and low-quality firms. We define high-quality (low-quality) firms as those with a management score one standard deviation above (below) the mean. To study whether this proxy for firm quality can identify better-performing firms, we link these data to Compustat and the Center for Research in Security Prices (CRSP) to obtain firm-level data on financial performance and monthly stock returns, respectively. We find that from 1999 to 2019 (our sample period), firms in the high-quality portfolio are consistently more profitable, with higher operating profitability and return on equity than low-quality firms. In addition, high-quality firms also exhibit a lower risk of bankruptcy, as indicated by a higher Altman Z-score. Next, we test whether the stocks of the firms in the high-quality portfolio earn superior risk-adjusted excess returns. We regress the monthly excess returns of each portfolio on the Fama-French 3-factor, 4-factor, and 5-factor models, the betting-against-beta factor, and the quality-minus-junk factor. We find no statistically significant differences in excess returns between the two portfolios, suggesting that stocks of high-quality (well-managed) firms do not earn superior risk-adjusted returns compared to low-quality (poorly managed) firms.
In short, our proxy for firm quality, the WMS management score, can identify firms with superior financial performance (higher profitability and reduced risk of bankruptcy). However, our management proxy cannot identify stocks that earn superior risk-adjusted returns, suggesting no statistically significant relationship between managerial quality and stock performance.
Keywords: excess stock returns, management, profitability, quality
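The alpha test amounts to regressing a portfolio's monthly excess returns on factor returns and inspecting the intercept; a single-factor sketch on simulated returns (the study uses 3-, 4-, and 5-factor models on CRSP data) is:

```python
import random

# Simulated data: 120 months of a market factor and a portfolio with an
# assumed true alpha of 0.1%/month and beta of 1.1. Illustrative only.
random.seed(7)
mkt = [random.gauss(0.006, 0.04) for _ in range(120)]           # factor returns
port = [0.001 + 1.1 * m + random.gauss(0, 0.01) for m in mkt]   # portfolio returns

# OLS slope (beta) and intercept (alpha) of port on mkt
n = len(mkt)
mx, my = sum(mkt) / n, sum(port) / n
beta = (sum((x - mx) * (y - my) for x, y in zip(mkt, port))
        / sum((x - mx) ** 2 for x in mkt))
alpha = my - beta * mx
print(f"beta = {beta:.3f}, alpha = {alpha * 100:.3f}% per month")
```

A statistically significant positive alpha for the high-quality portfolio would indicate superior risk-adjusted returns; the study finds no such difference between the two portfolios.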
Procedia PDF Downloads 93
264 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery
Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene
Abstract:
Data flows and the purposes of reporting data differ and depend on business needs. Different parameters are reported and transferred regularly during freight delivery. This business practice forms a dataset constructed at each time point, containing all the information required for freight-moving decisions. As a significant amount of these data is used for various purposes, an integrated methodological approach must be developed in response to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and data validation; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and the identification of the statistical significance of data-reporting event cases. The Grubbs test is often used because it tests one extreme value at a time against the bounds of the normal distribution. In the study area, the test has not been widely applied; one exception used the Grubbs test for outlier detection to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select forms of genetic algorithm construction that offer greater possibilities of extracting the best solution. For freight delivery management, genetic algorithm structures are used as a more effective technique. Accordingly, an adaptable genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor.
The authors suggest a methodology for multi-objective analysis that evaluates collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer service in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value for the management of multi-modal transportation processes.
Keywords: multi-objective, analysis, data flow, freight delivery, methodology
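The Grubbs validation step reduces to computing G = max|x_i − x̄|/s and comparing it to a tabulated critical value for the chosen confidence level; a sketch with illustrative fuel-consumption values follows (the critical value shown is an assumed table entry for n = 10 at the 99% level):

```python
import statistics

def grubbs_statistic(data):
    # G = largest absolute deviation from the mean, in units of the
    # sample standard deviation
    mean = statistics.mean(data)
    s = statistics.stdev(data)  # sample (n-1) standard deviation
    return max(abs(x - mean) for x in data) / s

# Illustrative fuel consumption readings (L/100 km), one suspicious value
fuel = [30.1, 29.8, 30.4, 30.0, 29.9, 30.2, 30.3, 29.7, 30.1, 41.5]

g = grubbs_statistic(fuel)
critical = 2.482  # assumed Grubbs table value for n = 10 at 99% confidence
print(f"G = {g:.3f}, outlier flagged: {g > critical}")
```

In practice the test is applied iteratively: the flagged value is removed and the test repeated until no observation exceeds the critical value.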
Procedia PDF Downloads 180
263 Land Suitability Analysis Based on Ecosystems Service Approach for Wind Farm Location in South-Central Chile: Net Primary Production as Proxy
Authors: Yenisleidy Martínez-Martínez, Yannay Casas-Ledón, Jo Dewulf
Abstract:
Wind power constitutes a cleaner energy source, with smaller unfavorable impacts on the environment than fossil fuels. Its development could be an alternative for fighting climate change while meeting energy demand. However, wind energy development first requires determining the existing potential and the areas with aptitude. Potential socio-economic and environmental impacts should also be analyzed to prevent social rejection of this technology. In this context, this work performs a suitability assessment in a GIS environment to locate suitable areas for wind energy expansion in south-central Chile. In addition, suitable areas were characterized in terms of the potential goods and services produced there, as a proxy for analyzing potential impacts and trade-offs. First, layers of annual wind speed were generated to represent the resource potential, and layers representing previously defined territorial constraints were created. Zones depicting territorial constraints were removed from the resource layers to identify suitable sites. Then, the human appropriation of primary production in suitable sites was determined to measure the potential ecosystem services derived from human interventions in those areas. Results show that approximately 52% of the total surface of the study area has good aptitude for installing wind farms. In this area, provisioning services like food crop production, timber, and other forest resources like firewood play a key role in the regional economy and thus are the main cause of human interventions. This is reflected by human appropriation of primary production values of 0.71 kgC/m²·yr, 0.36 kgC/m²·yr, and 0.14 kgC/m²·yr, respectively. In this sense, wind energy development could be compatible with croplands, the predominant land use in suitable areas, and could provide farmers with cheaper energy and extra income.
Also, studies have reported changes in local temperature associated with wind turbines, which could benefit crop growth. The results obtained in this study are useful for identifying available areas for wind development and could support decision-making processes related to energy planning.
Keywords: net primary productivity, provisioning services, suitability assessment, wind energy
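The suitability step, removing constraint zones from the wind-resource layer, can be sketched on toy grids; the threshold and cell values are illustrative assumptions, not the study's criteria:

```python
# Toy 3x3 rasters: a cell is suitable if it meets a wind-speed threshold
# and does not overlap any territorial constraint.
wind_speed = [[6.5, 7.2, 5.8],
              [8.0, 6.9, 4.9],
              [7.5, 5.2, 6.1]]   # annual mean wind speed, m/s (illustrative)
constraints = [[0, 0, 1],
               [0, 1, 0],
               [0, 0, 0]]        # 1 = protected area, settlement, etc.

THRESHOLD = 6.0                  # assumed minimum viable wind speed

suitable = [[wind_speed[r][c] >= THRESHOLD and constraints[r][c] == 0
             for c in range(3)] for r in range(3)]
share = sum(cell for row in suitable for cell in row) / 9
print(f"suitable share of area: {share:.0%}")
```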
Procedia PDF Downloads 158
262 Chemical Pollution of Water: Waste Water, Sewage Water, and Pollutant Water
Authors: Nabiyeva Jamala
Abstract:
We divide water into drinking, mineral, industrial, technical, and thermal-energetic types according to its use and purpose. Drinking water must comply with sanitary requirements and norms in terms of its organoleptic and physico-chemical properties. Mineral water must comply with norms because some of its components have therapeutic properties. Industrial water must meet the normative requirements of its use in industry. Technical water should be suitable for use in agriculture, households, and irrigation, and must meet the corresponding normative requirements. Thermal-energetic water, consisting of thermal and energy water, is used in the national economy. Water is a filter and accumulator of all types of pollutants entering the environment; this is explained by its ability to dissolve mineral and gaseous compounds and by its regular circulation. Ecologically clean, pure, non-toxic water is vital for the normal life activity of humans, animals, and other living beings. Chemical pollutants enter water basins mainly with wastewater from the non-ferrous and ferrous metallurgy, oil, gas, chemical, stone, coal, pulp-and-paper, and timber-processing industries and render the water unusable. Wastewater from the chemical, electric power, woodworking, and machine-building industries plays a huge role in the pollution of water sources. Chlorine compounds, phenols, and chloride-containing substances have strongly toxic, even lethal, effects on organisms when mixed with water. Heavy metals (lead, cadmium, mercury, nickel, copper, selenium, chromium, tin, etc.) mixed with water cause poisoning in humans, animals, and other living beings. Thus, selenium mixed with water causes liver diseases in people, mercury affects the nervous system, and cadmium causes kidney diseases.
Pollution of the world's ocean waters and other water basins with oil and oil products is one of the most dangerous environmental problems facing humanity today. Even the smallest amount of oil or oil products mixed into drinking water gives it a bad, unpleasant smell. One ton of oil mixed with water creates a layer covering the water surface over an area of 2.6 km². As a result, light penetration, photosynthesis, and the oxygen supply of the water are weakened, and the lives of living beings are placed in great danger.
Keywords: chemical pollutants, wastewater, SSAM, polyacrylamide
Procedia PDF Downloads 73
261 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millennium
Authors: Janne Engblom, Elias Oikarinen
Abstract:
The understanding of housing price dynamics is of importance to a great number of agents: portfolio investors, banks, real estate brokers, and construction companies, as well as policy makers and households. A panel dataset follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for the cross-sectional dependence caused by common structures of the economy. In the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of housing prices as the dependent variable and the first differences of per capita income, interest rate, housing stock, and lagged price, together with the deviation of housing prices from their long-run equilibrium level, as independent variables. These deviations were also estimated from the data. The aim of the analysis was to provide estimates and to compare them between 1980-1999 and 2000-2012. Based on data for 50 U.S. cities over 1980-2012, differences in the short-run housing price dynamics estimates were mostly significant when the two time periods were compared. Significance tests of the differences were provided by a model containing interaction terms between the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach.
This indicates a good fit of the CCE estimator model. Estimates of the dynamic panel data model were in line with the theory of housing price dynamics. The results also suggest that the dynamics of the housing market are evolving over time.
Keywords: dynamic model, panel data, cross-sectional dependence, interaction model
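The period comparison can be illustrated by estimating the same short-run slope separately on two simulated subsamples (the study does this more rigorously via interaction terms in a dynamic CCE panel model):

```python
import random

def slope(xs, ys):
    # OLS slope of ys on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Simulated monthly growth rates; true slopes of 0.5 and 1.2 are
# illustrative assumptions, not the study's estimates.
random.seed(3)
x1 = [random.gauss(0.02, 0.01) for _ in range(240)]   # 1980-1999-style sample
y1 = [0.5 * x + random.gauss(0, 0.005) for x in x1]
x2 = [random.gauss(0.02, 0.01) for _ in range(156)]   # 2000-2012-style sample
y2 = [1.2 * x + random.gauss(0, 0.005) for x in x2]

print(f"slope, early period: {slope(x1, y1):.2f}")
print(f"slope, later period: {slope(x2, y2):.2f}")
```

With an interaction-term specification, the difference between the two slopes becomes a single coefficient whose significance can be tested directly.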
Procedia PDF Downloads 252
260 Developing A Third Degree Of Freedom For Opinion Dynamics Models Using Scales
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Opinion dynamics models use an agent-based modeling approach to model people’s opinions. Model properties are usually explored by testing two ‘degrees of freedom’: the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. This can be used to turn one model into another or to change a model’s output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) can be turned into a number. Specifically, we want to know whether, by choosing a different opinion-to-number transformation, the model’s dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called ‘scales.’ These scales can be converted into one another, in the same way as we convert meters to feet. Thus, in our work, we analyze how such a scale transformation may affect opinion dynamics models. We perform our analysis both with mathematical modeling and by validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model’s dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way data are pre-processed. Indeed, we quantify that this effect may alter the model’s output by 100%.
By using two models from the standard literature, we show that a scale transformation can turn one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we argue that the scale transformation should be considered a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
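The effect of a scale transformation can be illustrated with a Deffuant-style bounded-confidence rule: two agents who would interact on the original scale may fall outside the confidence threshold after a monotone rescaling. The values and threshold below are illustrative:

```python
# Bounded-confidence interaction test: agents interact only if their
# opinions differ by at most eps.
def within_confidence(a, b, eps=0.2):
    return abs(a - b) <= eps

raw = (0.50, 0.69)                     # two agents on the original scale
rescaled = tuple(x ** 2 for x in raw)  # a monotone scale transformation

print(within_confidence(*raw))         # would they interact on the raw scale?
print(within_confidence(*rescaled))    # and after rescaling?
```

The transformation is monotone, so it preserves the ordering of opinions, yet it changes which pairs of agents interact, and hence the model's dynamics and final cluster structure.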
Procedia PDF Downloads 155
259 Characterizing Nasal Microbiota in COVID-19 Patients: Insights from Nanopore Technology and Comparative Analysis
Authors: David Pinzauti, Simon De Jaegher, Maria D'Aguano, Manuele Biazzo
Abstract:
The COVID-19 pandemic has left an indelible mark on global health, leading to a pressing need to understand the intricate interactions between the virus and the human microbiome. This study focuses on characterizing the nasal microbiota of patients affected by COVID-19, with a specific emphasis on the comparison with unaffected individuals, to shed light on the role of the microbiome in the development of this viral disease. To achieve this objective, Nanopore technology was employed to analyze the full-length bacterial 16S rRNA gene in nasal swabs collected in Malta between January 2021 and August 2022. A comprehensive dataset consisting of 268 samples (126 SARS-negative and 142 SARS-positive samples) was subjected to a comparative analysis using an in-house custom pipeline. The findings revealed that individuals affected by COVID-19 possess a nasal microbiota that is significantly less diverse, as evidenced by lower alpha diversity, and is characterized by distinct microbial communities compared to unaffected individuals. The beta diversity analyses were carried out at different taxonomic resolutions. At the phylum level, Bacteroidota was found to be more prevalent in SARS-negative samples, suggesting a potential decrease during the course of viral infection. At the species level, the identification of several specific biomarkers further underscores the critical role of the nasal microbiota in COVID-19 pathogenesis. Notably, species such as Finegoldia magna, Moraxella catarrhalis, and others exhibited higher relative abundance in SARS-positive samples, potentially serving as significant indicators of the disease. This study presents valuable insights into the relationship between COVID-19 and the nasal microbiota.
The identification of distinct microbial communities and potential biomarkers associated with the disease offers promising avenues for further research and therapeutic interventions aimed at enhancing public health outcomes in the context of COVID-19.
Keywords: COVID-19, nasal microbiota, nanopore technology, 16S rRNA gene, biomarkers
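The alpha-diversity comparison rests on indices such as Shannon's H′, computed per sample from taxon counts; a minimal sketch with illustrative counts (not the study's 16S profiles) follows:

```python
import math

def shannon(counts):
    # Shannon diversity index H' = -sum(p_i * ln p_i) over taxon proportions
    total = sum(counts)
    props = (c / total for c in counts if c > 0)
    return -sum(p * math.log(p) for p in props)

# Illustrative per-sample taxon counts: a more even community vs. one
# dominated by a single taxon (lower diversity)
negative_sample = [120, 90, 60, 40, 30, 25, 20, 15]
positive_sample = [300, 20, 10, 5, 3, 2]

print(f"H' SARS-negative: {shannon(negative_sample):.2f}")
print(f"H' SARS-positive: {shannon(positive_sample):.2f}")
```

A lower H′ in the positive samples reflects the abstract's finding that the COVID-19 nasal microbiota is less diverse and more dominated by a few taxa.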
Procedia PDF Downloads 71
258 Occupational Safety and Health in the Wake of Drones
Authors: Hoda Rahmani, Gary Weckman
Abstract:
The body of research examining the integration of drones into various industries is expanding rapidly. Despite progress made in addressing the cybersecurity concerns of commercial drones, knowledge deficits remain in determining the potential occupational hazards and risks of drone use to employees’ well-being and health in the workplace. This creates difficulty in identifying key approaches to risk mitigation strategies and thus reflects the need to raise awareness among employers, safety professionals, and policymakers about workplace drone-related accidents. The purpose of this study is to investigate the prevalence of, and possible risk factors for, drone-related mishaps by comparing the application of drones in the construction and manufacturing industries. The chief reason for considering these specific sectors is to ascertain whether there is any significant difference between indoor and outdoor flights, since construction sites mostly fly drones outdoors while manufacturing facilities mostly fly them indoors. Therefore, the current research seeks to examine the causes and patterns of workplace drone-related mishaps and to suggest possible ergonomic interventions through data collection. Potential ergonomic practices to mitigate hazards associated with flying drones could include providing operators with professional training, conducting risk analyses, and promoting the use of personal protective equipment. For data analysis, two data mining techniques, the random forest and association rule mining algorithms, will be applied to find meaningful associations and trends in the data, as well as influential features that affect the occurrence of drone-related accidents in the construction and manufacturing sectors. In addition, Spearman’s correlation and chi-square tests will be used to measure possible correlations between different variables.
Indeed, by recognizing risks and hazards, occupational safety stakeholders will be able to pursue data-driven and evidence-based policy change with the aim of reducing drone mishaps, increasing productivity, creating a safer work environment, and extending human performance in safe and fulfilling ways. This research study was supported by the National Institute for Occupational Safety and Health through the Pilot Research Project Training Program of the University of Cincinnati Education and Research Center Grant #T42OH008432.
Keywords: commercial drones, ergonomic interventions, occupational safety, pattern recognition
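The association-rule step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the study's pipeline: the accident records, field names, and thresholds below are invented assumptions, and a real analysis would use a proper Apriori/FP-growth implementation on the collected data.

```python
# Hypothetical sketch of association rule mining over toy drone-accident
# records; records and thresholds are illustrative, not data from the study.
from itertools import combinations

def support(records, itemset):
    """Fraction of records containing every item in itemset."""
    itemset = set(itemset)
    return sum(itemset <= r for r in records) / len(records)

def rules(records, min_support=0.3, min_confidence=0.7):
    """Enumerate one-to-one rules lhs -> rhs meeting both thresholds."""
    items = set().union(*records)
    found = []
    for a, b in combinations(sorted(items), 2):
        for lhs, rhs in ((a, b), (b, a)):
            sup_ab = support(records, {lhs, rhs})
            sup_lhs = support(records, {lhs})
            if sup_ab >= min_support and sup_ab / sup_lhs >= min_confidence:
                found.append((lhs, rhs, sup_ab, sup_ab / sup_lhs))
    return found

# Invented accident records: sector, flight environment, mishap type
records = [
    {"construction", "outdoor", "collision"},
    {"construction", "outdoor", "battery-failure"},
    {"manufacturing", "indoor", "collision"},
    {"construction", "outdoor", "collision"},
]
for lhs, rhs, sup, conf in rules(records):
    print(f"{lhs} -> {rhs}  support={sup:.2f} confidence={conf:.2f}")
```

On this toy data the only rules surviving both thresholds link "construction" and "outdoor", mirroring the indoor/outdoor contrast the study sets out to test.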
Procedia PDF Downloads 211
257 Community Policing Interventions in the Tribal Hamlets as a Positive Criminal Justice and Social Justice Strategy: A Study Based on the Community Policing Project of the Government of Kerala
Authors: Bharathadas Sandhya
Abstract:
Janamaithri Suraksha Project is the community policing project of the Kerala police, fully sponsored by the Government of Kerala and in operation in Kerala for the last ten years. The socio-economically weaker areas in the hilly terrains, consisting of tribal hamlets, are given special importance under the project. These hamlets are visited by beat police officers, who intervene in various issues in the hamlets. This study is based on data collected from 350 respondents living in the tribal hamlets of the Nilambur area in the District of Malappuram. The respondents were personally interviewed by the research team using a questionnaire of 183 questions covering their interaction with beat police officers, the officers’ ability to prevent or detect crimes, the menace of Maoist (extremist) presence, and interventions in other socio-economic problems such as alcoholism, school dropout, and the lack of facilities for educated youth to prepare for competitive examinations. The perception of the tribal population regarding the effectiveness of police intervention in their criminal justice complaints, and the attitude of police officers towards the tribal population when they approach the police station with a criminal complaint, are also studied. The general socio-economic problems of the tribal population as perceived by them are also brought out. As the visible agency of the government, the police officer coming on beat duty to the hamlet is generally seen by the tribal population as a representative to whom they can communicate issues, even if their solution rests with another department, such as forest or agriculture. The analysis of the primary data is carried out using computer applications. The social justice benefits the tribal hamlets received through various government schemes, and their deficiencies, are brought out in the study. 
From the conclusions of the study, certain suggestions for positive criminal justice and social justice intervention strategies are made. The need for various government departments to work in tandem to make socio-economic projects more effective is evident from the study. Whether it is the need for transport to school, a drinking water problem, or even opening a bank account, the visiting beat police officer is, at least occasionally, of help to the tribal population. Mostly, the tribal population feels free to approach the police with a criminal complaint without any inhibitions.
Keywords: community policing, beat police officer, criminal justice, social justice
Procedia PDF Downloads 154
256 High-Throughput Artificial Guide RNA Sequence Design for Type I, II and III CRISPR/Cas-Mediated Genome Editing
Authors: Farahnaz Sadat Golestan Hashemi, Mohd Razi Ismail, Mohd Y. Rafii
Abstract:
A huge revolution has emerged in genome engineering with the discovery of CRISPR (clustered regularly interspaced short palindromic repeats) and CRISPR-associated (Cas) genes in bacteria. The function of the type II Streptococcus pyogenes (Sp) CRISPR/Cas9 system has been confirmed in various species. Other S. thermophilus (St) CRISPR-Cas systems, CRISPR1-Cas and CRISPR3-Cas, have also been reported to prevent phage infection. The CRISPR1-Cas system interferes by cleaving foreign dsDNA entering the cell in a length-specific and orientation-dependent manner. The S. thermophilus CRISPR3-Cas system also acts by cleaving phage dsDNA genomes at the same specific position inside the targeted protospacer as observed in the CRISPR1-Cas system. It is worth mentioning that, for effective DNA cleavage activity, RNA-guided Cas9 orthologs require their own specific PAM (protospacer adjacent motif) sequences, and activity levels depend on the protospacer sequence and on specific combinations of favorable PAM bases. Therefore, given the specific length and sequence of the PAM followed by a constant-length target site for the three Cas9 orthologs, a well-organized procedure is required for high-throughput and accurate mining of possible target sites in a large genomic dataset. Consequently, we created a reliable procedure to explore potential gRNA sequences for the type I (Streptococcus thermophilus), II (Streptococcus pyogenes), and III (Streptococcus thermophilus) CRISPR/Cas systems. To mine CRISPR target sites, four different searching modes of sgRNA binding to the target DNA strand were applied: i) coding strand searching, ii) anti-coding strand searching, iii) both strand searching, and iv) paired-gRNA searching. The output of this procedure highlights the power of comparative genome mining for different CRISPR/Cas systems. 
This could yield a repertoire of Cas9 variants with expanded gRNA design capabilities and will pave the way for further advanced genome and epigenome engineering.
Keywords: CRISPR/Cas systems, gRNA mining, Streptococcus pyogenes, Streptococcus thermophilus
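The coding- and anti-coding-strand searching modes described above can be sketched as a PAM scan over both strands. This is a minimal illustration under stated assumptions: the NGG PAM is the SpCas9 (type II) case, the 20-nt protospacer length and demo sequence are invented, and a real pipeline would add the ortholog-specific PAMs, paired-gRNA mode, and off-target scoring.

```python
# Hedged sketch of target-site mining with an NGG PAM (SpCas9-style);
# sequence and protospacer length are illustrative assumptions.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(COMP)[::-1]

def find_targets(seq, pam="NGG", length=20):
    """Return (strand, position, protospacer) hits on both strands."""
    hits = []
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for i in range(len(s) - length - len(pam) + 1):
            cand = s[i + length:i + length + len(pam)]
            # 'N' matches any base; other PAM letters must match exactly
            if all(p == "N" or p == c for p, c in zip(pam, cand)):
                hits.append((strand, i, s[i:i + length]))
    return hits

demo = "A" * 20 + "TGG" + "C" * 5  # one protospacer followed by a TGG PAM
for strand, pos, proto in find_targets(demo):
    print(strand, pos, proto)
```

Swapping the `pam` argument (e.g. for the St CRISPR1 and CRISPR3 motifs) is all that changes between the Cas9 orthologs in this simplified view.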
Procedia PDF Downloads 257
255 Analysis of Extreme Rainfall Trends in Central Italy
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Marco Cifrodelli, Corrado Corradini
Abstract:
The trend of the magnitude and frequency of extreme rainfalls seems to differ depending on the investigated area of the world. In this work, the impact of climate change on extreme rainfalls in Umbria, an inland region of central Italy, is examined using data recorded during the period 1921-2015 by 10 representative rain gauge stations. The study area is characterized by a complex orography, with altitudes ranging from 200 to more than 2000 m a.s.l. The climate varies widely from zone to zone, with mean annual rainfall ranging from 650 to 1450 mm and mean annual air temperature from 3.3 to 14.2 °C. Over the past 15 years, this region has been affected by four significant droughts as well as by six dangerous flood events, all with very large economic impact. A least-squares linear trend analysis of annual maxima over 60 time series, selected considering 6 different durations (1 h, 3 h, 6 h, 12 h, 24 h, 48 h), showed about 50% positive and 50% negative cases. For the same time series, the non-parametric Mann-Kendall test at the 0.05 significance level evidenced a negative trend in only 3% of cases and no positive case. Further investigations demonstrated that the variance and covariance of each time series can be considered almost stationary. Therefore, the analysis of the magnitude of extreme rainfalls indicates that no evident trend exists in the Umbria region. However, the frequency of rainfall events with particularly high rainfall depths occurring within a fixed period also has to be considered. For all selected stations, the 2-day rainfall events exceeding 50 mm were counted for each year, from the first monitored year to the end of 2015. This analysis, too, did not show predominant trends. 
Specifically, for all selected rain gauge stations, the annual number of 2-day rainfall events exceeding the threshold value (50 mm) was slowly decreasing in time, while the annual cumulated rainfall depths corresponding to the same events showed trends that were not statistically significant. Overall, using a wide available dataset and adopting simple methods, no influence of climate change on heavy rainfalls in the Umbria region is detected.
Keywords: climate change, rainfall extremes, rainfall magnitude and frequency, central Italy
Procedia PDF Downloads 236
254 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Load forecasting has become crucial in recent years and is now a popular topic in the forecasting area. Many different power forecasting models have been tried for this purpose. Electricity load forecasting is necessary for energy policies and for healthy, reliable grid systems. Effective forecasting of renewable energy load enables decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows models with multiple layers to learn representations of data, and LSTM networks are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources, for example by predicting wind and solar power. Historical load and weather information are the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution, using publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out on these data with a deep neural network approach, including the LSTM technique, for the Turkish electricity market. 432 different models were created by varying the layer count, cell count, and dropout rate. The adaptive moment estimation (ADAM) algorithm was used as a gradient-based optimizer for training instead of SGD (stochastic gradient descent); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (mean absolute error) and MSE (mean squared error). 
Among the 432 tested models, the best MAE values are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models is successful compared with results reported in the literature.
Keywords: deep learning, long short-term memory, energy, renewable energy load forecasting
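The ADAM-versus-SGD comparison mentioned above can be illustrated on a toy problem. This is a hedged sketch, not the study's training setup: the quadratic loss, starting point, and learning rate are invented, and the hyperparameters are the standard ADAM defaults rather than the values tuned across the 432 models.

```python
# Sketch of the ADAM update next to plain SGD on a toy 1-D loss
# L(x) = (x - 3)^2; hyperparameters are standard defaults, not the study's.
import math

def adam_minimize(grad, x0, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    """Minimize a 1-D function given its gradient, using the ADAM update."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g        # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

def sgd_minimize(grad, x0, lr=0.05, steps=2000):
    """Plain gradient descent for comparison."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

grad = lambda x: 2 * (x - 3)  # gradient of (x - 3)^2; both should approach 3
print(adam_minimize(grad, 0.0), sgd_minimize(grad, 0.0))
```

The per-coordinate step scaling by the second-moment estimate is what gives ADAM its faster convergence on badly scaled problems, consistent with the behavior the abstract reports.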
Procedia PDF Downloads 267