Search results for: Simon Yobanny Reyes Lopez

110 The Quality of Life, Situations and Emerging Concerns of Parents of Children with Neurodevelopmental Disorders in Philippine Children's Medical Center during the Covid-19 Pandemic

Authors: Annelyn Fatima Lopez, Ermenilda Avendano, Aileen Marie Vargas, Lara Baylon, Rorilee Angeles

Abstract:

BACKGROUND: The COVID-19 pandemic resulted in a public health emergency and quarantine measures that may negatively impact psychosocial and environmental aspects of vulnerable populations. OBJECTIVES: This study intended to determine the quality of life, situations and emerging concerns of parents of children with neurodevelopmental disorders during the ongoing coronavirus pandemic. METHODOLOGY: Parents of patients seen in the PCMC Neurodevelopmental Pediatrics OPD clinic were recruited to fill out questionnaires on parent and child characteristics, a survey on situations and emerging concerns during the coronavirus pandemic, and the WHOQOL-BREF (Filipino version) for parental quality of life. RESULTS: Data from 115 respondents showed a lower score in the environmental domain. The child characteristics significantly associated with QoL scores were sex and severity of ID and ADHD, while the parent characteristics significantly associated with QoL scores were educational attainment, monthly family income, father's employment status and family structure (P-value <0.05). Most respondents reported physical distancing (82.61%) and curfew (80.87%) as measures implemented due to the pandemic. Inability to access essential services (43.48-74.48%) was further compounded by limited financial resources (51.30%) and public transport (60%). Government responses received included quarantine passes (90.43%), food allowances or relief packages (86.09%), disinfection (60.87%), DSWD-SAP (42.61%) and cash distribution (41.74%). Concerns encountered included socio-environmental issues (i.e., no available transportation, effects on the ability to earn, inadequate food/medicine rations, disruptions in basic social services) and patient concerns (i.e., access to education, medical, developmental and behavioral services, nutrition and sleep). RECOMMENDATIONS: Programs and policies should be planned accordingly to improve quality of life for both parents and children with neurodevelopmental disorders.

Keywords: covid-19, neurodevelopmental disorder, parental quality of life, whoqol-bref

Procedia PDF Downloads 195
109 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is considered more holistic. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the Normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is taken as the reference and DEM05 as the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov statistical test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%). Once the process has been calibrated, it can be applied to compare the similarity of different DEM datasets (e.g. the DEM05 versus the DEM02). In summary, an innovative alternative for the comparison of DEM datasets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables.
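To make the proposed procedure concrete, the following minimal sketch applies per-variable two-sample Kolmogorov-Smirnov tests and a single joint test on the sum of the variables (whose distribution is the convolution of the individual distributions, matching the approach described). The arrays stand in for DEM-derived samples; all names and data are illustrative, not taken from the DEM02/DEM05 products.

```python
# Minimal sketch of the multivariate KS comparison; data are stand-ins.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for elevation, slope and aspect rasters flattened to 1-D samples
# (in practice these would be read from the reference and product DEMs).
ref = {v: rng.normal(size=10_000) for v in ("elevation", "slope", "aspect")}
prod = {v: rng.normal(size=10_000) for v in ("elevation", "slope", "aspect")}

# Per-variable two-sample Kolmogorov-Smirnov tests.
for v in ref:
    stat, p = ks_2samp(ref[v], prod[v])
    print(f"{v}: D={stat:.4f}, p={p:.4f}")

# Joint test in the spirit of the paper: a single KS test on the sum of the
# variables (convolution of their distribution functions), which avoids
# Bonferroni-style corrections for multiple tests.
print(ks_2samp(sum(ref.values()), sum(prod.values())))
```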

Keywords: data quality, DEM, kolmogorov-smirnov test, multivariate DEM comparison

Procedia PDF Downloads 101
108 Reducing Flood Risk through Value Capture and Risk Communication: A Case Study in Cocody-Abidjan

Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama

Abstract:

Abidjan city (Republic of Ivory Coast) is an emerging megacity and an urban coastal area where the number of reported floods is increasing rapidly due to climate change and unplanned urbanization. However, comprehensive disaster mitigation plans, policies and financial resources are still lacking, and the population is unaware of the extent and location of the flood zones, leaving them unprepared to mitigate the damage. Considering this existing condition, this paper discusses an approach for flood risk reduction in Cocody Commune through a value capture strategy and flood risk communication. Using geospatial techniques and hydrological simulation, we begin by delineating flood zones and depths under several return periods in the study area. Then a questionnaire-based field survey is conducted in order to validate the flood maps, estimate the flood risk and collect a sample of residents' opinions on how flood risk information disclosure could affect the values of property located inside and outside the flood zones. The results indicate that the study area is highly vulnerable to floods of 5-year return period and above, which can cause serious harm to human lives and property, as demonstrated by the extent of the 5-year flood of 2014. They also reveal a high probability that the values of property located within flood zones could decline, while the values of surrounding property in safe areas could increase, once risk information disclosure commences. However, in order to raise public awareness of flood disaster and to prevent future housing development in high-risk areas, flood risk information should be disseminated through the establishment of an early warning system. In order to reduce the effect of risk information disclosure and to protect the values of property within the high-risk zone, we propose that property tax increments in flood-free zones be captured and utilized for infrastructure development and to maintain the early warning system that will benefit people living in flood-prone areas. Through this case study, it is shown that the combination of a value capture strategy and risk communication could be an effective tool to educate citizens and to invest in flood risk reduction in emerging countries.

Keywords: Cocody-Abidjan, flood, geospatial techniques, risk communication, value capture

Procedia PDF Downloads 256
107 Counting Fishes in Aquaculture Ponds: Application of Imaging Sonars

Authors: Juan C. Gutierrez-Estrada, Inmaculada Pulido-Calvo, Ignacio De La Rosa, Antonio Peregrin, Fernando Gomez-Bravo, Samuel Lopez-Dominguez, Alejandro Garrocho-Cruz, Jairo Castro-Gutierrez

Abstract:

Semi-intensive aquaculture in traditional earth ponds is the main rearing system in Southern Spain. These fish rearing systems account for approximately two thirds of aquatic production in this area, which has made a significant contribution to the regional economy in recent years. In this type of rearing system, a crucial aspect is the correct quantification and control of fish abundance in the ponds, because the fish farmer knows how many fish are stocked in the ponds but not how many will be harvested at the end of the rearing period. This is a consequence of mortality induced by different causes, such as pathogenic agents (parasites, viruses and bacteria) and other factors such as predation by fish-eating birds and poaching. Tracking fish abundance in these installations is very difficult because the ponds usually occupy a large area of land and the management of the water flow is not automated. Therefore, there is a very high degree of uncertainty about fish abundance, which strongly hinders the management and planning of sales. A novel and non-invasive procedure to count fish in the ponds is by means of imaging sonars, particularly fixed systems and/or systems linked to aquatic vehicles such as Remotely Operated Vehicles (ROVs). In this work, a method based on census station procedures is proposed to evaluate the accuracy of fish abundance estimation using images obtained from multibeam sonars. The results indicate that it is possible to obtain a realistic approximation of the number of fish, their sizes and therefore the biomass contained in the ponds. This research is included in the framework of the KTTSeaDrones Project ('Conocimiento y transferencia de tecnología sobre vehículos aéreos y acuáticos para el desarrollo transfronterizo de ciencias marinas y pesqueras 0622-KTTSEADRONES-5-E') financed by the European Regional Development Fund (ERDF) through the Interreg V-A Spain-Portugal Programme (POCTEP) 2014-2020.
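The abstract does not detail the extrapolation arithmetic; the sketch below shows one common census-station-style calculation, density per ensonified volume scaled up to the pond, with entirely hypothetical numbers.

```python
# Hedged back-of-the-envelope census-station extrapolation; all values
# and names are hypothetical, not taken from the project.
counts_per_station = [42, 55, 38, 61]   # fish counted in the sonar images
sampled_volume_m3 = 120.0               # water volume ensonified per station
pond_volume_m3 = 15_000.0               # total pond volume
mean_fish_weight_kg = 0.35              # e.g. from a sample of harvested fish

mean_density = (sum(counts_per_station) / len(counts_per_station)) / sampled_volume_m3
abundance = mean_density * pond_volume_m3
biomass_kg = abundance * mean_fish_weight_kg
print(f"Estimated abundance: {abundance:.0f} fish, biomass: {biomass_kg:.0f} kg")
```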

Keywords: census station procedure, fish biomass, semi-intensive aquaculture, multibeam sonars

Procedia PDF Downloads 205
106 Effect of Cooking Process on the Antioxidant Activity of Different Variants of Tomato-Based Sofrito

Authors: Ana Beltran Sanahuja, A. Valdés García, Saray Lopez De Pablo Gallego, Maria Soledad Prats Moya

Abstract:

Tomato consumption has greatly increased worldwide in the last few years, mostly due to a growing demand for products like sofrito. Regular consumption of tomato-based products has been consistently associated with a reduction in the incidence of chronic degenerative diseases. Sofrito is a homemade tomato sauce typical of the Mediterranean area, whose main ingredients are tomato, onion, garlic and olive oil. There are also variations of sofrito that add other spices, which bring not only color, flavor, smell and aroma but also medicinal properties, due to their antioxidant power. This protective effect has mainly been attributed to the predominant bioactive compounds present in sofrito, such as lycopene and other carotenoids, as well as more than 40 different polyphenols. Regarding the cooking process, it is known that cooking can modify the properties and the availability of nutrients in sofrito; however, there is not enough information on this issue. For this reason, the aim of the present work is to evaluate the effect of cooking on the antioxidant capacity of different variants of tomato-based sofrito combined with other spices, through the analysis of total phenolic content (TPC) and of the antioxidant capacity measured with the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free-radical method. Based on the results obtained, the basic sofrito composed of tomato, onion, garlic and olive oil and the sofrito with 1 g of rosemary added are the ones with the highest phenolic content, presenting greater antioxidant power than an industrial sofrito and than the other sofrito variants with added thyme or higher amounts of garlic. Moreover, it has been observed that tomato-based sofrito can be cooked for up to 60 minutes, since the cooking process increases the bioavailability of the carotenoids by breaking the cell walls, which weakens the binding forces between the carotenoids and increases the levels of antioxidants present, as confirmed by both the TPC and DPPH methods. It can be concluded that the cooking process of different variants of tomato-based sofrito, including spices, can improve the antioxidant capacity. The synergistic effects of different antioxidants may have a greater protective effect and may also increase the digestibility of proteins. In addition, antioxidants help to deactivate the free radicals involved in conditions such as atherosclerosis, aging, immune suppression, cancer, and diabetes.
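The working equations are not given in the abstract; the snippet below shows the standard DPPH radical-scavenging calculation commonly used in such studies, with made-up absorbance values.

```python
# Standard DPPH percent-inhibition formula (an assumption: the abstract does
# not state its equations); absorbance values are invented for illustration.
def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent inhibition of the DPPH radical: (Ac - As) / Ac * 100."""
    return (a_control - a_sample) / a_control * 100.0

print(dpph_inhibition(a_control=0.85, a_sample=0.31))  # -> 63.5% scavenging
```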

Keywords: antioxidants, cooking process, phenols, sofrito

Procedia PDF Downloads 128
105 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB occurs when the bacteria become resistant to the drugs used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based and take a long time to identify the resistant bacteria so that the patient can be treated accordingly. Machine learning (ML) and data science can offer new approaches to the problem. In this study, we propose to develop an ML-based model to predict the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study used whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository, as training data. Samples from different countries were included in order to represent the large group of TB isolates from different regions of the world; this exposes the model to different behaviors of the TB bacteria and makes it robust. Model training considered three types of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and resistance-associated gene information for the particular drug, which was used as the class label. Two major datasets were constructed from this information, with F1 and F2 treated as two independent datasets. Five machine learning algorithms were considered: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost. The models were trained on the datasets F1, F2, and F1F2, i.e., the F1 and F2 datasets merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were each run through the gradient boosting algorithm, their outputs were combined into a so-called F1F2 ensemble dataset, and models were then trained on this dataset with the five algorithms. As the experiments show, the ensemble model trained with the Gradient Boosting algorithm outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, to predict the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
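A sketch of the stacking scheme as we read it from the abstract: gradient boosting is run on F1 and F2 separately, and their outputs form an "F1F2 ensemble" dataset for a second-stage learner. The feature matrices below are random stand-ins for the real WGS-derived variant data, and the exact stacking details are our assumption.

```python
# Two-stage ensemble sketch; data are synthetic stand-ins for WGS features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 500
X_f1 = rng.integers(0, 2, size=(n, 200))  # variants within candidate genes (F1)
X_f2 = rng.integers(0, 2, size=(n, 50))   # predetermined resistance variants (F2)
y = rng.integers(0, 2, size=n)            # resistant / susceptible label

# First stage: out-of-fold probability predictions to avoid leakage.
p1 = cross_val_predict(GradientBoostingClassifier(), X_f1, y,
                       cv=5, method="predict_proba")[:, 1]
p2 = cross_val_predict(GradientBoostingClassifier(), X_f2, y,
                       cv=5, method="predict_proba")[:, 1]

# Second stage: the "F1F2 ensemble" dataset feeds a random forest.
X_ens = np.column_stack([p1, p2])
stacker = RandomForestClassifier(n_estimators=200).fit(X_ens, y)
print("training accuracy:", stacker.score(X_ens, y))
```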

Keywords: machine learning, MTB, WGS, drug-resistant TB

Procedia PDF Downloads 34
104 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics about economic growth have little in common in their preference for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth and what the arguments for and against growth-enhancing or degrowth policies are for each side. Second, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part reviews the literature on growth-enhancing policies and argues that large parts of it already accept that polycentric forms of governance, such as markets, the rule of law and federalism, are an important part of economic growth. Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that de-grows, will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced here. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and provides an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether one belongs to the Simons or the Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 167
103 Developing an Intervention Program to Promote Healthy Eating in a Catering System Based on Qualitative Research Results

Authors: O. Katz-Shufan, T. Simon-Tuval, L. Sabag, L. Granek, D. R. Shahar

Abstract:

Meals provided by catering systems are a common source of workers' nutrition and have been found to contribute high amounts of calories and fat. Thus, eating catered food daily can lead to overweight and chronic diseases. On the other hand, the institutional dining room may be an ideal environment for implementing intervention programs that promote healthy eating. This may improve diners' lifestyles and reduce the prevalence of overweight, obesity and chronic diseases. The significance of this study lies in developing an intervention program based on the diners' dietary habits, preferences and attitudes towards various intervention programs. In addition, a successful catering-based intervention program may have a significant simultaneous effect on a large group of diners, leading to improved nutrition, a healthier lifestyle, and disease prevention on a large scale. In order to develop the intervention program, we conducted a qualitative study. We interviewed 13 diners who eat regularly at catering systems, using a semi-structured interview. The interviews were recorded, transcribed and then analyzed by the thematic method, which identifies, analyzes and reports themes within the data. The interviews revealed several major themes, including the expectation of diners to be provided with healthy food choices; their request for nutrition-expert involvement in planning the meals; and the diners' feeling that there is a conflict between the sensory attractiveness of the food and its nutritional quality. In the context of catering-based intervention programs, the diners prefer scientific and clear messages focusing on labeling healthy dishes only, as opposed to labeling unhealthy dishes; they were also interested in a nutrition education program to accompany the intervention program. Based on these findings, we developed an intervention program that includes changes to the food served, such as replacing several menu items and nutritionally improving some of the recipes, as well as environmental changes, such as relocating some food items presented on the buffet, placing positive nutritional labels on healthy dishes and running an ongoing healthy nutrition campaign, all accompanied by a nutrition education program. The intervention program is currently being tested for its impact on health outcomes and its cost-effectiveness.

Keywords: catering system, food services, intervention, nutrition policy, public health, qualitative research

Procedia PDF Downloads 181
102 Towards Creative Movie Title Generation Using Deep Neural Models

Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie

Abstract:

Deep machine learning techniques, including deep neural networks (DNN), have been used to model language and dialogue for conversational agents that perform tasks such as giving technical support, as well as general chit-chat. They have been shown to be capable of generating long, diverse and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate the concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby a human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some additional information, e.g. the genre of the movie, and learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that the human movie titler may deploy but that may not be immediately obvious to the human eye. To give an example of a generated movie title, for the movie synopsis 'A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.', the original, true title is 'The Driver' and the one generated by the model is 'The Masquerade'. A human evaluation was conducted in which the DNN output was compared to the true human-generated title, as well as a number of baselines, on three 5-point Likert scales: 'creativity', 'naturalness' and 'suitability'. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means m=3.11 and m=3.12, respectively. There is room for improvement in these models, as they were rated significantly less 'natural' and 'suitable' than the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results, however, are encouraging given the comparison with a highly considered, well-crafted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups, who have watched the movie. This process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audience's attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.
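As a rough illustration of the model class described, here is a minimal encoder-decoder (seq2seq) skeleton in PyTorch; the vocabulary size, dimensions and the absence of genre conditioning are placeholders, and the authors' actual architecture is not specified at this level of detail.

```python
# Minimal seq2seq skeleton: description tokens in, title tokens out.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size=5000, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))           # encode description
        dec_out, _ = self.decoder(self.embed(tgt), state)  # decode the title
        return self.out(dec_out)                           # per-token logits

model = Seq2Seq()
src = torch.randint(0, 5000, (8, 60))  # batch of tokenized movie descriptions
tgt = torch.randint(0, 5000, (8, 6))   # batch of tokenized titles (teacher forcing)
logits = model(src, tgt)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5000), tgt.reshape(-1))
print(loss.item())
```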

Keywords: creativity, deep machine learning, natural language generation, movies

Procedia PDF Downloads 314
101 Anaerobic Co-digestion in Two-Phase TPAD System of Sewage Sludge and Fish Waste

Authors: Rocio López, Miriam Tena, Montserrat Pérez, Rosario Solera

Abstract:

Biotransformation of organic waste into biogas is considered an interesting alternative for the production of clean energy from renewable sources, since it reduces the volume and organic content of the waste. Anaerobic digestion is considered one of the most efficient technologies to transform waste into fertilizer and biogas, in order to obtain electrical energy or biofuel within the concept of the circular economy. Currently, three types of anaerobic processes have been developed on a commercial scale: (1) single-stage processes, where sludge bioconversion is completed in a single chamber; (2) two-stage processes, where the acidogenic and methanogenic stages are separated into two chambers; and (3) temperature-phased anaerobic digestion (TPAD), which combines a thermophilic pretreatment unit with subsequent mesophilic anaerobic digestion. Two-stage processes can provide hydrogen and methane with easier control of the first- and second-stage conditions, producing higher total energy recovery and substrate degradation than single-stage processes. Co-digestion, on the other hand, is the simultaneous anaerobic digestion of a mixture of two or more substrates. The technology is similar to anaerobic digestion but is a more attractive option, as it produces increased methane yields due to the positive synergism of the mixtures in the digestion medium, thus increasing the economic viability of biogas plants. The present study focuses on energy recovery by anaerobic co-digestion of sewage sludge and waste from the aquaculture-fishing sector. Valorization is approached through the application of TPAD (Temperature-Phased Anaerobic Digestion) technology combined with a two-phase separation of the microorganisms. The selected process thus develops a thermophilic acidogenic phase followed by a mesophilic methanogenic phase, to obtain hydrogen (H₂) in the first stage and methane (CH₄) in the second. The combination of these technologies unifies the individual advantages of each of these anaerobic digestion processes. To achieve these objectives, a sequential study has been carried out in which a biochemical hydrogen potential (BHP) test is followed by a BMP test, which allows checking the feasibility of the two-stage process. The best results obtained were high total and soluble COD removals (59.8% and 82.67%, respectively) as well as production rates of 12 L H₂/kg SV added and 28.76 L CH₄/kg SV added for TPAD.
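The reported yields can be translated into an order-of-magnitude energy figure; the lower heating values below are textbook approximations we assume for illustration, not values from the paper.

```python
# Rough energy-recovery arithmetic for the reported gas yields.
h2_yield = 12.0      # L H2 per kg SV added (reported)
ch4_yield = 28.76    # L CH4 per kg SV added (reported)
LHV_H2 = 10.8        # MJ per m^3, lower heating value (assumed)
LHV_CH4 = 35.8       # MJ per m^3, lower heating value (assumed)

energy_mj_per_kg_sv = (h2_yield / 1000) * LHV_H2 + (ch4_yield / 1000) * LHV_CH4
print(f"~{energy_mj_per_kg_sv:.2f} MJ per kg SV added")  # ~1.16 MJ/kg SV
```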

Keywords: anaerobic co-digestion, TPAD, two-phase, BHP, BMP, sewage sludge, fish waste

Procedia PDF Downloads 140
100 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English

Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista

Abstract:

The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, as well as to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English-English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most of the information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in FileMaker software. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems and inflectional endings. The conclusions of this presentation stress the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
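A toy illustration of the index-and-concordance lookup strategy described for Norna; the index entries and affix lists are invented for the example and do not come from the Nerthus database.

```python
# Toy dictionary-based lemmatiser: direct index lookup, then prefix stripping,
# then inflectional-ending stripping. Entries and affixes are illustrative.
INDEX = {"cyning": "cyning", "cyninges": "cyning", "wundode": "wundian"}
PREFIXES = ["ge"]
ENDINGS = ["es", "as", "um", "ode", "e", "a"]

def lemmatise(form: str) -> str | None:
    candidates = [form]
    for p in PREFIXES:                 # also try the form with a prefix stripped
        if form.startswith(p):
            candidates.append(form[len(p):])
    for cand in candidates:
        if cand in INDEX:              # direct hit in the index
            return INDEX[cand]
        for e in sorted(ENDINGS, key=len, reverse=True):
            if cand.endswith(e) and cand[:-len(e)] in INDEX:
                return INDEX[cand[:-len(e)]]
    return None                        # left for manual lemmatisation

print(lemmatise("gewundode"))  # -> "wundian" in this toy index
```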

Keywords: corpus linguistics, historical linguistics, old English, parallel corpus

Procedia PDF Downloads 199
99 Evaluation of Dry Matter Yield of Panicum maximum Intercropped with Pigeonpea and Sesbania sesban

Authors: Misheck Musokwa, Paramu Mafongoya, Simon Lorentz

Abstract:

Seasonal shortage of fodder during the dry season is a major constraint for smallholder livestock farmers in South Africa. To mitigate the shortage of fodder, legume trees can be intercropped with pastures, which can diversify the sources of feed and increase the amount of protein for grazing animals. The objective was to evaluate the dry matter yield of Panicum maximum and land productivity under different fodder production systems during the 2016/17-2017/18 seasons at Empangeni (28.6391° S, 31.9400° E). A randomized complete block design, replicated three times, was used; the treatments were sole P. maximum, P. maximum + Sesbania sesban, P. maximum + pigeonpea, sole S. sesban, and sole pigeonpea. Three-month-old S. sesban seedlings were transplanted, while pigeonpea was direct-seeded, both at a spacing of 1 m x 1 m. P. maximum seeds were drilled between the tree rows at a rate of 7.5 kg/ha with an inter-row spacing of 0.25 m. Harvests were separated by six-month intervals. A 0.25 m² quadrat randomly placed at 3 points per plot was used as the sampling area when harvesting P. maximum. There were significant differences (P < 0.05) across the 3 harvests and in total dry matter. Sole P. maximum had a higher dry matter yield than both intercrops at the first harvest and in total, while the second and third harvests did not differ significantly from the pigeonpea intercrop. The results for the 3 harvests were, in order: P. maximum (541.2c, 1209.3b and 1557b kg ha⁻¹) ≥ P. maximum + pigeonpea (157.2b, 926.7b and 1129b kg ha⁻¹) > P. maximum + S. sesban (36.3a, 282a and 555a kg ha⁻¹), where the letters denote statistical groupings. Total accumulated dry matter yield: P. maximum (3307c kg ha⁻¹) > P. maximum + pigeonpea (2212 kg ha⁻¹) ≥ P. maximum + S. sesban (874 kg ha⁻¹). There was also a significant difference (P < 0.05) in tree seed yield: pigeonpea (1240.3 kg ha⁻¹) ≥ pigeonpea + P. maximum (862.7 kg ha⁻¹) > S. sesban (391.9 kg ha⁻¹) ≥ S. sesban + P. maximum. The Land Equivalent Ratio (LER) followed the order P. maximum + pigeonpea (1.37) > P. maximum + S. sesban (0.84) > pigeonpea (0.59) ≥ S. sesban (0.57) > P. maximum (0.26). The results indicate that it is beneficial to intercrop P. maximum with pigeonpea because of the higher land productivity. Planting grass with pigeonpea was more beneficial than S. sesban with grass or sole cropping in terms of easing the shortage of arable land: P. maximum + pigeonpea saves a substantial share of land (37%), which can subsequently be used for other crop production. Pigeonpea is recommended as an intercrop with P. maximum due to its higher LER and the combined production of livestock feed, human food, and firewood. Since Panicum grass is low in crude protein though high in carbohydrates, there is a need to intercrop it with legume trees. A farmer who buys concentrates can reduce costs by combining P. maximum with pigeonpea, as this provides a balanced diet at low cost.
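The computation behind the reported LER values is not shown; below, the standard Land Equivalent Ratio formula is applied to the paper's dry matter and seed yields as an illustration (the authors' exact component yields may differ, so the result is only approximately the reported value).

```python
# Standard LER: sum over components of (intercrop yield / sole-crop yield),
# using the yields quoted in the abstract.
grass_intercrop, grass_sole = 2212.0, 3307.0          # P. maximum DM, kg/ha
pigeonpea_intercrop, pigeonpea_sole = 862.7, 1240.3   # pigeonpea seed, kg/ha

ler = grass_intercrop / grass_sole + pigeonpea_intercrop / pigeonpea_sole
print(f"LER = {ler:.2f}")  # ~1.36, close to the reported 1.37
```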

Keywords: fodder, livestock, productivity, smallholder farmers

Procedia PDF Downloads 136
98 Wastewater Treatment in the Abrasives Industry via Fenton and Photo-Fenton Oxidation Processes: A Case Study from Peru

Authors: Hernan Arturo Blas López, Gustavo Henndel Lopes, Antonio Carlos Silva Costa Teixeira, Carmen Elena Flores Barreda, Patricia Araujo Pantoja

Abstract:

Phenols are toxic to life and the environment and may come from many sources. Uncured phenolic monomers present in the phenolic resins used as binders in grinding wheels and emery paper can contaminate industrial wastewaters in abrasives manufacturing plants. Furthermore, vestiges of resol and novolac resins generated by the wear and tear of abrasives are also possible sources of water contamination by phenolics in these facilities. Fortunately, advanced oxidation by dark Fenton and photo-Fenton techniques is capable of oxidizing phenols and their degradation products up to their mineralization into H₂O and CO₂. The maximum allowable concentrations for phenols in Peruvian waterbodies are very low, such that insufficiently treated effluents from the abrasives industry are a potential environmental noncompliance. The current case study highlights findings obtained during the lab-scale application of Fenton's and photo-assisted Fenton's chemistries to real industrial wastewater samples from an abrasives manufacturing plant in Peru. The goal was to reduce the phenolic content and sample toxicity. For this purpose, two independent variables, reaction time and the effect of ultraviolet radiation, were studied for their impact on the concentration of total phenols, total organic carbon (TOC), biological oxygen demand (BOD) and chemical oxygen demand (COD). In this study, diluted samples (1 L) of the industrial effluent were treated with Fenton's reagent (H₂O₂ and Fe²⁺ from FeSO₄.H₂O) for 10 min in a photochemical batch reactor (Alphatec RFS-500, Brazil) at pH 2.92. In the photo-Fenton tests, 9 W UV-A, UV-B and UV-C lamps were evaluated. All process conditions achieved 100% phenol degradation within 5 minutes. TOC, BOD and COD decreased by 49%, 52% and 86%, respectively (all processes together). However, the Fenton treatment was not capable of reducing BOD, COD and TOC below a certain value even after 10 minutes, in contrast to photo-Fenton. It was also possible to conclude that the processes studied here degrade other compounds in addition to phenols, which is an advantage. In all cases, elevated effluent dilution factors and high amounts of oxidant negatively impact the overall economics of the processes investigated here.
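The abstract relies on, but does not write out, the standard radical-generation chemistry underlying both processes; for reference:

```latex
% Standard Fenton and photo-Fenton steps (textbook chemistry, not the
% paper's own equations): hydroxyl radicals drive the phenol oxidation.
\begin{align}
  \mathrm{Fe^{2+} + H_2O_2} &\longrightarrow \mathrm{Fe^{3+} + HO^{\bullet} + OH^{-}} \\
  \mathrm{Fe^{3+} + H_2O} + h\nu &\longrightarrow \mathrm{Fe^{2+} + HO^{\bullet} + H^{+}}
\end{align}
```

The second, light-driven step regenerates Fe²⁺, which is consistent with the observation that photo-Fenton keeps degrading TOC, BOD and COD after the dark Fenton reaction stalls.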

Keywords: fenton oxidation, wastewater treatment, phenols, abrasives industry

Procedia PDF Downloads 301
97 Velma-ARC’s Rehabilitation of Repentant Cybercriminals in Nigeria

Authors: Umukoro Omonigho Simon, Ashaolu David ‘Diya, Aroyewun-Olaleye Temitope Folashade

Abstract:

The VELMA Action to Reduce Cybercrime (ARC) is an initiative, the first of its kind in Nigeria, designed to identify, rehabilitate and empower repentant cybercrime offenders, popularly known as 'yahoo boys' in Nigerian parlance. Velma ARC provides social inclusion boot camps with the goal of rehabilitating cybercriminals via psychotherapeutic interventions, improving their IT skills, and empowering them to make constructive contributions to society. This report highlights the psychological interventions provided for participants of the maiden edition of the Velma ARC boot camp and presents the outcomes of these interventions. The boot camp was set up on hotel premises booked solely for the one-month event. The participants were selected and invited via the Velma online recruitment portal, based on an objective double-blind selection process, from a pool of potential candidates who signified interest via the registration portal. The participants first went through psychological profiling (personality, symptomatology and psychopathology) before the individual and group sessions began. They were profiled using the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF), the latest version in its series. Individual psychotherapy sessions were conducted for all participants based on what was interpreted from their profiles. A focus group discussion was held later to discuss the movie 'Catch Me If You Can', directed by Steven Spielberg and featuring Leonardo DiCaprio and Tom Hanks. The movie is based on the true life story of Frank Abagnale, a notorious scammer and con artist in his youth. Emergent themes from the movie were discussed as psycho-educative material for the participants. The overall evaluation of outcomes from the Velma ARC rehabilitation boot camp stemmed from a disaggregated assessment of observed changes, summarized in the final report of the clinical psychologist, which was detailed enough to infer genuine repentance and a positive change in attitude towards cybercrime among the participants. Follow-up services were incorporated to validate the initial observations. This gives credence to the potency of the psycho-educative intervention provided during the Velma ARC boot camp. It is recommended that support and collaboration from the government and other agencies/individuals would assist the VELMA foundation in expanding the scope and quality of the Velma ARC initiative as an additional requirement for cybercrime offenders following incarceration.

Keywords: Velma-ARC, cybercrime offenders, rehabilitation, Nigeria

Procedia PDF Downloads 134
96 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly

Authors: Alex Eldo Simon, Abhishek Yadav

Abstract:

This study aimed to evaluate the utility of postmortem computed tomography (CT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin, by comparing the results of these two diagnostic methods. The study retrospectively analyzed postmortem computed tomography (PMCT) data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The cardiothoracic ratio (CTR) was measured from coronal CT images, and the actual cardiac weight was determined by weighing the heart during the autopsy. The inclusion criteria were cases of sudden death suspected to be caused by cardiac pathology, while the exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and receiver operating characteristic (ROC) curves were generated to evaluate the accuracy of using the CTR to detect an enlarged heart. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter in relation to the maximum transverse diameter of the chest wall. The clinically used CTR criterion has been modified from 0.50 to 0.57 for use in postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold. A CTR value of 0.57 or higher is suggestive of hypertrophy, but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR between >0.50 and 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06, and the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the hypertrophy test was 55.56% (95% CI: 26.66, 81.12), the specificity was 84.44% (95% CI: 71.22, 92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1, 88.23). A limitation of the study was the low sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and further studies with larger sample sizes and more diverse populations are needed to validate these findings.
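The reported figures are mutually consistent with a 2x2 table that can be reconstructed (our inference; the abstract does not state the cell counts explicitly): 12 PMCT-positive cases, 9 autopsy-positive cases.

```python
# Reconstructed confusion matrix implied by the reported figures.
TP, FP = 5, 7    # CTR-positive and heart > 450 g / CTR-positive only
FN, TN = 4, 38   # autopsy-positive missed by CTR / negative by both methods

sensitivity = TP / (TP + FN)                  # 5/9   = 55.56%
specificity = TN / (TN + FP)                  # 38/45 = 84.44%
accuracy = (TP + TN) / (TP + FP + FN + TN)    # 43/54 = 79.63%
print(f"{sensitivity:.2%} {specificity:.2%} {accuracy:.2%}")
```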

Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio

Procedia PDF Downloads 70
95 Jurisdictional Federalism and Formal Federalism: Levels of Political Centralization on American and Brazilian Models

Authors: Henrique Rangel, Alexandre Fadel, Igor De Lazari, Bianca Neri, Carlos Bolonha

Abstract:

This paper offers a comparative analysis of the American and Brazilian models of federalism, taking their levels of political centralization as the main criterion. The central problem faced herein is the Brazilian tendency toward a unitary regime: although the federative form has been hegemonic since 1989, Brazil has a historical pattern of political centralization that persists under the 1988 constitutional regime. Meanwhile, the United States framed a federalism in which the states retain significant authority. The hypothesis holds that the number of alternative criteria of federalization, which can generate political centralization, and the way they are upheld on judicial review are crucial to understanding the levels of political centralization achieved in each model. To test this hypothesis, the research follows a methodology temporally delimited to the 1994-2014 period. Three paradigmatic precedents of the U.S. Supreme Court were selected: United States vs. Morrison (2000), on gender-motivated violence; Gonzales vs. Raich (2005), on the medical use of marijuana; and United States vs. Lopez (1995), on firearm possession in school zones. These most relevant federalism cases in the Supreme Court's recent activity indicate a determinant parameter of deliberation: the commerce clause. After observing the criterion used to permit or prohibit political centralization in America, the Brazilian normative context is presented. In this way, it is possible to identify the legal treatment these controversies would receive in Brazil. The decision-making reveals deliberative parameters that characterize each federative model. In conclusion, the Rehnquist Court precedents promoted a broad revival of the federalism debate, establishing the commerce clause as a reliable criterion for upholding or rejecting the necessity of centralization, even in decisions considered conservative. Brazilian federalism, by contrast, resolves such controversies in a formalist fashion, through numerous and comprehensive, sometimes casuistic, normative devices oriented toward intense centralization. The aim of this work is to indicate how the jurisdictional federalism found in the United States can preserve a consistent model with robustly autonomous states, while Brazil gives preference to normative mechanisms that start from centralization.

Keywords: constitutional design, federalism, U.S. Supreme Court, legislative authority

Procedia PDF Downloads 503
94 Modeling the Acquisition of Expertise in a Sequential Decision-Making Task

Authors: Cristóbal Moënne-Loccoz, Rodrigo C. Vergara, Vladimir López, Domingo Mery, Diego Cosmelli

Abstract:

Our daily interaction with computational interfaces is plagued with situations in which we go from inexperienced users to experts through self-motivated exploration of the same task. In many of these interactions, we must learn to find our way through a sequence of decisions and actions before obtaining the desired result. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion so that a specific sequence of actions must be performed in order to produce the expected outcome. But, as they become experts in the use of such interfaces, do users adopt specific search and learning strategies? Moreover, if so, can we use this information to follow the process of expertise development and, eventually, predict future actions? This would be a critical step towards building truly adaptive interfaces that can facilitate interaction at different moments of the learning curve. Furthermore, it could provide a window into potential mechanisms underlying decision-making behavior in real-world scenarios. Here we tackle this question using a simple game interface that instantiates a 4-level binary decision tree (BDT) sequential decision-making task. Participants have to explore the interface and discover an underlying concept-icon mapping in order to complete the game. We develop a Hidden Markov Model (HMM)-based approach whereby a set of stereotyped, hierarchically related search behaviors act as hidden states. Using this model, we are able to track the decision-making process as participants explore, learn and develop expertise in the use of the interface. Our results show that partitioning the problem space into such stereotyped strategies is sufficient to capture a host of exploratory and learning behaviors. Moreover, using the modular architecture of stereotyped strategies as a Mixture of Experts, we are able to simultaneously ask the experts about the user's most probable future actions. We show that for those participants who learn the task, it becomes possible to predict their next decision, above chance, approximately halfway through the game. Our long-term goal is, on the basis of a better understanding of real-world decision-making processes, to inform the construction of interfaces that can establish dynamic conversations with their users in order to facilitate the development of expertise.
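To make the HMM machinery concrete, the following sketch runs the forward algorithm over a toy two-state model in which the hidden states are stereotyped strategies and the observations are binary choices; the states, matrices and parameter values are illustrative, not the paper's.

```python
# Toy HMM: filtered strategy probabilities from a run of binary choices.
import numpy as np

A = np.array([[0.8, 0.2],   # transitions: exploratory -> {expl., goal-directed}
              [0.1, 0.9]])  #              goal-directed -> {expl., goal-directed}
B = np.array([[0.5, 0.5],   # emissions: exploratory picks branches ~at random
              [0.9, 0.1]])  #            goal-directed mostly picks the learned branch
pi = np.array([0.7, 0.3])   # start mostly in the exploratory state

def forward(obs):
    """Filtered state probabilities after each observation (forward algorithm)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

# A run of choices in the 4-level binary tree (0 = the "correct" branch).
print(forward([1, 0, 0, 0, 0]))  # probability mass shifts to goal-directed
```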

Keywords: behavioral modeling, expertise acquisition, hidden markov models, sequential decision-making

Procedia PDF Downloads 242
93 Potential Applications of Biosurfactants from Corn Steep Liquor in Cosmetics

Authors: J. M. Cruz, X. Vecino, L. Rodríguez-López, J. M. Dominguez, A. B. Moldes

Abstract:

The cosmetic and personal care industry is the field where biosurfactants could have the greatest chance of success, because in this kind of product the replacement of synthetic detergents by natural surfactants provides additional added value while avoiding or reducing the harmful effects produced by some synthetic surfactants. Nowadays, consumers are disposed to pay an additional cost if they obtain more natural products. In this work we provide data on the potential of biosurfactants in the cosmetic and personal care industry. Biosurfactants from corn steep liquor (CSL), a fermented and condensed stream, have shown good surface-active properties, substantially reducing the surface tension of water. The bacteria that usually grow in corn steep liquor comprise Lactobacillus species, generally recognized as safe. The biosurfactant extracted from CSL consists of a lipopeptide, composed of fatty acids, which can reduce the surface tension of water by more than 30 units. It is a yellow and viscous liquid with a density of 1.053 mg/mL and pH 4. Given these properties, it could be introduced into the formulation of cosmetic creams, hair conditioners or shampoos. Moreover, this biosurfactant extracted from corn steep liquor has shown a potent antimicrobial effect on different strains of Streptococcus. Some species of Streptococcus are commonly found living in the human respiratory and genitourinary systems, producing several diseases in humans, including skin diseases. For instance, Streptococcus pyogenes produces many toxins and enzymes that help to establish skin infections; biosurfactants from corn steep liquor can probably inhibit the mechanisms of the S. pyogenes enzymes. S. pyogenes is an important cause of pharyngitis, impetigo, cellulitis and necrotizing fasciitis. In this work it was observed that 50 mg/L of the biosurfactant extract obtained from corn steep liquor is able to inhibit the growth of S. pyogenes by more than 50%. Thus, cosmetic and personal care products formulated with biosurfactants from corn steep liquor could have prebiotic properties. The natural biosurfactant presented in this work, obtained from corn milling industry streams, has shown high potential to provide an interesting and sustainable alternative to the chemically synthesized antibacterial and surfactant ingredients used in cosmetic and personal care manufacturing, which can cause irritation and often show only short-term effects.

Keywords: antimicrobial activity, biosurfactants, cosmetic, personal care

Procedia PDF Downloads 245
92 The Emergence of Memory at the Nanoscale

Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann

Abstract:

Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements for this topic are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid systems, a clear understanding of the basic microscopic mechanisms that trigger them is still a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response, pointing to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical prediction, and general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built-in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling concise and accessible correlations between general intrinsic microscopic parameters, such as relaxation times, activation energies, and efficiencies (encountered throughout various fields of physics), and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. The simplicity and practicality of the approach have also allowed a direct correlation with reported experimental observations, with the potential of pointing out optimal driving configurations. The main methodological tools combine three quantum transport approaches, namely a Drude-like model, the Landauer-Büttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of the complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as in its technological applications.
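As a minimal illustration of the kind of ingredient identified here, consider a conductance that relaxes toward a voltage-dependent equilibrium with a characteristic relaxation time; driving it periodically yields the pinched hysteresis loop that signals memristive behavior. The model and all parameter values below are illustrative, not the paper's analytical results.

```python
# Toy memristive response: conductance g relaxes toward a drive-dependent
# equilibrium g_eq(V) with time constant tau; I = g*V pinches at V = 0.
import numpy as np

tau, g0, k = 0.1, 1.0, 0.5           # relaxation time, base conductance, coupling
dt = 1e-3
t = np.arange(0, 2, dt)
v = np.sin(2 * np.pi * t)            # periodic voltage drive
g = np.empty_like(t)
g[0] = g0

for i in range(1, len(t)):
    g_eq = g0 + k * v[i - 1] ** 2                    # equilibrium conductance
    g[i] = g[i - 1] + dt * (g_eq - g[i - 1]) / tau   # relaxation dynamics

current = g * v   # I(t) = G(t) V(t); I = 0 whenever V = 0 (pinched loop)
print(current[:5])
```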

Keywords: memories, memdevices, memristors, nonequilibrium states

Procedia PDF Downloads 82
91 Perception of Hazards and Risks in Road Utilization as Space for Social Ceremonies in Indigenous Residential Area of Ogbomoso, Nigeria

Authors: Okanlawon Simon Ayorinde, Odunjo Oluronke Omolola, Fadamiro Joseph Akinlabi, Adedibu Afolabi Adebgite

Abstract:

A road is a path established over land, an especially prepared way between places for the use of pedestrians, riders, and vehicles: a hard surface built for vehicles to travel on. The social, economic and health importance of roads in any community and nation cannot be underestimated. Roads provide access to properties, and they also provide mobility, which is the ability to transport goods and services from one place to another. In the residential zones of many indigenous cities in Nigeria, roads are usually blocked for social ceremonies. A road blocked for ceremonies, as used in this study, is a road with a temporary barrier across it, used to stop or hinder traffic from passing through to the other side. Social ceremonies that could warrant road blockage include marriages, child naming ceremonies, funerals, celebrations of life achievements, birthday anniversaries, etc. These activities are likely to generate environmental hazards and their attendant risks, so the assessment of these hazards and risks in the residential zones of indigenous cities in Nigeria becomes imperative. The study focuses on Ogbomoso, Oyo State, Nigeria. The town has two local government councils, namely Ogbomoso North and Ogbomoso South. Urban tracts that are easy to identify are political wards, in the absence of land use segregation, house numbering and street naming. Wards in which residential use made up a minimum of 60% of the land use components were surveyed: fifteen of the twenty wards identified in the town. The study utilized primary data collected through questionnaire administration. The three major road categories (Trunk A, federal; Trunk B, state; Trunk C, local) were identified, and Trunk C local roads were purposively selected, being the concern of this study, because they are the ones often blocked for social activities. The major stakeholders interviewed, with their respective sampling methods, were residents (random and systematic), social ceremony organizers (purposive), government officials (purposive) and road users, namely commercial motorists and commercial motorcyclists (random and incidental). Data analysis was mainly descriptive. Two indices were developed to measure respondents' perception: a 'Hazard Severity Index' (HSI) and a 'Relative Awareness Index' (RAI). Thereafter, policy implications and recommendations were provided.
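The abstract names the HSI and RAI without defining them; perception indices of this kind commonly take a Likert-weighted-average form, sketched below with hypothetical weights and response counts (the authors' actual definitions may differ).

```python
# Hypothetical weighted-average perception index; weights and counts invented.
def weighted_index(freqs, weights=(1, 2, 3, 4, 5)):
    """sum(weight * count) / (max weight * total responses), scaled to 0-1."""
    total = sum(freqs)
    return sum(w * f for w, f in zip(weights, freqs)) / (max(weights) * total)

# e.g. responses to one hazard item on a 5-point severity scale
print(weighted_index([10, 25, 40, 60, 45]))  # -> one Hazard Severity Index entry
```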

Keywords: road, residential zones, indigenous cities, blocked, social ceremonies

Procedia PDF Downloads 506
90 Terrestrial Laser Scans to Assess Aerial LiDAR Data

Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani

Abstract:

DEM quality may depend on several factors, such as the data source, the capture method, the type of processing used to derive the model, or the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by the national cartographic agencies through punctual sampling focused on the vertical component. For this type of evaluation there are standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation by means of a method that takes into account the superficial nature of the DEM, so that its sampling is superficial rather than punctual. This work is part of the research project 'Functional Quality of Digital Elevation Models in Engineering', in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the postprocessing tasks required to obtain the point cloud that will be used as reference (PCref) to evaluate the quality of the PCpro. Each PCref consists of a 50 x 50 m patch obtained by registering 4 different scan stations. The area studied was the Spanish region of Navarra, which covers 10,391 km²; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole that reached heights of up to 7 meters; the scanner was mounted in an inverted position so that the characteristic shadow circle present in the direct position is avoided. To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, and its positioning accuracy was better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest corresponds to the bare earth, so it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the postprocessing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
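A sketch of the cloud-to-cloud comparison step using the Open3D library; the file names are placeholders, and the project's actual toolchain is not stated in the abstract.

```python
# Cloud-to-cloud distances between a product cloud and a reference cloud.
import numpy as np
import open3d as o3d

pc_ref = o3d.io.read_point_cloud("patch_ref.ply")  # terrestrial-scan patch (PCref)
pc_pro = o3d.io.read_point_cloud("patch_pro.ply")  # LiDAR-flight subset (PCpro)

# Nearest-neighbour distance from each product point to the reference cloud.
d = np.asarray(pc_pro.compute_point_cloud_distance(pc_ref))
print(f"mean {d.mean():.3f} m, RMS {np.sqrt((d**2).mean()):.3f} m, "
      f"p95 {np.percentile(d, 95):.3f} m")
```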

Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy

Procedia PDF Downloads 88
89 An Effective Approach to Knowledge Capture in Whole Life Costing in Construction Projects

Authors: Ndibarafinia Young Tobin, Simon Burnett

Abstract:

Whole life costing is a valuable technique for comparing alternative building designs, allowing operational cost benefits to be evaluated against any initial cost increases, and it also forms part of procurement in the construction industry. In spite of these benefits, its adoption has been relatively slow due to the lack of tangible evidence and of 'know-how' skills and knowledge of the practice, i.e. the lack of professionals in many establishments with knowledge of and training in the use of the whole life costing technique. This situation is compounded by the absence of available data on whole life costing from relevant projects, the lack of data collection mechanisms, and so on. This has proved very challenging to those who have shown some willingness to employ the technique in a construction project. The knowledge generated from a project can be considered as best practices learned on how to carry out tasks in a more efficient way, or as negative lessons learned which have led to losses and slowed down the progress and performance of the project. Knowledge management in whole life costing practice can enhance the execution of whole life costing analysis in a construction project, as lessons learned from one project can be carried over to future projects, resulting in continuous improvement and providing knowledge that can be used in the operation and maintenance phases of an asset's life span. Purpose: The purpose of this paper is to report an effective approach which can be utilised in capturing knowledge in whole life costing practice in a construction project. Design/methodology/approach: An extensive literature review was first conducted on the concepts of knowledge management and whole life costing. This was followed by semi-structured interviews to explore existing and good-practice knowledge management in whole life costing practice in a construction project. The data gathered from the semi-structured interviews were analyzed using content analysis and used to structure an effective knowledge capturing approach. Findings: The results obtained in the study show that the practice of project review is the common method used for capturing knowledge; it should be undertaken in an organized and accurate manner, and its results should be presented in the form of instructions or in a checklist format, forming short and precise insights. The approach developed advises that, irrespective of how effective the approach to knowledge capture is, the absence of an environment for sharing knowledge would render the approach ineffective. An open culture and resources are critical for providing a knowledge-sharing setting, and leadership has to sustain whole life costing knowledge capture by giving full support to its implementation. The knowledge capturing approach has been evaluated by practitioners who are experts in the area of whole life costing practice. The results indicate that the approach to knowledge capture is suitable and efficient.

Keywords: whole life costing, knowledge capture, project review, construction industry, knowledge management

Procedia PDF Downloads 251
88 Integrated Geophysical Surveys for Sinkhole and Subsidence Vulnerability Assessment, in the West Rand Area of Johannesburg

Authors: Ramoshweu Melvin Sethobya, Emmanuel Chirenje, Mihlali Hobo, Simon Sebothoma

Abstract:

The recent surge in residential infrastructure development around the metropolitan areas of South Africa has made it necessary for thorough geotechnical assessments to be conducted prior to site development to ensure human and infrastructure safety. This paper appraises the successful application of multi-method geophysical techniques for the delineation of sinkhole vulnerability in a residential landscape. Geophysical surveys using ERT, MASW, VES, magnetics and gravity were conducted to assist in mapping sinkhole vulnerability, using an existing sinkhole as a constraint, at the town of Venterspost, west of the city of Johannesburg. The combination of different geophysical techniques, and the integration of their results, proved useful in delineating the lithologic succession around the sinkhole locality and in determining the geotechnical characteristics of each layer and its contribution to the development of sinkholes, subsidence and cavities in the vicinity of the site. The study results also assisted in determining the possible depth extent of the currently existing sinkhole and the locations of sites where other similar karstic features and sinkholes could form. The results of the ERT, VES and MASW surveys uncovered dolomitic bedrock at varying depths around the site, exhibiting high resistivity values in the range 2500-8000 ohm.m and correspondingly high velocities in the range 1000-2400 m/s. The dolomite layer was found to be overlain by a weathered, chert-poor dolomite layer, with resistivities in the range 250-2400 ohm.m and velocities ranging from 500-600 m/s, from which the large sinkhole was found to have collapsed/caved in. A compiled 2.5D high-resolution shear wave velocity (Vs) map of the study area was created from 2D profiles of MASW data, offering insights into the prevailing lithological setup conducive to the formation of various types of karstic features around the site. 3D magnetic models of the site highlighted regions of possible subsurface interconnection between the currently existing large sinkhole and the other subsidence feature at the site. A number of depth slices were used to detail the conditions near the sinkhole as depth increases. The gravity survey results mapped the possible formational pathways for the development of new karstic features around the site. The combination and correlation of the different geophysical techniques proved useful in delineating the geotechnical characteristics of the site and in mapping the possible depth extent of the currently existing sinkhole.
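The 2.5D Vs map described above amounts to interpolating Vs samples pooled from the 2D MASW profiles onto a regular grid, one depth slice at a time. The sketch below shows one way this could be done; the input file format, column layout, and grid spacing are assumptions for illustration.

```python
# Sketch of assembling a 2.5D shear-wave-velocity (Vs) grid from 2D MASW
# profiles. Input format (easting, northing, depth, Vs per row) and the
# grid resolution are illustrative assumptions.
import numpy as np
from scipy.interpolate import griddata

samples = np.loadtxt("masw_samples.txt")   # pooled samples from all profiles
xyz, vs = samples[:, :3], samples[:, 3]

# Regular grid over the site, evaluated at one depth slice (e.g. 10 m)
xi = np.linspace(xyz[:, 0].min(), xyz[:, 0].max(), 200)
yi = np.linspace(xyz[:, 1].min(), xyz[:, 1].max(), 200)
XI, YI = np.meshgrid(xi, yi)
depth_slice = 10.0

mask = np.abs(xyz[:, 2] - depth_slice) < 1.0   # samples near the chosen depth
vs_grid = griddata(xyz[mask, :2], vs[mask], (XI, YI), method="linear")
# Repeating this over a series of depth slices stacks up the 2.5D Vs volume.
```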

Keywords: resistivity, magnetics, sinkhole, gravity, karst, delineation, VES

Procedia PDF Downloads 56
87 Analysis in Mexico on Workers Performing Highly Repetitive Movements with Sensory Thermography on the Surface of the Wrists and Elbows

Authors: Sandra K. Enriquez, Claudia Camargo, Jesús E. Olguín, Juan A. López, German Galindo

Abstract:

Currently, companies face an increased number of cumulative trauma disorders (CTDs), which are rising significantly due to the highly repetitive movements (HRM) performed at workstations; this causes economic losses to businesses due to the temporary and permanent disabilities of workers. This analysis focuses on the prevention of disorders caused by repetitiveness, duration and effort, and on reducing cumulative trauma disorders as occupational diseases by using sensory thermography as a non-invasive method to evaluate the injuries that workers could sustain when performing repetitive movements. Objectives: The aim is to define rest periods or job rotations before a CTD develops, using sensory thermography to analyze changes in temperature patterns on the wrists and elbows while the worker performs HRM over a period of 2 hours and 30 minutes. Information on non-work variables, such as wrist and elbow injuries, weight, gender and age, among others, and on work variables, such as workspace temperature, repetitiveness and duration, was also collected. Methodology: The analysis was conducted on 4 industrial designers in normal health, 2 men and 2 women to be specific, at a company over a period of 12 days, using the following time ranges: on the first day, for every 90 minutes of continuous work the subjects were asked to rest 5 minutes; on the second day, for every 90 minutes of continuous work they were asked to rest 10 minutes; and the same was done for 60 and 30 minutes of continuous work. Each worker was tested with 6 different ranges at least twice. The analysis was performed in a room with a controlled temperature between 20 and 25 °C, allowing 20 minutes for the temperature of the wrists and elbows to stabilize at the beginning and end of the analysis. Results: The range of 90 minutes of continuous work with a 5-minute rest is where the maximum temperature (Tmax) was registered on the wrists and elbows. The Tmax was 35.79 °C, with a difference of 2.79 °C between the initial and final temperatures of the left elbow, presented by subject 4 at minute 86, within the range of 90 minutes of continuous work and a 5-minute rest. Conclusions: With this alternative technology, sensory thermography, it is possible to predict rotation or rest ranges for the prevention of CTDs in work activities involving HRM, thereby reducing occupational disease and the contributions payable to health agencies, and increasing the quality of life of workers, making this technology cost-beneficial in the future.
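The decision rule implied by these results can be sketched as follows: track the surface temperature of the wrist or elbow over the working period and flag a rest break once the rise over the post-stabilisation baseline exceeds a threshold. The 2.79 °C figure comes from the study's observations; using it directly as a trigger threshold is an assumption for illustration only.

```python
# Hedged sketch: flag a rest break once the wrist/elbow surface temperature
# rises more than delta_threshold above the baseline taken after the
# 20-minute stabilisation period. The 2.79 degC threshold is an assumption
# borrowed from the study's reported maximum difference.
import numpy as np

def suggest_rest(times_min, temps_c, delta_threshold=2.79):
    """Return the first time (minutes) at which T - T_baseline exceeds the threshold."""
    baseline = temps_c[0]
    for t, temp in zip(times_min, temps_c):
        if temp - baseline > delta_threshold:
            return t
    return None  # no rest needed within the observed window

# Synthetic example: readings every 10 minutes over 150 minutes of work
times = np.arange(0, 151, 10)
temps = 33.0 + 0.02 * times   # slowly warming surface, for illustration
print(suggest_rest(times, temps))   # -> 140 (minutes) for this synthetic series
```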

Keywords: sensory thermography, temperature, cumulative trauma disorder (CTD), highly repetitive movement (HRM)

Procedia PDF Downloads 419
86 Dynamic Two-Way FSI Simulation for a Blade of a Small Wind Turbine

Authors: Alberto Jiménez-Vargas, Manuel de Jesús Palacios-Gallegos, Miguel Ángel Hernández-López, Rafael Campos-Amezcua, Julio Cesar Solís-Sanchez

Abstract:

An optimal wind turbine blade design must be capable of capturing as much energy as possible from the wind resource available in the area of interest. Often, an optimal design means the use of large quantities of material and complicated processes that make the wind turbine more expensive and, therefore, less cost-effective. In the construction and installation of a wind turbine, the blades may account for up to 20% of the overall price, and they are all the more important because they are part of the rotor system, which is in charge of transmitting the energy from the wind to the power train and where the static and dynamic design loads for the whole wind turbine are produced. The aim of this work is the development of a blade fluid-structure interaction (FSI) simulation that allows the identification of the major damage zones during normal production conditions, so that better design and optimization decisions can be taken. The simulation is a dynamic case, since a time-history wind velocity is used as the inlet condition instead of a constant wind velocity. The process begins with the free-to-use software NuMAD (NREL), used to model the blade and assign material properties to it; the 3D model is then exported to the ANSYS Workbench platform where, before setting up the FSI system, a modal analysis is performed to identify natural frequencies and modal shapes. The FSI analysis is carried out with the two-way technique, which begins with a CFD simulation to obtain the pressure distribution on the blade surface; these results are then used as the boundary condition for the FEA simulation to obtain the deformation levels for the first time step. For the second time step, the CFD simulation is reconfigured automatically with the inlet wind velocity of the next time step and the deformation results from the previous time step. The analysis continues this iterative cycle, solving time step by time step until the entire load case is completed. This work is part of a set of projects managed by a national consortium called "CEMIE-Eólico" (Mexican Center in Wind Energy Research), created to strengthen technological and scientific capacities, promote the creation of specialized human resources, and link the academic and private sectors in the national territory. The analysis belongs to the design of a rotor system for a 5 kW wind turbine intended to be installed at the Isthmus of Tehuantepec, Oaxaca, Mexico.
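The staggered two-way coupling cycle described above can be sketched schematically. In the sketch below, toy surrogate "solvers" stand in for the CFD and FEA solutions that ANSYS Workbench computes internally: dynamic pressure replaces the CFD result and a linear stiffness replaces the FEA result. All numbers and the surrogate models are illustrative assumptions; only the loop structure mirrors the method described.

```python
# Schematic of a two-way FSI cycle. solve_cfd/solve_fea are toy surrogates,
# not a real solver API: in the actual workflow each call is a full ANSYS
# CFD or FEA solution. Parameter values are illustrative.
RHO = 1.225    # air density, kg/m^3
K = 5.0e4      # surrogate blade stiffness, N/m^2 per metre of tip deflection

def solve_cfd(v_inlet, tip_deflection):
    # Toy CFD: dynamic pressure, slightly relieved as the blade deforms away
    return 0.5 * RHO * v_inlet**2 * (1.0 - 0.1 * tip_deflection)

def solve_fea(pressure):
    # Toy FEA: linear tip deflection under the aerodynamic load
    return pressure / K

def run_two_way_fsi(wind_time_history):
    tip = 0.0
    history = []
    for v in wind_time_history:      # time-history inlet wind velocity
        p = solve_cfd(v, tip)        # step 1: CFD on the deformed geometry
        tip = solve_fea(p)           # step 2: FEA loaded with the CFD pressures
        history.append(tip)          # step 3: deformation feeds the next step
    return history

print(run_two_way_fsi([8.0, 10.0, 12.0, 9.0]))   # tip deflections per time step
```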

Keywords: blade, dynamic, fsi, wind turbine

Procedia PDF Downloads 468
85 Optimized Renewable Energy Mix for Energy Saving in Waste Water Treatment Plants

Authors: J. D. García Espinel, Paula Pérez Sánchez, Carlos Egea Ruiz, Carlos Lardín Mifsut, Andrés López-Aranguren Oliver

Abstract:

This paper briefly describes three main actions on a Waste Water Treatment Plant (WWTP) for reducing its energy consumption: optimization of the biological reactor in the aeration stage by including new control algorithms and introducing new efficient equipment; the installation of an innovative hybrid system with zero grid injection (formed by 100 kW of PV generation and 5 kW of mini-wind generation); and an intelligent management system that controls load consumption and energy generation in the most optimal way. This project, called RENEWAT and selected under the European Commission's LIFE 2013 call, has the main objective of reducing energy consumption through different actions on the processes that take place in a WWTP and of introducing renewable energies in these treatment plants, with the purpose of promoting the use of treated waste water for irrigation and decreasing CO2 emissions. Waste water treatment is always required before waste water can be reused for irrigation or discharged into water bodies. However, the energy demand of the treatment process is high enough to make the price of treated water exceed that of drinkable water. This makes it very difficult for any policy to encourage the reuse of treated water, with a great impact on the water cycle, particularly in areas suffering from hydric stress or deficiency. The cost of treating waste water involves another climate-change-related burden: the energy necessary for the process is obtained mainly from the electric grid, which, in most cases in Europe, means energy obtained from the burning of fossil fuels. The innovative part of this project is based on the implementation, adaptation and integration of solutions to this problem, together with a new concept of the integration of energy input and operative energy demand. Moreover, there is an important qualitative jump between the technologies previously used and the technologies proposed in the project, which gives it an innovative character, due to the fact that there are no similar previous experiences of a WWTP including an intelligent discrimination of energy sources, integrating renewable ones (PV and wind) and the grid.
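The zero-grid-injection constraint implies a simple dispatch priority: renewables serve the plant load first, any surplus is curtailed rather than exported, and any deficit is drawn from the grid. The sketch below illustrates this logic for a single time interval; the function, its signature, and the example figures are assumptions, not the project's actual control algorithm.

```python
# Hedged sketch of zero-grid-injection dispatch for one time interval:
# renewables cover the WWTP load first, surplus is curtailed (never exported),
# and any remaining deficit is imported from the grid.
def dispatch(load_kw, pv_kw, wind_kw):
    """Return (renewables used, grid import, curtailed surplus) in kW."""
    renewable = pv_kw + wind_kw
    used = min(load_kw, renewable)
    grid_import = load_kw - used     # deficit covered by the grid
    curtailed = renewable - used     # surplus never injected into the grid
    return used, grid_import, curtailed

# Illustrative interval: 90 kW plant load, 70 kW PV, 4 kW mini-wind
print(dispatch(90.0, 70.0, 4.0))    # -> (74.0, 16.0, 0.0)
```

Running this rule over a full year of load and generation profiles would give the energy-saving and CO2 figures the project targets.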

Keywords: aeration system, biological reactor, CO2 emissions, energy efficiency, hybrid systems, LIFE 2013 call, process optimization, renewable energy sources, waste water treatment plants

Procedia PDF Downloads 344
84 Experiment-Based Teaching Method for the Varying Frictional Coefficient

Authors: Mihaly Homostrei, Tamas Simon, Dorottya Schnider

Abstract:

The topic of oscillation in physics is one of the key ideas and is usually taught based on the concept of harmonic oscillation. Dealing with a frictional oscillator can be an interesting activity in advanced high school classes or in university courses. Its mechanics are investigated in this research, which shows that the motion of the frictional oscillator is more complicated than that of a simple harmonic oscillator. The physics of the model applied in this study is interesting and useful for undergraduate students. The study presents a well-known physical system, which is mostly discussed theoretically in high school and at university. The ideal frictional oscillator is normally used as an example of harmonic oscillatory motion, as its theory relies on a constant coefficient of sliding friction. The structure of the system is simple: a rod with a homogeneous mass distribution rests on two identical rotating cylinders mounted at the same height, so that they are horizontally aligned, rotating at the same angular velocity but in opposite directions. Based on this setup, one can easily show that the equation of motion describes a harmonic oscillation, since the magnitudes of the normal forces in the system are functions of the position, and the frictional forces, with a constant coefficient of friction, are proportional to them. The whole description of the model therefore relies on simple Newtonian mechanics, which is accessible to students even in high school. On the other hand, the phenomenon of the described frictional oscillator is not so straightforward after all; experiments show that the simple harmonic oscillation cannot be observed in all cases, and the system performs a much more complex movement, whereby the rod settles into a non-harmonic oscillation with a nonzero stable amplitude after an unconventional damping effect. The stable amplitude, in this case, means that the position function of the rod converges to a harmonic oscillation with a constant amplitude. This leads to the idea of a more complex model that can describe the motion of the rod more accurately. The main difference from the original equation of motion is that the frictional coefficient varies with the relative velocity. This velocity dependence has been investigated in many research articles as well; however, this specific problem can demonstrate the key concept of the varying friction coefficient and its importance in an interesting and demonstrative way. The position function of the rod is described by a more complicated, non-trivial, yet more precise equation than the usual harmonic description of the movement. The study discusses the structure of the measurements related to the frictional oscillator, the qualitative and quantitative derivation of the theory, and the comparison of the final theoretical function with the measured position function in time. The project provides useful material and knowledge for undergraduate students and a new perspective in university physics education.
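For a rod whose centre of mass sits at displacement x from the midpoint between cylinders a distance 2d apart, torque balance gives the normal forces N1 = mg(d - x)/(2d) and N2 = mg(d + x)/(2d), and with constant friction the net force -2(mu)mgx/(2d) yields harmonic motion. The sketch below integrates the velocity-dependent variant numerically; the Stribeck-type mu(v) law and all parameter values are illustrative assumptions, since the abstract does not specify the study's own mu(v) model.

```python
# Hedged sketch of the frictional oscillator with velocity-dependent friction.
# The Stribeck-like mu(v) law and the parameter values are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81      # m/s^2
d = 0.15      # half-distance between the cylinders, m
v0 = 0.5      # surface speed of the cylinders at the contact points, m/s
mu_s, mu_k, vc = 0.45, 0.30, 0.1   # static/kinetic coefficients, transition speed

def mu(v_rel):
    # Coefficient decays from mu_s toward mu_k with sliding speed (Stribeck-like)
    return mu_k + (mu_s - mu_k) * np.exp(-abs(v_rel) / vc)

def rhs(t, y):
    x, xdot = y
    n1 = g * (d - x) / (2 * d)   # normal force per unit mass, left cylinder
    n2 = g * (d + x) / (2 * d)   # normal force per unit mass, right cylinder
    v1 = v0 - xdot               # sliding velocity at the left contact
    v2 = -v0 - xdot              # sliding velocity at the right contact
    xddot = mu(v1) * np.sign(v1) * n1 + mu(v2) * np.sign(v2) * n2
    return [xdot, xddot]

sol = solve_ivp(rhs, (0.0, 30.0), [0.02, 0.0], max_step=1e-3)
# sol.y[0] settles into a non-harmonic oscillation with a stable amplitude,
# unlike the constant-mu case, where ideal harmonic motion persists.
```

Because mu grows as the sliding speed drops, motion in phase with the cylinder surface is amplified at small velocities, which is the mechanism behind the stable limit-cycle amplitude the abstract describes.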

Keywords: friction, frictional coefficient, non-harmonic oscillator, physics education

Procedia PDF Downloads 182
83 Impact of Farm Settlements' Facilities on Farm Patronage in Oyo State

Authors: Simon Ayorinde Okanlawon

Abstract:

The prevalent negative attitude of youths toward farming is partly due to the amenities and facilities found in urban centers at the expense of rural areas. Hence, there is a need to create a befitting and conducive farm environment to retain farm employees and attract youths to farming. This can be achieved through the provision of services and amenities that ensure a standard of living higher than that obtained by a person of equal status in other forms of employment in urban centers, thereby eliminating the psychological feeling of lowered self-esteem associated with farming. This study assessed farm settlements' facilities and patronage in Oyo State with a view to using the information to encourage sustainable agriculture in Nigeria. The study became necessary because of the dearth of information on the state of facilities in farm settlements as it affects the patronage of farm settlements for sustainable agriculture in developing countries like Nigeria. The study utilized three purposively selected farm settlements, Ogbomoso, Fasola and Ilora, out of the seven existing ones in Oyo State. One hundred percent (100%) of the 262 residential buildings in the three settlements were sampled, and a household head from each of the buildings was randomly chosen. This translates to 262 household heads served with questionnaires, of which 47.7% were recovered. Information obtained included respondents' residency categories, resident status, years of residency, housing types, types of holding and number of acres per holding. Other information included socio-economic attributes such as the age, gender, income and educational status of respondents, an assessment of the existing facilities at the selected sites, and the level of patronage of the farm settlements, including perceived pull factors that could enhance farm settlement patronage. The study revealed that the residents were not satisfied with the adequacy and quality of the facilities available in their settlements. Residents' satisfaction with infrastructural facilities could not be statistically linked with location across the study area. Findings suggested that residents of the Ogbomoso farm settlement did not enjoy as adequate a provision of water supply and roads as those of Ilora and Fasola. Patronage of the farm settlements was largely driven by farming activities and the sale of farm produce. The respondents agreed that the provision of farm resort centers, standard recreational and tourism facilities, vacation employment opportunities for youths, and functional internet and communication networks, among others, would be likely to boost the level of patronage of the farm settlements. The study concluded that improvement of the facilities, in both quality and quantity, will encourage youths to go back to farming. It then recommends that the maintenance of existing facilities and the provision of more facilities, such as resort centers, be ensured.

Keywords: encourage, farm settlements' facilities, Oyo state, patronage

Procedia PDF Downloads 200
82 Encouraging the Uptake of Entrepreneurship by Graduates of Higher Education Institutions in South Africa

Authors: Chux Gervase Iwu, Simon Nsengimane

Abstract:

Entrepreneurship stimulates socio-economic development in many countries, if not all. It creates jobs and decreases unemployment and inequality. There are other benefits accruable from entrepreneurship, namely the empowerment of women and the promotion of better livelihoods. Innovation has become a weapon for business competition, growth, and sustainability. Paradoxically, it is also a threat to businesses, because products can be duplicated, and new products may decrease the market share of existing ones or eliminate them from the market. This creates a constantly competitive environment that calls for updates, innovation, and the invention of new products and services. Thus, the importance of higher education in instilling a good entrepreneurial mindset in students has become even more critical. It can be argued that the business environment is under enormous pressure from several factors, including the fourth industrial revolution, which calls for the adoption and use of information and communication technology, the catalyst for many innovations and organisational changes. Therefore, it is crucial that higher education students are equipped with relevant knowledge and skills to respond effectively to the needs of the business environment and create a vibrant entrepreneurship ecosystem. In South Africa, entrepreneurship education, or some form of it, has been a privilege of the economic and management fields of study, leaving behind other fields. Entrepreneurship should not be limited to business faculties but rather extended to other fields of study. This is perhaps the reason for the low levels of entrepreneurship uptake among South African graduates compared with graduates in other countries. There may be other reasons for the low entrepreneurship uptake. Some of those documented in the extant literature include (1) insufficient time spent teaching entrepreneurship in business faculties, (2) insufficient skills components in the curricula, and (3) the apparent absence of the overall attitudes/mindsets necessary to establish and run sustainable enterprises. Therefore, four areas are recognised as crucial for the effective implementation of entrepreneurship education: policy, private sector engagement, curriculum development, and teacher development. The purpose of this research is to better comprehend the views, aspirations, and expectations of students and faculty members in order to design an entrepreneurial teaching model for higher education institutions. A qualitative method will be used, conducting purposive interviews with undergraduate and graduate students in selected higher institutions. Members of faculty will also be included in the sample, as well as, where possible, two or more government personnel responsible for higher education policy development. At present, interpretative analysis is proposed for the analysis of the interviews, with the support of ATLAS.ti. It is hoped that an entrepreneurship education model in the South African context will be realised through this study.

Keywords: entrepreneurship education, higher education institution, graduate unemployment, curriculum development

Procedia PDF Downloads 61
81 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from open data bases belonging to different governments, which means integrating data from various different sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on open data is therefore increasing. However, each government has its own procedures for publishing its data, which results in a variety of data set formats, because there are no international standards specifying the formats of data sets in open data bases. Due to this variety of formats, we must build a data integration process that is able to put together all kinds of formats. Some software tools have been developed to support the integration process, e.g. Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve an automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speeds, etc. For the past two years, the government of Madrid has been publishing its open data bases of environmental indicators in real time. In the same way, other governments (such as those of Andalucía or Bilbao) have published open data sets related to the environment. All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize analyses over the real-time data. Once the integration task is done, all the data from any government have the same format, and the analysis process can start in a computationally better way. The tool presented in this work thus has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language as a mature open-source technology. R is a really powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance. There are also R libraries for building a graphic interface, such as shiny. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an Official Real-Time Integrated Data Set about Environment Data in Spain to any developer, so that they can build their own applications.
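The core of such an integration process is mapping each source's format onto one common schema. The sketch below illustrates the idea in Python for brevity (the project itself implements it in Java/Oracle and R); the file names, column names, and target schema are assumptions, not the actual layouts of the Madrid or Bilbao data sets.

```python
# Hedged sketch of the per-source normalisation step: two open-data sources
# publish temperature readings in different layouts and are mapped onto one
# common schema. All file and column names here are hypothetical.
import pandas as pd

COMMON_COLUMNS = ["city", "timestamp", "temperature_c"]

def load_madrid(path):
    df = pd.read_csv(path, sep=";")                       # hypothetical layout
    return pd.DataFrame({
        "city": "Madrid",
        "timestamp": pd.to_datetime(df["FECHA_HORA"], dayfirst=True),
        "temperature_c": df["TEMPERATURA"],
    })[COMMON_COLUMNS]

def load_bilbao(path):
    df = pd.read_csv(path)                                # hypothetical layout
    return pd.DataFrame({
        "city": "Bilbao",
        "timestamp": pd.to_datetime(df["date"]),
        "temperature_c": (df["temp_f"] - 32.0) * 5.0 / 9.0,  # Fahrenheit to Celsius
    })[COMMON_COLUMNS]

# One loader per government source; the union is the integrated data set
integrated = pd.concat([load_madrid("madrid.csv"), load_bilbao("bilbao.csv")],
                       ignore_index=True)
```

In the automated pipeline, one such adapter per government runs as a scheduled job, so new sources only require writing a new loader against the common schema.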

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 300