Search results for: inputs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 345

45 Environmental Benefits of Corn Cob Ash in Lateritic Soil Cement Stabilization for Road Works in a Sub-Tropical Region

Authors: Ahmed O. Apampa, Yinusa A. Jimoh

Abstract:

The potential economic viability and environmental benefits of using a biomass waste, such as corn cob ash (CCA), as a pozzolan in stabilizing soils for road pavement construction in a sub-tropical region were investigated. Corn cob was obtained from Maya in South West Nigeria and processed to an ash with characteristics similar to the Class C Fly Ash pozzolan specified in ASTM C618-12. This was then blended with ordinary Portland cement (OPC) in CCA:OPC ratios of 1:1, 1:2 and 2:1. Each of these blends was then mixed with lateritic soil of AASHTO classification A-2-6(3) in varying percentages from 0 – 7.5% at 1.5% intervals. The soil-CCA-cement mixtures were thereafter tested for geotechnical index properties, including the BS Proctor Compaction, California Bearing Ratio (CBR) and Unconfined Compression Strength tests. The tests were repeated for a soil-cement mix without any CCA blending. The cost of the binder inputs and the optimal blends of CCA:OPC in the stabilized soil were thereafter analyzed by developing algorithms that relate the experimental data on strength parameters (Unconfined Compression Strength, UCS, and California Bearing Ratio, CBR) to the bivariate independent variables CCA and OPC content, using Matlab R2011b. An optimization problem was then set up minimizing the cost of chemical stabilization of laterite with CCA and OPC, subject to the constraints of minimum strength specifications. The Evolutionary engine as well as the Generalized Reduced Gradient option of the Solver in MS Excel 2010 were used separately to obtain the optimal blend of CCA:OPC. The optimal blend attaining the required strength of 1800 kN/m² was determined as a 5.4% mix for the 1:2 CCA:OPC blend (OPC content 3.6%), compared with 4.2% for the OPC-only option, and as a 6.2% mix for the 1:1 blend (OPC content 3%). The 2:1 blend did not attain the required strength, though a gain of over 100% in UCS value was obtained over the control sample with 0% binder.
Given that 0.97 tonne of CO2 is released for every tonne of cement used (OEE, 2001), the reduced OPC requirement to attain the same result indicates the possibility of reducing the net CO2 contribution of the construction industry to the environment by 14 – 28.5% if CCA:OPC blends are widely used in soil stabilization, going by the results of this study. The paper concludes by recommending that Nigeria and other developing countries in the sub-tropics with an abundant stock of biomass waste should intensify the use of biomass waste as fuel and of the derived ash for the production of pozzolans for road-works, thereby reducing overall greenhouse gas emissions in compliance with the objectives of the United Nations Framework Convention on Climate Change.
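The cost-minimization step lends itself to a small numerical sketch. The snippet below is illustrative only: the fitted UCS surface and the unit prices are hypothetical stand-ins (the study fitted its strength models in Matlab and solved the problem with the GRG and Evolutionary engines of the MS Excel 2010 Solver), and a coarse grid search plays the role of the solver.

```python
# Hypothetical sketch of the blend optimization: minimize binder cost subject
# to a minimum-strength constraint. The UCS surface and unit prices below are
# invented for illustration; the study fitted its models in Matlab and solved
# with the MS Excel 2010 Solver (GRG and Evolutionary engines).

def ucs(cca, opc):
    """Hypothetical fitted UCS (kN/m^2) as a function of CCA and OPC content (%)."""
    return 250.0 + 180.0 * cca + 420.0 * opc - 15.0 * cca * opc

def cost(cca, opc, price_cca=10.0, price_opc=60.0):
    """Binder cost per unit of stabilized soil (hypothetical unit prices)."""
    return price_cca * cca + price_opc * opc

def optimal_blend(ratio, ucs_min=1800.0, step=0.1, max_total=7.5):
    """Cheapest total binder content (%) at a fixed CCA:OPC ratio meeting the
    minimum UCS specification; returns (cost, total, cca, opc) or None."""
    best = None
    for i in range(1, int(round(max_total / step)) + 1):
        total = i * step
        cca = total * ratio[0] / (ratio[0] + ratio[1])
        opc = total * ratio[1] / (ratio[0] + ratio[1])
        if ucs(cca, opc) >= ucs_min:
            c = cost(cca, opc)
            if best is None or c < best[0]:
                best = (c, total, cca, opc)
    return best
```

With these invented coefficients, the 1:2 blend first reaches 1800 kN/m² at a total binder content of 4.8%; the figures reported above (5.4% for 1:2, 6.2% for 1:1) come from the experimentally fitted models.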

Keywords: corn cob ash, biomass waste, lateritic soil, unconfined compression strength, CO2 emission

Procedia PDF Downloads 357
44 Effect of Ease of Doing Business to Economic Growth among Selected Countries in Asia

Authors: Teodorica G. Ani

Abstract:

Economic activity requires an encouraging regulatory environment and effective rules that are transparent and accessible to all. The World Bank has been publishing the annual Doing Business reports since 2004 to investigate the scope and manner of regulations that enhance business activity and those that constrain it. A streamlined business environment supporting the development of competitive small and medium enterprises (SMEs) may expand employment opportunities and improve the living conditions of low-income households. Asia has emerged as one of the most attractive markets in the world. Economies in East Asia and the Pacific were among the most active in making it easier for local firms to do business. The study aimed to describe the ease of doing business and its effect on economic growth among selected economies in Asia for the year 2014. The study covered 29 economies in East Asia, Southeast Asia, South Asia and Middle Asia. Ease of doing business is measured by the Doing Business indicators (DBI) of the World Bank. The indicators cover ten aspects of the ease of doing business: starting a business, dealing with construction permits, getting electricity, registering property, getting credit, protecting investors, paying taxes, trading across borders, enforcing contracts and resolving insolvency. In the study, Gross Domestic Product (GDP) was used as the proxy variable for economic growth. Descriptive research was the research design used. Graphical analysis was used to describe income and doing business among the selected economies. In addition, multiple regression was used to determine the effect of doing business on economic growth. The study presented the income among the selected economies. The graph showed that China has the highest income while Maldives has the lowest, an observation supported by the gathered literature. The study also presented the status of the ten indicators of doing business among the selected economies.
The graphs showed varying trends in how easy it is to start a business, deal with construction permits and register property. Starting a business is easiest in Singapore, followed by Hong Kong. The study found that the variations in ease of doing business are explained by starting a business, dealing with construction permits and registering property. Moreover, the regression result implies that a one-day increase in the average number of days it takes to complete a procedure will, in general, decrease the value of GDP. The research proposed inputs to policy which may increase the awareness of the local government units of different economies regarding the simplification of the policies of the different components used in measuring doing business.
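The regression step described above can be sketched in a few lines of ordinary least squares. All figures below are invented stand-ins; the study used the World Bank's 2014 Doing Business indicators for 29 Asian economies.

```python
import numpy as np

# Illustrative sketch of the regression step: GDP regressed on the number of
# days needed to start a business, deal with construction permits and register
# property. All values are fabricated for illustration.
days = np.array([
    [3.0, 26.0, 4.5],     # e.g. a Singapore-like profile
    [30.0, 120.0, 35.0],
    [12.0, 80.0, 20.0],
    [25.0, 150.0, 40.0],
    [6.0, 40.0, 9.0],
    [18.0, 100.0, 28.0],
])
gdp = np.array([400.0, 150.0, 250.0, 120.0, 350.0, 200.0])  # hypothetical

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(len(days)), days])
coef, *_ = np.linalg.lstsq(X, gdp, rcond=None)
# A negative slope on a "days" regressor means a one-day increase in the time
# to complete that procedure is associated with lower GDP.
```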

Keywords: doing business, economic growth, gross domestic product, Asia

Procedia PDF Downloads 357
43 Determinants of Budget Performance in an Oil-Based Economy

Authors: Adeola Adenikinju, Olusanya E. Olubusoye, Lateef O. Akinpelu, Dilinna L. Nwobi

Abstract:

Since the enactment of the Fiscal Responsibility Act (2007), the Federal Government of Nigeria (FGN) has made public its fiscal budget and the subsequent implementation report. A critical review of these documents shows significant variations in the five macroeconomic variables which are inputs into each Presidential budget: oil production target (mbpd), oil price ($), foreign exchange rate (N/$), Gross Domestic Product growth rate (%) and inflation rate (%). This results in underperformance of the Federal budget's expected output in terms of non-oil and oil revenue aggregates. This paper evaluates first the existing variance between budgeted and actual figures, then the relationship and causality between the determinants of the Federal fiscal budget assumptions, and finally the determinants of the FGN's gross oil revenue. The paper employed descriptive statistics, the autoregressive distributed lag (ARDL) model, and a profit-oil probabilistic model to achieve these objectives. The ARDL model permits both static and dynamic effects of the independent variable(s) on the dependent variable, unlike a static model that accounts for static or fixed effects only. It offers a technique for checking the existence of a long-run relationship between variables, unlike other tests of cointegration, such as the Engle-Granger and Johansen tests, which consider only non-stationary series that are integrated of the same order. Finally, even with a small sample size, the ARDL model is known to generate a valid result, regardless of whether the dependent and explanatory variables are integrated of order zero or one. The results showed that there is a long-run relationship between oil revenue, as a proxy for budget performance, and its determinants: oil price, produced oil quantity, and foreign exchange rate. There is a short-run relationship between oil revenue and its determinants: oil price, produced oil quantity, and foreign exchange rate.
There is a long-run relationship between non-oil revenue and its determinants: inflation rate, GDP growth rate, and foreign exchange rate. The Granger causality test results show that there is a unidirectional causality between oil revenue and its determinants. The Federal budget assumptions explain only 68% of oil revenue and 62% of non-oil revenue. There is a unidirectional causality between non-oil revenue and its determinants. The profit-oil model identifies production sharing contracts, joint ventures, and modified carry arrangements as the greatest contributors to the FGN's gross oil revenue. This provides empirical justification for the selected macroeconomic variables used in Federal budget design and performance evaluation. The research recommends that other variables, debt and money supply, be included in the Federal budget design to further explain Federal budget revenue performance.
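As a hedged illustration of the ARDL idea (not the paper's actual estimation, which involves several regressors, lag selection and bounds testing), a minimal ARDL(1, 0)-type regression on synthetic data can be fitted directly by least squares, and the long-run multiplier recovered from the short-run coefficients:

```python
import numpy as np

# Minimal ARDL(1, 0) sketch on synthetic, noise-free data. The regressor x
# stands in for a single determinant (e.g. oil price); the study's model used
# oil price, produced oil quantity and the exchange rate together.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 2.0 * x[t]   # true short-run dynamics

# Regress y_t on a constant, y_{t-1} and x_t.
Y = y[1:]
X = np.column_stack([np.ones(n - 1), y[:-1], x[1:]])
b, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Long-run multiplier implied by the ARDL estimates: beta_x / (1 - phi).
long_run = b[2] / (1.0 - b[1])
```

With the noise-free data the estimates recover the true dynamics (phi = 0.5, beta = 2.0), giving a long-run multiplier of 4; real series would of course add noise and require formal lag selection and bounds testing.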

Keywords: ARDL, budget performance, oil price, oil quantity, oil revenue

Procedia PDF Downloads 143
42 Determination of Friction and Damping Coefficients of Folded Cover Mechanism Deployed by Torsion Springs

Authors: I. Yilmaz, O. Taga, F. Kosar, O. Keles

Abstract:

In this study, the friction and damping coefficients of a folded cover mechanism were obtained from experimental studies and data. Friction and damping coefficients are the most important inputs for a mechanism analysis: friction and damping are the two factors that change the deployment time of a mechanism and its dynamic behavior. Though recommended friction coefficient values exist in the literature, damping differs from one mechanical system to another, so the damping coefficient should be obtained from mechanism test outputs. The folded cover mechanism in this study uses torsion springs to deploy covers that are initially in the closed, folded position. The torsion springs give the folded covers a desirable deployment time under variable environmental conditions. Verifying every design revision with system tests would be very costly, so some decisions are taken on the basis of numerical methods. In this study, there are two folded covers required to deploy simultaneously. Scotch-yoke and crank-rod mechanisms were combined to deploy the folded covers simultaneously. The mechanism was unlocked by a pyrotechnic bolt on the scotch-yoke disc. When the pyrotechnic bolt was fired, the torsion springs provided the rotational movement for the mechanism. A high-speed camera recorded the dynamic behavior of the system during deployment. The mechanism was modeled as a rigid body in Adams MBD (multi-body dynamics), and the torque values provided by the torsion springs were used as an input. A well-advised range of friction and damping coefficients was defined in Adams DOE (design of experiments), and a large number of analyses were performed until the deployment time of the folded covers matched the test data observed in the high-speed camera recordings; thus the deployment time of the mechanism and its dynamic behavior were obtained. The same mechanism was tested with different torsion springs and torque values, and the outputs were compared with the numerical models.
According to this comparison, it was understood that the friction and damping coefficients obtained in this study can be used safely when studying folded objects required to deploy simultaneously. In addition to the rigid-body model generated with Adams, a finite element model of the folded mechanism was generated with Abaqus, and the outputs of the rigid-body model and the finite element model were compared. Finally, reasonable explanations were suggested for the differing outputs of these solution methods.
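A drastically simplified sketch of that tuning loop is shown below: a single-degree-of-freedom cover driven by a torsion spring, integrated explicitly, with the damping coefficient swept over a grid until the simulated deployment time matches a measured one. Every parameter value is invented; the study did this with a full Adams multibody model, an Adams DOE study and high-speed camera data.

```python
import math

def deployment_time(c, inertia=0.02, k=1.5, theta0=math.pi / 2,
                    mu=0.05, dt=1e-4, t_max=5.0):
    """Time (s) for the cover to rotate from theta0 (folded) to 0 (deployed)
    under a torsion spring of stiffness k, viscous damping c and a constant
    friction torque mu opposing the motion. Explicit Euler integration;
    all parameter values are hypothetical."""
    theta, omega, t = theta0, 0.0, 0.0
    while theta > 0.0 and t < t_max:
        sgn = (omega > 0) - (omega < 0)          # sign of the angular velocity
        torque = -k * theta - c * omega - mu * sgn
        omega += (torque / inertia) * dt
        theta += omega * dt
        t += dt
    return t

def calibrate_damping(measured_time, candidates):
    """Pick the damping coefficient whose simulated deployment time is
    closest to the measured one (the grid-search spirit of a DOE sweep)."""
    return min(candidates, key=lambda c: abs(deployment_time(c) - measured_time))
```

In practice the sweep would cover friction and damping jointly and compare full trajectories, not just a single deployment time.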

Keywords: damping, friction, pyrotechnic, scotch-yoke

Procedia PDF Downloads 287
41 Improving Predictions of Coastal Benthic Invertebrate Occurrence and Density Using a Multi-Scalar Approach

Authors: Stephanie Watson, Fabrice Stephenson, Conrad Pilditch, Carolyn Lundquist

Abstract:

Spatial data detailing both the distribution and density of functionally important marine species are needed to inform management decisions. Species distribution models (SDMs) have proven helpful in this regard; however, models often focus only on species occurrences derived from spatially expansive datasets and lack the resolution and detail required to inform regional management decisions. Boosted regression trees (BRT) were used to produce high-resolution SDMs (250 m) at two spatial scales predicting probability of occurrence, abundance (count per sample unit), density (count per km²) and uncertainty for seven coastal seafloor taxa that vary in habitat usage and distribution, to examine prediction differences and implications for coastal management. We investigated whether small-scale, regionally focussed models (82,000 km²) can provide improved predictions compared to data-rich national-scale models (4.2 million km²). We explored the variability in predictions across model type (occurrence vs abundance) and model scale to determine if specific taxa models or model types are more robust to geographical variability. National-scale occurrence models correlated well with broad-scale environmental predictors, resulting in higher AUC (area under the receiver operating characteristic curve) and deviance explained scores; however, they tended to overpredict in the coastal environment and lacked spatially differentiated detail for some taxa. Regional models had lower overall performance, but for some taxa, spatial predictions were more differentiated at a localised ecological scale. National density models were often spatially refined and highlighted areas of ecological relevance, producing more useful outputs than regional-scale models. The utility of a two-scale approach aids the selection of the most optimal combination of models to create a spatially informative density model, as results contrasted for specific taxa between model type and scale.
However, it is vital that robust predictions of occurrence and abundance are generated as inputs for the combined density model as areas that do not spatially align between models can be discarded. This study demonstrates the variability in SDM outputs created over different geographical scales and highlights implications and opportunities for managers utilising these tools for regional conservation, particularly in data-limited environments.
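The combination step described above can be sketched as a simple hurdle-style product of the two model outputs, with spatially non-aligned cells masked out. The toy 2x3 "maps" below are fabricated; the study's actual predictions came from boosted regression trees at 250 m resolution.

```python
import numpy as np

# Sketch of the two-stage combination: predicted density = probability of
# occurrence x conditional abundance, with cells discarded (NaN) where the
# occurrence model falls below a threshold, mirroring the caveat that areas
# which do not spatially align between models are dropped. Toy values only.
p_occ = np.array([[0.9, 0.2, 0.7],
                  [0.1, 0.8, 0.05]])      # occurrence probability per cell
abundance = np.array([[120.0, 40.0, 90.0],
                      [10.0, 150.0, 5.0]])  # conditional abundance per cell

threshold = 0.5
density = np.where(p_occ >= threshold, p_occ * abundance, np.nan)
```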

Keywords: benthic ecology, spatial modelling, multi-scalar modelling, marine conservation

Procedia PDF Downloads 42
40 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues

Authors: Barna Arnold Keserű

Abstract:

In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science as well. What was previously mere science fiction is now starting to become reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The author's main research focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), in order to make recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g. autonomous vehicles, the replacement of human jobs in industry, and smart applications and machines. It aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, and the document calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content that may be worth copyright protection, but the question arises: who is the author and the owner of the copyright?
The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking that sells the AI as a product, and the user who gives the inputs to the AI in order to create something new. Or perhaps AI-generated content is so far removed from humans that there is no human author at all, so this content belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to illuminate future possibilities for adapting these frames to socio-economic needs. Here, proper license agreements in the multilevel chain from the programmer to the end-user become very important, because the AI is intellectual property in itself that creates further intellectual property. This could collide with data-protection and property rules as well. The problems are similar in the field of liability. Different existing forms of liability can be applied when AI or AI-led robotics cause damage, but it is unclear whether the result complies with economic and developmental interests.

Keywords: artificial intelligence, intellectual property, liability, robotics

Procedia PDF Downloads 177
39 Analysis of the Savings Behaviour of Rice Farmers in Tiaong, Quezon, Philippines

Authors: Angelika Kris D. Dalangin, Cesar B. Quicoy

Abstract:

Rice farming is a major source of livelihood and employment in the Philippines, but it requires a substantial amount of capital. Capital may come from income (farm, non-farm, and off-farm), savings, and credit. However, rice farmers suffer from a lack of capital due to high input costs and low productivity. Capital insufficiency, coupled with low productivity, hinders them from meeting their basic household and production needs. Hence, they resort to borrowing money, mostly from informal lenders who charge very high interest rates. As another source of capital, savings can help rice farmers meet the basic needs of both the household and the farm. However, information is inadequate on whether the farmers save or not, as well as on why they do not depend on savings to augment their lack of capital. Thus, it is worth analyzing how rice farmers save. Using actual savings, defined as the difference between household income and expenditure, the study revealed that about three-fourths (72%) of the farmers interviewed are savers. However, when asked whether they are savers or not, more than half of them considered themselves non-savers. This gap shows that many farmers think they have no savings at all; hence they continue to borrow money and do not depend on savings to augment their lack of capital. The study also identified the forms of savings, saving motives, and savings utilization among rice farmers. Results revealed that, for the past 12 months, most of the farmers saved cash at home for liquidity purposes, while others deposited cash in banks and/or saved their money in the form of livestock. Among the farmers' most important reasons for saving are daily household expenses, building a house, emergencies, retirement, and the next production cycle. Furthermore, the study assessed the factors affecting the rice farmers' savings behaviour using logistic regression.
Results showed that the significant factors were the presence of non-farm income, per capita net farm income, and per capita household expense. The presence of non-farm income and per capita net farm income positively affect the farmers' savings behaviour, while per capita household expenses have a negative effect. The effect of per capita net farm income and of household expenses is, however, very small, producing only a slight change in the probability that a farmer is a saver. Generally, income and expenditure proved to be significant factors affecting the savings behaviour of the rice farmers. However, most farmers could not save regularly due to low farm income and high household and farm expenditures. Thus, it is highly recommended that the government develop programs or implement policies that will create more jobs for the farmers and their family members. In addition, programs and policies should be implemented to increase farm productivity and income.
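A compact sketch of such a binary logit, fitted by Newton-Raphson (iteratively reweighted least squares) on fabricated data, is given below. The variable names mirror the significant factors reported, but every number is invented.

```python
import numpy as np

# Toy binary logit: saver (1) vs non-saver (0) modelled on the presence of
# non-farm income, standardized per-capita net farm income and standardized
# per-capita household expense. Data and true coefficients are fabricated.
rng = np.random.default_rng(1)
n = 500
has_nonfarm = rng.integers(0, 2, n).astype(float)
farm_income = rng.normal(0.0, 1.0, n)
hh_expense = rng.normal(0.0, 1.0, n)
eta = -0.2 + 1.0 * has_nonfarm + 0.6 * farm_income - 0.8 * hh_expense
saver = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

# Newton-Raphson / IRLS for the logit maximum-likelihood estimates.
X = np.column_stack([np.ones(n), has_nonfarm, farm_income, hh_expense])
beta = np.zeros(4)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    W = mu * (1.0 - mu)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (saver - mu))
# Expected signs: positive on non-farm income and farm income,
# negative on household expense, matching the study's findings.
```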

Keywords: agricultural economics, agricultural finance, binary logistic regression, logit, Philippines, Quezon, rice farmers, savings, savings behaviour

Procedia PDF Downloads 204
38 Smart Irrigation System for Applied Irrigation Management in Tomato Seedling Production

Authors: Catariny C. Aleman, Flavio B. Campos, Matheus A. Caliman, Everardo C. Mantovani

Abstract:

The seedling production stage is a critical point in the vegetable production system: obtaining high-quality seedlings is a prerequisite for subsequent cropping to go well, and productivity optimization is required. Water management is an important step in agricultural production. Meeting the water requirement of horticultural seedlings can provide higher quality and increase field production. The practice of irrigation is indispensable and requires a properly adjusted irrigation system, together with a specific water management plan, to meet the water demand of the crop. Irrigation management in seedling production requires a great deal of specific information, especially when it involves the use of inputs such as water-retaining polymers (hydrogels) and automation technologies for data acquisition and the irrigation system. The experiment was conducted in a greenhouse at the Federal University of Viçosa, Viçosa - MG. Tomato seedlings (Lycopersicon esculentum Mill) were produced in plastic trays of 128 cells, suspended at 1.25 m from the ground. The seedlings were irrigated by 4 fixed-jet 360º micro-sprinklers per tray, duly isolated by sideboards, following the methodology developed for this work. During Phase 1, in January/February 2017 (duration of 24 days), the crop coefficient (Kc) of seedlings grown in the presence and absence of hydrogel was evaluated by weighing lysimeter. In Phase 2, September 2017 (duration of 25 days), the seedlings were submitted to 4 irrigation managements (Kc, timer, 0.50 ETo, and 1.00 ETo), in the presence and absence of hydrogel, and then evaluated with respect to quality parameters.
The microclimate inside the greenhouse was monitored with air temperature, relative humidity, and global radiation sensors connected to a microcontroller that performed hourly calculations of reference evapotranspiration by the FAO-56 Penman-Monteith standard method, modified for the long-wave balance according to Walker, Aldrich and Short (1983), and carried out the water balance and irrigation decision-making for each experimental treatment. The Kc of seedlings grown on a substrate with hydrogel (1.55) was higher than the Kc on a pure substrate (1.39). The use of the hydrogel was a differential for the production of earlier tomato seedlings, with greater final height, larger collar diameter, greater shoot dry mass accumulation, larger crown projection area, and a greater relative growth rate. The 1.00 ETo management promoted the highest relative growth rate.
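The hourly decision loop described above can be sketched as a simple Kc-based water balance: each hour the controller accumulates crop evapotranspiration ETc = Kc x ETo and triggers irrigation once the deficit reaches a set depth. The ETo series and the trigger depth below are illustrative; only the two Kc values come from the study, which computed hourly ETo with the FAO-56 Penman-Monteith method from greenhouse sensor data.

```python
# Kc values measured in the study; everything else is a hypothetical sketch.
KC_HYDROGEL = 1.55   # crop coefficient on substrate with hydrogel
KC_PLAIN = 1.39      # crop coefficient on pure substrate

def irrigation_events(eto_hourly, kc, trigger_mm=2.0):
    """Return the hour indices at which irrigation is triggered.
    eto_hourly: hourly reference evapotranspiration (mm); kc: crop coefficient;
    trigger_mm: accumulated-deficit depth (mm) that triggers an irrigation."""
    deficit, events = 0.0, []
    for hour, eto in enumerate(eto_hourly):
        deficit += kc * eto                 # ETc accumulates the deficit (mm)
        if deficit >= trigger_mm:
            events.append(hour)
            deficit = 0.0                   # deficit replenished by irrigation
    return events
```

With a constant (made-up) ETo of 0.25 mm/h and the hydrogel Kc, this loop triggers an irrigation every six hours.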

Keywords: automatic system, efficiency of water use, precision irrigation, micro sprinkler

Procedia PDF Downloads 91
37 Fire Risk Information Harmonization for Transboundary Fire Events between Portugal and Spain

Authors: Domingos Viegas, Miguel Almeida, Carmen Rocha, Ilda Novo, Yolanda Luna

Abstract:

Forest fires along the more than 1200 km of the Spanish-Portuguese border are increasingly frequent, currently reaching around 2000 fire events per year. Some of these events develop into large international wildfires requiring concerted operations based on information shared between the two countries. The fire event of Valencia de Alcantara (2003), which caused several fatalities and burnt more than 13000 ha, is a reference example of these international events. Currently, Portugal and Spain have a specific cross-border cooperation protocol on wildfire response for a strip of about 30 km (15 km on each side). Public authorities recognize the success of this collaboration; however, it is also accepted that this cooperation should include more functionalities, such as the development of a common risk information system for transboundary fire events. Since the Portuguese and Spanish authorities use different approaches to determine the inputs of the fire risk indexes and different methodologies to assess the fire risk, joint firefighting operations are sometimes jeopardized because the information is not harmonized and the civil protection agents from the two countries do not share a single understanding of the situation. Thus, a methodology aiming at the harmonization of fire risk calculation and perception by the Portuguese and Spanish civil protection authorities is presented here, together with the final results. The fire risk index used in this work is the Canadian Fire Weather Index (FWI), which is based on meteorological data. The FWI is limited in its application, as it does not take into account other important factors with a great effect on fire occurrence and development. The combination of these factors is very complex since, besides meteorology, it addresses several parameters from different topics, namely sociology, topography, vegetation and soil cover.
Therefore, the meaning of FWI values differs from region to region, according to the specific characteristics of each region. In this work, a methodology for FWI calibration based on the number of fire occurrences and on the burnt area in the transboundary regions of Portugal and Spain is proposed, in order to assess the fire risk based on calibrated FWI values. As previously mentioned, cooperative firefighting operations require a common perception of the information shared. Therefore, a common classification of the fire risk for fire events occurring in the transboundary strip is proposed, with the objective of harmonizing this type of information. This work is part of the ECHO project SpitFire - Spanish-Portuguese Meteorological Information System for Transboundary Operations in Forest Fires, which aims to develop a web platform for sharing information and decision-support tools to be used in international fire events involving Portugal and Spain.
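One common way to express such a calibration, sketched here under assumed details (the paper does not give its exact formula), is to derive region-specific class boundaries from the FWI distribution observed on historical fire days, so that a "high" class means the same thing on both sides of the border:

```python
import numpy as np

# Sketch of a percentile-based FWI calibration: class boundaries are taken
# from the FWI values recorded on historical fire days in a region, rather
# than from fixed universal thresholds. The percentile choices and the FWI
# sample in the test are illustrative assumptions, not the paper's values.

def calibrated_thresholds(fwi_on_fire_days, percentiles=(50, 75, 90)):
    """Class boundaries (low/moderate/high/very high) from fire-day FWI."""
    return np.percentile(np.asarray(fwi_on_fire_days, dtype=float), percentiles)

def risk_class(fwi, thresholds):
    """0 = low ... 3 = very high, relative to the calibrated boundaries."""
    return int(np.searchsorted(thresholds, fwi, side="right"))
```

Computing the thresholds separately for the Portuguese and Spanish sides of the strip, from their own fire histories, gives both services a harmonized class scale even though the raw FWI values behave differently in each region.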

Keywords: data harmonization, FWI, international collaboration, transboundary wildfires

Procedia PDF Downloads 226
36 An Artificially Intelligent Teaching-Agent to Enhance Learning Interactions in Virtual Settings

Authors: Abdulwakeel B. Raji

Abstract:

This paper introduces the concept of an intelligent virtual learning environment that involves communication between learners and an artificially intelligent teaching agent in an attempt to replicate classroom learning interactions. The benefit of this technology over current e-learning practices is that it creates a virtual classroom where real-time adaptive learning interactions are made possible. This is a move away from the static learning practices currently adopted by e-learning systems. Over the years, artificial intelligence has been applied to various fields, including but not limited to medicine, military applications, psychology and marketing. The purpose of e-learning applications is to enable users to learn outside the classroom, but a major limitation has been the inability to fully replicate classroom interactions between teacher and students. This study used comparative surveys to gain information about and understanding of the current learning practices in Nigerian universities and how these practices compare to the use of a developed e-learning system. The study was conducted by attending several lectures and noting the interactions between lecturers and students; as a result, software has been developed that deploys an artificially intelligent teaching agent alongside an e-learning system to enhance the user learning experience and attempt to create learning interactions similar to those found in classroom and lecture hall settings. Dialogflow has been used to implement the teaching agent, developed using JSON, which serves as a virtual teacher. Course content has been created using HTML, CSS, PHP and JavaScript as a web-based application. This technology can run on handheld devices and Google-based home devices to give learners access to the teaching agent at any time.
This technology also implements definite clause grammars and natural language processing to match user inputs and requests against defined rules to replicate learning interactions. The technology covers familiar classroom scenarios such as answering users' questions, asking 'do you understand?' at regular intervals and handling the subsequent responses, and taking advanced user queries in order to give feedback at other points. The software uses deep learning techniques to learn user interactions and patterns in order to subsequently enhance the user learning experience. System testing was carried out with undergraduate students in the UK and Nigeria on the course 'Introduction to Database Development'. Test results and feedback from users show that the developed software is a significant improvement on existing e-learning systems. Further experiments are to be run using the software with different students and more course content.
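The rule-matching layer described above can be illustrated with a toy intent matcher. The patterns, intent names and replies below are invented for illustration; the actual system used Dialogflow together with definite clause grammars rather than plain regular expressions.

```python
import re

# Toy sketch of matching user inputs against defined rules to pick a
# classroom-style intent. All patterns and intent names are hypothetical.
RULES = [
    (re.compile(r"\bwhat (is|are)\b", re.I), "definition"),
    (re.compile(r"\b(don'?t|do not) understand\b", re.I), "re_explain"),
    (re.compile(r"\b(example|show me)\b", re.I), "give_example"),
]

def match_intent(utterance):
    """Return the first matching intent name, or 'fallback' if none match."""
    for pattern, intent in RULES:
        if pattern.search(utterance):
            return intent
    return "fallback"
```

A real agent would attach a response generator to each intent (e.g. re-explaining the last topic for "re_explain") and fall back to the NLP layer when no rule fires.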

Keywords: virtual learning, natural language processing, definite clause grammars, deep learning, artificial intelligence

Procedia PDF Downloads 112
35 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until its closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada's largest contaminated sites. Since the closure, there have been several attempts to remediate this former industrial site, and finally, in 2004, the governments of Canada and Nova Scotia committed to remediating the site to reduce potential ecological and human health risks to the environment. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multiple-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment and, in particular, harbour sediments. Additionally, the longer-term natural sediment recovery rates of select contaminants predicted for the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality in addition to remedial efforts were evaluated, including a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, sediment grab and gravity corer, were also compared to evaluate the detection of subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remains unchanged, although at much lower concentrations than previously reported, due to natural recovery.
Measurements of sediment indicator parameter concentrations confirmed that natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, most measured parameters in sediments showed little temporal variability during three years of remediation compared to baseline, even when different sampling methodologies were used, except for significant increases in total PAH concentrations detected during one year of remediation monitoring. The data confirmed the effectiveness of mitigation measures implemented during construction relative to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour.

Keywords: contaminated sediment, monitoring, recovery, remediation

Procedia PDF Downloads 212
34 Option Pricing Theory Applied to the Service Sector

Authors: Luke Miller

Abstract:

This paper develops an options pricing methodology to value strategic pricing strategies in the services sector. More specifically, this study provides a unifying taxonomy of current service sector pricing practices, frames these pricing decisions as strategic real options, demonstrates accepted option valuation techniques to assess service sector pricing decisions, and suggests future research areas where pricing decisions and real options overlap. Enhancing revenue in the service sector requires proactive decision making in a world of uncertainty. In an effort to strategically price service products, revenue enhancement necessitates a careful study of the service costs, customer base, competition, legalities, and shared economies with the market. Pricing decisions involve the quality of inputs, manpower, and best practices to maintain superior service. These decisions further hinge on identifying relevant pricing strategies and understanding how these strategies impact a firm’s value. A relatively new area of research applies option pricing theory to investments in real assets and is commonly known as real options. The real options approach is based on the premise that many corporate decisions to invest or divest in assets are simply an option wherein the firm has the right to make an investment without any obligation to act. The decision maker, therefore, has more flexibility and the value of this operating flexibility should be taken into consideration. The real options framework has already been applied to numerous areas including manufacturing, inventory, natural resources, research and development, strategic decisions, technology, and stock valuation. Additionally, numerous surveys have identified a growing need for the real options decision framework within all areas of corporate decision-making. Despite the wide applicability of real options, no study has been carried out linking service sector pricing decisions and real options. 
This is surprising given that the service sector comprises 80% of US employment and Gross Domestic Product (GDP). Identifying real options as a practical tool to value different service sector pricing strategies is believed to have a significant impact on firm decisions. This paper identifies and discusses four distinct pricing strategies available to the service sector from an options perspective: (1) Cost-based profit margin, (2) Increased customer base, (3) Platform pricing, and (4) Buffet pricing. Within each strategy lie several pricing tactics available to the service firm. These tactics can be viewed as options the decision maker holds to best manage a strategic position in the market. To demonstrate the effectiveness of including flexibility in the pricing decision, a series of pricing strategies was developed and valued using a real options binomial lattice structure. The options pricing approach discussed in this study allows service firms to directly incorporate market-driven perspectives into the decision process, thereby synchronizing service operations with organizational economic goals.
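The binomial lattice valuation mentioned above can be sketched in a few lines. This is an illustrative toy, not the study's model: the underlying value, strike, up/down factors, discount rate, and step count below are all assumed placeholders.

```python
# Hypothetical sketch: valuing a call-like real option (e.g. the right to adopt a
# pricing strategy) with a recombining binomial lattice. All parameters are
# illustrative assumptions, not figures from the study.

def binomial_option_value(s0, strike, up, down, rate, steps):
    """Risk-neutral backward induction on a recombining binomial lattice."""
    q = (1 + rate - down) / (up - down)  # risk-neutral up-move probability
    # Terminal payoffs of the option to exercise (call-like payoff).
    values = [max(s0 * up**j * down**(steps - j) - strike, 0.0)
              for j in range(steps + 1)]
    # Step back through the lattice, discounting expected values.
    for _ in range(steps):
        values = [(q * values[j + 1] + (1 - q) * values[j]) / (1 + rate)
                  for j in range(len(values) - 1)]
    return values[0]

value = binomial_option_value(s0=100.0, strike=100.0, up=1.2, down=0.9,
                              rate=0.05, steps=3)
```

Because the lattice keeps intermediate nodes, the same backward induction extends naturally to options that can be exercised early, which is what makes it attractive for staged pricing decisions.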

Keywords: option pricing theory, real options, service sector, valuation

Procedia PDF Downloads 329
33 Characterization of Phenolic Compounds from Carménère Wines during Aging with Oak Wood (Staves, Chips and Barrels)

Authors: E. Obreque-Slier, J. Laqui-Estaña, A. Peña-Neira, M. Medel-Marabolí

Abstract:

Wine is an important source of polyphenols. Red wines show significant concentrations of nonflavonoid (gallic acid, ellagic acid, caffeic acid and coumaric acid) and flavonoid compounds [(+)-catechin, (-)-epicatechin, (+)-gallocatechin and (-)-epigallocatechin]. However, significant variability in the quantitative and qualitative distribution of chemical constituents in wine is to be expected, depending on an array of important factors such as the varietal differences of Vitis vinifera and cultural practices. It has been observed that Carménère grapes present a differential composition and evolution of phenolic compounds when compared to other varieties, specifically Cabernet Sauvignon grapes. Likewise, among the cultural practices, aging in contact with oak wood is a highly relevant factor. The extraction of different polyphenolic compounds from oak wood into wine during its aging process produces both qualitative and quantitative changes. Recently, many new techniques have been introduced in winemaking. One of these involves putting new pieces of wood (oak chips or inner staves) into inert containers. It offers distinct and previously unavailable flavour advantages, as well as new options in wine handling. To the best of our knowledge, there is no information about the behaviour of Carménère wines (the emblematic Chilean cultivar) in contact with oak wood. In addition, the effect of aging time and wood product (barrels, chips or staves) on the phenolic composition of Carménère wines has not been studied. This study aims at characterizing the condensed and hydrolyzable tannins of Carménère wines during aging with staves, chips and barrels of French oak wood. The experimental design was completely randomized with two independent factors: aging time (0-12 months) and different formats of wood (barrel, chips and staves).
The wines were characterized by spectrophotometric (total tannins and fractionation of proanthocyanidins into monomers, oligomers and polymers) and HPLC-DAD (ellagitannins) analyses. The wines in contact with the different oak wood products showed a similar content of total tannins during the study, while the control wine (without oak wood) presented a lower content of these compounds. In addition, it was observed that the polymeric proanthocyanidin fraction was the most abundant, while the monomeric fraction was the least abundant, in all treatments in two sample dates. However, significant differences in each fraction were observed between wines in contact with barrel, chips, and staves in two sample dates. Finally, the wine from barrels presented the highest content of ellagitannins from the fourth to the last sample date. In conclusion, the use of alternative formats of oak wood affects the chemical composition of wines during aging, and these enological products are an interesting alternative for contributing tannins to wine.

Keywords: enological inputs, oak wood aging, polyphenols, red wine

Procedia PDF Downloads 140
32 Self-Supervised Learning for Hate-Speech Identification

Authors: Shrabani Ghosh

Abstract:

Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious work, so automatic methods based on machine learning are the only alternatives. Previous works have performed sentiment analysis over social media in different ways, such as in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised way has also been explored in NLP, where the source domain and the target domain are different. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers such as BERT and RoBERTa are further pre-trained on the masked language modeling (MLM) task in an unsupervised manner and then fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free speech platform described as hosting extremists of varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain, and Gab data is used as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL have been used to estimate domain similarity. Certainly, in-domain distances are small, while between-domain distances are expected to be large. Previous findings show that a pretrained masked language model (MLM) fine-tuned with a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while its out-of-domain accuracy on Gab data drops to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is more applicable when labeled data are scarce.
A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits the extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier, as well as with optimized outcomes obtained from different optimization techniques.
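Two of the domain-similarity measures named above, cosine distance and MMD, can be illustrated on toy feature vectors. This is a hedged sketch, not the authors' pipeline: the two-dimensional "embeddings" below are invented stand-ins for transformer post representations.

```python
# Illustrative sketch (not the authors' code): cross-domain similarity via cosine
# distance between mean feature vectors and a linear-kernel squared MMD.
import math

def mean_vec(X):
    n, d = len(X), len(X[0])
    return [sum(x[i] for x in X) / n for i in range(d)]

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def linear_mmd(X, Y):
    """Squared MMD with a linear kernel reduces to ||mean(X) - mean(Y)||^2."""
    mx, my = mean_vec(X), mean_vec(Y)
    return sum((a - b) ** 2 for a, b in zip(mx, my))

source = [[1.0, 0.0], [0.9, 0.1]]    # toy stand-in for Twitter embeddings
target = [[0.0, 1.0], [0.1, 0.9]]    # toy stand-in for Gab embeddings
same   = [[1.0, 0.0], [0.95, 0.05]]  # toy in-domain sample
```

As expected, both measures report a small in-domain distance (`source` vs `same`) and a large between-domain distance (`source` vs `target`).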

Keywords: attention learning, language model, offensive language detection, self-supervised learning

Procedia PDF Downloads 83
31 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses

Authors: André Jesus, Yanjie Zhu, Irwanda Laory

Abstract:

Structural health monitoring is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. Namely, the widespread application of numerical models (model-based monitoring) is accompanied by a widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and discrepancy function are approximated by Gaussian processes (surrogate model). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (modular Bayesian approach). The proposed methodology has been successfully applied in fields such as geoscience, biomedicine and particle physics, but never in the SHM context. This approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, this formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure, subjected to thermal expansion due to infrared heaters.
A comparison of its performance with responses measured at different points of the structure and the associated degrees of identifiability is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
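The core idea of calibrating a model parameter jointly with a discrepancy function can be shown with a heavily simplified sketch. This is not the paper's modular Bayesian code: a noise-free linear model and a brute-force grid search stand in for the FEM model and the Gaussian-process machinery, and all values are hypothetical.

```python
# Minimal illustrative sketch: jointly estimating a model parameter (theta) and a
# systematic discrepancy term (delta) from observations. A constant discrepancy
# and a grid search stand in for the GP surrogates of the modular Bayesian approach.
def model(theta, x):
    return theta * x                     # simplified stand-in for the numerical model

xs = [1.0, 2.0, 3.0, 4.0]
theta_true, delta_true = 2.0, 0.5        # assumed "truth" for the demonstration
obs = [model(theta_true, x) + delta_true for x in xs]   # noise-free for clarity

def fit(xs, obs):
    best = None
    for t10 in range(0, 50):             # theta grid: 0.0 .. 4.9
        for d10 in range(-20, 21):       # delta grid: -2.0 .. 2.0
            theta, delta = t10 / 10, d10 / 10
            sse = sum((o - model(theta, x) - delta) ** 2 for x, o in zip(xs, obs))
            if best is None or sse < best[0]:
                best = (sse, theta, delta)
    return best[1], best[2]

theta_hat, delta_hat = fit(xs, obs)
```

The point of separating `theta` from `delta` is that, without the discrepancy term, the calibrated parameter would absorb the model/experiment mismatch and be biased.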

Keywords: bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process

Procedia PDF Downloads 305
30 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant which operated in Sydney for nearly a century. The contaminants comprised coal tar residues which were discharged from coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently discharged into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; therefore, immobilizing contaminants in the STPs using solidification and stabilization was identified as a primary source control remediation option to mitigate against continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on understanding “mobile” vs. “immobile” contaminants at remediation sites. Forensic source evaluations are also increasingly used for understanding the origins of PAH contaminants in soils or sediments. Flux- and forensic-informed remediation decision-making uses this information to develop remediation end point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included reviews of previous flux studies, calculation of current mass flux estimates and a forensic assessment using PAH fingerprint techniques, during remediation of one of Canada’s most polluted sites at the STPs. Historically, the STPs were thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring, only 17-97 kg/year of PAHs were discharged from the STPs, which was corroborated by an independent PAH flux study during the first year of remediation that estimated 119 kg/year.
The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2,000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also between three and eight times lower than the PAHs discharged from the STPs a decade prior to remediation, when, at the same time, government studies demonstrated an on-going reduction in PAH concentrations in harbour sediments. Flux results were also corroborated by forensic source evaluations using PAH fingerprint techniques, which found a common source of PAHs in urban soils and marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, and not the migration of PAH-laden sediments from the STPs during a large-scale remediation project.

Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation

Procedia PDF Downloads 216
29 Distribution and Ecological Risk Assessment of Trace Elements in Sediments along the Ganges River Estuary, India

Authors: Priyanka Mondal, Santosh K. Sarkar

Abstract:

The present study investigated the spatiotemporal distribution and ecological risk assessment of trace elements in surface sediments (top 0 - 5 cm; grain size ≤ 0.63 µm) in relation to sediment quality characteristics along the Ganges River Estuary, India. Sediment samples were collected during ebb tide from intertidal regions covering seven sampling sites of diverse environmental stresses. The elements were analyzed with the help of ICP-AES. This positive, mixohaline, macro-tidal estuary has global significance, contributing ecological and economic services. The presence of fine clayey particles (47.03%) enhances the adsorption as well as the transportation of trace elements. There is a remarkable inter-metallic variation (mg kg-1 dry weight) in the distribution pattern, in the following order: Al (31801 ± 15943) > Fe (23337 ± 7584) > Mn (461 ± 147) > S (381 ± 235) > Zn (54 ± 18) > V (43 ± 14) > Cr (39 ± 15) > As (34 ± 15) > Cu (27 ± 11) > Ni (24 ± 9) > Se (17 ± 8) > Co (11 ± 3) > Mo (10 ± 2) > Hg (0.02 ± 0.01). An overall trend of enrichment of the majority of trace elements was pronounced at the site Lot 8, ~35 km upstream of the estuarine mouth. In contrast, the minimum concentrations were recorded at the site Gangasagar, at the mouth of the estuary, with a high energy profile. The prevalent variations in trace element distribution are attributable to a set of cumulative factors such as hydrodynamic conditions, sediment dispersion patterns and textural variations, as well as non-homogenous input of contaminants from point and non-point sources. In order to gain insight into trace element distribution, accumulation and pollution status, the geoaccumulation index (Igeo) and enrichment factor (EF) were used. The Igeo indicated that surface sediments were moderately polluted with As (0.60) and Mo (1.30) and strongly contaminated with Se (4.0).
The EF indicated severe pollution by Se (53.82) and significant pollution by As (4.05) and Mo (6.0), and indicated the influx of As, Mo and Se into sediments from anthropogenic sources (such as industrial and municipal sewage, atmospheric deposition, agricultural run-off, etc.). The significant role of the megacity Calcutta, with its untreated sewage discharge, atmospheric inputs and other anthropogenic activities, is worth mentioning. The ecological risk of the different trace elements was evaluated using sediment quality guidelines, effects range low (ERL), and effects range median (ERM). The concentrations of As, Cu and Ni at 100%, 43% and 86% of the sampling sites exceeded the ERL value, while none of the element concentrations exceeded the ERM. The potential ecological risk index values revealed that As at 14.3% of the sampling sites would pose a relatively moderate risk to benthic organisms. The effective role of finer clay particles in trace element distribution was revealed by multivariate analysis. The authors strongly recommend regular monitoring, emphasizing accurate appraisal of the potential risk of trace elements, for effective and sustainable management of this estuarine environment.
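The two indices used above have standard closed forms: Igeo = log2(Cn / (1.5 Bn)) and EF = (Cn/Cref)sample / (Bn/Bref)background, where Cref is usually a conservative element such as Al or Fe. A minimal sketch with invented numbers (the background values below are placeholders, not the study's reference data):

```python
# Illustrative computation of the geoaccumulation index and enrichment factor.
# Sample and background concentrations here are hypothetical toy values.
import math

def igeo(c_sample, c_background):
    """Geoaccumulation index: Igeo = log2( Cn / (1.5 * Bn) )."""
    return math.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_sample, ref_sample, c_background, ref_background):
    """EF = (Cn / Cref)_sample / (Bn / Bref)_background; Cref typically Al or Fe."""
    return (c_sample / ref_sample) / (c_background / ref_background)

# Toy element at 12 mg/kg vs an assumed 2 mg/kg background, normalized to Al.
i_toy = igeo(12.0, 2.0)
ef_toy = enrichment_factor(12.0, 30000.0, 2.0, 80000.0)
```

With these placeholder inputs, Igeo = 2 (moderately to strongly polluted on the usual 0-6 scale) and EF = 16, illustrating how the reported class boundaries are applied.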

Keywords: pollution assessment, sediment contamination, sediment quality, trace elements

Procedia PDF Downloads 238
28 A New Perspective in Cervical Dystonia: Neurocognitive Impairment

Authors: Yesim Sucullu Karadag, Pinar Kurt, Sule Bilen, Nese Subutay Oztekin, Fikri Ak

Abstract:

Background: Primary cervical dystonia has been thought of as a purely motor disorder, but recent studies revealed that patients with dystonia have additional non-motor features. Sensory and psychiatric disturbances can be included in the non-motor spectrum of dystonia. The basal ganglia receive inputs from all cortical areas and, through the thalamus, project to several cortical areas, thus participating in circuits that have been linked to motor as well as sensory, emotional and cognitive functions. However, there are limited studies indicating cognitive impairment in patients with cervical dystonia, and more evidence is required regarding neurocognitive functioning in these patients. Objective: This study aimed to investigate the neurocognitive profile of cervical dystonia patients in comparison to healthy controls (HC) by employing a detailed set of neuropsychological tests in addition to self-reported instruments. Methods: In total, 29 (M/F: 7/22) cervical dystonia patients and 30 HC (M/F: 10/20) were included in the study. Exclusion criteria were depression and absence of informed consent. Standard demographic and educational data and clinical measures (disease duration, disability index) were recorded for all patients. After a careful neurological evaluation, all subjects were given a comprehensive battery of neuropsychological tests: self-report of neuropsychological condition (by visual analogue scale-VAS, 0-100), RAVLT, STROOP, PASAT, TMT, SDMT, JLOT, DST, COWAT, ACTT, and FST. Patients and HC were compared regarding demographic and clinical features and neurocognitive tests. Correlations between disease duration, disability index and self-report VAS were also assessed. Results: There was no difference between patients and HC regarding socio-demographic variables such as age, gender and years of education (p values were 0.36, 0.436 and 0.869, respectively).
All of the patients were assessed at the peak of botulinum toxin effect, and none was taking an anticholinergic agent or benzodiazepine. Dystonia patients had significantly impaired verbal learning and memory (RAVLT, p<0.001), divided attention and working memory (ACTT, p<0.001), attention speed (TMT-A and B, p=0.008, 0.050), executive functions (PASAT, p<0.001; SDMT, p=0.001; FST, p<0.001), verbal attention (DST, p=0.001), verbal fluency (COWAT, p<0.001) and visuo-spatial processing (JLOT, p<0.001) in comparison to healthy controls. However, focused attention (STROOP-spontaneous correction) was not different between the two groups (p>0.05). No relationship was found between disease duration or disability index and any neurocognitive test. Conclusions: Our study showed that the neurocognitive functions of dystonia patients were worse than those of a control group of similar age, sex and education, independently of clinical features such as disease duration and disability index. This may be the result of possible cortical and subcortical changes in dystonia patients. Advanced neuroimaging techniques might help explain these changes in cervical dystonia patients.

Keywords: cervical dystonia, neurocognitive impairment, neuropsychological test, dystonia disability index

Procedia PDF Downloads 387
27 Recycling Service Strategy by Considering Demand-Supply Interaction

Authors: Hui-Chieh Li

Abstract:

The circular economy promotes greater resource productivity and avoids pollution through greater recycling and re-use, which brings benefits for both the environment and the economy. The concept is in contrast to a linear economy, which follows the ‘take, make, dispose’ model of production. A well-designed reverse logistics service strategy can enhance users' willingness to recycle and reduce the related logistics cost as well as carbon emissions. Moreover, recycling brings manufacturers considerable advantages, as it targets components for closed-loop reuse, essentially converting materials and components from worn-out products into inputs for new ones at the right time and place. This study considers demand-supply interaction, time-dependent recycle demand and the time-dependent surplus value of recycled products, and constructs models of the recycle service strategy for the recyclable waste collector. A crucial factor in optimizing a recycle service strategy is consumer demand. The study considers the relationships between consumer demand for recycling and product characteristics, surplus value and user behavior. The study proposes a recycle service strategy which differs significantly from the conventional uniform service strategy. Periods with considerable demand and large surplus product value suggest frequent and short service cycles. The study explores how to determine a recycle service strategy for the recyclable waste collector in terms of service cycle frequency and duration and vehicle type for all service cycles, by considering the surplus value of recycled products, time-dependent demand, transportation economies and demand-supply interaction. The recyclable waste collector is responsible for the collection of waste products for the manufacturer. The study also examines the impacts of utilization rate on cost and profit in the context of different vehicle sizes.
This study applies a binary logit model, an analytical model and mathematical programming methods to the problem. The model specifically explores how to determine a recycle service strategy for the recycler by considering product surplus value, time-dependent recycle demand, transportation economies and demand-supply interaction, and attempts to minimize the total logistics cost of the recycler while maximizing the recycle benefits of the manufacturer during the study period. The study relaxes the constant-demand assumption and examines how the service strategy affects consumer demand for waste recycling. The results of the study not only help in understanding how user demand for the recycle service and product surplus value affect the logistics cost and the manufacturer's benefits, but also provide guidance, such as award bonuses and carbon emission regulations, for the government.
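The binary logit component can be sketched as follows. This is a toy illustration of the general logit choice form, not the study's estimated model: the utility specification, coefficients, and attributes (service frequency, distance to the collection point) are all assumptions.

```python
# Toy sketch of a binary logit demand model: the probability that a user chooses
# to recycle given a linear utility of service attributes. Coefficients and
# attributes are illustrative placeholders, not estimates from the study.
import math

def recycle_probability(beta0, beta_freq, beta_dist, freq_per_month, distance_km):
    """P(recycle) = exp(V) / (1 + exp(V)), with linear utility V."""
    v = beta0 + beta_freq * freq_per_month - beta_dist * distance_km
    return math.exp(v) / (1.0 + math.exp(v))

# More frequent service and shorter distances raise the choice probability.
p_near = recycle_probability(0.2, 0.3, 0.5, freq_per_month=4, distance_km=0.5)
p_far  = recycle_probability(0.2, 0.3, 0.5, freq_per_month=4, distance_km=3.0)
```

Embedding such a demand function in the cost-minimization program is what links the service strategy (cycle frequency, duration, vehicle type) back to the realized recycle volume.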

Keywords: circular economy, consumer demand, product surplus value, recycle service strategy

Procedia PDF Downloads 371
26 Technology Management for Early Stage Technologies

Authors: Ming Zhou, Taeho Park

Abstract:

Early stage technologies have been particularly challenging to manage due to their numerous uncertainties. Most results coming directly out of a research lab tend to be at an early, if not infant, stage. A long and uncertain commercialization process awaits these lab results. The majority of such lab technologies go nowhere and never get commercialized for various reasons, and any effort or financial resources put into managing them turn fruitless. High stakes naturally call for better results, which makes a patenting decision harder to make. A good and well protected patent goes a long way toward commercialization of the technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation. Most studies to date have been theoretical and overly comprehensive, with practical suggestions non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to get things moving to the later steps of managing early stage technologies. We provide a procedure to value technologies economically and make the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to be used in the patenting decision using survey ratings. The ratings then assisted us in generating relative weights for the later scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts. Their inputs produced a general yet highly practical cut-off schedule.
Such a schedule of realistic practices has yet to be witnessed in current research. Although a technology office may choose to deviate from our cut-offs, what we offer here at least provides a simple and meaningful starting point. This procedure was welcomed by practitioners on our expert panel and university officers in our interview group. This research contributes to the current understanding and practice of managing early stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings generated the top decision factors, decision processes and decision thresholds of key parameters. This research offers a more practical perspective which further completes extant knowledge. Our results could be affected by our sample size and slightly biased by our focus on the Silicon Valley area. Future research, blessed with larger data sets and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and to analyze our decision factors for different industries.
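The scoring and weighted-averaging step described above can be sketched in a few lines. The factor names, weights, ratings, and cut-off below are hypothetical placeholders, not the factors or thresholds identified in the study.

```python
# Hypothetical sketch of a weighted-average patenting index with a cut-off rule.
# Factor names, weights, scores, and the threshold are illustrative assumptions.
def patent_index(scores, weights):
    """Weighted average of factor ratings; scores and weights are keyed by factor."""
    total_w = sum(weights.values())
    return sum(scores[f] * weights[f] for f in scores) / total_w

weights = {"novelty": 0.4, "market_size": 0.35, "enforceability": 0.25}
scores  = {"novelty": 8.0, "market_size": 6.0, "enforceability": 7.0}  # 0-10 ratings

index = patent_index(scores, weights)
CUTOFF = 6.5                              # assumed decision threshold
decision = "file" if index >= CUTOFF else "defer"
```

A technology office deviating from the assumed cut-off only needs to change `CUTOFF`; the weighting logic stays the same.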

Keywords: technology management, early stage technology, patent, decision

Procedia PDF Downloads 315
25 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro, Ralpho R. Reis

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient of all, while configuration 10, with 4 input variables, was considered the most effective given its smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with cultivator data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions.
The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE as low as 0.01 mm/day and 0.03 mm/day for the two configurations, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yielded values between 0.02 and 0.04 for the producer data. The results of this study further suggest that the developed technique can be applied to other locations by using site-specific data to improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. In summary, this study has helped to advance research in the field of irrigation management in agriculture. It provides an accessible and effective approach to ET₀ estimation that has the potential to significantly improve water use efficiency and promote agricultural sustainability in different contexts.
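The evaluation metrics reported above (MAE, MSE, RMSE, R²) have standard definitions, sketched below on toy daily ET₀ values; the numbers are invented for illustration, not the study's data.

```python
# Sketch of the regression metrics used to evaluate the ET0 models.
# y_true / y_pred below are toy daily ET0 values in mm/day, not the study's data.
import math

def metrics(y_true, y_pred):
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errs) / n                  # mean absolute error
    mse = sum(e * e for e in errs) / n                   # mean squared error
    rmse = math.sqrt(mse)                                # root mean squared error
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - sum(e * e for e in errs) / ss_tot         # coefficient of determination
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

m = metrics([3.0, 4.0, 5.0, 6.0], [3.1, 3.9, 5.0, 6.1])
```

Note that R² compares the model against a constant-mean predictor, which is why near-perfect fits such as those reported (0.99-1) imply errors far smaller than the day-to-day variability of ET₀ itself.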

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 18
24 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create Deep Learning systems capable of translating Sign Language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, and are often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is posed as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from. 
These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data is then regularized by applying the Kaiser rule to the resulting eigenmatrix, whitened, and split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
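As a deliberately simplified sketch of the corrector idea: instead of the clustered hyperplanes described above, a single hyperplane is fitted from the mean difference between the correct set M and the error set Y, and a state is flagged as an error if it falls on the Y side. The 2-D internal states below are made up for illustration.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Hypothetical internal-state measurements x in R^2:
# M = states that led to correct predictions, Y = states that led to errors.
M = [[2.0, 1.0], [2.5, 1.2], [1.8, 0.9]]
Y = [[-1.5, -1.0], [-2.0, -0.8]]

# Mean-difference direction as the hyperplane normal (a one-functional
# stand-in for the clustered hyperplanes described in the abstract).
w = [a - b for a, b in zip(mean(M), mean(Y))]
# Threshold halfway between the projected class means.
theta = 0.5 * (dot(w, mean(M)) + dot(w, mean(Y)))

def is_error(x):
    """Flag x as a likely error if it falls on the Y side of the hyperplane."""
    return dot(w, x) < theta

print([is_error(x) for x in M + Y])  # → [False, False, False, True, True]
```

The full method adds centering, Kaiser-rule regularization, whitening and one such functional per cluster, but each cluster's test reduces to a thresholded inner product like the one above.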

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 77
23 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or rather a group of diseases, of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long, acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhausting task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists' task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification. 
One of the differences between these methodologies is the type of input stimulus presented to the network, i.e., how the EEG signal is introduced to it. Five types of input stimuli have been commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal's morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks implemented with each of them. The performance using the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the descriptors presented efficiency values between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
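To illustrate one of the five input types, a few simple morphological descriptors can be computed directly from a raw EEG window. The descriptor set below (peak-to-peak amplitude, zero crossings, maximum slope) is a generic illustration, not the descriptor set used in the study:

```python
import math

def morphological_descriptors(window, fs):
    """Simple morphological features of one EEG window (amplitudes in uV).

    Illustrative stand-ins for signal-morphology parameters; the actual
    descriptor set of the study is not specified in the abstract.
    """
    n = len(window)
    amplitude = max(window) - min(window)          # peak-to-peak, uV
    mean = sum(window) / n
    # Zero-crossing count of the mean-removed signal (a sharpness proxy).
    centered = [s - mean for s in window]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    # Maximum first-difference slope, in uV per second.
    max_slope = max(abs(b - a) for a, b in zip(window, window[1:])) * fs
    return {"amplitude": amplitude, "zero_crossings": crossings,
            "max_slope": max_slope}

# A hypothetical 200 ms window sampled at 200 Hz (40 samples).
fs = 200
window = [10 * math.sin(2 * math.pi * 5 * t / fs) for t in range(40)]
print(morphological_descriptors(window, fs))
```

A feature vector like this, rather than the 40 raw samples, is what would be fed to the network in the morphological-descriptor configuration.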

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 503
22 Phenolic Composition of Wines from Cultivar Carménère during Aging with Inserts to Barrels

Authors: E. Obreque-Slier, P. Osorio-Umaña, G. Vidal-Acevedo, A. Peña-Neira, M. Medel-Marabolí

Abstract:

Sensory and nutraceutical characteristics of a wine are determined by different chemical compounds, such as organic acids, sugars, alcohols, polysaccharides, aromas, and polyphenols. The polyphenols correspond to secondary metabolites that are associated with the prevention of several pathologies, and they are responsible for color, aroma, bitterness, and astringency in wines. These compounds come from grapes and from the wood during aging in barrels, the wood format most widely used in wine production. However, barrels are a high-cost input with a limited useful life (3-4 years). For this reason, some oenological products have been developed in order to renew the barrels and extend their useful life by some years. These formats are being adopted slowly because limited information exists about their effect on the wine's chemical characteristics. The objective of the study was to evaluate the effect of different barrel renewal systems (staves and zigzag) on the polyphenolic characteristics of a Carménère wine (Vitis vinifera), an emblematic cultivar of Chile. For this, a completely randomized experimental design with 5 treatments and three replicates per treatment was used. The treatments were: new barrels (T0), barrels used for 4 years (T1), scraped used barrels (T2), used barrels with staves (T3) and used barrels with zigzag (T4). The study was performed over 12 months, and different spectrophotometric parameters (total phenols, anthocyanins, and total tannins) and low molecular weight phenols (by HPLC-DAD) were evaluated. The wood inputs were donated by Toneleria Nacional and corresponded to products from the same production batch. The total phenol content increased significantly after 40 days, while the total tannin concentration decreased gradually during the study. The anthocyanin concentration increased after 120 days of the assay in all treatments. 
Comparatively, it was observed that the wine of T2 presented the lowest values of these polyphenols, while T0 and T4 presented the highest total phenol contents. Also, T1 presented the highest values of total tannins in relation to the rest of the treatments in some samples. The low molecular weight phenolic compounds identified by HPLC-DAD were 7 flavonoids (epigallocatechin, catechin, procyanidin gallate, epicatechin, quercetin, rutin and myricetin) and 14 non-flavonoids (gallic, protocatechuic, hydroxybenzoic, trans-cutaric, vanillic, caffeic, syringic, p-coumaric and ellagic acids; tyrosol, vanillin, syringaldehyde, trans-resveratrol and cis-resveratrol). Tyrosol was the most abundant compound, whereas ellagic acid was the least abundant in the samples. Comparatively, the wines of T2 showed the lowest concentrations of flavonoid and non-flavonoid phenols during the study. In contrast, the wines of T1, T3, and T4 presented the highest contents of non-flavonoid polyphenols. In summary, the use of barrel renovators (zigzag and staves) is an interesting alternative that would emulate the contribution of polyphenols from the barrels to the wine.

Keywords: barrels, oak wood aging, polyphenols, red wine

Procedia PDF Downloads 171
21 Emerging Issues for Global Impact of Foreign Institutional Investors (FII) on Indian Economy

Authors: Kamlesh Shashikant Dave

Abstract:

The global financial crisis is rooted in the sub-prime crisis in the U.S.A. During the boom years, mortgage brokers, attracted by big commissions, encouraged buyers with poor credit to accept housing mortgages with little or no down payment and without credit checks. A combination of low interest rates and a large inflow of foreign funds during the boom years helped the banks create easy credit conditions for many years. Banks lent money on the assumption that housing prices would continue to rise. The real estate bubble also encouraged the demand for houses as financial assets. Banks and financial institutions later repackaged these debts with other high-risk debts and sold them to worldwide investors, creating financial instruments called collateralized debt obligations (CDOs). With the rise in interest rates, mortgage payments rose and defaults among the subprime category of borrowers increased accordingly. Through the securitization of mortgage payments, a recession developed in the housing sector and was consequently transmitted to the entire US economy and the rest of the world. The financial credit crisis moved the US and the global economy into recession. The Indian economy has also been affected by the spillover effects of the global financial crisis. A great saving habit among the people, strong fundamentals, and a strong, conservative regulatory regime have saved the Indian economy from going out of gear, though significant parts of the economy have slowed down. Industrial activity, particularly in the manufacturing and infrastructure sectors, decelerated. The service sector, too, slowed in the construction, transport, trade, communication, and hotels and restaurants sub-sectors. The financial crisis has had some adverse impact on the IT sector. Exports declined in absolute terms in October. Higher input costs and dampened demand have dented corporate margins, while the uncertainty surrounding the crisis has affected business confidence. 
To summarize, reckless subprime lending, the loose monetary policy of the US, the expansion of financial derivatives beyond acceptable norms and the greed of Wall Street have led to this exceptional global financial and economic crisis. Thus, the global credit crisis of 2008 highlights the need to redesign both the global and domestic financial regulatory systems, not only to properly address systemic risk but also to support their proper functioning (i.e., financial stability). Such a design requires: 1) well-managed financial institutions with effective corporate governance and risk management systems; 2) disclosure requirements sufficient to support market discipline; 3) proper mechanisms for resolving problem institutions; and 4) mechanisms to protect financial services consumers in the event of financial institution failure.

Keywords: FIIs, BSE, sensex, global impact

Procedia PDF Downloads 423
20 Designing Entrepreneurship Education Contents for Entrepreneurial Intention Building among Undergraduates in India

Authors: Sumita Srivastava

Abstract:

Despite several measures taken by the Government of India, entrepreneurship is still not perceived as a viable career option by the young generation. Although the rate of startups has improved a little after the penetration of e-portals as business platforms, the numbers are still not very significant. It is also important to note that entrepreneurial initiatives are mostly taken up by graduates of premier institutions of India like the Indian Institutes of Technology (IITs) and the Indian Institutes of Management (IIMs). The scenario is not very satisfactory amongst the masses graduating from mainstream universities of the country. Indian youth at large are not attracted towards entrepreneurship as a career choice. The reason probably lies in the social fabric of the country and an education system that does not adequately support entrepreneurship amongst the youth of the country. Education is critical to the development of an economy from the poverty level to the level of self-sustenance and development. The current curriculum in the majority of business schools in India prepares the average graduate to become employed by the available firms or business owners in society. For graduates in other streams, employment opportunities are very limited. The aim of this study was to identify and design entrepreneurship education contents to encourage undergraduates to pursue entrepreneurship as a career choice. This comprehensive study was conducted in multiple stages. Extensive research was conducted at each stage with an appropriate methodology. The stages of the project study were interconnected, with each preceding stage providing inputs for the following stage. In the first stage of the study, an empirical analysis was conducted to understand the current state of entrepreneurial intentions of undergraduates of Agra city. 
Various stakeholders were contacted at this stage, including students (n = 500), entrepreneurs (n = 20) and academicians and field experts (n = 10). At the second stage of the project study, a systems science technique, the Nominal Group Technique (NGT), was used to identify the critical elements of entrepreneurship education in India based upon the findings of stage 1. The application of the Nominal Group Technique involved a workshop format; 15 domain experts participated in the workshop. Throughout the process, a democratic procedure was followed to avoid individual dominance and premature focusing on a single idea. The study obtained 63 responses from experts for effective entrepreneurship education in India. The responses were reduced to seven elements after a few thematic iterations. These elements were then segregated into content (knowledge, skills and attitude) and learning interaction on the basis of the experts' responses. After identifying the critical elements of entrepreneurship education in the previous stage, the course was designed and validated at stage 3 of the project. Scientific methods were used at this stage to validate the curriculum contents and training interventions experimentally. The educational and training interventions designed through this study would not only help in developing entrepreneurial intentions but also in creating skills relevant to the local entrepreneurial opportunities in the vicinity.

Keywords: curriculum design, entrepreneurial intention, entrepreneurship education, nominal group technique

Procedia PDF Downloads 106
19 Development of an Automatic Control System for ex vivo Heart Perfusion

Authors: Pengzhou Lu, Liming Xin, Payam Tavakoli, Zhonghua Lin, Roberto V. P. Ribeiro, Mitesh V. Badiwala

Abstract:

Ex vivo Heart Perfusion (EVHP) has been developed as an alternative strategy to expand cardiac donation by enabling resuscitation and functional assessment of hearts donated from marginal donors, which were previously not accepted. EVHP parameters, such as perfusion flow (PF) and perfusion pressure (PP), are crucial for optimal organ preservation. However, with the heart's constant physiological changes during EVHP, such as changing coronary vascular resistance, manual control of these parameters is imprecise and cumbersome for the operator. Additionally, low control precision and long adjusting times may lead to irreversible damage to the myocardial tissue. To solve this problem, an automatic heart perfusion system was developed by applying a Human-Machine Interface (HMI) and a Programmable-Logic-Controller (PLC)-based circuit to control PF and PP. The PLC-based control system collects the data of PF and PP through flow probes and pressure transducers. It has two control modes: the RPM-flow mode and the pressure mode. The RPM-flow control mode is an open-loop system. It influences PF by providing and maintaining the desired speed, entered through the HMI, at the centrifugal pump with a maximum error of 20 rpm. The pressure control mode is a closed-loop system in which the operator selects a target Mean Arterial Pressure (MAP) to control PP. The inputs of the pressure control mode are the target MAP, received through the HMI, and the real MAP, received from the pressure transducer. A PID algorithm is applied to maintain the real MAP at the target value with a maximum error of 1 mmHg. The precision and control speed of the RPM-flow control mode were examined by comparing the PLC-based system to an experienced operator (EO) across seven RPM adjustment ranges (500, 1000, 2000 and random RPM changes; 8 trials per range) tested in a random order. The system's PID algorithm performance in pressure control was assessed during 10 EVHP experiments using porcine hearts. 
Precision was examined by monitoring the steady-state pressure error throughout the perfusion period, and stabilizing speed was tested by performing two MAP adjustment changes (4 trials per change) of 15 and 20 mmHg. A total of 56 trials were performed to validate the RPM-flow control mode. Overall, the PLC-based system was significantly faster than the EO in all trials (PLC 1.21±0.03, EO 3.69±0.23 seconds; p < 0.001) and reached the desired RPM with greater precision (PLC 10±0.7, EO 33±2.7 mean RPM error; p < 0.001). Regarding pressure control, the PLC-based system achieved a median precision of ±1 mmHg error, and the median stabilizing times for 15 and 20 mmHg MAP changes were 15 and 19.5 seconds, respectively. The novel PLC-based control system was 3 times faster, with 60% less error than the EO, for RPM-flow control. In pressure control mode, it demonstrated high precision and a fast stabilizing speed. In summary, this novel system successfully controlled perfusion flow and pressure with high precision, stability and a fast response time through a user-friendly interface. This design may provide a viable technique for the future development of novel heart preservation and assessment strategies during EVHP.
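The pressure mode's closed loop can be sketched as a textbook discrete PID controller. The gains, the plant model, and the time step below are made-up illustration values, not the system's actual tuning:

```python
class PID:
    """Discrete PID controller: pump-speed correction from MAP error (mmHg)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order plant: MAP (mmHg) responds to the speed correction with
# an assumed gain of 0.05 mmHg per rpm over each 0.1 s step.
pid = PID(kp=20.0, ki=5.0, kd=1.0, dt=0.1)
map_mmHg, target = 50.0, 65.0
for _ in range(200):                        # 20 s of simulated perfusion
    correction = pid.update(target, map_mmHg)
    map_mmHg += 0.05 * correction * 0.1     # plant response over one step
print(f"settled MAP: {map_mmHg:.1f} mmHg")
```

In the real system the PLC would execute the `update` step each scan cycle, with the pressure transducer supplying `measured` and the correction driving the centrifugal pump speed.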

Keywords: automatic control system, biomedical engineering, ex-vivo heart perfusion, human-machine interface, programmable logic controller

Procedia PDF Downloads 146
18 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá

Authors: Dayron Camilo Bermudez Mendoza

Abstract:

Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health, attributable to criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases, such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods like hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to invest in reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used vehicle sales websites. 
The COPERT model was utilized to ascertain emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and the Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, utilizing platforms like the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach for a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing the analyses on the per-square-meter property values in each city block. The results are expected to be presented and published at the upcoming conference; the work integrates multidisciplinary knowledge and culminates in a master's thesis.
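At its core, a hedonic price analysis regresses property values on emission-related attributes (alongside many controls), and reads the slope as the implicit marginal price of the attribute. A minimal single-regressor sketch with invented data (not the Bogotá dataset):

```python
def ols_slope_intercept(x, y):
    """Simple least-squares fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical blocks: mean PM2.5 (ug/m3) vs. price per m2 (millions COP).
pm25  = [12, 18, 25, 30, 35]
price = [6.0, 5.6, 5.1, 4.6, 4.3]

a, b = ols_slope_intercept(pm25, price)
print(round(b, 3))  # → -0.076  (implicit price of 1 ug/m3 of PM2.5)
```

A real hedonic model would add structural and neighborhood controls and typically use a log-price specification; the slope-as-implicit-price interpretation is the same.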

Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility

Procedia PDF Downloads 33
17 Culvert Blockage Evaluation Using Australian Rainfall And Runoff 2019

Authors: Rob Leslie, Taher Karimian

Abstract:

The blockage of cross drainage structures is a risk that needs to be understood and managed or reduced through design. A blockage is a random event, influenced by site-specific factors, which needs to be quantified for design. Under- and overestimation of blockage can have major impacts on flood risk and on the cost associated with drainage structures. The importance of this matter is heightened for projects located within sensitive lands. It is a particularly complex problem for large linear infrastructure projects (e.g., rail corridors) located within floodplains, where blockage factors can influence flooding upstream and downstream of the infrastructure. The selection of appropriate blockage factors for hydraulic modeling has been subject to extensive research by hydraulic engineers. This paper reviews the current Australian Rainfall and Runoff 2019 (ARR 2019) methodology for blockage assessment by applying the method to a transport corridor brownfield upgrade case study in New South Wales. The results of applying the method are also validated against asset data and maintenance records. ARR 2019, Book 6, Chapter 6 includes advice and an approach for estimating the blockage of bridges and culverts. This paper concentrates specifically on the blockage of cross drainage structures. The method has been developed to estimate the blockage level for culverts affected by sediment or debris due to flooding. The objective of the approach is to evaluate a numerical blockage factor that can be utilized in a hydraulic assessment of cross drainage structures. The project included an assessment of over 200 cross drainage structures. In order to estimate a blockage factor for use in the hydraulic model, a process has been advanced that considers the qualitative factors (e.g., debris type, debris availability) and site-specific hydraulic factors that influence blockage. 
A site rating associated with the debris potential (i.e., availability, transportability, mobility) at each crossing was completed using the method outlined in the ARR 2019 guidelines. The hydraulic result inputs (i.e., flow velocity, flow depth) and the qualitative factors at each crossing were brought together in a spreadsheet, where the design blockage level for each cross drainage structure was determined based on the condition relating the Inlet Clear Width and L10 (the average length of the longest 10% of the debris reaching the site), and on the Adjusted Debris Potential. Asset data, including site photos and maintenance records, were then reviewed and compared with the blockage assessment to check the validity of the results. The results of this assessment demonstrate that the blockage factors estimated at each crossing location using the ARR 2019 guidelines are well validated by the asset data. The primary finding of the study is that the ARR 2019 methodology is a suitable approach for culvert blockage assessment, validated here against a case study spanning a large geographical area and multiple sub-catchments. The study also found that the methodology can be effectively coded within a spreadsheet or similar analytical tool to automate its application.
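The spreadsheet logic lends itself to direct coding. The thresholds and percentages below are placeholders that show the structure of such a lookup, not the values tabulated in ARR 2019 Book 6, Chapter 6:

```python
def design_blockage(clear_width_m, L10_m, adjusted_debris_potential):
    """Illustrative blockage-factor lookup in the spirit of the ARR 2019
    approach. The thresholds and percentages are placeholders, not the
    values tabulated in ARR 2019 Book 6, Chapter 6.
    """
    if clear_width_m <= L10_m:
        # Opening smaller than the characteristic debris length: treat as
        # prone to full inlet blockage regardless of debris potential.
        return 1.00
    # Placeholder design blockage fractions by Adjusted Debris Potential.
    factors = {"high": 0.25, "medium": 0.10, "low": 0.0}
    return factors[adjusted_debris_potential]

# Hypothetical crossings: (clear width m, L10 m, adjusted debris potential).
crossings = [(1.2, 2.0, "high"), (3.0, 2.0, "medium"), (4.5, 2.0, "low")]
print([design_blockage(*c) for c in crossings])  # → [1.0, 0.1, 0.0]
```

Replacing the placeholder table with the published ARR 2019 values, and feeding in the modeled velocity and depth at each crossing, reproduces the automated assessment described above.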

Keywords: ARR 2019, blockage, culverts, methodology

Procedia PDF Downloads 308
16 Groundwater Arsenic Contamination in Gangetic Jharkhand, India: Risk Implications for Human Health and Sustainable Agriculture

Authors: Sukalyan Chakraborty

Abstract:

Arsenic contamination in groundwater has been a matter of serious concern worldwide. Globally, arsenic-contaminated water has caused serious chronic human diseases, and in the last few decades the transfer of arsenic to human beings via the food chain has gained much attention, because food represents a further potential exposure pathway in instances where crops are irrigated with high-arsenic groundwater, grown in contaminated fields or cooked with arsenic-laden water. In the present study, the groundwater of Sahibganj district of Jharkhand has been analysed to find the degree of contamination and the probable associated risk from direct consumption or irrigation. The study area, comprising three blocks, namely Sahibganj, Rajmahal and Udhwa in Sahibganj district of Jharkhand state, India, situated on the western bank of the river Ganga, has been investigated for arsenic contamination in groundwater, soil and the crops predominantly grown in the region. Associated physicochemical parameters of the groundwater, including pH, temperature, electrical conductivity (EC), total dissolved solids (TDS), dissolved oxygen (DO), oxidation reduction potential (ORP), ammonium, nitrate and chloride, were assessed to understand the mobilisation mechanism and the chances of arsenic exposure from soil to crops and further into the food chain. Results suggested the groundwater to be dominantly Ca-HCO3 type with low redox potential and a high total dissolved solids load. Major cations followed the order Ca > Na > Mg > K. Major anions followed the order HCO3− > Cl− > SO42− > NO3− > PO43−, with phosphate varying between 0.009 and 0.20 mg L-1. Fe concentrations of the groundwater samples were below the WHO permissible limit, varying between 54 and 344 µg L-1. The phosphate concentration was high and showed a significant positive correlation with arsenic. 
Arsenic concentrations ranged from 7 to 115 µg L-1 in the premonsoon season, between 2 and 98 µg L-1 in the monsoon season, and from 1 to 133 µg L-1 in the postmonsoon season. The arsenic concentration was found to be much higher than the WHO and BIS permissible limits in the majority of the villages in the study area. Arsenic was also positively correlated with iron and phosphate. PCA results demonstrated that both geological conditions and anthropogenic inputs influence the water quality. Arsenic was also found to increase with depth, up to 100 m from the surface. Calculation of the carcinogenic and non-carcinogenic effects of the arsenic concentration in the communities exposed to the groundwater for drinking and other purposes indicated high risk, with an average of more than 1 in 1,000 of the population. The health risk analysis revealed high to very high carcinogenic and non-carcinogenic risk for adults and children in the communities dependent on the groundwater of the study area. These observations suggest that the groundwater is considerably polluted with arsenic and poses a significant health risk for the exposed communities. The mobilisation mechanism of arsenic could also be identified from the results, suggesting reductive dissolution of Fe oxyhydroxides, promoted by high phosphate concentrations from agricultural inputs, releasing arsenic from the sediments along the river Ganges.
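The carcinogenic and non-carcinogenic risk figures follow from the standard USEPA chronic-daily-intake equations. The sketch below applies them to the premonsoon maximum reported above, with assumed exposure parameters (2 L/day intake, 60 kg body weight, 30-year exposure) that are illustrative rather than the study's own; the slope factor and reference dose are the widely cited USEPA IRIS values for inorganic arsenic:

```python
def chronic_daily_intake(c_mg_L, ir_L=2.0, ef=365, ed=30, bw=60.0,
                         at_days=365 * 70):
    """Chronic daily intake of arsenic from drinking water (mg/kg/day).

    Default exposure parameters are illustrative assumptions, not values
    taken from the study.
    """
    return (c_mg_L * ir_L * ef * ed) / (bw * at_days)

SLOPE_FACTOR = 1.5   # oral cancer slope factor, (mg/kg/day)^-1 (USEPA IRIS)
RFD = 0.0003         # oral reference dose, mg/kg/day (USEPA IRIS)

c = 0.115  # 115 ug/L, the premonsoon maximum reported above, as mg/L
cdi = chronic_daily_intake(c)
cancer_risk = cdi * SLOPE_FACTOR
# Non-carcinogenic hazard quotient: averaging time equals exposure
# duration, so the intake reduces to C * IR / BW.
hazard_quotient = (c * 2.0) / 60.0 / RFD
print(f"CR = {cancer_risk:.1e}, HQ = {hazard_quotient:.1f}")
```

Under these assumptions the cancer risk comes out on the order of 10⁻³, i.e., more than 1 in 1,000, consistent with the risk level reported in the abstract, and the hazard quotient is far above the acceptable threshold of 1.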

Keywords: arsenic, physicochemical parameters, mobilisation, health effects

Procedia PDF Downloads 204