Search results for: Bayesian estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2141

191 Investigating the Role of Supplier Involvement in the Design Process as an Approach for Enhancing Building Maintainability

Authors: Kamal Ahmed, Othman Ayman, Refat Mostafa

Abstract:

The post-construction phase represents a critical milestone in the project lifecycle, because design errors and omissions, as well as construction defects, come to light during this phase. The traditional procurement approaches commonly adopted in construction projects separate design from construction, which inhibits contractors, suppliers and other parties from providing the design team with constructive comments and feedback to improve the project design. As a result, failing to consider maintainability during the design process increases maintenance and operation costs and reduces building performance. This research aims to investigate the role of Early Supplier Involvement (ESI) in the design process as an approach to enhancing building maintainability. To achieve this aim, a research methodology consisting of a literature review, case studies and a survey questionnaire was designed to accomplish four objectives. Firstly, a literature review was used to examine the concepts of building maintenance, maintainability, the design process and ESI. Secondly, three case studies were presented and analyzed to investigate the role of ESI in enhancing building maintainability during the design process. Thirdly, a survey questionnaire was conducted with a representative sample of Architectural Design Firms (ADFs) in Egypt to investigate their perception and application of ESI towards enhancing building maintainability during the design process. Finally, the research developed a framework to facilitate ESI in the design process in ADFs in Egypt. Data analysis showed that the ‘Difficulty of trusting external parties and sharing information with transparency’ was ranked the highest challenge of ESI in ADFs in Egypt, followed by ‘Legal competitive advantage restrictions’.
Moreover, ‘Better estimation for operation and maintenance costs’ was ranked the highest contribution of ESI towards enhancing building maintainability, followed by ‘Reduce the number of operation and maintenance problems or reworks’. Finally, ‘Innovation, technical expertise, and competence’ was ranked the highest supplier selection criterion, while ‘paying consultation fees for offering advice and recommendations to the design team’ was ranked the highest form of supplier remuneration. The proposed framework offers a creative synthesis that adds to existing knowledge in a way not previously achieved.

Keywords: maintenance, building maintainability, building life cycle cost (LCC), material supplier

Procedia PDF Downloads 47
190 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics can be very demanding in both time and cost, and computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem identifies the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to handle different complex geometries with no need to rearrange the original model into an explicit form and, on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry.
Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed technique.
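The Ensemble Kalman Filter analysis step mentioned in the abstract can be illustrated in a stripped-down scalar setting. The sketch below is purely illustrative and is not the authors' implementation: the state, observation operator, and noise level are all hypothetical, and the real problem involves a full topography field coupled to a shallow water solver.

```python
import random
import statistics

def enkf_update(ensemble, obs, obs_noise_std, h):
    """One Ensemble Kalman Filter analysis step for a scalar state.

    ensemble      : list of prior bed-elevation samples
    obs           : observed free-surface value
    obs_noise_std : std. dev. of the observation error
    h             : observation operator mapping state -> observable
    """
    predicted = [h(x) for x in ensemble]
    x_mean = statistics.fmean(ensemble)
    y_mean = statistics.fmean(predicted)
    # Sample covariance between state and predicted observation
    cov_xy = sum((x - x_mean) * (y - y_mean)
                 for x, y in zip(ensemble, predicted)) / (len(ensemble) - 1)
    var_y = statistics.variance(predicted)
    gain = cov_xy / (var_y + obs_noise_std ** 2)  # Kalman gain
    rng = random.Random(0)
    # Nudge each member toward its perturbed observation
    return [x + gain * (obs + rng.gauss(0.0, obs_noise_std) - h(x))
            for x in ensemble]

# Toy example: identity observation operator, "true" bed level near 2.0
prior = [random.Random(i).gauss(0.0, 1.0) for i in range(50)]
posterior = enkf_update(prior, obs=2.0, obs_noise_std=0.1, h=lambda x: x)
```

With a small observation error relative to the prior spread, the posterior ensemble collapses toward the observed value, which is the mechanism the authors exploit to pull the estimated topography toward the free-surface data.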

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 130
189 Referencing Anna: Findings From Eye-tracking During Dutch Pronoun Resolution

Authors: Robin Devillers, Chantal van Dijk

Abstract:

Children face ambiguities in everyday language use; ambiguity in pronoun resolution can be particularly challenging for them, whereas adults can rapidly identify the antecedent of a mentioned pronoun. Two main factors underlie this process: the accessibility of the referent and the syntactic cues of the pronoun. Within about 200 ms, adults have integrated accessibility and syntactic constraints, relieving cognitive effort by considering contextual cues. Because children are still developing their cognitive capacity, they are not yet able to simultaneously assess and integrate accessibility, contextual cues and syntactic information. As such, they may fail to identify the correct referent and possibly fixate more on the competitor than adults do. In this study, Dutch while-clauses were used to investigate the interpretation of pronouns by children. The aim is to a) examine the extent to which 7- to 10-year-old children are able to utilise discourse and syntactic information during online and offline sentence processing and b) analyse the contribution of individual factors, including age, working memory, condition and vocabulary. Adult and child participants are presented with filler items and while-clauses; the latter follow a particular structure: ‘Anna and Sophie are sitting in the library. While Anna is reading a book, she is taking a sip of water.’ This sentence illustrates the ambiguous situation, as it is unclear whether ‘she’ refers to Anna or Sophie. In the unambiguous situation, either Anna or Sophie is substituted by a boy, such as ‘Peter’; the pronoun in the second sentence then unambiguously refers to one of the characters due to its syntactic constraints. Children’s and adults’ responses were measured by means of a visual world paradigm. This paradigm consisted of two characters, of which one was the referent (the target) and the other the competitor.
A sentence was presented and followed by a question, which required the participant to choose which character was the referent. This paradigm thus yields an online (fixations) and an offline (accuracy) score. The findings will be analysed using Generalised Additive Mixed Models, which allow for a thorough estimation of the individual variables. These findings will contribute to the scientific literature in several ways. Firstly, while-clauses have not been studied much and their processing has not yet been characterised. Moreover, online pronoun resolution has not been investigated much in either children or adults, so this study contributes to the literature on pronoun resolution in both groups. Lastly, pronoun resolution has not yet been studied in Dutch, and as such, this study adds to the range of languages in which it has been investigated.

Keywords: pronouns, online language processing, Dutch, eye-tracking, first language acquisition, language development

Procedia PDF Downloads 100
188 Fracture Toughness Characterizations of Single Edge Notch (SENB) Testing Using DIC System

Authors: Amr Mohamadien, Ali Imanpour, Sylvester Agbo, Nader Yoosef-Ghodsi, Samer Adeeb

Abstract:

The fracture toughness resistance curve (e.g., the J-R curve and the crack tip opening displacement (CTOD) or δ-R curve) is important in facilitating strain-based design and integrity assessment of oil and gas pipelines. This paper presents laboratory experimental data characterizing the fracture behavior of pipeline steel. The influential parameters associated with the fracture of API 5L X52 pipeline steel, including different initial crack sizes, were experimentally investigated for single edge notch bend (SENB) specimens. A total of 9 small-scale specimens with different crack length to specimen depth ratios were prepared and tested in single edge notch bending. ASTM E1820 and BS 7448 provide testing procedures to construct the fracture resistance curve (Load-CTOD, CTOD-R, or J-R) from test results. However, these procedures are limited by standard specimen dimensions, displacement gauges, and calibration curves. To overcome these limitations, this paper presents the use of small-scale specimens and a 3D digital image correlation (DIC) system to extract the parameters required for fracture toughness estimation. Fracture resistance curve parameters in terms of crack mouth opening displacement (CMOD), crack tip opening displacement (CTOD), and crack growth length (∆a) were obtained from test results by utilizing the DIC system, and an improved regression-fitted resistance function (CTOD vs. crack growth, or J-integral vs. crack growth) that accounts for a variety of initial crack sizes was constructed and presented. The obtained results were compared to the available results of classical physical measurement techniques, and good agreement was observed. Moreover, a case study was implemented to estimate the maximum strain value that initiates stable crack growth, which may be of interest for developing more accurate strain-based damage models.
The results of laboratory testing in this study offer a valuable database to develop and validate damage models that are able to predict crack propagation of pipeline steel, accounting for the influential parameters associated with fracture toughness.

Keywords: fracture toughness, crack propagation in pipeline steels, CTOD-R, strain-based damage model

Procedia PDF Downloads 63
187 Sustainability Impact Assessment of Construction Ecology to Engineering Systems and Climate Change

Authors: Moustafa Osman Mohammed

Abstract:

The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of proposed models for optimizing life-cycle analysis into an explicit strategy for evaluation systems. The main categories inevitably introduce uncertainties; the paper takes up a composite structure model (CSM), used as an environmental management system (EMS), in the practical evaluation of small and medium-sized enterprises (SMEs). The model simplifies complex systems to reflect how the inputs, outputs and outcomes of natural systems influence the framework measures, and gives a maximum likelihood estimation of how elements are simulated over the composite structure. The traditional modeling approach is based on physical dynamic and static patterns of the parameters influencing the environment. The paper unifies methods to demonstrate, from a management perspective, how construction systems ecology is interrelated, and reflects the effects of engineering systems on ecology as unified technologies whose range extends beyond construction impacts, e.g., to energy systems. Sustainability broadens the socioeconomic parameters into a practical science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, the management decision process in governments or corporations can address policy for accomplishing strategic plans precisely. The management and engineering perspective focuses on autocatalytic control as a closed cellular system that naturally balances anthropogenic insertions, or aggregates structural systems toward equilibrium as a steady, stable condition. Thereby, construction systems ecology incorporates an engineering and management scheme, as a midpoint between biotic and abiotic components, to predict construction impacts. 
The resulting theory of environmental obligation suggests either a procedure or a technique to be achieved in the sustainability impact of construction system ecology (SICSE), as a relative mitigation measure of deviation control.

Keywords: sustainability, environmental impact assessment, environmental management, construction ecology

Procedia PDF Downloads 393
186 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as the arc fault circuit interrupter (AFCI) have been used successfully in electrical networks to prevent damage and catastrophic incidents such as fires. However, these devices do not allow series arc faults to be located on the line in operating mode. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V, 50 Hz home network. The method is validated through simulations in MATLAB. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists essentially of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In a second step, a fault map trace is created using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line’s mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated from the Discrete Fourier Transform of the currents and voltages together with the fault distance value; these parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure for calculating signature coefficients is employed, but this time considering hypothetical fault distances at which the fault could appear; in this step, the fault distance is unknown.
The iterative calculation from the Kirchhoff equations, considering stepped variations of the fault distance, yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing currents registered in simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the fault map trace generation. The complete simulation demonstrates the performance of the method, and the perspectives of the work are presented.
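The final localisation step, intersecting the linear trend of the known-distance map trace (step 2) with the trend computed over hypothetical distances (step 3), can be sketched as follows. The numeric traces below are invented for illustration; the actual signature coefficients come from the Kirchhoff-equation procedure described in the abstract.

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit y = a*x + b."""
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
    b = y_mean - a * x_mean
    return a, b

def intersect(line1, line2):
    """x-coordinate where two lines y = a*x + b meet."""
    (a1, b1), (a2, b2) = line1, line2
    return (b2 - b1) / (a1 - a2)

# Hypothetical signature-coefficient traces vs. assumed fault distance (m)
distances = [5, 10, 15, 20, 25, 30, 35, 40, 45]
trace_known = [0.8 * d + 2.0 for d in distances]    # step-2 trace
trace_hypo = [-0.5 * d + 28.0 for d in distances]   # step-3 trace

fault_distance = intersect(fit_line(distances, trace_known),
                           fit_line(distances, trace_hypo))
```

With these invented trends, the two fitted lines cross at 20 m; in the paper, the crossing of the two curves plays the same role of singling out the actual fault distance among the hypothetical ones.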

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 137
185 Intersubjectivity of Forensic Handwriting Analysis

Authors: Marta Nawrocka

Abstract:

In any legal proceeding in which expert evidence is presented, a major concern is the assessment of the evidential value of expert reports. Judicial institutions rely heavily on expert reports when making decisions because they usually do not possess the 'special knowledge' from certain fields of science that would allow them to verify the results presented. In handwriting studies, standards of analysis have been developed that unify the procedures experts use to compare handwriting features and to construct expert reports. However, the methods used by experts are usually qualitative in nature: they rely on the expert's knowledge and experience and, in effect, leave a significant margin of discretion in the assessment. Moreover, the standards used by experts are still not very precise, and the process of reaching conclusions is poorly understood. These circumstances indicate that expert opinions in the field of handwriting analysis may, for many reasons, not be sufficiently reliable. It is assumed that this state of affairs has its source in the very low intersubjectivity of the measuring scales and analysis procedures that constitute this kind of analysis. Intersubjectivity is a feature of cognition which (in relation to methods) indicates the degree of consistency of the results that different people obtain using the same method: the higher the level of intersubjectivity, the more reliable and credible the method can be considered. The aim of the conducted research was to determine the degree of intersubjectivity of the methods used by experts in handwriting analysis. 30 experts took part in the study, and each of them received two signatures, with varying degrees of readability, for analysis.
Their task was to distinguish graphic characteristics in the signature, estimate the evidential value of the found characteristics, and estimate the evidential value of the signature as a whole. The obtained results were compared using Krippendorff's alpha statistic, which numerically determines the degree of agreement among the results (assessments) that different people obtain under the same conditions using the same method. Estimating the degree of agreement among the experts' results for each of these tasks made it possible to determine the intersubjectivity of the studied method. The study showed that during the analysis, the experts identified different signature characteristics and attributed different evidential values to them; in this respect, intersubjectivity turned out to be low. In addition, the experts named and described the same characteristics in various ways, and the language used was often inconsistent and imprecise, so significant differences were noted in the language and nomenclature applied. On the other hand, experts attributed a similar evidential value to the entire signature (the set of characteristics), which indicates that in this respect they were relatively consistent.
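As an illustration of the statistic used above, a minimal pure-Python computation of Krippendorff's alpha for nominal codes might look as follows. This is a sketch only; the real study would use the experts' actual category assignments, and ordinal or interval variants would need different difference functions.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal codes.

    units: list of lists; each inner list holds the codes that the
           different experts assigned to one unit (e.g., one signature).
    """
    # Coincidence counts: every ordered pair of codes within a unit
    # contributes 1/(m_u - 1), where m_u is the number of codes there.
    coincidences = Counter()
    for values in units:
        m = len(values)
        if m < 2:
            continue  # units coded by a single expert carry no information
        for c, k in permutations(values, 2):
            coincidences[(c, k)] += 1.0 / (m - 1)

    n_total = sum(coincidences.values())
    margins = Counter()
    for (c, _), w in coincidences.items():
        margins[c] += w

    observed = sum(w for (c, k), w in coincidences.items() if c != k)
    expected = sum(margins[c] * margins[k]
                   for c, k in permutations(margins, 2)) / (n_total - 1)
    return 1.0 - observed / expected

# Perfect agreement between two experts on two units yields alpha = 1.0
alpha = krippendorff_alpha_nominal([['a', 'a'], ['b', 'b']])  # -> 1.0
```

Alpha of 1 indicates perfect agreement, 0 agreement at chance level, and negative values systematic disagreement, which is how the "low intersubjectivity" finding above would be read off the statistic.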

Keywords: forensic sciences experts, handwriting analysis, inter-rater reliability, reliability of methods

Procedia PDF Downloads 149
184 Component Test of Martensitic/Ferritic Steels and Nickel-Based Alloys and Their Welded Joints under Creep and Thermo-Mechanical Fatigue Loading

Authors: Daniel Osorio, Andreas Klenk, Stefan Weihe, Andreas Kopp, Frank Rödiger

Abstract:

Future power plants currently face high design requirements due to worsening climate change and environmental restrictions, which demand high operational flexibility, superior thermal performance, minimal emissions, and higher cyclic capability. The aim of this paper is therefore to experimentally investigate, at component scale and under near-to-service operating conditions, the creep and thermo-mechanical behavior of improved materials and their welded joints, which are promising for application in highly efficient and flexible future power plants. These materials promise an increase in flexibility and a reduction in manufacturing costs by providing enhanced creep strength and, therefore, the possibility of wall thickness reduction. In the temperature range between 550°C and 625°C, the investigation focuses on the in-phase thermo-mechanical fatigue behavior of dissimilar welded joints of conventional materials (the ferritic and martensitic materials T24 and T92) to nickel-based alloys (A617B and HR6W) by means of membrane test panels. The temperature and external load are varied in phase during the test, while the internal pressure remains constant. In the temperature range between 650°C and 750°C, the investigation focuses on the creep behavior under multiaxial stress loading of similar and dissimilar welded joints of high temperature resistant nickel-based alloys (A740H, A617B, and HR6W) by means of a thick-walled component test. In this case, the temperature, the external axial load, and the internal pressure remain constant during testing. Numerical simulations are used to estimate the axial component load in order to induce a meaningful damage evolution without causing total component failure. Metallographic investigations after testing will support understanding of the damage mechanism and of the influence of the thermo-mechanical load and multiaxiality on microstructural change and on the creep and TMF strength.

Keywords: creep, creep-fatigue, component behaviour, weld joints, high temperature material behaviour, nickel-alloys, high temperature resistant steels

Procedia PDF Downloads 119
183 The Usefulness of Premature Chromosome Condensation Scoring Module in Cell Response to Ionizing Radiation

Authors: K. Rawojć, J. Miszczyk, A. Możdżeń, A. Panek, J. Swakoń, M. Rydygier

Abstract:

Due to mitotic delay, a poor mitotic index, and the disappearance of lymphocytes from peripheral blood circulation, assessing DNA damage after high-dose exposure is less effective. Conventional chromosome aberration analysis and the cytokinesis-blocked micronucleus assay do not provide accurate dose estimation or radiosensitivity prediction at doses higher than 6.0 Gy. For this reason, there is a need to establish reliable methods for analysing biological effects after exposure in the high dose range, i.e., during particle radiotherapy. Lately, Premature Chromosome Condensation (PCC) has become an important method in high dose biodosimetry and a promising modality for the treatment of cancer patients. The aim of the study was to evaluate the usefulness of the drug-induced PCC scoring procedure in an experimental mode in which 100 G2/M cells were analyzed in different dose ranges. To test the consistency of the obtained results, scoring was performed by 3 independent persons in the same mode and following identical scoring criteria. Whole-body exposure was simulated in an in vitro experiment by irradiating whole blood collected from healthy donors with 60 MeV protons and 250 keV X-rays in the range of 4.0-20.0 Gy. The drug-induced PCC assay was performed on human peripheral blood lymphocytes (HPBL) isolated after in vitro exposure. Cells were cultured for 48 hours with PHA; then, to achieve premature condensation, calyculin A was added. After Giemsa staining, chromosome spreads were photographed and manually analyzed by the scorers. The dose-effect curves were derived by counting the excess chromosome fragments. The results indicated adequate dose estimates for the whole-body exposure scenario in the high dose range for both studied types of radiation. Moreover, the compared results revealed no significant differences between scorers, which has an important bearing on reducing analysis time.
These investigations were conducted as part of an extended examination of 60 MeV protons from the AIC-144 isochronous cyclotron at the Institute of Nuclear Physics (IFJ PAN) in Kraków, Poland, by cytogenetic and molecular methods, and were partially supported by grant DEC-2013/09/D/NZ7/00324 from the National Science Centre, Poland.
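A dose-effect calibration of the kind described, relating excess chromosome fragments to dose and then inverting it to estimate an unknown dose, can be sketched as below. The calibration numbers are hypothetical, and a simple linear fit is assumed; actual PCC calibration curves are built from scored fragment counts and may use weighted or linear-quadratic fits.

```python
def fit_linear(doses, fragments):
    """Least-squares fit: excess fragments = a * dose + b."""
    n = len(doses)
    dm = sum(doses) / n
    fm = sum(fragments) / n
    a = sum((d - dm) * (f - fm) for d, f in zip(doses, fragments)) / \
        sum((d - dm) ** 2 for d in doses)
    b = fm - a * dm
    return a, b

def estimate_dose(a, b, observed_fragments):
    """Invert the calibration curve to estimate an unknown dose."""
    return (observed_fragments - b) / a

# Hypothetical calibration data (dose in Gy vs. mean excess fragments/cell)
doses = [4.0, 8.0, 12.0, 16.0, 20.0]
fragments = [2.1, 4.0, 6.2, 7.9, 10.1]

a, b = fit_linear(doses, fragments)
unknown_dose = estimate_dose(a, b, observed_fragments=5.0)
```

The same inversion is what biodosimetry performs in practice: a blood sample with a scored fragment yield is mapped back through the calibration curve to an estimated absorbed dose.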

Keywords: cell response to radiation exposure, drug induced premature chromosome condensation, premature chromosome condensation procedure, proton therapy

Procedia PDF Downloads 352
182 Developing City-Level Sustainability Indicators in the Mena Region with the Case of Benghazi and Amman

Authors: Serag El Hegazi

Abstract:

The development of an assessment methodological framework for local and institutional sustainability is a key factor for future development plans and visions. This paper develops an Approach to Local and Institutional Sustainability Assessment (ALISA). ALISA is a methodological framework that assists in the clarification, formulation, preparation, selection, and ranking of key indicators to facilitate the assessment of the level of sustainability at the local and institutional levels in North African and Middle Eastern cities. Based on the literature review, this paper formulates ALISA as a combination of the UNCSD (2001) theme indicator framework and the issue-based framework illustrated by McLaren (1996). The framework has been implemented to formulate, select, and prioritise the key indicators that most directly reflect the issues of a case study at the local community and institutional level. Yet there is still a lack of clear indicators and frameworks that can be applied successfully at the local and institutional levels in the MENA region, particularly in the cities of Benghazi and Amman, which is an essential issue for estimating sustainable development. Therefore, a conceptual framework was developed and tested as a methodology to collect and classify data. The main goal is to develop the ALISA framework to formulate, choose, and prioritize key sustainability indicators, which can then guide the assessment process and support decision- and policymakers in developing sustainable cities at the local and institutional level in the city of Benghazi.
The conceptual and methodological framework, supported in this research by documentary evidence and data analysed in two case studies, including focus-group discussions, semi-structured interviews, and questionnaires, reflects the approach required to develop a combined framework that assists the development of sustainability indicators. To achieve this and reach the aim of this paper, which is to develop a practical sustainability indicators framework that can be used as a tool for deriving local and institutional sustainability indicators, the following stages are applied to propose a set of indicators: Step 1: issue clarification; Step 2: objective formulation and analysis of issues and boundaries; Step 3: indicator preparation (first list of proposed indicators); Step 4: indicator selection; Step 5: indicator rating/ranking.

Keywords: sustainability indicators, approach to local and institutional level, ALISA, policymakers

Procedia PDF Downloads 21
181 Internal Financing Constraints and Corporate Investment: Evidence from Indian Manufacturing Firms

Authors: Gaurav Gupta, Jitendra Mahakud

Abstract:

This study focuses on the significance of internal financing constraints in the determination of corporate fixed investment for Indian manufacturing companies. Financially constrained companies, which have fewer internal funds or retained earnings, face higher transaction and borrowing costs due to imperfections in the capital market. The period of study is 1999-2000 to 2013-2014, and we consider 618 manufacturing companies for which continuous data are available throughout the study period. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods such as the fixed effect and random effect methods are used for the analysis. The Likelihood Ratio test, Lagrange Multiplier test, and Hausman test results support the suitability of the fixed effect model for the estimation. The cash flow and liquidity of the company are used as proxies for internal financial constraints. In accordance with various theories of corporate investment, we include other firm-specific variables, namely firm age, firm size, profitability, sales, and leverage, as control variables in the model. From the econometric analysis, we find that internal cash flow and liquidity have a significant and positive impact on corporate investment. Variables such as the cost of capital, sales growth, and growth opportunities are also found to significantly determine corporate investment in India, which is consistent with the neoclassical, accelerator, and Tobin’s q theories of corporate investment. To check the robustness of the results, we divided the sample on the basis of cash flow and liquidity: firms with cash flow greater than zero were put in one group and firms with cash flow less than zero in another, and the firms were likewise divided on the basis of liquidity. We find that the results are robust for both types of companies, with positive and negative cash flow and liquidity.
The results for the other variables are also in line with those for the whole sample. These findings confirm that internal financing constraints play a significant role in the determination of corporate investment in India. The findings have implications for corporate managers, who should focus on projects with higher expected cash inflows to avoid financing constraints and should also maintain adequate liquidity to minimize external financing costs.
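The fixed-effect ("within") estimation favoured by the Hausman test above can be illustrated with a toy demeaning implementation. This is a sketch with invented numbers, not the authors' PROWESS dataset; in practice one would use a panel econometrics package and include the full set of controls.

```python
from collections import defaultdict

def within_estimator(firm_ids, x, y):
    """Fixed-effects (within) slope: demean x and y by firm, then OLS."""
    sums_x = defaultdict(float)
    sums_y = defaultdict(float)
    counts = defaultdict(int)
    for f, xi, yi in zip(firm_ids, x, y):
        sums_x[f] += xi
        sums_y[f] += yi
        counts[f] += 1
    num = den = 0.0
    for f, xi, yi in zip(firm_ids, x, y):
        xd = xi - sums_x[f] / counts[f]  # deviation from firm mean
        yd = yi - sums_y[f] / counts[f]
        num += xd * yd
        den += xd * xd
    return num / den

# Toy panel: two firms with different intercepts, true slope = 2.0
firms = ['A', 'A', 'A', 'B', 'B', 'B']
cash_flow = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
investment = [10 + 2.0 * c for c in cash_flow[:3]] + \
             [50 + 2.0 * c for c in cash_flow[3:]]
beta = within_estimator(firms, cash_flow, investment)  # -> 2.0
```

Demeaning by firm removes the firm-specific intercepts, so the estimated slope reflects only within-firm variation in cash flow, which is the sense in which the fixed effect model controls for unobserved firm heterogeneity.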

Keywords: cash flow, corporate investment, financing constraints, panel data method

Procedia PDF Downloads 241
180 Results of Operation of Online Medical Care System

Authors: Mahsa Houshdar, Seyed Mehdi Samimi Ardestani, Seyed Saeed Sadr

Abstract:

Introduction: Online medical care is a method in which parts of a medical process, whether diagnostics, monitoring, or the treatment itself, are carried out using online services. This system was operated in one boys' high school, one girls' high school, and one high school in a deprived area. Method: In the first step, the students registered to use the system; registration was neither mandatory nor free. They completed a depression scale, an anxiety scale, and a clinical interview through the online medical care system. During this assessment, we could determine the existence and severity of depression and anxiety in each participant, as well as each one's consequent needs: supportive therapy in mild depression or anxiety, a visit to a psychologist in moderate cases, a visit to a psychiatrist in moderate-to-severe cases, visits to both a psychiatrist and a psychologist in severe cases, and medical laboratory examination tests. The lab examination tests were performed on the persons specified by the system and included: serum level of vitamin D, serum level of vitamin B12, serum level of calcium, fasting blood sugar, HbA1c, thyroid function tests, and CBC. All of the students were treated solely with vitamin or mineral therapy and/or treatment of a medical problem (such as hypothyroidism). After a few months, we returned to the high schools and re-estimated the existence and severity of depression and anxiety in the treated students. By comparing these results, the effectiveness of the system could be evaluated. Results: In total, we operated this project with 1077 participants; in 243 of them, the lab examination tests were performed. In the girls' high schools, the existence and severity of depression significantly decreased (P = 0.018 and P = 0.004, respectively), but the results for anxiety were not significant.
In the boys' high schools, the existence and severity of depression significantly decreased (p = 0.023, p = 0.004 and p = 0.049), as did the existence and severity of anxiety (p = 0.041 and p = 0.046), although in one high school the results for anxiety were not significant. In the high school in the deprived area, the students had no problem paying to participate in the project, but they could not pay for the medical lab examination tests; thus, operation of the system was not possible in the deprived area without a sponsor. Conclusion: This online medical system was successful in creating a medical and psychiatric profile without an attending physician. It was successful in decreasing depression without using antidepressants, but only partially successful in decreasing anxiety.
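The pre/post comparison described above is, in essence, a paired significance test on each student's scores before and after treatment. A minimal sketch of such a test on simulated scores (the values below are hypothetical, not the study's data) follows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated depression scale scores for 30 treated students,
# before and after vitamin/mineral therapy (hypothetical values).
before = rng.normal(loc=18.0, scale=4.0, size=30)
after = before - rng.normal(loc=3.0, scale=2.0, size=30)  # ~3-point mean improvement

# Paired t-test: was the within-student change significant?
t_stat, p_value = stats.ttest_rel(before, after)
significant = p_value < 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

With a clear mean improvement, the paired test rejects the null hypothesis of no change, mirroring the significant p-values reported for depression.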

Keywords: depression, diabetes, online medicare, vitamin D deficiency

Procedia PDF Downloads 325
179 Estimation of Morbidity Level of Industrial Labour Conditions at Zestafoni Ferroalloy Plant

Authors: M. Turmanauli, T. Todua, O. Gvaberidze, R. Javakhadze, N. Chkhaidze, N. Khatiashvili

Abstract:

Background: The mining process has a significant influence on human health and quality of life. In recent years, events in Georgia have affected industrial working processes; in particular, minimal labor safety requirements, workplace hygiene standards and the regime of work and rest are not observed. This situation is often caused by a lack of responsibility, awareness and knowledge among both workers and employers. The control and protection of working conditions have worsened in many industries. Materials and Methods: To evaluate the current situation, a prospective epidemiological study using the face-to-face interview method was conducted at the Georgian “Manganese Zestafoni Ferroalloy Plant” in 2011-2013. 65.7% of employees (1,428 bulletins) were surveyed, and the incidence rates of temporary disability days were studied. Results: The average length of a single temporary disability episode was studied, both by sex group and for the whole cohort. According to the classes of harmfulness, the following results were received: Class 2.0-10.3%; 3.1-12.4%; 3.2-35.1%; 3.3-12.1%; 3.4-17.6%; 4.0-12.5%. Among the employees, 47.5% and 83.1% were tobacco and alcohol consumers, respectively. By age group and years of work, prevalence was highest among workers aged ≥50 and those with ≥21 years of service, respectively. The obtained data revealed a morbidity rate that increased with age and years of work. It was found that diseases of the bone and articular system and connective tissue, aggravation of chronic respiratory diseases, ischemic heart disease, hypertension and cerebral blood discirculation were the leading diseases. High morbidity prevalence was observed in workplaces with unsatisfactory labor conditions from a hygienic point of view.
Conclusion: According to the data received, the causes of morbidity are the following: unsafe labor conditions; incomplete preventive medical examinations (preliminary and periodic); lack of access to appropriate health care services; and deficiencies in the gathering, recording and analysis of morbidity data. This epidemiological study was conducted at the JSC “Manganese Ferro Alloy Plant” according to the State program “Prevention of Occupational Diseases” (program code 35 03 02 05).

Keywords: occupational health, mining process, morbidity level, cerebral blood discirculation

Procedia PDF Downloads 428
178 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach

Authors: Nina Ponikvar, Katja Zajc Kejžar

Abstract:

While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied using one of two mainstream approaches, i.e. either the multiplier approach or the value chain approach. Consequently, there is limited comparability of empirical results due to the use of different methodological approaches in the literature. Furthermore, it is often unclear on which criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by the two most frequent methodological approaches to valuing the economic effects of cultural heritage at the macroeconomic level, i.e. the multiplier approach and the value chain approach. We show that the multiplier approach provides a systematic, theory-based view of economic impacts but requires more data and analysis, whereas the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage, since there is a risk that the direct effects are overestimated and double counted, while not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complementary.
Consequently, a direct comparison of the estimated impacts is not possible and should not be made, due to the different scope. To illustrate the difference in impact assessment, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added and employment. The empirical results clearly show that the estimation of the economic impact of a sector using the multiplier approach is more conservative, while the estimates based on value added capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value-added approach, the indirect economic effect of the “narrow” heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
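The multiplier arithmetic above can be reproduced directly from the figures reported in the abstract: one euro of direct spending plus 0.14 euros of indirect and 0.44 euros of induced effects.

```python
# Illustrative multiplier arithmetic using the figures reported in the
# abstract: 0.14 euros of indirect and 0.44 euros of induced effects
# per euro of direct spending in the cultural heritage sector.
direct = 1.00
indirect = 0.14
induced = 0.44

total_effect = direct + indirect + induced  # total output per euro spent
multiplier = total_effect / direct          # overall output multiplier

print(f"Total effect per euro: {total_effect:.2f}")
print(f"Multiplier: {multiplier:.2f}")
```

The resulting multiplier of 1.58 illustrates why the multiplier approach is the more conservative of the two: the value-added figures above (roughly 6 euros of sales in other sectors per euro of sales) are several times larger.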

Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, slovenia

Procedia PDF Downloads 75
177 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
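A minimal sketch of the Random Forest part of the methodology, using scikit-learn on synthetic activity-level data (the feature names and data-generating process below are illustrative assumptions, not the study's dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Synthetic activity-level features (hypothetical names).
scope_changes = rng.integers(0, 5, n)     # number of scope changes
material_delay = rng.exponential(3.0, n)  # days of material delay
planned_cost = rng.uniform(10, 100, n)    # planned activity cost (k$)

# Overrun driven mainly by scope changes and delays, plus noise.
overrun = 2.0 * scope_changes + 0.8 * material_delay + rng.normal(0, 1, n)

X = np.column_stack([scope_changes, material_delay, planned_cost])
X_train, X_test, y_train, y_test = train_test_split(X, overrun, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

r2 = model.score(X_test, y_test)          # out-of-sample fit
importances = model.feature_importances_  # which features drive overruns?
```

Feature importances from such a model are what allow the identification of cost drivers like scope changes and material delivery delays.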

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 56
176 Ribotaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties

Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret

Abstract:

Metagenomic classifiers are widely used for the taxonomic profiling of metagenomic data and estimation of taxa relative abundance. Small subunit rRNA genes are nowadays a gold standard for the phylogenetic resolution of complex microbial communities, although the full power of this marker is only realized when it is used full-length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers and taxonomic classifiers. We then built a pipeline called RiboTaxa to provide a highly sensitive and specific metataxonomic approach. On metagenomics data, RiboTaxa gave the best results compared to other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)), with precise taxonomic identification, accurate relative abundance description and no false-positive detections. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analyses, opening new biological perspectives in human and environmental health. In a study of coral health involving 20 metagenomic samples (4), prokaryote affiliation was limited to the family level, with Endozoicomonadaceae characterising healthy octocoral tissue. RiboTaxa highlighted two species of uncultured Endozoicomonas that were dominant in the healthy tissue. Both species belonged to a genus not yet described, opening new research perspectives on coral health. Applied to metagenomics data from a study on the human gut and extreme longevity (5), RiboTaxa detected an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years), highlighting an archaeal genus not yet described, as well as three uncultured species belonging to the genus Enorma that could be species of interest participating in the longevity process.
RiboTaxa is user-friendly and rapid, allows microbiota structure description from any environment, and its results are easily interpreted. The software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.
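For readers unfamiliar with metataxonomic output, the relative-abundance description mentioned above reduces to normalizing per-taxon read counts. A toy illustration (invented counts, and not RiboTaxa's internal code):

```python
# Toy relative-abundance calculation of the kind a metataxonomic
# profiler reports (illustrative only; counts are invented).
counts = {
    "Endozoicomonas sp. A": 1520,
    "Endozoicomonas sp. B": 980,
    "Other bacteria": 2500,
}

total = sum(counts.values())
relative_abundance = {taxon: n / total for taxon, n in counts.items()}

for taxon, frac in sorted(relative_abundance.items(), key=lambda kv: -kv[1]):
    print(f"{taxon}: {frac:.1%}")
```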

Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker

Procedia PDF Downloads 121
175 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics

Authors: Janne Engblom, Elias Oikarinen

Abstract:

A panel dataset is one that follows a given sample of individuals over time, thus providing multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models which form a wide range of linear models. A special class of panel data models is dynamic in nature. A complication of a dynamic panel data model that includes the lagged dependent variable is endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond Generalized Method of Moments (GMM) estimator, an extension of the Arellano-Bond model in which past values and different transformations of past values of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano-Bover/Blundell-Bond estimator augments Arellano-Bond by making the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations, the original equation and the transformed one, and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically using the Arellano-Bover/Blundell-Bond estimation technique together with ordinary OLS. The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models.
The Arellano-Bover/Blundell-Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data for 14 Finnish cities over 1988-2012, differences in the estimates of short-run housing price dynamics were considerable across models and instrument choices. In particular, the choice of instrumental variables affected both the model estimates and their statistical significance. This was particularly clear when comparing OLS estimates with those of different dynamic panel data models. The estimates provided by dynamic panel data models were more in line with the theory of housing price dynamics.
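The endogeneity bias mentioned above (the Nickell bias) can be made concrete with a short simulation: in a dynamic panel with fixed effects, the within-groups OLS estimate of the lagged-dependent-variable coefficient is biased downward in short panels, which is what motivates GMM estimators such as Arellano-Bover/Blundell-Bond. The sketch below uses made-up data and a larger cross-section than the study's 14 cities so the bias is clearly visible:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 10   # many short panels so the bias is clearly visible
rho = 0.5        # true autoregressive coefficient

# Simulate y[i, t] = alpha[i] + rho * y[i, t-1] + e[i, t]
alpha = rng.normal(0, 1, N)
y = np.zeros((N, T))
y[:, 0] = alpha + rng.normal(0, 1, N)
for t in range(1, T):
    y[:, t] = alpha + rho * y[:, t - 1] + rng.normal(0, 1, N)

# Within (fixed-effects) OLS of y_t on y_{t-1}: demean per individual,
# then regress.  In short panels this estimator is biased downward
# (the Nickell bias), which GMM estimators are designed to avoid.
y_lag = y[:, :-1]
y_cur = y[:, 1:]
y_lag_d = y_lag - y_lag.mean(axis=1, keepdims=True)
y_cur_d = y_cur - y_cur.mean(axis=1, keepdims=True)
rho_fe = (y_lag_d * y_cur_d).sum() / (y_lag_d ** 2).sum()
```

Here `rho_fe` comes out noticeably below the true value of 0.5, illustrating why the dynamic panel estimates in the study differ from conventional fixed-effects results.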

Keywords: dynamic model, fixed effects, panel data, price dynamics

Procedia PDF Downloads 1508
174 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 40
173 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models

Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble

Abstract:

Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield; such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches. Additionally, computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for the calculation of the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, while external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes.
The method can be further extended to calculate the complete productivity versus yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
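The pseudo-steady-state assumption above means each metabolic snapshot is a linear program: maximize a flux of interest subject to S v = 0 and flux bounds. A toy flux balance problem (an illustrative three-reaction network, not the paper's succinate model) can be solved with scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance problem: metabolite A is produced by uptake (v1)
# and consumed by growth (v2) and product formation (v3).
# Pseudo-steady state requires S @ v = 0:
#   A:  v1 - v2 - v3 = 0
S = np.array([[1.0, -1.0, -1.0]])
b = np.zeros(1)

# Bounds: uptake limited to 10 units; all fluxes non-negative.
bounds = [(0, 10), (0, None), (0, None)]

# Maximize product flux v3  ->  minimize -v3.
c = np.array([0.0, 0.0, -1.0])

res = linprog(c, A_eq=S, b_eq=b, bounds=bounds)
v_opt = res.x            # optimal flux distribution
max_product = -res.fun   # theoretical maximum product flux
```

A dynamic strategy of the kind the paper optimizes would re-solve such a program stage by stage, with the uptake bound and objective changing between fermentation stages.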

Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate

Procedia PDF Downloads 215
172 Neuropharmacological and Neurochemical Evaluation of Methanolic Extract of Elaeocarpus sphaericus (Gaertn.) Stem Bark by Using Multiple Behaviour Models of Mice

Authors: Jaspreet Kaur, Parminder Nain, Vipin Saini, Sumitra Dahiya

Abstract:

Elaeocarpus sphaericus has been used in the Indian traditional medicine system for the treatment of stress, anxiety, depression, palpitation, epilepsy, migraine and lack of concentration. The study was designed to evaluate the neurological potential, such as anxiolytic, muscle relaxant and sedative activity, of a methanolic extract of Elaeocarpus sphaericus stem bark (MEESSB) in mice. Preliminary phytochemical screening and acute oral toxicity testing of MEESSB were carried out using standard methods. Anxiety was assessed with the Elevated Plus-Maze (EPM), Light and Dark Test (LDT), Open Field Test (OFT) and Social Interaction Test (SIT). Motor coordination and sedative effects were also observed using an actophotometer, rota-rod apparatus and ketamine-induced sleeping time, respectively. Animals were treated with different doses of MEESSB (i.e., 100, 200, 400 and 800 mg/kg orally) and diazepam (2 mg/kg i.p.) for 21 days. Brain neurotransmitter levels (dopamine, serotonin and norepinephrine) were estimated by validated methods. Preliminary phytochemical analysis of the extract revealed the presence of tannins, phytosterols, steroids and alkaloids. In the acute toxicity studies, MEESSB was found to be non-toxic, with no mortality observed. In the anxiolytic studies, the different doses of MEESSB showed a significant (p<0.05) effect on EPM and LDT. In OFT and SIT, a significant (p<0.05) increase in ambulation, rearing and social interaction time was observed. In the motor coordination tests, MEESSB did not significantly affect the latency to fall from the rotarod bar compared to the control group. Moreover, no significant effects on ketamine-induced sleep latency or total sleeping time were observed.
Neurotransmitter estimation revealed an increased concentration of dopamine, whereas serotonin and norepinephrine levels were decreased in the mouse brain, with MEESSB at the 800 mg/kg dose only. The study has validated the folkloric use of the plant as an anxiolytic in Indian traditional medicine, while also suggesting potential usefulness in the treatment of stress and anxiety without causing sedation.

Keywords: anxiolytic, behavior experiments, brain neurotransmitters, elaeocarpus sphaericus

Procedia PDF Downloads 177
171 Driving Environmental Quality through Fuel Subsidy Reform in Nigeria

Authors: O. E. Akinyemi, P. O. Alege, O. O. Ajayi, L. A. Amaghionyediwe, A. A. Ogundipe

Abstract:

Nigeria, an oil-producing developing country in Africa, is one of the many countries that have been subsidizing the consumption of fossil fuel. Despite the numerous advantages of this policy, ranging from increased energy access, fostering economic and industrial development, and protecting poor households from oil price shocks, to political considerations, fuel subsidies have been found to impose economic costs, to be wasteful and inefficient, to create price distortions, to discourage investment in the energy sector, and to contribute to environmental pollution. These negative consequences, coupled with the fact that the policy has not been very successful at achieving some of its stated objectives, led a number of organisations and countries, such as the Group of 7 (G7), World Bank, International Monetary Fund (IMF), International Energy Agency (IEA) and Organisation for Economic Co-operation and Development (OECD), to call for a global effort towards reforming fossil fuel subsidies. This call became necessary in view of seeking ways to harmonise existing policies which may, by design, hamper current efforts at tackling environmental concerns such as climate change, in addition to driving a green growth strategy and low-carbon development in achieving sustainable development, in which the energy sector is identified as playing a vital role. This study thus investigates the prospects of using fuel subsidy reform as a viable tool in driving an economy that de-emphasizes carbon growth in Nigeria. The method used is the Johansen and Engle-Granger two-step co-integration procedure, applied to investigate the existence or otherwise of a long-run equilibrium relationship for the period 1971 to 2011. The theoretical framework is rooted in the Environmental Kuznets Curve (EKC) hypothesis. In developing three scenarios (subsidy payment, no subsidy payment and effective subsidy), findings from the study supported the existence of a long-run equilibrium relationship.
Also, the estimation results showed that the first and second scenarios do not significantly influence the indicator of environmental quality. The implication is that in reforming fuel subsidies to drive environmental quality in an economy like Nigeria, a strong and effective regulatory framework (the measure that was interacted with the fuel subsidy to yield the effective subsidy) is essential.

Keywords: environmental quality, fuel subsidy, green growth, low carbon growth strategy

Procedia PDF Downloads 326
170 Intensive Neurophysiological Rehabilitation System: New Approach for Treatment of Children with Autism

Authors: V. I. Kozyavkin, L. F. Shestopalova, T. B. Voloshyn

Abstract:

Introduction: Rehabilitation of children with Autism is a pressing issue in psychiatry and neurology, owing to the constantly increasing number of children with Autistic Spectrum Disorders (ASD). Existing rehabilitation approaches in the treatment of children with Autism improve their medico-social and socio-psychological adjustment. Experience treating different kinds of autistic disorders at the International Clinic of Rehabilitation (ICR) reveals the need for a complex, intensive approach and wider implementation of the Kozyavkin method in the treatment of children with ASD. Methods: 19 children aged 3 to 14 years were examined. They were diagnosed with ‘Autism’ (F84.0) and comorbid neurological pathology (from pyramidal insufficiency to para- and tetraplegia). All patients underwent a two-week rehabilitation course at ICR, where the INRS approach was used. INRS included methods such as biomechanical correction of the spine, massage, physical therapy, joint mobilization and wax-paraffin applications. These were supplemented by art therapy, ergotherapy, rhythmical group exercises, computer game therapy, team Olympic games and other methods for improving the motivation and social integration of the child. Efficacy was estimated by questioning parents twice: at the onset of the INRS rehabilitation course and two weeks afterward. For the efficacy assessment, a standardized tool was used, namely the Autism Treatment Evaluation Checklist (ATEC); this scale was selected because any rehabilitation approach for a child with Autism can be assessed with it. Results: Before the onset of INRS treatment, the mean ATEC score was 64.75±9.23, revealing severe communication, speech, socialization and behavioral impairments in the examined children.
After the end of the rehabilitation course, the mean score was 56.5±6.7, indicating positive dynamics compared to the onset of rehabilitation. Overall, improvement of the psychoemotional state occurred in 90% of cases. The most significant changes occurred in the domains of speech (16.5 before and 14.5 after treatment), socialization (15.1 before and 12.5 after) and behavior (20.1 before and 17.4 after). Conclusion: As a result of the INRS rehabilitation course, a reduction of autistic symptoms was noted. In particular, improvements in speech were observed (children began to utter new syllables and words), signs of destructiveness decreased somewhat, the quality of contact with surrounding people improved, and new self-care skills appeared. The prospect for further study is a deeper examination of INRS according to evidence-based medicine standards and an assessment of its usefulness in the treatment of Autism and ASD.

Keywords: intensive neurophysiological rehabilitation system (INRS), international clinic of rehabilitation, ASD, rehabilitation

Procedia PDF Downloads 169
169 Digital Image Correlation: Metrological Characterization in Mechanical Analysis

Authors: D. Signore, M. Ferraiuolo, P. Caramuta, O. Petrella, C. Toscano

Abstract:

Digital Image Correlation (DIC) is a newly developed optical technique that is spreading across all engineering sectors because it allows non-destructive estimation of the entire surface deformation without any contact with the component under analysis. These characteristics make DIC very appealing in all cases where the global deformation state must be known without using strain gages, which are the most commonly used measuring devices. DIC is applicable to any material subjected to distortion caused by either thermal or mechanical load, and yields high-definition mapping of displacements and deformations. That is why, in the civil and transportation industries, DIC is very useful for studying the behavior of metallic as well as composite materials. DIC is also used in the medical field for the characterization of the local strain field of vascular tissue surfaces subjected to uniaxial tensile loading. DIC can be carried out in two-dimensional mode (2D DIC) if a single camera is used or in three-dimensional mode (3D DIC) if two cameras are involved. Each point of the test surface framed by the cameras can be associated with a specific pixel of the image, and the coordinates of each point are calculated knowing the relative distance between the two cameras together with their orientation. In both arrangements, when a component is subjected to a load, several images related to different deformation states can be acquired through the cameras. Specific software analyzes the images via the mutual correlation between the reference image (obtained without any applied load) and those acquired during deformation, giving the relative displacements. In this paper, a metrological characterization of digital image correlation is performed on aluminum and composite targets in both static and dynamic loading conditions, by comparison between DIC and strain gauge measurements.
In the static test, interesting results were obtained thanks to excellent agreement between the two measuring techniques; in addition, the deformation detected by DIC is consistent with the result of a FEM simulation. In the dynamic test, DIC was able to follow the periodic deformation of the specimen with good accuracy, giving results consistent with those of the FEM simulation. In both situations, it was seen that DIC measurement accuracy depends on several parameters, such as the optical focusing, the parameters chosen for the mutual correlation between the images and, finally, the reference points on the image to be analyzed. In the future, the influence of these parameters will be studied, and a method to increase the accuracy of the measurements will be developed in accordance with the requirements of industry, especially the aerospace industry.
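At its core, 2D DIC recovers displacements by matching a reference subset against the deformed image via normalized cross-correlation. A stripped-down integer-pixel sketch (real DIC software adds subpixel interpolation and subset shape functions) is:

```python
import numpy as np

rng = np.random.default_rng(3)

# Reference image and a copy shifted by a known displacement (3, 5) px,
# mimicking the subset matching at the core of 2D DIC (illustrative).
ref = rng.random((64, 64))
dy, dx = 3, 5
deformed = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)

# Take a subset from the reference and search for its best match in the
# deformed image using normalized cross-correlation over integer shifts.
sub = ref[20:36, 20:36]
best, best_score = (0, 0), -np.inf
for sy in range(-8, 9):
    for sx in range(-8, 9):
        cand = deformed[20 + sy:36 + sy, 20 + sx:36 + sx]
        a = sub - sub.mean()
        b = cand - cand.mean()
        score = (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())
        if score > best_score:
            best, best_score = (sy, sx), score

measured_displacement = best  # recovers the imposed shift (3, 5)
```

The exact match gives a correlation of 1.0 at the true shift; on real speckle images the peak is lower, which is one reason accuracy depends on focusing and on the correlation parameters mentioned above.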

Keywords: accuracy, deformation, image correlation, mechanical analysis

Procedia PDF Downloads 311
168 Exploration and Evaluation of the Effect of Multiple Countermeasures on Road Safety

Authors: Atheer Al-Nuaimi, Harry Evdorides

Abstract:

Every day, many people die or are disabled or injured on roads around the world, which necessitates more specific treatments for transportation safety issues. The International Road Assessment Programme (iRAP) model is one of the comprehensive road safety models, accounting for many factors that affect road safety in a cost-effective way in low- and middle-income countries. In the iRAP model, road safety is divided into five star ratings, from 1 star (the lowest level) to 5 stars (the highest level). These star ratings are based on a star rating score calculated by the iRAP methodology from road attributes, traffic volumes and operating speeds. The outcomes of the iRAP methodology are the treatments that can be used to improve road safety and reduce the numbers of fatalities and serious injuries (FSI). These countermeasures can be applied singly or combined as multiple countermeasures at a location. There is general agreement that the effectiveness of a countermeasure diminishes when it is used in combination with others; that is, crash reduction estimates of individual countermeasures cannot simply be added together. The iRAP methodology therefore uses multiple-countermeasure adjustment factors to predict reductions in the effectiveness of road safety countermeasures when more than one countermeasure is chosen. The correction factors are computed for every 100-meter segment and for every crash type. However, a limitation of this methodology is a likely over-estimation of the predicted crash reduction. This study aims to adjust this correction factor by developing new models to calculate the effect of using multiple countermeasures on the number of fatalities for a location or an entire road. Regression models were used to establish relationships between crash frequencies and the factors that affect their rates.
Multiple linear regression, negative binomial regression, and Poisson regression techniques were used to develop models that can address the effectiveness of using multiple countermeasures. Analyses conducted with the R Project for Statistical Computing showed that a model developed with the negative binomial regression technique gives more reliable predictions of the number of fatalities after the implementation of multiple road safety countermeasures than the iRAP model does. The results also showed that the negative binomial regression approach is more precise than the multiple linear and Poisson regression techniques because it accounts for overdispersion and the associated standard error issues.

Keywords: international road assessment program, negative binomial, road multiple countermeasures, road safety

Procedia PDF Downloads 240
167 Mathematical Modelling of Biogas Dehumidification by Using of Counterflow Heat Exchanger

Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans

Abstract:

Dehumidification of biogas at biomass plants is very important for the energy-efficient burning of biomethane at the outlet. A few methods are widely used to reduce the water content of biogas, e.g. chiller/heat exchanger based cooling, the use of adsorbents such as PSA, or a combination of such approaches. A quite different method of biogas dehumidification is proposed and analyzed in this paper. The main idea is to direct the flow of biogas from the plant downwards around it, thus creating an additional insulation layer. As the temperature in the gas shell layer around the plant decreases from ~38°C to 20°C in the summer, or even to 0°C in the winter, water vapor condenses. The water at the bottom of the gas shell can be collected and drained away. In addition, another, upward shell layer is created on the outer side, after the condensate drainage point, to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research deals with the numerical modelling of the biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in the two channels are conjugated when the thermal resistance between the layers is low. The MATLAB programming language is used for multiphysical model development, numerical calculations, and result visualization. An experimental installation of a biogas plant's vertical wall with two additional layers of polycarbonate sheets and a controlled gas flow was set up to verify the modelling results.
Gas flow at the inlet/outlet, temperatures between the layers, and humidity were controlled and measured during a number of experiments. Good correlation with the modelling results for the vertical wall section allows the developed numerical model to be used for estimating the parameters of the whole biogas dehumidification system. Numerical modelling of the biogas counterflow heat exchanger system placed on the plant's wall for various cases makes it possible to optimize the thicknesses of the gas layers and the insulation layer to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling a defined system configuration under known conditions helps to predict the temperature and humidity content of the biogas at the outlet.
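The paper's conjugate numerical model is not reproduced here, but the basic counterflow heat exchanger relation it builds on can be sketched with the textbook effectiveness-NTU method; the flow rates and UA value below are hypothetical, chosen only to mirror the abstract's winter case (~38°C gas against 0°C ambient):

```python
import math

def counterflow_outlet_temps(t_hot_in, t_cold_in, c_hot, c_cold, ua):
    """Effectiveness-NTU model of an ideal counterflow heat exchanger.

    c_hot, c_cold: heat-capacity rates (mass flow * cp), W/K
    ua: overall heat-transfer coefficient times area, W/K
    """
    c_min, c_max = min(c_hot, c_cold), max(c_hot, c_cold)
    cr = c_min / c_max
    ntu = ua / c_min
    if abs(cr - 1.0) < 1e-9:            # balanced flows: limiting formula
        eff = ntu / (1.0 + ntu)
    else:
        e = math.exp(-ntu * (1.0 - cr))
        eff = (1.0 - e) / (1.0 - cr * e)
    q = eff * c_min * (t_hot_in - t_cold_in)   # heat duty, W
    return t_hot_in - q / c_hot, t_cold_in + q / c_cold

# Warm biogas at ~38 degC cooled against ambient air at 0 degC
t_gas_out, t_air_out = counterflow_outlet_temps(38.0, 0.0, 50.0, 60.0, 120.0)
```

The counterflow arrangement is what lets the outgoing gas pre-cool against the cold outer layer while the condensate is drained at the bottom of the shell.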

Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model

Procedia PDF Downloads 549
166 Signaling Theory: An Investigation on the Informativeness of Dividends and Earnings Announcements

Authors: Faustina Masocha, Vusani Moyo

Abstract:

For decades, dividend announcements have been presumed to contain important signals about the future prospects of companies, and the same has been presumed about management earnings announcements. Although both dividend and earnings announcements are considered informative, a number of researchers have questioned their credibility and found both to carry only short-term signals. Regarding dividend announcements, some authors argued that although they may contain important information that changes share prices, and consequently leads to the accumulation of abnormal returns, they are less informative than other signaling tools such as earnings announcements. Yet this claim has been refuted by researchers who found the effect of earnings to be transitory and of little value to shareholders, as indicated by the small abnormal returns earned during the period surrounding earnings announcements. Given the above, it is apparent that both dividends and earnings have been hypothesized to have a signaling impact, which prompts the question of which of these two signaling tools is more informative. To answer it, two follow-up questions were asked. The first sought to determine which event has the greater effect on share prices, while the second focused on which event influences trading volume more. To answer the first question and evaluate the effect of each event on share prices, an event study methodology was applied to a sample of the top 10 JSE-listed companies, using data collected from 2012 to 2019, to determine whether shareholders earned abnormal returns (ARs) around announcement dates. The event that resulted in the most persistent and largest ARs was considered more informative.
For the second follow-up question, an investigation was conducted to determine whether dividend or earnings announcements influence trading patterns, resulting in abnormal trading volumes (ATV) around announcement time. The event that resulted in the greater ATV was considered more informative. Using an estimation period of 20 days, an event window of 21 days, and hypothesis testing, it was found that announcements of earnings increases resulted in the largest ARs and Cumulative Abnormal Returns (CARs) and had a lasting effect, whereas the effect of dividend announcements lasted only until day +3. This supports empirical arguments that the signaling effect of dividends is diminishing. It was also found that when reported earnings declined relative to the previous period, trading volume increased, resulting in ATV. Although dividend announcements did yield abnormal returns, these were smaller than those earned around earnings announcements, which refutes a number of theoretical and empirical arguments that found dividends to be more informative than earnings announcements.
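The event study calculation of ARs and CARs described above can be sketched with a standard market model; this is a generic illustration on synthetic returns, not the study's JSE data, though the 20-day estimation and 21-day event windows match the abstract:

```python
import numpy as np

def event_study(stock_ret, market_ret, est_len=20, event_len=21):
    """Market-model event study: abnormal returns and CARs.

    stock_ret, market_ret: 1-D arrays of daily returns where the first
    `est_len` observations form the estimation window and the next
    `event_len` observations form the event window.
    """
    rs, rm = stock_ret[:est_len], market_ret[:est_len]
    beta, alpha = np.polyfit(rm, rs, 1)           # OLS market model
    ev_s = stock_ret[est_len:est_len + event_len]
    ev_m = market_ret[est_len:est_len + event_len]
    ar = ev_s - (alpha + beta * ev_m)             # abnormal returns
    return ar, np.cumsum(ar)                      # ARs and CARs

# Synthetic check: a stock that tracks the market exactly has zero ARs
rng = np.random.default_rng(2)
mkt = rng.normal(0.0, 0.01, 41)
stk = 0.0005 + 1.1 * mkt
ar, car = event_study(stk, mkt)
```

In the study, statistically significant ARs or CARs in the event window are what mark an announcement as informative.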

Keywords: dividend signaling, event study methodology, information content of earnings, signaling theory

Procedia PDF Downloads 172
165 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, assessing these parameters requires human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm were used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value was determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset poses a protein regression problem, while the second poses a variety classification problem. Deep convolutional neural networks (CNNs) have the potential to exploit the spatio-spectral correlations within a hyperspectral image to estimate the qualitative and quantitative parameters simultaneously. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested.
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
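Of the chemometric preprocessing steps listed above, SNV is simple enough to sketch in a few lines of NumPy; the two 5-band spectra below are hypothetical:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row)
    to zero mean and unit standard deviation, a common chemometric
    correction for multiplicative scatter effects."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two hypothetical NIR spectra that differ only by a scatter gain;
# after SNV they become identical.
x = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
              [2.0, 4.0, 6.0, 8.0, 10.0]])
x_snv = snv(x)
```

Removing such per-sample offset and gain effects is exactly the kind of preprocessing a CNN may learn to do implicitly, which is why the paper compares results with and without it.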

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 99
164 Estimating Understory Species Diversity of West Timor Tropical Savanna, Indonesia: The Basis for Planning an Integrated Management of Agricultural and Environmental Weeds and Invasive Species

Authors: M. L. Gaol, I. W. Mudita

Abstract:

Indonesia is well known as a country covered by lush tropical rain forest, but in fact, in the southeastern part of the country, within the area geologically known as the Lesser Sunda Islands, the dominant vegetation is tropical savanna. The Lesser Sundas are a chain of islands located closer to Australia than to islands in other parts of the country. The island in the chain closest to Australia, and thereby most strongly affected by the hot, dry Australian climate, is Timor, whose western part belongs to Indonesia and whose eastern part is the sovereign state of East Timor. Despite being the dominant vegetation cover, the tropical savanna of West Timor, especially its understory, is rarely investigated. This research was therefore carried out to investigate the structure, composition, and diversity of the understory of this tropical savanna as the basis for examining the possibility of introducing other species for various purposes. For this research, 14 terrestrial communities representing the major types of savanna existing in West Timor were selected with the aid of the most recently available satellite imagery. At each community, one 50 m x 50 m stand most representative of the community was chosen as the site of observation for the type of savanna under investigation. At each of the 14 communities, 20 plots of 1 m x 1 m were placed at random to identify understory species, count the total number of individuals, and estimate the cover of each species. Based on these counts and estimates, the importance value of each species was later calculated. The results indicate that the understory of the savanna in West Timor consists of 73 species, of which 18 are grasses and 55 are non-grasses.
Although fewer in species number than the non-grasses, the grass species dominated the savanna, as indicated by their share of individuals (65.33% vs. 34.67%), species cover (57.80% vs. 42.20%), and importance value (123.15 vs. 76.85). Across the 14 communities, the lowest grass density was 13.50/m² and the highest was 417.50/m². All 18 grass species found are common agricultural weeds, whereas of the 55 non-grass species, 10 are commonly found as agricultural weeds, environmental weeds, or invasive species. For better management of the savanna in the region, these findings provide the basis for planning a more integrated approach to managing such agricultural and environmental weeds and invasive species, taking into account the structure, composition, and species diversity of the understory at each site. The findings also provide the basis for a better understanding of the flora of the region as a whole and for developing a flora database of West Timor in the future.
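The importance value calculation mentioned above can be sketched as follows; the abstract does not state its exact formulation, so a common two-component version (relative density plus relative cover, summing to 200 across species) is assumed, and the species counts below are hypothetical:

```python
# Hypothetical plot data: individuals counted and % cover estimated per
# species across the sampled 1 m x 1 m plots (not the study's real data).
counts = {"grass_A": 120, "grass_B": 80, "herb_C": 40, "herb_D": 10}
cover = {"grass_A": 35.0, "grass_B": 25.0, "herb_C": 30.0, "herb_D": 10.0}

def importance_values(counts, cover):
    """Importance value per species as the sum of relative density
    (share of individuals, %) and relative cover (%)."""
    total_n = sum(counts.values())
    total_c = sum(cover.values())
    return {sp: 100.0 * counts[sp] / total_n + 100.0 * cover[sp] / total_c
            for sp in counts}

iv = importance_values(counts, cover)
```

Summing the importance values of the grass species versus the non-grass species gives the kind of 123.15 vs. 76.85 comparison reported in the abstract.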

Keywords: tropical savanna, understory species, integrated management, weedy and invasive species

Procedia PDF Downloads 136
163 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years whose antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is required to have strong energy concentration, high resolution, and a low sidelobe level in order to form point-to-point interference at the intended location. To eliminate the angle-distance coupling of the traditional FDA and make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving the array structure and frequency offset of the traditional FDA. Simulation results show that the array pattern forms a dot-shaped beam with more concentrated energy and that its resolution and sidelobe level are improved. However, in traditional adaptive beamforming the covariance matrix of the signal is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem: the estimation error of the covariance matrix distorts the beam, so the output pattern cannot form a dot-shaped beam, and it also suffers from main-lobe deviation and a high sidelobe level. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows. First, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to separate the interference subspace, the noise subspace, and the corresponding eigenvalues.
Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their spread and improving beamforming performance. Theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shaped beam with a limited number of snapshots, reduces the sidelobe level, improves the robustness of beamforming, and achieves better overall performance.
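The eigenvalue-correction idea can be sketched for a generic minimum-variance beamformer; this is an illustrative reading of the approach, not the paper's exact algorithm, and the correction index `gamma` and the flattening rule below are assumptions:

```python
import numpy as np

def corrected_mvdr_weights(snapshots, steer, n_interf, gamma=0.5):
    """Minimum-variance beamformer with exponential correction of the
    small (noise-subspace) eigenvalues of the sample covariance.

    snapshots: (elements, N) array of array snapshots
    steer:     desired steering vector
    n_interf:  assumed number of interference sources
    gamma:     correction index, 0 < gamma < 1 flattens the eigenvalues
    """
    n = snapshots.shape[1]
    R = snapshots @ snapshots.conj().T / n        # sample covariance
    lam, V = np.linalg.eigh(R)                    # ascending eigenvalues
    k = len(lam) - n_interf                       # noise-subspace size
    noise = lam[:k]
    lam_c = lam.copy()
    # Exponentially compress the noise eigenvalues toward their mean,
    # shrinking the estimation-error spread that distorts the beam.
    lam_c[:k] = noise.mean() * (noise / noise.mean()) ** gamma
    R_c = (V * lam_c) @ V.conj().T                # corrected covariance
    w = np.linalg.solve(R_c, steer)
    return w / (steer.conj() @ w)                 # distortionless gain

# Hypothetical 8-element array with only 30 snapshots (limited-snapshot case)
rng = np.random.default_rng(3)
snaps = rng.standard_normal((8, 30)) + 1j * rng.standard_normal((8, 30))
a = np.ones(8, dtype=complex)
w = corrected_mvdr_weights(snaps, a, n_interf=2)
```

The normalization enforces unit gain in the look direction, while the flattened noise eigenvalues keep the inverse of the corrected covariance from amplifying estimation error at low snapshot counts.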

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 130
162 The Role of Risk Attitudes and Networks on the Migration Decision: Empirical Evidence from the United States

Authors: Tamanna Rimi

Abstract:

A large body of literature has discussed the determinants of the migration decision. However, the potential role of individual risk attitudes in the migration decision has so far been overlooked. The migration literature has studied how the expected income differential influences migration flows for a risk-neutral individual, yet migration also takes place when there is no expected income differential, or even when the variability of income appears lower than at the current location. This migration puzzle motivates a recent strand of the literature that analyzes how attitudes towards risk influence the decision to migrate. However, the significance of risk attitudes for the migration decision has mostly been addressed from a theoretical perspective in the mainstream migration literature. Labor market outcomes and the overall economy are strongly influenced by migration in many countries, so attitudes towards risk as a determinant of migration deserve more attention in empirical studies. To the author's best knowledge, this is the first study to examine the relationship between relative risk aversion and the migration decision in the US. This paper considers movement across the United States as a form of migration. In addition, it explores the effect on the migration decision of the network formed by the growing size of one's own ethnic group at a source location, and how attitudes towards risk vary with this network effect. Two ethnic groups (Asian and Hispanic) are considered in this regard. For the empirical estimation, this paper uses two data sources: 1) the U.S. census data for social, economic, and health research, 2010 (IPUMS), and 2) the University of Michigan Health and Retirement Study, 2010 (HRS). To measure relative risk aversion, this study uses the 'Two-Sample Two-Stage Instrumental Variable (TS2SIV)' technique.
This is similar to the 'Two-Sample Instrumental Variable (TSIV)' technique of Angrist (1990) and Angrist and Krueger (1992). Using a probit model, the empirical investigation yields the following results: (i) risk attitude has a significantly large impact on the migration decision, with more risk-averse people being less likely to migrate; (ii) the impact of risk attitude on migration varies with other demographic characteristics such as age and sex; (iii) people living in places with a higher concentration of households of the same ethnicity are less likely to migrate from their current place; (iv) the effect of risk attitude on migration varies with the network effect. The overall findings relating risk attitude, the migration decision, and the network effect make a significant contribution towards closing the gap between migration theory and empirical work in the migration literature.

Keywords: migration, network effect, risk attitude, U.S. market

Procedia PDF Downloads 162