Search results for: least squares estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2185

205 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information—for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data are used. Usually, however, the proportion of complete data is rather small, which leads to most information being neglected. The complete data may also be strongly distorted. In addition, the reason that data are missing might itself contain information, which is ignored with that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, or neural network techniques, are applied. By training the model iteratively on the imputed data and, thereby, including the information of all data in the model, the distortion of the first training set—the complete data—vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After the optimal parameter set has been found for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
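The evaluation loop described above—artificially removing known values, imputing them, comparing against the originals, then fitting a price distribution and deriving a survival function—can be sketched as follows. This is a minimal illustration on synthetic data; scikit-learn's KNNImputer stands in for the clustering, PCA and neural-network imputers named in the abstract, and all numbers are hypothetical.

```python
# Sketch: mask known values at random, impute them, compare to the originals,
# then fit a price distribution and derive a survival function for one segment.
import numpy as np
from scipy import stats
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
n = 1000
rooms = rng.integers(1, 6, size=n).astype(float)              # desired number of rooms
price = np.exp(13.5 + 0.25 * rooms + rng.normal(0, 0.3, n))   # hypothetical asking prices

X = np.column_stack([rooms, price])
mask = rng.random(X.shape) < 0.2            # artificially remove 20% of the entries
X_missing = X.copy()
X_missing[mask] = np.nan

imputer = KNNImputer(n_neighbors=10)        # stand-in for the unsupervised imputers
X_imputed = imputer.fit_transform(X_missing)
rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
print(f"imputation RMSE on artificially removed entries: {rmse:.1f}")

# Fit a price distribution for one segment (e.g. 3-room searches) and derive
# the survival function S(p) = share of searchers willing to pay more than p.
seg = X_imputed[X_imputed[:, 0].round() == 3, 1]
shape, loc, scale = stats.lognorm.fit(seg, floc=0)
grid = np.linspace(seg.min(), seg.max(), 5)
print("survival function over the price grid:",
      stats.lognorm.sf(grid, shape, loc, scale).round(2))
```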

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 285
204 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs. Some of the approaches might result in a better estimation of the system response than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter might be aleatory, but sufficient data might not be available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter has an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each of the parameters of the random variable is an unknown element of a known interval, and this uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, the sampling in the sampling-based methodology is not exhaustive. That is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
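As a concrete illustration of sampling-based propagation under mixed aleatory/epistemic uncertainty, the double-loop sketch below samples the epistemic interval quantities in an outer loop and the aleatory variables in an inner loop, and records bounds on a response percentile. The model function g() and all ranges are invented for illustration and are not the challenge-problem functions.

```python
# Minimal double-loop (nested) sampling sketch for mixed uncertainty.
import numpy as np

rng = np.random.default_rng(1)

def g(x1, x2):
    # hypothetical system response
    return x1 ** 2 + np.sin(3 * x2)

# (i)  aleatory: x1 ~ Normal(mu1, 1), functional form known
# (ii) epistemic: x2 fixed but only known to lie in [0.2, 0.8]
# (iii) distributional p-box: mean of x1 only known to lie in [-0.1, 0.1]
p95_low, p95_high = np.inf, -np.inf
for _ in range(200):                           # outer loop: epistemic realisations
    mu1 = rng.uniform(-0.1, 0.1)
    x2 = rng.uniform(0.2, 0.8)
    x1 = rng.normal(mu1, 1.0, size=2000)       # inner loop: aleatory sampling
    p95 = np.percentile(g(x1, x2), 95)         # response percentile of interest
    p95_low, p95_high = min(p95_low, p95), max(p95_high, p95)

print(f"estimated bounds on the 95th percentile: [{p95_low:.3f}, {p95_high:.3f}]")
# With a finite outer loop these bounds tend to be under-estimated, which is
# the limitation that motivates the optimisation-based (PBO) search over the
# epistemic variables described in the abstract.
```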

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 240
203 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increased amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, and the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core is one of the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, leads to a decrease of insertion losses (IL) and achieves the effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, we focused on particular defects and errors which can realistically occur, such as eccentricity, connector shift or dust; these were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
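Encircled flux itself is a simple quantity: EF(r) is the fraction of the total launched power contained within radius r of the core centre. The sketch below computes it from a synthetic near-field intensity image; the Gaussian-like profile and the evaluation radii are placeholders, not measured OM3/OM4 data.

```python
# Illustrative encircled-flux computation on a simulated near-field image.
import numpy as np

n = 401
x = np.linspace(-31.25, 31.25, n)             # microns, 62.5 um window
xx, yy = np.meshgrid(x, x)
r = np.hypot(xx, yy)
intensity = np.exp(-(r / 12.0) ** 2)          # synthetic near-field intensity

def encircled_flux(intensity, r, radius):
    """Fraction of total power inside the given radius."""
    return intensity[r <= radius].sum() / intensity.sum()

for radius in (4.5, 11.0, 15.0, 22.0):        # example evaluation radii (um)
    print(f"EF({radius:>4.1f} um) = {encircled_flux(intensity, r, radius):.3f}")
```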

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 375
202 Strategy and Mechanism for Intercepting Unpredictable Moving Targets in the Blue-Tailed Damselfly (Ischnura elegans)

Authors: Ziv Kassner, Gal Ribak

Abstract:

Members of the order Odonata (dragonflies and damselflies) stand out for their maneuverability and superb flight control, which allow them to catch flying prey in the air. These outstanding aerial abilities were fine-tuned during millions of years of an evolutionary arms race between Odonata and their prey, providing an attractive research model for studying the relationship between sensory input and aerodynamic output in a flying insect. The ability to catch a maneuvering target in air is interesting not just for insect behavioral ecology and neuroethology but also for designing small and efficient robotic air vehicles. While aerial prey interception by dragonflies (suborder: Anisoptera) has been studied before, little is known about how damselflies (suborder: Zygoptera) intercept prey. Here, high-speed cameras (filming at 1000 frames per second) were used to explore how damselflies catch unpredictable targets that move through air. Blue-tailed damselflies, Ischnura elegans (family: Coenagrionidae), were introduced to a flight arena and filmed while landing on moving targets that were oscillated harmonically. The insects succeeded in capturing targets that were moved with an amplitude of 6 cm at frequencies of 0-2.5 Hz (fastest mean target speed of 0.3 m s⁻¹), as well as targets that were moved at 1 Hz (an average speed of 0.3 m s⁻¹) but with an amplitude of 15 cm. To land on stationary or slow targets, damselflies either flew directly to the target or flew sideways up to a point at which the target was fixed in the center of the field of view, followed by a direct flight path towards the target. As the target moved at increased frequency, damselflies demonstrated an ability to track the targets while flying sideways and minimizing changes of their body direction about the yaw axis. This was likely an attempt to keep the targets at the center of the visual field while minimizing rotational optic flow of the surrounding visual panorama. Stabilizing rotational optic flow helps in estimating the velocity and distance of the target. These results illustrate how dynamic visual information is used by damselflies to guide them towards a maneuvering target, enabling the superb aerial hunting abilities of these insects. They also exemplify the plasticity of the damselfly flight apparatus, which enables flight in any direction, irrespective of the direction of the body.

Keywords: bio-mechanics, insect flight, target fixation, tracking and interception

Procedia PDF Downloads 153
201 Impact of Interface Soil Layer on Groundwater Aquifer Behaviour

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

The geological environment in which groundwater is collected is the most important element affecting the behaviour of a groundwater aquifer. As groundwater is a vital resource worldwide, the parameters that affect this source need to be known accurately so that the conceptualized mathematical models are acceptable over the broadest range of conditions. Groundwater models have therefore recently become an effective and efficient tool to investigate groundwater aquifer behaviour. A groundwater aquifer may contain aquitards, aquicludes, or interfaces within its geological formations. Aquitards and aquicludes are geological formations that force modellers to include them within the conceptualized groundwater models, while interfaces are commonly omitted from the conceptualization process because modellers believe that an interface has no effect on aquifer behaviour. The current research highlights the impact of an interface existing in a real unconfined groundwater aquifer called Dibdibba, located in Al-Najaf City, Iraq, where the Euphrates River passes through the eastern part of the city. The Dibdibba groundwater aquifer consists of two types of soil layers separated by an interface soil layer. A groundwater model is built for Al-Najaf City to explore the impact of this interface. The calibration process is carried out using the PEST 'Parameter ESTimation' approach, and the best Dibdibba groundwater model is obtained. When the soil interface is conceptualized, results show that the groundwater tables are significantly affected by that interface, with dry areas of 56.24 km² and 6.16 km² appearing in the upper and lower layers of the aquifer, respectively. The Euphrates River also leaks water into the groundwater aquifer at 7359 m³/day. These results change when the soil interface is neglected: the dry area becomes 0.16 km² and the Euphrates River leakage becomes 6334 m³/day. In addition, the conceptualized models (with and without the interface) reveal different responses to the change in recharge rates applied to the aquifer in the uncertainty analysis test. The Dibdibba aquifer in Al-Najaf City shows a slight deficit in the amount of water supplied by the current pumping scheme, and the Euphrates River suffers from the stresses applied to the aquifer. Ultimately, this study shows a crucial need to represent the interface soil layer in model conceptualization so that the intended and future predicted behaviours are more reliable.

Keywords: Al-Najaf City, groundwater aquifer behaviour, groundwater modelling, interface soil layer, Visual MODFLOW

Procedia PDF Downloads 183
200 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA sequences and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training Deep Neural Networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful and numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life data sets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with the state-of-the-art methods applied directly on data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, the DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
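A toy version of steps (i), (ii) and (iv) can be sketched as below: build a k-mer vocabulary, learn k-mer embeddings, embed reads as the mean of their k-mer vectors, aggregate reads per sample and classify the phenotype. The genome-identification step (iii) and the attention-based pooling are omitted, the reads and labels are randomly generated, and gensim's Word2Vec is only a stand-in for the embedding model used in the paper.

```python
# Toy metagenome2vec-style pipeline on synthetic reads.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
K = 4

def kmers(read, k=K):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def random_read(length=60, gc=0.5):
    probs = [(1 - gc) / 2, (1 - gc) / 2, gc / 2, gc / 2]      # A, T, G, C
    return "".join(rng.choice(list("ATGC"), size=length, p=probs))

# 40 synthetic "samples" of 50 reads each; class 1 has a GC-content shift
samples, labels = [], []
for s in range(40):
    label = s % 2
    samples.append([random_read(gc=0.45 + 0.1 * label) for _ in range(50)])
    labels.append(label)

# (i) learn k-mer embeddings on all reads
corpus = [kmers(read) for sample in samples for read in sample]
w2v = Word2Vec(sentences=corpus, vector_size=32, window=5, min_count=1, sg=1, epochs=10)

# (ii) read embedding = mean of k-mer vectors; (iv) sample = mean of its reads
def sample_vector(reads):
    read_vecs = [np.mean([w2v.wv[k] for k in kmers(r)], axis=0) for r in reads]
    return np.mean(read_vecs, axis=0)

X = np.array([sample_vector(reads) for reads in samples])
clf = LogisticRegression(max_iter=1000).fit(X[:30], labels[:30])
print("held-out accuracy:", clf.score(X[30:], labels[30:]))
```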

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 125
199 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada

Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman

Abstract:

Flooding is a severe issue in many places around the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding seems to have increased, especially given that the downtown area and surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to have a clear picture of flood extent and damage in affected areas, it is necessary to use either flood inundation modelling or satellite data. Due to the contingent availability and weather dependency of optical satellites, and the limited existing data and high cost of hydrodynamic models, it is not always feasible to rely on these sources of data to generate quality flood maps after or during a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the height of a basin based on the relative elevation along the stream network and specifies the gravitational or relative drainage potential of an area. HAND is the relative height difference between the stream network and each cell on a Digital Terrain Model (DTM). The stream layer is produced through a multi-step, time-consuming process which does not always result in an optimal representation of the river centerline, depending on the topographic complexity of the region. HAND has been used in numerous case studies with quite acceptable, and sometimes unexpected, results because of natural and human-made features on the surface of the earth. Some of these features might cause a disturbance in the generated model, and consequently, the model might not be able to predict the flow simulation accurately. We propose to include a previously existing stream layer generated by the province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model. By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used for generating highly accurate flood maps, which are necessary for future urban planning and flood damage estimation, without any need for satellite imagery or hydrodynamic computations.
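The core HAND computation—subtracting, from every DTM cell, the elevation of its associated drainage cell—can be sketched as follows. True HAND follows flow paths to the nearest drainage cell; Euclidean nearest-neighbour matching is used here only to keep the example short, and the DTM, stream raster and flood stage are synthetic.

```python
# Simplified HAND sketch on a synthetic DTM with a single straight river.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
dtm = np.cumsum(rng.normal(0.2, 1.0, size=(200, 200)), axis=0)   # synthetic terrain (m)
stream = np.zeros_like(dtm, dtype=bool)
stream[:, 100] = True                                            # stream network raster

# indices of the nearest stream cell for every raster cell
_, (row_idx, col_idx) = ndimage.distance_transform_edt(~stream, return_indices=True)
hand = dtm - dtm[row_idx, col_idx]            # height above the nearest drainage cell

stage = 5.0                                   # hypothetical flood stage (m)
flooded = hand <= stage
print(f"flooded fraction of the raster at stage {stage} m: {flooded.mean():.2%}")
```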

Keywords: HAND, DTM, rapid floodplain, simplified conceptual models

Procedia PDF Downloads 151
198 Artificial Membrane Comparison for Skin Permeation in Skin PAMPA

Authors: Aurea C. L. Lacerda, Paulo R. H. Moreno, Bruna M. P. Vianna, Cristina H. R. Serra, Airton Martin, André R. Baby, Vladi O. Consiglieri, Telma M. Kaneko

Abstract:

The modified Franz cell is the most widely used model for in vitro permeation studies; however, it still presents some disadvantages. Thus, alternative methods have been developed, such as Skin PAMPA, a bio-artificial membrane that has been applied to estimate skin penetration of xenobiotics based on a high-throughput (HT) permeability model. Skin PAMPA's greatest advantage is that it allows more tests to be carried out in a fast and inexpensive way. The membrane system mimics the characteristics of the stratum corneum, which is the primary skin barrier. The barrier properties are given by corneocytes embedded in a multilamellar lipid matrix. This layer is the main penetration route through the paracellular permeation pathway, and it consists of a mixture of cholesterol, ceramides, and fatty acids as the dominant components. However, there is no consensus on the membrane composition. The objective of this work was to compare the performance of different bio-artificial membranes for studying permeation in the skin PAMPA system. Material and methods: In order to mimic the lipid composition present in the human stratum corneum, six membranes were developed. The membrane composition was an equimolar mixture of cholesterol, ceramides 1-O-C18:1, C22, and C20, plus fatty acids C20 and C24. The membrane integrity assay was based on the transport of Brilliant Cresyl Blue, which has low permeability, and Lucifer Yellow, which has very poor permeability and should effectively be completely rejected. The membrane characterization was performed using confocal laser Raman spectroscopy, with a laser stabilized at 785 nm, a 10-second integration time and 2 accumulations. The membrane behaviour results in the PAMPA system were statistically evaluated, and all of the compositions showed integrity and permeability. The confocal Raman spectra, obtained in the region of 800-1200 cm⁻¹ associated with the C-C stretches of the carbon scaffold of the stratum corneum lipids, showed a similar pattern for all the membranes. The ceramides, long-chain fatty acids and cholesterol in equimolar ratio made it possible to obtain lipid mixtures with self-organization capability, similar to that occurring in the stratum corneum. Conclusion: The artificial biological membranes studied for skin PAMPA were shown to be similar to one another and to have properties comparable to those of the stratum corneum.

Keywords: bio-artificial membranes, comparison, confocal Raman, skin PAMPA

Procedia PDF Downloads 509
197 Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and Openpose Real-Time Keypoint Detection

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

Traditional classification Convolutional Neural Networks (CNNs) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone's camera in real time due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. These basic detectors have been regularly used to determine what type of object an item is, such as "person" or "dog." A recent advancement in computer vision, particularly with human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROI), on their bodies within an image. ROIs can include shoulders, elbows, knees, heads, etc. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals using the onboard camera, it is important to have a simple method for pilot identification among multiple individuals while also giving the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with body and hand keypoint detection enabled. OpenPose supports the ability to combine multiple keypoint detection methods in real time within a single network. Body keypoint detection allows simple poses to act as the pilot identifier. The hand keypoint detection, with ROIs for each finger, can then offer a greater variety of signal options for the pilot once identified. For this work, the individual must raise their non-control arm to be identified as the operator and send commands with the hand on their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control, and then they can begin controlling the drone with their other hand. This is all performed mid-flight, with no landing or script editing required. When using a desktop with a discrete NVIDIA GPU, the drone's 2.4 GHz Wi-Fi connection, combined with restricting OpenPose to only body and hand detection, allows this control method to perform as intended while maintaining the responsiveness required for practical use.
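The operator-selection logic described above can be sketched as a small function over OpenPose-style body keypoints, here faked as plain arrays. The BODY_25-style indices (2: right shoulder, 4: right wrist), the choice of the right arm as the non-control arm, and the image convention of y increasing downwards are all assumptions for illustration, not details from the paper.

```python
# Sketch of pilot identification from body keypoints (faked OpenPose output).
import numpy as np

R_SHOULDER, R_WRIST = 2, 4          # assumed BODY_25-style indices

def arm_raised(person, wrist=R_WRIST, shoulder=R_SHOULDER):
    """True if the wrist keypoint sits above the shoulder keypoint."""
    return person[wrist, 1] < person[shoulder, 1]

def select_operator(people, current=None):
    """Keep the current operator while their non-control arm stays raised;
    otherwise hand control to the first person raising that arm."""
    if current is not None and arm_raised(people[current]):
        return current
    for i, person in enumerate(people):
        if arm_raised(person):
            return i
    return None                      # nobody is signalling: ignore all gestures

# two fake detections: person 0 with arms down, person 1 with right arm raised
people = np.zeros((2, 25, 2))
people[:, R_SHOULDER] = [100, 200]
people[0, R_WRIST] = [100, 320]      # wrist below shoulder -> arm down
people[1, R_WRIST] = [100, 120]      # wrist above shoulder -> arm raised

print("operator index:", select_operator(people))   # -> 1; the hand keypoints of
                                                     # the other arm would then be
                                                     # parsed for flight commands
```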

Keywords: computer vision, drone control, keypoint detection, openpose

Procedia PDF Downloads 184
196 Design and Evaluation of a Prototype for Non-Invasive Screening of Diabetes – Skin Impedance Technique

Authors: Pavana Basavakumar, Devadas Bhat

Abstract:

Diabetes is a disease which often goes undiagnosed until its secondary effects are noticed. Early detection of the disease is necessary to avoid serious consequences which could lead to the death of the patient. Conventional invasive tests for screening of diabetes are mostly painful, time-consuming and expensive. There is also a risk of infection involved; therefore, it is essential to develop non-invasive methods to screen and estimate the level of blood glucose. Extensive research is going on with this perspective, involving various techniques that explore optical, electrical, chemical and thermal properties of the human body that directly or indirectly depend on the blood glucose concentration. Thus, non-invasive blood glucose monitoring has grown into a vast field of research. In this project, an attempt was made to devise a prototype for screening of diabetes by measuring the electrical impedance of the skin and building a model to predict a patient's condition based on the measured impedance. The prototype developed passes a negligible constant current (0.5 mA) across a subject's index finger through tetrapolar silver electrodes and measures the output voltage across a wide range of frequencies (10 kHz – 4 MHz). The measured voltage is proportional to the impedance of the skin. The impedance was acquired in real time for further analysis. The study was conducted on over 75 subjects with permission from the institutional ethics committee; along with impedance, the subjects' blood glucose values were also noted using the conventional method. Nonlinear regression analysis was performed on the features extracted from the impedance data to obtain a model that predicts blood glucose values for a given set of features. When the predicted data were depicted on Clarke's Error Grid, only 58% of the predicted values were clinically acceptable. Since the objective of the project was to screen for diabetes and not actual estimation of blood glucose, the data were classified into three classes, 'NORMAL FASTING', 'NORMAL POSTPRANDIAL' and 'HIGH', using a linear Support Vector Machine (SVM). The classification accuracy obtained was 91.4%. The developed prototype is economical, fast and pain-free. Thus, it can be used for mass screening of diabetes.
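The screening step described above—classifying impedance-derived features into the three classes with a linear SVM—can be sketched as follows. The features and labels are randomly generated stand-ins for the measured impedance spectra, not study data, so the printed accuracy is illustrative only.

```python
# Sketch: three-class screening with a linear SVM on impedance-style features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
classes = ["NORMAL FASTING", "NORMAL POSTPRANDIAL", "HIGH"]

# hypothetical features, e.g. impedance magnitude at a few frequencies plus slope
X = np.vstack([rng.normal(loc=mu, scale=1.0, size=(25, 4)) for mu in (0.0, 1.5, 3.0)])
y = np.repeat(classes, 25)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
model.fit(X_tr, y_tr)
print(f"classification accuracy: {model.score(X_te, y_te):.1%}")
```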

Keywords: Clarke’s error grid, electrical impedance of skin, linear SVM, nonlinear regression, non-invasive blood glucose monitoring, screening device for diabetes

Procedia PDF Downloads 325
195 Evaluation of Antioxidant Activity and Total Phenolic Content of Lens Esculenta Moench, Seeds

Authors: Vivek Kumar Gupta, Kripi Vohra, Monika Gupta

Abstract:

Pulses have been a vital ingredient of the balanced human diet in India. Lentil (Lens culinaris Medikus or Lens esculenta Moench.) is a common legume known since biblical times. Lentil seeds, with or without hulls, are cooked as dhal, and this has been the main dish for millennia in the South Asian region. Oxidative stress can damage lipids, proteins, enzymes, carbohydrates and DNA in cells and tissues, resulting in membrane damage, fragmentation or random cross-linking of molecules like DNA, enzymes and structural proteins, and can even lead to cell death induced by DNA fragmentation and lipid peroxidation. These consequences of oxidative stress form the molecular basis for the development of cancer, neurodegenerative disorders, cardiovascular diseases, diabetes and autoimmune diseases. The aim of the present work is to assess the antioxidant potential of the petroleum ether, acetone, methanol and water extracts of Lens esculenta seeds. In vitro antioxidant assessment of the extracts was carried out using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity, hydroxyl radical scavenging activity and reducing power assays. The quantitative estimation of total phenolic content and total flavonoid content in the extracts and in the plant material, as well as total saponin content, total alkaloid content, crude fibre content, total volatile content, fat content and mucilage content in the drug material, was also carried out. Though all the extracts exhibited dose-dependent reducing power activity, the acetone extract was found to possess significantly greater hydrogen-donating ability in the DPPH (45.83%-93.13%) and hydroxyl radical scavenging (28.7%-46.41%) systems than the petroleum ether, methanol and water extracts. Total phenolic content in the acetone and methanol extracts was found to be 608 and 188 mg gallic acid equivalent of phenol/g of sample, respectively. Total flavonoid content of the acetone and methanol extracts was found to be 128 and 30.6 mg quercetin equivalent/g of sample, respectively. It is evident that the acetone extract of lentil seeds possesses high levels of polyphenolics and flavonoids that could be utilized as antioxidants and nutraceuticals.

Keywords: antioxidant, flavonoids, Lens esculenta, polyphenols

Procedia PDF Downloads 484
194 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger

Authors: Hany Elsaid Fawaz Abdallah

Abstract:

This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger, for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset of Nusselt number and pressure drop values over the following ranges of dimensionless parameters: plate corrugation angle from 0° to 60°, Reynolds number from 10000 to 40000, pitch-to-height ratio from 1 to 4, and Prandtl number from 0.7 to 200. Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes back-propagation with bias and weight adjustment, evaluation of the loss function for the training and validation datasets, and feed-forward propagation of the input parameters. A linear function was used as the activation function at the output layer, while the rectified linear unit activation function was utilized for the hidden layers. In order to accelerate the ANN training, the loss function minimization may be achieved by the adaptive moment estimation algorithm (ADAM). The 'MinMax' normalization approach was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for ANN training, a cross-validation technique is applied to the network using the new data. This procedure was repeated until loss function convergence was achieved, or for 4000 epochs with a batch size of 200 points. The program code was written in Python 3 using open-source libraries such as scikit-learn, TensorFlow and Keras. Mean average percent errors of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved with the ANN model, a higher accuracy than that of the generalized correlations. The performance validation of the obtained model was based on a comparison of predicted data with the experimental results, yielding excellent accuracy.
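The network described above—four inputs, hidden layers of 12, 8 and 6 ReLU neurons, a linear output layer for the two targets, MinMax-scaled inputs and the Adam optimizer—can be sketched in Keras as below. The training data here are random placeholders for the experimental dataset, and the short training run is for illustration only.

```python
# Sketch of the {12-8-6} ANN for Nusselt number and pressure drop prediction.
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 60, 500),        # corrugation angle (deg)
                     rng.uniform(1e4, 4e4, 500),     # Reynolds number
                     rng.uniform(1, 4, 500),         # pitch-to-height ratio
                     rng.uniform(0.7, 200, 500)])    # Prandtl number
y = np.column_stack([rng.uniform(50, 400, 500),      # placeholder Nusselt number
                     rng.uniform(1e3, 2e4, 500)])    # placeholder pressure drop (Pa)

X_scaled = MinMaxScaler().fit_transform(X)           # "MinMax" normalization
y_scaled = MinMaxScaler().fit_transform(y)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(12, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(6, activation="relu"),
    tf.keras.layers.Dense(2, activation="linear"),   # Nu and pressure drop
])
model.compile(optimizer="adam", loss="mse", metrics=["mape"])
model.fit(X_scaled, y_scaled, validation_split=0.2,
          epochs=50, batch_size=200, verbose=0)      # paper: up to 4000 epochs
print(model.evaluate(X_scaled, y_scaled, verbose=0))
```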

Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations

Procedia PDF Downloads 88
193 Estimation of Ribb Dam Catchment Sediment Yield and Reservoir Effective Life Using Soil and Water Assessment Tool Model and Empirical Methods

Authors: Getalem E. Haylia

Abstract:

The Ribb dam is one of the irrigation projects in the Upper Blue Nile basin, Ethiopia, built to irrigate the Fogera plain. Reservoir sedimentation is a major problem because it reduces the useful reservoir capacity through the accumulation of sediments coming from the watershed. Estimates of sediment yield are needed for studies of reservoir sedimentation and for planning soil and water conservation measures. The objective of this study was to simulate the Ribb dam catchment sediment yield using the SWAT model and to estimate the Ribb reservoir effective life according to trap efficiency methods. The Ribb dam catchment is found in the north-western highlands of Ethiopia and belongs to the upper Blue Nile and Lake Tana basins. The Soil and Water Assessment Tool (SWAT) was selected to simulate flow and sediment yield in the Ribb dam catchment. The model sensitivity, calibration, and validation analysis at the Ambo Bahir site were performed with Sequential Uncertainty Fitting (SUFI-2). The flow data at this site were obtained by transforming the Lower Ribb gauge station (2002-2013) flow data using the Area Ratio Method. The sediment load was derived from the sediment concentration yield curve of the Ambo site. Streamflow results showed that the Nash-Sutcliffe efficiency coefficient (NSE) was 0.81 and the coefficient of determination (R²) was 0.86 in the calibration period (2004-2010), and 0.74 and 0.77 in the validation period (2011-2013), respectively. Using the same periods, the NSE and R² for the sediment load calibration were 0.85 and 0.79, and for the validation they were 0.83 and 0.78, respectively. The simulated average daily flow rate and sediment yield generated from the Ribb dam watershed were 3.38 m³/s and 1772.96 tons/km²/yr, respectively. The effective life of the Ribb reservoir was estimated using the empirical methods of Brune (1953), Churchill (1948) and Brown (1958) and found to be 30, 38 and 29 years, respectively. To conclude, massive sediment loads come from the steep-slope agricultural areas, and approximately 98-100% of the incoming annual sediment load is trapped by the Ribb reservoir. In the Ribb catchment as well as the reservoir, systematic and thorough consideration of technical, social, environmental, and catchment management practices should be made to lengthen the useful life of the Ribb reservoir.
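The goodness-of-fit measures used in the calibration and the capacity-depletion idea behind an effective-life estimate can be sketched as below. The flow series, catchment area, storage volume and sediment bulk density are assumed placeholders; only the 98% trap efficiency and the sediment yield of 1772.96 t/km²/yr come from the abstract, and the trap-efficiency curves of Brune, Churchill and Brown are not reproduced here.

```python
# Sketch: NSE and R2 for calibration, plus a simple capacity-depletion estimate
# of reservoir effective life.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r2(obs, sim):
    return np.corrcoef(obs, sim)[0, 1] ** 2

obs = np.array([2.1, 3.4, 5.0, 7.2, 4.1, 2.8])      # observed flows (m3/s), placeholder
sim = np.array([2.4, 3.1, 4.6, 6.8, 4.5, 3.0])      # simulated flows (m3/s), placeholder
print(f"NSE = {nse(obs, sim):.2f}, R2 = {r2(obs, sim):.2f}")

# Effective life: years until the chosen storage volume fills with sediment.
catchment_area_km2 = 1500.0       # assumed contributing area
sediment_yield = 1772.96          # t/km2/yr (from the abstract)
trap_efficiency = 0.98            # ~98-100% (from the abstract)
bulk_density = 1.2                # t/m3, assumed deposited-sediment density
storage_m3 = 75e6                 # assumed storage volume to be depleted

annual_deposit_m3 = sediment_yield * catchment_area_km2 * trap_efficiency / bulk_density
print(f"effective life ~ {storage_m3 / annual_deposit_m3:.0f} years")
```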

Keywords: catchment, reservoir effective life, reservoir sedimentation, Ribb, sediment yield, SWAT model

Procedia PDF Downloads 187
192 Investigating the Role of Supplier Involvement in the Design Process as an Approach for Enhancing Building Maintainability

Authors: Kamal Ahmed, Othman Ayman, Refat Mostafa

Abstract:

The post-construction phase represents a critical milestone in the project lifecycle. This is because design errors and omissions, as well as construction defects, are examined during this phase. The traditional procurement approaches that are commonly adopted in construction projects separate design from construction, which ultimately inhibits contractors, suppliers and other parties from providing the design team with constructive comments and feedback to improve the project design. As a result, a lack of consideration of maintainability aspects during the design process results in increased maintenance and operation costs as well as reduced building performance. This research aims to investigate the role of Early Supplier Involvement (ESI) in the design process as an approach to enhancing building maintainability. In order to achieve this aim, a research methodology consisting of a literature review, case studies and a survey questionnaire was designed to accomplish four objectives. Firstly, a literature review was used to examine the concepts of building maintenance, maintainability, the design process and ESI. Secondly, three case studies were presented and analyzed to investigate the role of ESI in enhancing building maintainability during the design process. Thirdly, a survey questionnaire was conducted with a representative sample of Architectural Design Firms (ADFs) in Egypt to investigate their perception and application of ESI towards enhancing building maintainability during the design process. Finally, the research developed a framework to facilitate ESI in the design process in ADFs in Egypt. Data analysis showed that the ‘Difficulty of trusting external parties and sharing information with transparency’ was ranked the highest challenge of ESI in ADFs in Egypt, followed by ‘Legal competitive advantage restrictions’. Moreover, ‘Better estimation for operation and maintenance costs’ was ranked the highest contribution of ESI towards enhancing building maintainability, followed by ‘Reduce the number of operation and maintenance problems or reworks’. Finally, ‘Innovation, technical expertise, and competence’ was ranked the highest supplier selection criterion, while ‘paying consultation fees for offering advice and recommendations to the design team’ was ranked the highest form of supplier remuneration. The proposed framework represents a synthesis that is creative in thought and adds value to the knowledge in a manner that has not previously occurred.

Keywords: maintenance, building maintainability, building life cycle cost (LCC), material supplier

Procedia PDF Downloads 48
191 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged in a way that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, the adaptive-control Ensemble Kalman Filter is implemented to combine the optimality of the observation data in order to obtain an accurate estimation of the topography. The main features of this method are, on one hand, the ability to solve for different complex geometries with no need for any rearrangement of the original model to rewrite it in an explicit form and, on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed technique.
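A minimal Ensemble Kalman Filter update, in the spirit of the second stage described above, is sketched below. The "forward model" mapping bed parameters to free-surface observations is a toy linear operator rather than a shallow-water solver, and the dimensions, noise levels and number of assimilation loops are all assumptions.

```python
# Minimal EnKF sketch: recover a bed profile from noisy surface observations.
import numpy as np

rng = np.random.default_rng(0)
n_bed, n_obs, n_ens = 20, 8, 100

x = np.linspace(0, 1, n_bed)
true_bed = 0.2 * np.exp(-((x - 0.5) / 0.1) ** 2)          # bed bump to recover

H = rng.normal(size=(n_obs, n_bed)) / np.sqrt(n_bed)      # toy bed -> surface operator
obs_noise = 0.01
d = H @ true_bed + rng.normal(0, obs_noise, n_obs)        # free-surface observations

ensemble = rng.normal(0.1, 0.05, size=(n_ens, n_bed))     # prior guesses of the bed
for _ in range(10):                                        # iterative assimilation loops
    A = ensemble - ensemble.mean(axis=0)                   # state anomalies
    Y = ensemble @ H.T                                     # predicted observations
    Yp = Y - Y.mean(axis=0)
    # Kalman gain from ensemble covariances
    K = A.T @ Yp @ np.linalg.inv(Yp.T @ Yp + n_ens * obs_noise ** 2 * np.eye(n_obs))
    perturbed_d = d + rng.normal(0, obs_noise, size=(n_ens, n_obs))
    ensemble = ensemble + (perturbed_d - Y) @ K.T

error = np.linalg.norm(ensemble.mean(axis=0) - true_bed) / np.linalg.norm(true_bed)
print(f"relative bed reconstruction error: {error:.2%}")
```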

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 130
190 Referencing Anna: Findings From Eye-tracking During Dutch Pronoun Resolution

Authors: Robin Devillers, Chantal van Dijk

Abstract:

Children face ambiguities in everyday language use. Ambiguity in pronoun resolution can be particularly challenging for children, whereas adults can rapidly identify the antecedent of a pronoun. Two main factors underlie this process, namely the accessibility of the referent and the syntactic cues of the pronoun. Within 200 ms, adults have converged on the accessibility and the syntactic constraints, while relieving cognitive effort by considering contextual cues. As children are still developing their cognitive capacity, they are not yet able to simultaneously assess and integrate accessibility, contextual cues and syntactic information. As such, they may fail to identify the correct referent and possibly fixate more on the competitor in comparison to adults. In this study, Dutch while-clauses were used to investigate the interpretation of pronouns by children. The aim is to a) examine the extent to which 7-10 year old children are able to utilise discourse and syntactic information during online and offline sentence processing and b) analyse the contribution of individual factors, including age, working memory, condition and vocabulary. Adult and child participants are presented with filler items and while-clauses, and the latter follow a particular structure: ‘Anna and Sophie are sitting in the library. While Anna is reading a book, she is taking a sip of water.’ This sentence illustrates the ambiguous situation, as it is unclear whether ‘she’ refers to Anna or Sophie. In the unambiguous situation, either Anna or Sophie would be substituted by a boy, such as ‘Peter’. The pronoun in the second sentence then unambiguously refers to one of the characters due to the syntactic constraints of the pronoun. Children’s and adults’ responses were measured by means of a visual world paradigm. This paradigm consisted of two characters, of which one was the referent (the target) and the other was the competitor. A sentence was presented and followed by a question, which required the participant to choose which character was the referent. The paradigm thus yields an online (fixations) and an offline (accuracy) score. These findings will be analysed using Generalised Additive Mixed Models, which allow for a thorough estimation of the individual variables. The findings will contribute to the scientific literature in several ways: firstly, the use of while-clauses has not been studied much and their processing has not yet been characterised. Moreover, online pronoun resolution has not been investigated much in either children or adults, and therefore, this study will contribute to the adult and child pronoun resolution literature. Lastly, pronoun resolution has not yet been studied in Dutch, and as such, this study adds to the range of languages in which it has been examined.

Keywords: pronouns, online language processing, Dutch, eye-tracking, first language acquisition, language development

Procedia PDF Downloads 100
189 Fracture Toughness Characterizations of Single Edge Notch (SENB) Testing Using DIC System

Authors: Amr Mohamadien, Ali Imanpour, Sylvester Agbo, Nader Yoosef-Ghodsi, Samer Adeeb

Abstract:

The fracture toughness resistance curve (e.g., the J-R curve and the crack tip opening displacement (CTOD) or δ-R curve) is important in facilitating strain-based design and integrity assessment of oil and gas pipelines. This paper aims to present laboratory experimental data to characterize the fracture behavior of pipeline steel. The influential parameters associated with the fracture of API 5L X52 pipeline steel, including different initial crack sizes, were experimentally investigated using single edge notch bend (SENB) specimens. A total of 9 small-scale specimens with different crack length to specimen depth ratios were prepared and tested in single edge notch bending (SENB). ASTM E1820 and BS7448 provide testing procedures to construct the fracture resistance curve (Load-CTOD, CTOD-R, or J-R) from test results. However, these procedures are limited by standard specimen dimensions, displacement gauges, and calibration curves. To overcome these limitations, this paper presents the use of small-scale specimens and a 3D digital image correlation (DIC) system to extract the parameters required for fracture toughness estimation. Fracture resistance curve parameters in terms of crack mouth opening displacement (CMOD), crack tip opening displacement (CTOD), and crack growth length (∆a) were obtained from the test results by utilizing the DIC system, and an improved regression-fitted resistance function (CTOD vs. crack growth, or J-integral vs. crack growth) that depends on a variety of initial crack sizes was constructed and presented. The obtained results were compared to the available results of classical physical measurement techniques, and acceptable agreement was observed. Moreover, a case study was implemented to estimate the maximum strain value that initiates stable crack growth. This may be of interest for developing more accurate strain-based damage models. The results of the laboratory testing in this study offer a valuable database to develop and validate damage models that are able to predict crack propagation of pipeline steel, accounting for the influential parameters associated with fracture toughness.
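The resistance-curve regression mentioned above can be sketched by fitting a power-law CTOD-R function, δ = C·(∆a)^m, to (crack growth, CTOD) pairs of the kind extracted from the DIC measurements. The power-law form and the data points below are assumptions for illustration; ASTM E1820 prescribes its own fitting procedure and validity windows.

```python
# Sketch: power-law fit of a CTOD-R resistance curve to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

da   = np.array([0.05, 0.10, 0.20, 0.35, 0.55, 0.80, 1.10])   # crack growth (mm)
ctod = np.array([0.06, 0.09, 0.13, 0.17, 0.21, 0.25, 0.29])   # CTOD (mm), placeholder

def r_curve(da, C, m):
    return C * da ** m

(C, m), _ = curve_fit(r_curve, da, ctod, p0=(0.3, 0.5))
print(f"fitted CTOD-R curve: delta = {C:.3f} * (da)^{m:.3f}")
print(f"CTOD at 0.2 mm crack growth: {r_curve(0.2, C, m):.3f} mm")
```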

Keywords: fracture toughness, crack propagation in pipeline steels, CTOD-R, strain-based damage model

Procedia PDF Downloads 63
188 Sustainability Impact Assessment of Construction Ecology to Engineering Systems and Climate Change

Authors: Moustafa Osman Mohammed

Abstract:

The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of proposed models for optimizing life-cycle analysis into an explicit strategy for evaluation systems. The main categories almost inevitably introduce uncertainties; the approach adopts a composite structure model (CSM), as used in environmental management systems (EMSs), in the practical evaluation of small and medium-sized enterprises (SMEs). The model simplifies complex systems to reflect natural systems' input, output and outcome modes, influences 'framework measures', and gives a maximum likelihood estimation of how elements are simulated over the composite structure. The traditional knowledge of modeling is based on physical dynamic and static patterns regarding the parameters that influence the environment. It unifies methods to demonstrate how construction systems ecology is interrelated from a management perspective, reflecting the effects of engineering systems on ecology as ultimately unified technologies in an extensive range beyond the impact of construction alone, such as energy systems. Sustainability broadens the socioeconomic parameters to a practice science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, the management decision process in governments or corporations can address policy for accomplishing strategic plans precisely. The management and engineering limitation focuses on autocatalytic control as a closed cellular system that naturally balances anthropogenic insertions or aggregated structure systems towards equilibrium as a steady, stable condition. Thereby, construction systems ecology incorporates the engineering and management scheme as a midpoint stage between biotic and abiotic components to predict construction impacts. The resulting theory of environmental obligation suggests either a procedure, method or technique through which the sustainability impact of construction system ecology (SICSE) is achieved, ultimately as a relative mitigation measure of deviation control.

Keywords: sustainability, environmental impact assessment, environmental management, construction ecology

Procedia PDF Downloads 393
187 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection systems such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on the line in operating mode. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V-50 Hz home network. The method is validated through simulations using MATLAB software. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In the second step, a fault map trace is created using signature coefficients obtained from the Kirchhoff equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the Discrete Fast Fourier Transform of the currents and voltages as well as the fault distance value; these parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure described previously to calculate the signature coefficients is employed, but this time considering hypothetical fault distances where the fault could appear; in this step the fault distance is unknown. The iterative calculation from the Kirchhoff equations, considering stepped variations of the fault distance, yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing currents registered from simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load. Also, 11 different arc fault positions are considered for the fault map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of the work will be presented.
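The final localisation step described above can be sketched as fitting linear trends to the two signature-coefficient traces (one from the known-data step, one from the scan over hypothetical distances) and intersecting them. The coefficient values below are synthetic placeholders, not simulated line data, and the linear model is only the trend behaviour the abstract reports.

```python
# Sketch: locate the fault at the intersection of two linearly fitted traces.
import numpy as np

rng = np.random.default_rng(0)
distances = np.linspace(0, 49, 50)                           # hypothetical positions (m)
curve_a = 0.8 * distances + 3.0 + rng.normal(0, 0.3, 50)     # step-2 trace (placeholder)
curve_b = -0.5 * distances + 40.0 + rng.normal(0, 0.3, 50)   # step-3 trace (placeholder)

# least-squares linear fits to both traces
slope_a, intercept_a = np.polyfit(distances, curve_a, 1)
slope_b, intercept_b = np.polyfit(distances, curve_b, 1)

# intersection of the two fitted lines = estimated fault distance
d_fault = (intercept_b - intercept_a) / (slope_a - slope_b)
print(f"estimated series arc fault location: {d_fault:.1f} m from the measuring end")
```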

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 137
186 Intersubjectivity of Forensic Handwriting Analysis

Authors: Marta Nawrocka

Abstract:

In each legal proceeding in which expert evidence is presented, a major concern is the assessment of the evidential value of expert reports. Judicial institutions, while making decisions, rely heavily on expert reports because they usually do not possess the 'special knowledge' from certain fields of science, which makes it impossible for them to verify the results presented in the proceedings. In handwriting studies, standards of analysis have been developed. They unify the procedures used by experts in comparing signs and in constructing expert reports. However, the methods used by experts are usually of a qualitative nature. They rely on the application of the expert's knowledge and experience and, in effect, leave a significant margin in the assessment. Moreover, the standards used by experts are still not very precise, and the process of reaching conclusions is poorly understood. The above-mentioned circumstances indicate that expert opinions in the field of handwriting analysis may, for many reasons, not be sufficiently reliable. It is assumed that this state of affairs has its source in the very low level of intersubjectivity of the measuring scales and analysis procedures which constitute this kind of analysis. Intersubjectivity is a feature of cognition which (in relation to methods) indicates the degree of consistency of results that different people obtain using the same method. The higher the level of intersubjectivity, the more reliable and credible the method can be considered. The aim of the conducted research was to determine the degree of intersubjectivity of the methods used by experts in handwriting analysis. 30 experts took part in the study, and each of them received two signatures, with varying degrees of readability, for analysis. Their task was to distinguish graphic characteristics in the signature, estimate the evidential value of the found characteristics, and estimate the evidential value of the signature. The obtained results were compared with each other using Krippendorff's alpha statistic, which numerically determines the degree of agreement of the results (assessments) that different people obtain under the same conditions using the same method. Estimating the degree of agreement of the experts' results for each of these tasks made it possible to determine the degree of intersubjectivity of the studied method. The study showed that, during the analysis, the experts identified different signature characteristics and attributed different evidential value to them. In this respect, intersubjectivity turned out to be low. In addition, it turned out that experts named and described the same characteristics in various ways, and the language used was often inconsistent and imprecise. Thus, significant differences were noted with respect to language and the applied nomenclature. On the other hand, the experts attributed a similar evidential value to the entire signature (the set of characteristics), which indicates that in this respect they were relatively consistent.
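The agreement measure used in the study can be sketched as below: Krippendorff's alpha computed over the evidential-value ratings that several experts assign to the same signature characteristics. The rating matrix, the 1-5 ordinal scale and the use of the third-party `krippendorff` package (assumed to be installed, with a raters-by-units matrix as input) are illustrative assumptions, not the study's actual data or tooling.

```python
# Sketch: inter-rater agreement via Krippendorff's alpha on invented ratings.
import numpy as np
import krippendorff

# rows = experts (raters), columns = assessed characteristics (units),
# values = evidential-value rating on an ordinal 1-5 scale, np.nan = not rated
ratings = np.array([
    [3, 4, 2, 5, 1, np.nan, 3,      2],
    [2, 4, 2, 4, 1, 3,      3,      1],
    [3, 5, 1, 4, 2, 3,      np.nan, 2],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="ordinal")
print(f"Krippendorff's alpha = {alpha:.2f}")   # values near 1 indicate high intersubjectivity
```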

Keywords: forensic sciences experts, handwriting analysis, inter-rater reliability, reliability of methods

Procedia PDF Downloads 149
185 Component Test of Martensitic/Ferritic Steels and Nickel-Based Alloys and Their Welded Joints under Creep and Thermo-Mechanical Fatigue Loading

Authors: Daniel Osorio, Andreas Klenk, Stefan Weihe, Andreas Kopp, Frank Rödiger

Abstract:

Future power plants currently face high design requirements due to worsening climate change and environmental restrictions, which demand high operational flexibility, superior thermal performance, minimal emissions, and higher cyclic capability. The aim of this paper is, therefore, to experimentally investigate the creep and thermo-mechanical behavior of improved materials and their welded joints at component scale under near-to-service operating conditions, which are promising for application in highly efficient and flexible future power plants. These materials promise an increase in flexibility and a reduction in manufacturing costs by providing enhanced creep strength and, therefore, the possibility of wall thickness reduction. In the temperature range between 550°C and 625°C, the investigation focuses on the in-phase thermo-mechanical fatigue behavior of dissimilar welded joints of conventional materials (the ferritic and martensitic materials T24 and T92) to nickel-based alloys (A617B and HR6W) by means of membrane test panels. The temperature and external load are varied in phase during the test, while the internal pressure remains constant. In the temperature range between 650°C and 750°C, it focuses on the creep behavior under multiaxial stress loading of similar and dissimilar welded joints of high temperature resistant nickel-based alloys (A740H, A617B, and HR6W) by means of a thick-walled component test. In this case, the temperature, the external axial load, and the internal pressure remain constant during testing. Numerical simulations are used for the estimation of the axial component load in order to induce a meaningful damage evolution without causing total component failure. Metallographic investigations after testing will provide support for understanding the damage mechanism and the influence of the thermo-mechanical load and multiaxiality on the microstructural changes and on the creep and TMF strength.

Keywords: creep, creep-fatigue, component behaviour, weld joints, high temperature material behaviour, nickel-alloys, high temperature resistant steels

Procedia PDF Downloads 119
184 The Usefulness of Premature Chromosome Condensation Scoring Module in Cell Response to Ionizing Radiation

Authors: K. Rawojć, J. Miszczyk, A. Możdżeń, A. Panek, J. Swakoń, M. Rydygier

Abstract:

Due to mitotic delay, a poor mitotic index, and the disappearance of lymphocytes from the peripheral blood circulation, assessing DNA damage after high-dose exposure is less effective. Conventional chromosome aberration analysis and the cytokinesis-blocked micronucleus assay do not provide accurate dose estimation or radiosensitivity prediction at doses higher than 6.0 Gy. For this reason, there is a need to establish reliable methods allowing analysis of biological effects after exposure in the high dose range, i.e., during particle radiotherapy. Lately, Premature Chromosome Condensation (PCC) has become an important method in high-dose biodosimetry, with promising applications for cancer patients. The aim of the study was to evaluate the usefulness of the drug-induced PCC scoring procedure in an experimental mode in which 100 G2/M cells were analyzed in different dose ranges. To test the consistency of the obtained results, scoring was performed by 3 independent persons in the same mode and following identical scoring criteria. Whole-body exposure was simulated in an in vitro experiment by irradiating whole blood collected from healthy donors with 60 MeV protons and 250 keV X-rays in the range of 4.0 – 20.0 Gy. The drug-induced PCC assay was performed on human peripheral blood lymphocytes (HPBL) isolated after in vitro exposure. Cells were cultured for 48 hours with PHA; then, to achieve premature condensation, calyculin A was added. After Giemsa staining, chromosome spreads were photographed and manually analyzed by the scorers. The dose-effect curves were derived by counting the excess chromosome fragments. The results indicated adequate dose estimates for the whole-body exposure scenario in the high dose range for both studied types of radiation. Moreover, the compared results revealed no significant differences between scorers, which has an important bearing on reducing the analysis time. These investigations were conducted as part of an extended examination of 60 MeV protons from the AIC-144 isochronous cyclotron at the Institute of Nuclear Physics in Kraków, Poland (IFJ PAN) by cytogenetic and molecular methods and were partially supported by grant DEC-2013/09/D/NZ7/00324 from the National Science Centre, Poland.
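The dose-effect curve fitting mentioned above can be sketched as follows. This minimal Python example fits a linear-quadratic calibration curve to excess-fragment yields and inverts it to estimate dose; the linear-quadratic form is an assumption (a common choice in biodosimetry, since the abstract does not state the functional form), and the numeric values are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# Hypothetical calibration data: mean excess PCC fragments per cell
# observed at known doses (illustrative values only).
doses = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0])    # Gy
fragments = np.array([0.1, 1.9, 4.2, 6.8, 9.5, 12.4])  # excess fragments per cell

def dose_effect(d, c, alpha, beta):
    """Assumed linear-quadratic dose-effect model."""
    return c + alpha * d + beta * d ** 2

params, _ = curve_fit(dose_effect, doses, fragments)

def estimate_dose(observed_yield, max_dose=25.0):
    """Invert the fitted calibration curve to estimate dose from an observed yield."""
    return brentq(lambda d: dose_effect(d, *params) - observed_yield, 0.0, max_dose)

print("fitted parameters:", params)
print("estimated dose for 5.0 excess fragments/cell: %.2f Gy" % estimate_dose(5.0))
```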

Keywords: cell response to radiation exposure, drug induced premature chromosome condensation, premature chromosome condensation procedure, proton therapy

Procedia PDF Downloads 352
183 Developing City-Level Sustainability Indicators in the MENA Region with the Case of Benghazi and Amman

Authors: Serag El Hegazi

Abstract:

The development of an assessment methodological framework for local and institutional sustainability is a key factor for future development plans and visions. This paper develops an Approach to Local and Institutional Sustainability Assessment (ALISA): a methodological framework that assists in the clarification, formulation, preparation, selection, and ranking of key indicators in order to facilitate the assessment of the level of sustainability at the local and institutional levels in North African and Middle Eastern cities. Based on the literature review, the paper formulates the ALISA framework as a combination of the UNCSD (2001) theme indicators framework and the issue-based framework illustrated by McLaren (1996). The framework has been implemented to formulate, select, and prioritise the key indicators that most directly reflect the issues of a case study at the local community and institutional level. At present, however, there is a lack of clear indicators and frameworks that can be applied successfully at the local and institutional levels in the MENA region, particularly in the cities of Benghazi and Amman, and this is an essential issue for estimating sustainable development. Therefore, a conceptual framework was developed and tested as a methodology for collecting and classifying data. The main goal is to develop the ALISA framework to formulate, choose, and prioritize key sustainability indicators, which can then guide the assessment process and support decision- and policymakers in developing sustainable cities at the local and institutional level in the city of Benghazi. The conceptual and methodological framework, supported by documentary evidence and data analysed in two case studies (including focus-group discussions, semi-structured interviews, and questionnaires), reflects the approach required to develop a combined framework for sustainability indicators. To achieve this and reach the aim of the paper, which is to develop a practical sustainability indicators framework that can be used as a tool to derive local and institutional sustainability indicators, the following stages are applied to propose a set of indicators: step one, issue clarification; step two, objective formation and analysis of issues and boundaries; step three, indicator preparation (first list of proposed indicators); step four, indicator selection; step five, indicator rating and ranking.

Keywords: sustainability indicators, approach to local and institutional level, ALISA, policymakers

Procedia PDF Downloads 21
182 Internal Financing Constraints and Corporate Investment: Evidence from Indian Manufacturing Firms

Authors: Gaurav Gupta, Jitendra Mahakud

Abstract:

This study focuses on the significance of internal financing constraints in the determination of corporate fixed investment in the case of Indian manufacturing companies. Financially constrained companies, which have fewer internal funds or retained earnings, face higher transaction and borrowing costs due to imperfections in the capital market. The period of study is 1999-2000 to 2013-2014, and we consider 618 manufacturing companies for which continuous data are available throughout the study period. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods such as the fixed effects and random effects models are used for the analysis. The Likelihood Ratio test, Lagrange Multiplier test, and Hausman test results confirm the suitability of the fixed effects model for the estimation. The cash flow and liquidity of the company are used as proxies for internal financing constraints. In accordance with various theories of corporate investment, we include other firm-specific variables, such as firm age, firm size, profitability, sales, and leverage, as control variables in the model. From the econometric analysis, we find that internal cash flow and liquidity have a significant and positive impact on corporate investment. Variables such as the cost of capital, sales growth, and growth opportunities are also found to significantly determine corporate investment in India, which is consistent with the neoclassical, accelerator, and Tobin's q theories of corporate investment. To check the robustness of the results, we divide the sample on the basis of cash flow and liquidity: firms with cash flow greater than zero are put in one group and firms with cash flow less than zero in another, and the firms are also divided on the basis of liquidity following the same approach. We find that the results are robust for both types of companies, those with positive and those with negative cash flow and liquidity, and the results for the other variables are along the same lines as for the whole sample. These findings confirm that internal financing constraints play a significant role in the determination of corporate investment in India. The findings of this study have implications for corporate managers: they should focus on projects with higher expected cash inflows to avoid financing constraints, and they should maintain adequate liquidity to minimize external financing costs.
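As a rough illustration of the fixed-effects estimation described above, the following sketch implements the within (entity-demeaning) estimator with pandas and NumPy. The column names are hypothetical placeholders for the study's variables, and the sketch omits standard errors, time effects, and the specification tests mentioned in the abstract.

```python
import numpy as np
import pandas as pd

def fixed_effects_within(df, y_col, x_cols, firm_col="firm"):
    """Fixed-effects (within) estimator: demean each variable by firm,
    then run OLS on the demeaned data. Column names are placeholders."""
    cols = [y_col] + x_cols
    demeaned = df[cols] - df.groupby(firm_col)[cols].transform("mean")
    X = demeaned[x_cols].to_numpy()
    y = demeaned[y_col].to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return pd.Series(beta, index=x_cols)

# Illustrative usage with hypothetical column names:
# df has columns: firm, year, investment, cash_flow, liquidity, size, age, leverage
# coefs = fixed_effects_within(df, "investment",
#                              ["cash_flow", "liquidity", "size", "age", "leverage"])
# print(coefs)
```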

Keywords: cash flow, corporate investment, financing constraints, panel data method

Procedia PDF Downloads 241
181 Results of Operation of Online Medical Care System

Authors: Mahsa Houshdar, Seyed Mehdi Samimi Ardestani, Seyed Saeed Sadr

Abstract:

Introduction: Online Medicare is an approach in which parts of a medical process - whether diagnostics, monitoring, or the treatment itself - are carried out using online services. This system was operated in one boys' high school, one girls' high school, and one high school in a deprived area. Method: In the first step, the students registered to use the system; participation was neither mandatory nor free. They completed a depression scale, an anxiety scale, and a clinical interview through the online medical care system. During this assessment, we could determine the presence and severity of depression and anxiety in each participant, as well as each participant's consequent needs: supportive therapy for mild depression or anxiety, a visit by a psychologist in moderate cases, a visit by a psychiatrist in moderate-to-severe cases, visits by both a psychiatrist and a psychologist in severe cases, and, where indicated, medical laboratory tests. The laboratory tests were performed on the persons specified by the system and included serum vitamin D, serum vitamin B12, serum calcium, fasting blood sugar, HbA1c, thyroid function tests, and CBC. All of the students were treated solely with vitamin or mineral therapy and/or treatment of a medical problem (such as hypothyroidism). After a few months, we returned to the high schools and re-assessed the presence and severity of depression and anxiety in the treated students; by comparing these results, the effectiveness of the system could be evaluated. Results: In total, the project was operated with 1077 participants, and laboratory tests were performed on 243 of them. In the girls' high schools, the presence and severity of depression decreased significantly (p = 0.018 and p = 0.004), but the results for anxiety were not significant. In the boys' high schools, the presence and severity of depression decreased significantly (p = 0.023, p = 0.004, and p = 0.049), and the presence and severity of anxiety also decreased significantly (p = 0.041 and p = 0.046), although in one high school the results for anxiety were not significant. In the high school in the deprived area, the students had no difficulty paying to participate in the project, but they could not pay for the medical laboratory tests; thus, operation of the system was not possible in the deprived area without a sponsor. Conclusion: This online medical system was successful in creating medical and psychiatric profiles without an attending physician. It was successful in decreasing depression without the use of antidepressants, but only partially successful in decreasing anxiety.
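The triage logic described above, mapping screening severity to a follow-up pathway, can be sketched as follows. The category labels are taken from the abstract, but the mapping function and its name are illustrative; the actual cut-off scores used by the system are not published here.

```python
def care_pathway(severity):
    """Map a screening-derived severity category to the follow-up
    described in the abstract (labels illustrative)."""
    pathways = {
        "mild": "supportive therapy",
        "moderate": "visit by a psychologist",
        "moderate-severe": "visit by a psychiatrist",
        "severe": "visit by a psychiatrist and a psychologist",
    }
    return pathways.get(severity, "no additional follow-up")

print(care_pathway("moderate"))  # -> visit by a psychologist
```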

Keywords: depression, diabetes, online medicare, vitamin D deficiency

Procedia PDF Downloads 325
180 Estimation of Morbidity Level of Industrial Labour Conditions at Zestafoni Ferroalloy Plant

Authors: M. Turmanauli, T. Todua, O. Gvaberidze, R. Javakhadze, N. Chkhaidze, N. Khatiashvili

Abstract:

Background: The mining process has a significant influence on human health and quality of life. In recent years, events in Georgia have affected industrial working processes; in particular, minimal requirements for labor safety, workplace hygiene standards, and the regime of work and rest are often not observed. This situation is frequently caused by a lack of responsibility, awareness, and knowledge on the part of both workers and employers. The control and protection of working conditions have worsened in many industries. Materials and Methods: To evaluate the current situation, a prospective epidemiological study using the face-to-face interview method was conducted at the Georgian “Manganese Zestafoni Ferroalloy Plant” in 2011-2013. 65.7% of employees (1428 bulletins) were surveyed, and the incidence rates of temporary disability days were studied. Results: The average length of a single temporary disability episode was studied, both by sex group and for the whole cohort. According to the classes of harmfulness, the following results were obtained: Class 2.0: 10.3%; Class 3.1: 12.4%; Class 3.2: 35.1%; Class 3.3: 12.1%; Class 3.4: 17.6%; Class 4.0: 12.5%. Among the employees, 47.5% and 83.1% were tobacco and alcohol consumers, respectively. By age group and years of work, prevalence was highest among those aged ≥50 and those with ≥21 years of work, respectively. The obtained data revealed a morbidity rate that increases with age and years of work. Diseases of the bones, joints, and connective tissue, aggravation of chronic respiratory diseases, ischemic heart disease, hypertension, and cerebral blood discirculation were the leading conditions. A high prevalence of morbidity was observed in workplaces with unsatisfactory labor conditions from the hygienic point of view. Conclusion: According to the received data, the causes of morbidity are the following: unsafe labor conditions; incomplete preventive medical examinations (preliminary and periodic); lack of access to appropriate health care services; and deficiencies in the gathering, recording, and analysis of morbidity data. This epidemiological study was conducted at the JSC “Manganese Ferro Alloy Plant” in accordance with the State program “Prevention of Occupational Diseases” (program code 35 03 02 05).

Keywords: occupational health, mining process, morbidity level, cerebral blood discirculation

Procedia PDF Downloads 428
179 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach

Authors: Nina Ponikvar, Katja Zajc Kejžar

Abstract:

While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied with one of two mainstream approaches, i.e., either the multiplier approach or the value chain approach. Consequently, there is limited comparability of the empirical results due to the use of different methodological approaches in the literature. Furthermore, it is often unclear on what criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by the two most frequent methodological approaches to valuing the economic effects of cultural heritage at the macroeconomic level: the multiplier approach and the value chain approach. We show that the multiplier approach provides a systematic, theory-based view of economic impacts but requires more data and analysis, whereas the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of the contribution of cultural heritage that is hidden in other sectors. It is, however, not possible to determine clearly whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage, since there is a risk that the direct effects are overestimated and double counted while not all indirect and induced effects are considered. Accordingly, the two approaches are not substitutes but rather complements; a direct comparison of the estimated impacts is not possible and should not be made, due to the different scope. To illustrate the difference in the impact assessment of cultural heritage, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added, and employment. The empirical results clearly show that the estimate of the economic impact of the sector obtained with the multiplier approach is more conservative, while the estimates based on value added capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value-added approach, the indirect economic effect of the “narrow” heritage sectors is amplified by the impact of cultural heritage activities on other sectors: every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors, and each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
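A small worked illustration of how the reported multiplier coefficients translate a direct effect into a total impact is given below. The multiplier rates are taken from the abstract; the direct turnover figure is hypothetical and only serves to show the arithmetic.

```python
# Multiplier-approach illustration: each euro of direct output in the
# cultural heritage sector is associated with 0.14 euros of indirect
# and 0.44 euros of induced effects, so the total output multiplier is
# 1 + 0.14 + 0.44 = 1.58.
direct_effect = 1_000_000   # hypothetical direct turnover in EUR
indirect_rate = 0.14
induced_rate = 0.44

indirect = direct_effect * indirect_rate
induced = direct_effect * induced_rate
total = direct_effect + indirect + induced

print(f"indirect: {indirect:,.0f} EUR, induced: {induced:,.0f} EUR, total: {total:,.0f} EUR")
# -> indirect: 140,000 EUR, induced: 440,000 EUR, total: 1,580,000 EUR
```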

Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, slovenia

Procedia PDF Downloads 77
178 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
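The Random Forest part of the workflow described above can be sketched as follows with scikit-learn. The file name, feature columns, and target column are hypothetical placeholders, not the study's dataset, and the sketch only shows the basic train/evaluate/feature-importance steps.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical activity-level dataset; column names are placeholders.
df = pd.read_csv("activities.csv")
features = ["planned_cost", "planned_duration", "scope_changes",
            "material_delay_days", "crew_size", "activity_type_code"]
X, y = df[features], df["cost_overrun"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))

# Feature importances point to candidate cost drivers (e.g., scope changes).
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda t: t[1], reverse=True):
    print(f"{name}: {imp:.3f}")
```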

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 58
177 RiboTaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties

Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret

Abstract:

Metagenomic classifiers are widely used for the taxonomic profiling of metagenomic data and the estimation of the relative abundance of taxa. Small subunit rRNA genes are nowadays a gold standard for the phylogenetic resolution of complex microbial communities, although the full power of this marker is only realized when it is used full-length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers, and taxonomic classifiers. We then built a pipeline called RiboTaxa to provide a highly sensitive and specific metataxonomic approach. On metagenomics data, RiboTaxa gave the best results compared to other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)), with precise taxonomic identification and relative abundance description and no false-positive detections. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analyses, opening new biological perspectives in human and environmental health. In a study of coral health involving 20 metagenomic samples (4), the affiliation of prokaryotes was limited to the family level, with Endozoicomonadaceae characterising healthy octocoral tissue. RiboTaxa highlighted 2 species of uncultured Endozoicomonas that were dominant in the healthy tissue; both species belonged to a genus not yet described, opening new research perspectives on coral health. Applied to metagenomics data from a study on the human gut and extreme longevity (5), RiboTaxa detected the presence of an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years), highlighting an archaeal genus not yet described, as well as 3 uncultured species belonging to the genus Enorma that could be species of interest participating in the longevity process. RiboTaxa is user-friendly and rapid, allows the description of microbiota structure from any environment, and its results can be easily interpreted. The software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.

Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker

Procedia PDF Downloads 121
176 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics

Authors: Janne Engblom, Elias Oikarinen

Abstract:

A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special class of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond Generalized Method of Moments (GMM) estimator, an extension of the Arellano-Bond estimator in which past values, and different transformations of past values, of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano–Bover/Blundell–Bond estimator augments Arellano–Bond by making the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations (the original equation and the transformed one) and is therefore also known as system GMM. In this study, Finnish housing price dynamics were examined empirically using the Arellano–Bover/Blundell–Bond estimation technique together with OLS. The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models. The Arellano–Bover/Blundell–Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data for 14 Finnish cities over 1988-2012, differences in the estimates of short-run housing price dynamics were considerable when different models and instruments were used. In particular, the use of different instrumental variables caused variation in the model estimates and in their statistical significance. This was particularly clear when comparing the OLS estimates with those of the different dynamic panel data models. The estimates provided by the dynamic panel data models were more in line with the theory of housing price dynamics.
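The full system GMM estimator is fairly involved, but the core idea behind it, removing the fixed effects by first-differencing and instrumenting the endogenous lagged difference with deeper lags in levels, can be illustrated with a much simpler Anderson-Hsiao-style 2SLS sketch. The code below is that simplified relative, not the Arellano-Bover/Blundell-Bond estimator used in the paper, and the simulated data are purely hypothetical.

```python
import numpy as np

def anderson_hsiao_rho(y):
    """Illustration of the dynamic-panel idea: first-difference the AR(1)
    equation y_it = rho*y_{i,t-1} + a_i + e_it to remove the fixed effect,
    then instrument the endogenous lagged difference with the second lag
    in levels (Anderson-Hsiao). Just-identified IV, not full system GMM.

    y: array of shape (N, T), a balanced panel of one variable.
    """
    dy, dy_lag, z = [], [], []
    N, T = y.shape
    for i in range(N):
        for t in range(2, T):
            dy.append(y[i, t] - y[i, t - 1])          # Δy_it
            dy_lag.append(y[i, t - 1] - y[i, t - 2])  # Δy_{i,t-1} (endogenous)
            z.append(y[i, t - 2])                     # instrument: y_{i,t-2}
    dy, dy_lag, z = map(np.asarray, (dy, dy_lag, z))
    # IV estimator with a single instrument: rho = (z'Δy) / (z'Δy_lag)
    return float(z @ dy) / float(z @ dy_lag)

# Hypothetical usage with simulated data for 14 cities over 25 years:
rng = np.random.default_rng(0)
N, T, rho_true = 14, 25, 0.6
y = np.zeros((N, T))
alpha = rng.normal(size=N)                    # city fixed effects
for t in range(1, T):
    y[:, t] = rho_true * y[:, t - 1] + alpha + rng.normal(scale=0.5, size=N)
print("estimated rho:", round(anderson_hsiao_rho(y), 3))
```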

Keywords: dynamic model, fixed effects, panel data, price dynamics

Procedia PDF Downloads 1508