Search results for: robust estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3164

944 Modelling Conceptual Quantities Using Support Vector Machines

Authors: Ka C. Lam, Oluwafunmibi S. Idowu

Abstract:

Uncertainty in cost is a major factor affecting performance of construction projects. To our knowledge, several conceptual cost models have been developed with varying degrees of accuracy. Incorporating conceptual quantities into conceptual cost models could improve the accuracy of early predesign cost estimates. Hence, the development of quantity models for estimating conceptual quantities of framed reinforced concrete structures using supervised machine learning is the aim of the current research. Using measured quantities of structural elements and design variables such as live loads and soil bearing pressures, response and predictor variables were defined and used for constructing conceptual quantities models. Twenty-four models were developed for comparison using a combination of non-parametric support vector regression, linear regression, and bootstrap resampling techniques. R programming language was used for data analysis and model implementation. Gross soil bearing pressure and gross floor loading were discovered to have a major influence on the quantities of concrete and reinforcement used for foundations. Building footprint and gross floor loading had a similar influence on beams and slabs. Future research could explore the modelling of other conceptual quantities for walls, finishes, and services using machine learning techniques. Estimation of conceptual quantities would assist construction planners in early resource planning and enable detailed performance evaluation of early cost predictions.
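
The abstract reports using R; purely as an illustration, the following Python sketch shows the same idea of combining support vector regression with bootstrap resampling. The predictor names, synthetic data, and hyperparameters are assumptions, not the study's calibrated models.

```python
# Illustrative sketch (not the authors' R code): bootstrap-resampled support
# vector regression for a conceptual quantity, using hypothetical predictors
# such as gross floor loading and soil bearing pressure.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample

rng = np.random.RandomState(0)
X = rng.uniform([2.0, 100.0], [5.0, 400.0], size=(60, 2))  # kN/m2: floor load, bearing pressure
y = 0.08 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 0.1, 60)  # synthetic concrete quantity

predictions = []
for _ in range(200):                      # bootstrap resampling loop
    Xb, yb = resample(X, y)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    model.fit(Xb, yb)
    predictions.append(model.predict(X[:5]))

pred = np.array(predictions)
print("mean prediction:", pred.mean(axis=0))
print("bootstrap 95% interval:", np.percentile(pred, [2.5, 97.5], axis=0))
```

The bootstrap loop gives an interval around each prediction, which is what makes the early cost estimate's uncertainty visible to a planner.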

Keywords: bootstrapping, conceptual quantities, modelling, reinforced concrete, support vector regression

Procedia PDF Downloads 202
943 Software Development to Empower Digital Libraries with Effortless Digital Cataloging and Access

Authors: Abdul Basit Kiani

Abstract:

The software for the digital library system is a cutting-edge solution designed to revolutionize the way libraries manage and provide access to their vast collections of digital content. This advanced software leverages the power of technology to offer a seamless and user-friendly experience for both library staff and patrons. By implementing this software, libraries can efficiently organize, store, and retrieve digital resources, including e-books, audiobooks, journals, articles, and multimedia content. Its intuitive interface allows library staff to effortlessly manage cataloging, metadata extraction, and content enrichment, ensuring accurate and comprehensive access to digital materials. For patrons, the software offers a personalized and immersive digital library experience. They can easily browse the digital catalog, search for specific items, and explore related content through intelligent recommendation algorithms. The software also facilitates seamless borrowing, lending, and preservation of digital items, enabling users to access their favorite resources anytime, anywhere, on multiple devices. With robust security features, the software ensures the protection of intellectual property rights and enforces access controls to safeguard sensitive content. Integration with external authentication systems and user management tools streamlines the library's administration processes, while advanced analytics provide valuable insights into patron behavior and content usage. Overall, this software for the digital library system empowers libraries to embrace the digital era, offering enhanced access, convenience, and discoverability of their vast collections. It paves the way for a more inclusive and engaging library experience, catering to the evolving needs of tech-savvy patrons.

Keywords: software development, empowering digital libraries, digital cataloging and access, management system

Procedia PDF Downloads 64
942 Determination of Nutritional Value and Steroidal Saponin of Fenugreek Genotypes

Authors: Anita Singh, Richa Naula, Manoj Raghav

Abstract:

Nutrient-rich and high-yielding varieties of fenugreek can be developed by using genotypes which are naturally high in nutrients. Gene banks harbour a scanty germplasm collection of Trigonella spp. and very little background information about its genetic diversity. The extent of genetic diversity in a specific breeding population depends upon the genotypes included in it. The present investigation aims at the estimation of macronutrients (phosphorus by spectrophotometer and potassium by flame photometer), micronutrients, namely iron, zinc, manganese, and copper, from seeds of fenugreek genotypes using an atomic absorption spectrophotometer, protein by a Rapid N Cube analyser, and steroidal saponins. Twenty-eight genotypes of fenugreek, along with two standard checks, namely Pant Ragini and Pusa Early Bunching, were collected from different parts of India, and the nutrient contents of each genotype were determined at the G. B. P. U. A. & T. laboratory, Pantnagar. The highest potassium content was observed in PFG-35 (1207 mg/100g). PFG-37 and PFG-20 were the richest in phosphorus, iron, and manganese content among all the genotypes. The lowest zinc content was found in PFG-26 (1.19 mg/100g), while the maximum zinc content was found in PFG-28 (4.43 mg/100g). The highest content of copper was found in PFG-26 (1.97 mg/100g). PFG-39 had the highest protein content (29.60%). Significant differences were observed in steroidal saponin content among the genotypes. Saponin content ranged from 0.38 g/100g to 1.31 g/100g. Steroidal saponin content was found to be the maximum in PFG-36 (1.31 g/100g), followed by PFG-17 (1.28 g/100g). Therefore, the genotypes which are rich in nutrient and oil content can be used for plant biofortification, dietary supplements, and herbal products.

Keywords: genotypes, macronutrients, micronutrients, protein, seeds

Procedia PDF Downloads 244
941 Saving Energy through Scalable Architecture

Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala

Abstract:

In this paper, we focus on the importance of scalable architecture for data centers and buildings in general to help an enterprise achieve environmental sustainability. A scalable architecture helps in many ways: it adapts to business and user requirements and promotes high-availability and disaster recovery solutions that are cost-effective and low-maintenance. Scalable architecture also plays a vital role in the three core areas of sustainability: economy, environment, and society, also known as the three pillars of a sustainability model. If the architecture is scalable, it has many advantages. For example, scalable architecture helps businesses and industries adapt to changing technology, drive innovation, promote platform independence, and build resilience against natural disasters. Most importantly, a scalable architecture helps industries bring in cost-effective measures for energy consumption, reduce wastage, increase productivity, and enable a robust environment. It also helps reduce carbon emissions through advanced monitoring and metering capabilities. Scalable architectures help reduce waste by optimizing designs to utilize materials efficiently, minimize resource use, and decrease carbon footprints by using low-impact, environmentally friendly materials. In this paper, we also emphasize the importance of a cultural shift towards the reuse and recycling of natural resources for a balanced ecosystem and the maintenance of a circular economy. Also, since all of us are involved in the use of computers, much of the scalable architecture we have studied is related to data centers.

Keywords: scalable architectures, sustainability, application design, disruptive technology, machine learning and natural language processing, AI, social media platform, cloud computing, advanced networking and storage devices, advanced monitoring and metering infrastructure, climate change

Procedia PDF Downloads 77
940 Revisiting Ryan v Lennon to Make the Case against Judicial Supremacy

Authors: Tom Hickey

Abstract:

It is difficult to conceive of a case that might more starkly bring the arguments concerning judicial review to the fore than State (Ryan) v Lennon. Small wonder that it has attracted so much scholarly attention, although the fact that almost all of it has been in an Irish setting is perhaps surprising, given the illustrative value of the case in respect of a philosophical quandary that continues to command attention in all developed constitutional democracies. Should judges have power to invalidate legislation? This article revisits Ryan v Lennon with an eye on the importance of the idea of “democracy” in the case. It assesses the meaning of democracy: what its purpose might be and what practical implications might follow, specifically in respect of judicial review. Based on this assessment, it argues for a particular institutional model for the vindication of constitutional rights. In the context of calls for the drafting of a new constitution for Ireland, however forlorn these calls might be for the moment, it makes a broad and general case for the abandonment of judicial supremacy and for the taking up of a model in which judges have a constrained rights reviewing role that informs a more robust role that legislators would play, thereby enhancing the quality of the control that citizens have over their own laws. The article is in three parts. Part I assesses the exercise of judicial power over legislation in Ireland, with the primary emphasis on Ryan v Lennon. It considers the role played by the idea of democracy in that case and relates it to certain apparently intractable dilemmas that emerged in later Irish constitutional jurisprudence. Part II considers the concept of democracy more generally, with an eye on overall implications for judicial power. It argues for an account of democracy based on the idea of equally shared popular control over government. Part III assesses how this understanding might inform a new constitutional arrangement in the Irish setting for the vindication of fundamental rights.

Keywords: constitutional rights, democracy as popular control, Ireland, judicial power, republican theory, Ryan v Lennon

Procedia PDF Downloads 533
939 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets

Authors: Kothuri Sriraman, Mattupalli Komal Teja

Abstract:

In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images, whether of irregular, digit, or character shape. Objects and internal objects are difficult to extract when the structure of the image contains a bulk of clusters. Estimation results are readily obtained by identifying the sub-regional objects with the SASK algorithm, which focuses on recognizing the number of internal objects in a given image in a shadow-free and error-free manner. Hard clustering and density clustering of the rough set obtained from the image are used to recognize any differentiated internal objects. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally application of the hull detection system. Detecting sub-regional hulls can increase machine learning capability in the detection of characters, and the method can also be extended to hull recognition of irregularly shaped objects, such as black holes in space exploration, together with their intensities. Layered hulls are those having structured layers inside; identifying them is useful in military services and traffic management, for example to count the number of vehicles or persons. The proposed SASK algorithm is thus helpful for identifying such regions and can be useful in the decision process (to clear traffic, or to identify the number of opposing personnel in a war).
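
The SASK algorithm itself is not specified here, but the three-step pipeline the abstract names (pre-processing, boundary extraction, hull detection) can be sketched generically with OpenCV. The input file name is hypothetical, and convex hulls plus contour hierarchy stand in for the paper's sub-regional hull logic.

```python
# Generic sketch of the three-step pipeline named in the abstract
# (pre-processing, boundary extraction, hull detection) using OpenCV.
# This is not the SASK algorithm itself, whose details are not given here.
import cv2
import numpy as np

img = cv2.imread("handwritten.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file

# 1. Pre-processing: denoise and binarise (ink assumed darker than paper).
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# 2. Boundary extraction: outer and inner contours (RETR_CCOMP keeps holes,
#    so internal/sub-regional objects can be counted from the hierarchy).
contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

# 3. Hull detection: convex hull of each contour; inner contours
#    (hierarchy[0][i][3] != -1) correspond to internal regions.
hulls = [cv2.convexHull(c) for c in contours]
n_internal = sum(1 for h in hierarchy[0] if h[3] != -1) if hierarchy is not None else 0
print("objects:", len(contours), "internal regions:", n_internal)
```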

Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm

Procedia PDF Downloads 333
938 Balanced Scorecard (BSC) Project: A Methodological Proposal for Decision Support in a Corporate Scenario

Authors: David de Oliveira Costa, Miguel Ângelo Lellis Moreira, Carlos Francisco Simões Gomes, Daniel Augusto de Moura Pereira, Marcos dos Santos

Abstract:

Strategic management is a fundamental process for global companies that intend to remain competitive in an increasingly dynamic and complex market. To do so, it is necessary to maintain alignment with their principles and values. The Balanced Scorecard (BSC) proposes to ensure that overall business performance is based on different perspectives (financial, customer, internal processes, and learning and growth). However, relying solely on the BSC may not be enough to ensure the success of strategic management. It is essential that companies also evaluate and prioritize the strategic projects that need to be implemented, to ensure they are aligned with the business vision and contribute to achieving established goals and objectives. In this context, the proposition involves the incorporation of the SAPEVO-M multicriteria method to indicate the degree of relevance between the different perspectives, so that the strategic objectives linked to these perspectives have greater weight in the classification of structural projects. Additionally, it is proposed to apply the concept of the Impact & Probability Matrix (I&PM) to structure the evaluation and ensure that strategic projects are assessed according to their relevance and impact on the business. By structuring the business's strategic management in this way, projects and actions related to strategic planning are aligned and prioritized, and resources are directed towards the most relevant and impactful initiatives. Therefore, the objective of this article is to present a proposal for integrating the BSC methodology, the SAPEVO-M multicriteria method, and the prioritization matrix to establish a concrete weighting of strategic planning and obtain coherence in defining strategic projects aligned with the business vision, ensuring a robust decision-making support process.
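
To make the weighting idea concrete, here is a minimal sketch of how perspective weights could feed an impact-and-probability score for ranking projects. All weights and scores are hypothetical placeholders; SAPEVO-M itself derives the weights from pairwise ordinal comparisons, which is not reproduced here.

```python
# Minimal sketch of weighting strategic projects by BSC perspective relevance
# and an impact x probability score. Numbers are hypothetical, not SAPEVO-M
# output; the method's ordinal pairwise comparison step is omitted.
import numpy as np

perspectives = ["financial", "customer", "internal", "learning"]
weights = np.array([0.40, 0.25, 0.20, 0.15])          # assumed perspective weights, sum to 1

# rows: projects; columns: contribution score (1-5) to each perspective
contribution = np.array([[5, 3, 2, 1],
                         [2, 5, 4, 3],
                         [3, 2, 5, 4]])
impact      = np.array([4, 5, 3])                     # I&PM impact rating (1-5)
probability = np.array([0.8, 0.5, 0.9])               # I&PM probability of success

score = contribution @ weights * impact * probability
for i in np.argsort(score)[::-1]:
    print(f"project {i}: priority score {score[i]:.2f}")
```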

Keywords: MCDA process, prioritization problematic, corporate strategy, multicriteria method

Procedia PDF Downloads 65
937 Formulation and Anticancer Evaluation of Beta-Sitosterol in Henna Methanolic Extract Embedded in Controlled Release Nanocomposite

Authors: Sanjukta Badhai, Durga Barik, Bairagi C. Mallick

Abstract:

In the present study, beta-sitosterol in Lawsonia methanolic leaf extract embedded in a controlled-release nanocomposite was prepared and evaluated for in vivo anticancer efficacy in dimethyl hydrazine (DMH) induced colon cancer. Colon cancer was induced by s.c. injection of DMH (20 mg/kg b.wt) for 15 weeks. The animals were divided into five groups as follows: control, DMH alone, DMH and beta-sitosterol nanocomposite (50 mg/kg), DMH and beta-sitosterol nanocomposite (100 mg/kg), and DMH and standard silymarin (100 mg/kg), and the treatment was carried out for 15 weeks. At the end of the study period, blood was withdrawn, and serum was separated for haematological and biochemical analysis and tumor markers. Further, the colonic tissue was removed for the estimation of antioxidants and histopathological analysis. The results of the study show that DMH intoxication elicits altered haematological parameters (RBC, WBC, and Hb), elevated lipid peroxidation and decreased antioxidant levels (SOD, CAT, GPx, GST and GSH), elevated lipid profiles (cholesterol and triglycerides), tumor markers (CEA and AFP), and altered colonic tissue histology. Meanwhile, treatment with beta-sitosterol nanocomposites significantly restored the altered biochemical parameters in DMH-induced colon cancer, mediated by their anticancer efficacy. Further, the beta-sitosterol nanocomposite at 100 mg/kg showed marked efficacy.

Keywords: nanocomposites, herbal formulation, henna, beta sitosterol, colon cancer, dimethyl hydrazine, antioxidant, lipid peroxidation

Procedia PDF Downloads 153
936 Computational Fluid Dynamics Simulation of a Boiler Outlet Header Constructed of Inconel Alloy 740H

Authors: Sherman Ho, Ahmed Cherif Megri

Abstract:

Headers play a critical role in conveying steam to regulate heating system temperatures. While various materials like steel grades 91 and 92 have been traditionally used for pipes, this research proposes the use of a robust and innovative material, INCONEL Alloy 740H. Boilers in power plant configurations are exposed to cycling conditions due to factors such as daily, seasonal, and yearly variations in weather. These cycling conditions can lead to the deterioration of headers, which are vital components with intricate geometries. Header failures result in substantial financial losses from repair costs and power plant shutdowns, along with significant public inconveniences such as the loss of heating and hot water. To address this issue and seek solutions, a mechanical analysis, as well as a structural analysis, is recommended. Transient analysis to predict heat transfer conditions is of paramount importance, as the direction of heat transfer within the header walls and the passing steam can vary based on the location of interest, load, and operating conditions. The geometry and material of the header are also crucial design factors, and the choice of pipe material depends on its usage. In this context, the heat transfer coefficient plays a vital role in header design and analysis. This research employs ANSYS Fluent, a numerical simulation program, to understand header behavior, predict heat transfer, and analyze mechanical phenomena within the header. Transient simulations are conducted to investigate parameters like the heat transfer coefficient, pressure loss coefficients, and heat flux, with the results used to optimize header design.
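
For a feel of the heat transfer coefficient the abstract emphasizes, a standard internal-flow estimate can be sketched with the Dittus-Boelter correlation. The steam property values below are rough assumptions for illustration, not data from the paper's ANSYS Fluent model.

```python
# Back-of-envelope estimate of an internal convective heat transfer coefficient
# via the Dittus-Boelter correlation, Nu = 0.023 Re^0.8 Pr^n, with n = 0.4
# when the fluid is heated and 0.3 when cooled. Property values are rough
# assumptions for superheated steam, not values from the paper.
def dittus_boelter_h(rho, u, D, mu, cp, k, heating=True):
    Re = rho * u * D / mu                  # Reynolds number (turbulent flow assumed)
    Pr = cp * mu / k                       # Prandtl number
    n = 0.4 if heating else 0.3
    Nu = 0.023 * Re**0.8 * Pr**n           # Nusselt number
    return Nu * k / D                      # h in W/(m^2 K)

h = dittus_boelter_h(rho=20.0, u=15.0, D=0.3, mu=2.6e-5, cp=2800.0, k=0.08)
print(f"h ~ {h:.0f} W/m2K")
```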

Keywords: CFD, header, power plant, heat transfer coefficient, simulation using experimental data

Procedia PDF Downloads 54
935 Experimental and Simulation Analysis of an Innovative Steel Shear Wall with Semi-Rigid Beam-to-Column Connections

Authors: E. Faizan, Wahab Abdul Ghafar, Tao Zhong

Abstract:

Steel plate shear walls (SPSWs) are robust lateral load-resisting structures because of their high flexibility and efficient energy dissipation under seismic loads. This research investigates the seismic performance of an innovative infill web strip shear wall (IWS-SPSW) and a typical unstiffened steel plate shear wall (USPSW). Two 1:3-scale specimens of an IWS-SPSW and a USPSW, each with a single story and a single bay, were built and subjected to a cyclic lateral loading protocol. In the prototype, the beam-to-column connections were made with semi-rigid end-plate connectors. The IWS-SPSW demonstrated exceptional ductility and shear load-bearing capacity during testing, with no cracks or other damage occurring. In addition, the IWS-SPSW could effectively dissipate energy without causing a significant amount of beam-column connection distortion. The shear load-bearing capacity of the USPSW was exceptional; however, it exhibited low ductility, severe infill plate corner tearing, and large infill web plate cracks. FE models were created and then validated against the experimental data. It was demonstrated that the infill web strips of an SPSW system can contribute to the system's high performance and total energy dissipation. In addition, a parametric analysis was carried out to evaluate the material properties of the IWS, which can considerably improve the system's seismic performance. These properties include the steel's strength as well as its thickness.

Keywords: steel shear walls, seismic performance, failure mode, hysteresis response, nonlinear finite element analysis, parametric study

Procedia PDF Downloads 67
934 Effect of Coaching Related Incompetency to Stand Trial on Symptom Validity Test: Robustness, Sensitivity, and Specificity

Authors: Natthawut Arin

Abstract:

In forensic contexts, competency-to-stand-trial assessments are the most common referrals. Defendants may attempt to endorse psychopathology symptoms and feign incompetence. Coaching involves teaching them test-taking strategies to avoid detection of feigned psychopathological symptoms. Symptom Validity Tests (SVTs) were created to detect feigning, and the literature suggests that some SVTs remain robust to the effects of coaching. The Thai Symptom Validity Test (SVT-Th) was designed as an SVT with adequate psychometric properties and the ability to classify feigners versus honest responders. Thus, the current study examined the robustness of the SVT-Th in the detection of feigned psychopathology. Participants (N = 120) were recruited from undergraduate psychology courses and randomly assigned to one of three groups, to each of which the SVT-Th was administered: (a) an uncoached group asked to respond honestly (n = 40), (b) a symptom-coached group (without warning) asked to feign psychiatric symptoms in order to be found incompetent to stand trial (n = 40), and (c) a test-coached group (with warning) asked to feign psychiatric symptoms while avoiding detection by the test, yet still be found incompetent to stand trial (n = 40). Group differences were analyzed using one-way ANOVAs. The results revealed that the uncoached group (M = 4.23, SD = 5.20) had significantly lower SVT-Th mean scores than both coached groups (M = 185.00, SD = 72.88 and M = 132.10, SD = 54.06, respectively). Classification rates were calculated to determine classification accuracy. The results indicated that the SVT-Th had an overall classification accuracy of 96.67%, with an acceptable 95% sensitivity rate and a 100% specificity rate. Overall, the results of the present study indicate that the SVT-Th yielded highly adequate indices of accuracy, and these findings suggest that the SVT-Th is robust against coaching.
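
The reported indices follow directly from a confusion table. The sketch below reconstructs counts consistent with the abstract's rates (95% sensitivity, 100% specificity, 96.67% accuracy over 120 participants); the exact per-cell counts are an assumption, not raw study data.

```python
# Sketch of the reported classification indices from a confusion table.
# Counts are reconstructed to match the abstract's rates, not raw study data.
def classification_indices(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)           # feigners correctly flagged
    specificity = tn / (tn + fp)           # honest responders correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# assumed: 80 coached feigners (76 detected), 40 honest responders (all cleared)
sens, spec, acc = classification_indices(tp=76, fn=4, tn=40, fp=0)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, accuracy {acc:.2%}")
```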

Keywords: incompetency to stand trial, coaching, robustness, classification accuracy

Procedia PDF Downloads 125
933 Impact of Proposed Modal Shift from Private Users to Bus Rapid Transit System: An Indian City Case Study

Authors: Rakesh Kumar, Fatima Electricwala

Abstract:

One of the major thrusts of a Bus Rapid Transit System is to reduce commuters' dependency on private vehicles and increase the share of public transport, making the urban transportation system environmentally sustainable. In this study, a commuter mode choice analysis is performed to examine behavioral responses to the proposed Bus Rapid Transit System (BRTS) in Surat, with estimation of the probable shift from private modes to public modes. Further, the BRTS scenarios were evaluated using Surat's transportation ecological footprint. A multi-modal simulation model was developed in the Biogeme environment to explicitly consider private users' behavior and non-linear environmental impact. Data on the different factors (variables), and their impact, that might cause a modal shift of private mode users to the proposed BRTS were collected through a home-interview survey using revealed and stated preference approaches. A multi-modal logit model of mode choice was then calibrated using the collected data and validated using the proposed sample. From this study, a set of perception factors, with a reliable and predictable database, has been identified to explain the variation in modal shift behaviour and its impact on Surat's ecological environment. A case study of the proposed BRTS connecting the Surat Industrial Hub to the coastal area is provided to illustrate the approach.
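
The core of such a model is the multinomial logit choice probability. The sketch below shows the calculation with hypothetical utility coefficients and modes; these are placeholders, not the calibrated Biogeme estimates from the study.

```python
# Minimal multinomial logit sketch of a mode-share calculation. Utility
# coefficients, costs, and times are hypothetical placeholders, not the
# study's calibrated Biogeme estimates.
import numpy as np

def mode_shares(cost, time, beta_cost=-0.08, beta_time=-0.05, asc=None):
    """Logit choice probabilities over modes from cost and in-vehicle time."""
    asc = np.zeros(len(cost)) if asc is None else np.asarray(asc)
    v = asc + beta_cost * np.asarray(cost) + beta_time * np.asarray(time)
    expv = np.exp(v - v.max())             # numerically stabilised softmax
    return expv / expv.sum()

# hypothetical modes: [private two-wheeler, car, proposed BRTS]
shares = mode_shares(cost=[25, 60, 15], time=[30, 25, 35], asc=[0.0, 0.5, 0.2])
print(dict(zip(["two-wheeler", "car", "BRTS"], shares.round(3))))
```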

Keywords: BRTS, private modes, mode choice models, ecological footprint

Procedia PDF Downloads 508
932 Modeling of Bipolar Charge Transport through Nanocomposite Films for Energy Storage

Authors: Meng H. Lean, Wei-Ping L. Chu

Abstract:

The effects of ferroelectric nanofiller size, shape, loading, and polarization on bipolar charge injection, transport, and recombination through amorphous and semicrystalline polymers are studied. A 3D particle-in-cell model extends the classical electrical double layer representation to treat ferroelectric nanoparticles. Metal-polymer charge injection assumes Schottky emission and Fowler-Nordheim tunneling, migration through field-dependent Poole-Frenkel mobility, and recombination with Monte Carlo selection based on collision probability. A boundary integral equation method is used for solution of the Poisson equation, coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit. Trajectories for charge that makes it through the film are curvilinear paths that meander through the interspaces. Results indicate that charge transport behavior depends on nanoparticle polarization, with anti-parallel orientation showing the highest leakage conduction and lowest level of charge trapping in the interaction zone. The simulation prediction of a size range of 80 to 100 nm to minimize attachment and maximize conduction is validated by theory. Attached charge fractions go from 2.2% to 97% as nanofiller size is decreased from 150 nm to 60 nm. The computed conductivity of 0.4 × 10⁻¹⁴ S/cm is in agreement with published data for plastics. Charge attachment is increased with spheroids due to the increase in surface area, especially so for oblate spheroids, showing the influence of larger cross-sections. Charge attachment to nanofillers and nanocrystallites increases with vol.% loading or degree of crystallinity, and saturates at about 40 vol.%.
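
Of the ingredients named above, the field-dependent Poole-Frenkel mobility is easy to illustrate. The zero-field mobility and relative permittivity below are generic polymer-like values chosen for illustration, not the paper's parameters.

```python
# Sketch of the field-dependent Poole-Frenkel mobility used for migration,
# mu(E) = mu0 * exp(beta_PF * sqrt(E) / (k_B T)). mu0 and eps_r are
# illustrative values, not the paper's calibrated parameters.
import numpy as np

Q = 1.602e-19          # elementary charge, C
KB = 1.381e-23         # Boltzmann constant, J/K
EPS0 = 8.854e-12       # vacuum permittivity, F/m

def poole_frenkel_mobility(E, mu0=1e-14, eps_r=2.3, T=300.0):
    """Mobility in m^2/(V s) at field E (V/m) in a polymer of permittivity eps_r."""
    beta_pf = np.sqrt(Q**3 / (np.pi * EPS0 * eps_r))   # Poole-Frenkel constant
    return mu0 * np.exp(beta_pf * np.sqrt(E) / (KB * T))

for E in (1e6, 1e7, 1e8):                              # typical insulation fields
    print(f"E = {E:.0e} V/m -> mu = {poole_frenkel_mobility(E):.3e} m2/Vs")
```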

Keywords: nanocomposites, nanofillers, electrical double layer, bipolar charge transport

Procedia PDF Downloads 342
931 A Method to Estimate Wheat Yield Using Landsat Data

Authors: Zama Mahmood

Abstract:

With the increasing demand for food management, monitoring crop growth and forecasting yield well before harvest are very important. These days, yield assessment, together with monitoring of crop development and growth, is carried out with the help of satellite and remote sensing images. Studies using remote sensing data along with field survey validation have reported high correlation between vegetation indices and yield. With the development of remote sensing techniques, the detection of crops and their mechanisms using remote sensing data on regional or global scales has become a popular topic in remote sensing applications. Punjab, especially the southern Punjab region, is extremely favourable for wheat production, but measuring the exact amount of wheat production is a tedious job for farmers and workers using traditional ground-based measurements, whereas remote sensing can provide the most real-time information. In this study, using the Normalized Difference Vegetation Index (NDVI) derived from Landsat satellite images, the yield of wheat was estimated during the 2013-2014 season for the agricultural area around Bahawalpur. The average yield of the wheat was found to be 35 kg/acre by analysing field survey data. The field survey data are in fair agreement with the NDVI values extracted from the Landsat images. A correlation between wheat production (tons) and the number of wheat pixels has also been calculated, showing that the two are proportional to each other. A strong correlation between NDVI and wheat area was also found (R² = 0.71), which demonstrates the effectiveness of remote sensing tools for crop monitoring and production estimation.
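
The NDVI computation itself is standard: NDVI = (NIR - Red) / (NIR + Red), with Landsat 8 band 5 as NIR and band 4 as red. The sketch below uses rasterio as one common way to read the bands; the file names and wheat threshold are hypothetical, not the study's values.

```python
# Sketch of the NDVI computation from Landsat 8 bands,
# NDVI = (NIR - Red) / (NIR + Red). File names and the wheat-pixel
# threshold are illustrative assumptions.
import numpy as np
import rasterio

with rasterio.open("LC08_B4.TIF") as red_src, rasterio.open("LC08_B5.TIF") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")

ndvi = np.where(nir + red > 0, (nir - red) / (nir + red), 0.0)

# Wheat pixels can then be counted against an NDVI threshold; the pixel
# count is the quantity the abstract correlates with production.
wheat_mask = ndvi > 0.4        # illustrative threshold, not the study's value
print("wheat pixel count:", int(wheat_mask.sum()))
```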

Keywords: landsat, NDVI, remote sensing, satellite images, yield

Procedia PDF Downloads 322
930 Ameliorative Effect of Martynia annua Linn. on Collagen-Induced Arthritis via Modulating Cytokines and Oxidative Stress in Mice

Authors: Alok Pal Jain, Santram Lodhi

Abstract:

Martynia annua Linn. (Martyniaceae) is traditionally used to treat inflammation and is applied locally to tuberculosis glands of camels' necks. The leaves are used topically on bites of venomous insects and wounds of domestic animals. Chemical examination of Martynia annua leaves revealed the presence of glycosides, tannins, proteins, phenols, and flavonoids. The present study aimed to evaluate the anti-arthritic activity of a methanolic extract of Martynia annua leaves. The methanolic extract was tested using an in vivo collagen-induced arthritis mouse model to investigate anti-rheumatoid arthritis activity. In addition, the antioxidant effect of the methanolic extract was determined by estimating antioxidant levels in joint tissues. The severity of arthritis was assessed by arthritis score and edema. Levels of the cytokines TNF-α and IL-6 in the joint tissue homogenate were measured using ELISA. A high dose (250 mg/kg) of the methanolic extract significantly reduced the degree of inflammation in mice as compared with the reference drug. Antioxidant levels and malondialdehyde (MDA) in the joint tissue homogenate were found to be significantly (p < 0.05) higher. The methanolic extract at a dose of 250 mg/kg modulated cytokine production and suppressed oxidative stress in the mice with collagen-induced arthritis. This study suggests that Martynia annua might be an alternative herbal medicine for the management of rheumatoid arthritis.

Keywords: Martynia annua, collagen, rheumatoid arthritis, antioxidants

Procedia PDF Downloads 285
929 An Intelligent Controller Augmented with Variable Zero Lag Compensation for Antilock Braking System

Authors: Benjamin Chijioke Agwah, Paulinus Chinaenye Eze

Abstract:

The antilock braking system (ABS) is one of the important contributions of the automobile industry, designed to ensure road safety by keeping vehicles steerable and stable during emergency braking. This paper presents a wheel slip-based intelligent controller with variable zero lag compensation for ABS. It is required to achieve very fast, accurate wheel slip tracking during hard braking and to eliminate chattering, with improved transient and steady-state performance, while shortening the stopping distance using an effective braking torque less than the maximum allowable torque to bring a braking vehicle to a stop. The dynamics of a vehicle braking at a velocity of 30 ms⁻¹ on a straight line were determined and modelled in the MATLAB/Simulink environment to represent a conventional ABS system without a controller. Simulation results indicated that the system without a controller was not able to track the desired wheel slip, and the stopping distance was 135.2 m. Hence, an intelligent controller based on a fuzzy logic controller (FLC) was designed, with a variable zero lag compensator (VZLC) added to enhance the performance of the FLC control variable by eliminating steady-state error and providing improved bandwidth to eliminate the effect of high-frequency noise such as chattering during braking. The simulation results showed that FLC-VZLC provided fast tracking of the desired wheel slip, eliminated chattering, and reduced the stopping distance by 70.5% (39.92 m), 63.3% (49.59 m), 57.6% (57.35 m) and 50% (69.13 m) on dry, wet, cobblestone and snow road surface conditions, respectively. Generally, the proposed system used an effective braking torque less than the maximum allowable braking torque to achieve efficient wheel slip tracking and overall robust control performance on different road surfaces.
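
The quantity the controller tracks is the longitudinal wheel slip. A minimal sketch of its definition follows; the wheel speed, radius, and target slip are illustrative assumptions, and the paper's fuzzy rule base and zero lag compensator are not reproduced.

```python
# Sketch of the wheel slip definition a slip controller tracks,
# lambda = (v - omega * r) / v. Wheel parameters and the 0.2 target slip
# are common illustrative values, not the paper's design data.
def wheel_slip(v_vehicle, omega_wheel, r_wheel):
    """Longitudinal slip ratio during braking (0 = free rolling, 1 = locked)."""
    if v_vehicle <= 0.0:
        return 0.0
    return (v_vehicle - omega_wheel * r_wheel) / v_vehicle

v, omega, r = 30.0, 93.0, 0.3          # m/s, rad/s, m (v matches the study's 30 m/s)
lam = wheel_slip(v, omega, r)
target = 0.2                            # commonly assumed optimal slip on dry asphalt
error = target - lam                    # the quantity the controller drives to zero
print(f"slip = {lam:.3f}, tracking error = {error:+.3f}")
```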

Keywords: ABS, fuzzy logic controller, variable zero lag compensator, wheel slip tracking

Procedia PDF Downloads 141
928 Evaluating the Effects of Rainfall and Agricultural Practices on Soil Erosion (Palapye Case Study)

Authors: Mpaphi Major

Abstract:

Soil erosion is becoming an important aspect of land degradation. It is therefore important to note any factor that may escalate the rate of soil erosion in our arable land. There are three main driving forces in soil erosion, namely rainfall, wind, and land use, of which only rainfall and land use are considered in this project. With the world population increasing at an alarming rate, the demand for food production is expected to increase, which will in turn lead to more land being converted from forests to agricultural use, very little of which is now fertile. In Botswana, the rate of crop production is decreasing due to the wearing away of the fertile top soil and poor arable land management. As a result, studies on the rate of soil loss and farm management practices should be conducted so that the best soil and water conservation practices can be employed, reducing the risk of soil loss and increasing crop production and yield. The Soil Loss Estimation Model for Southern Africa (SLEMSA) will be used to estimate the rate of soil loss in selected arable farms within the Palapye watershed, and field observations will be made to determine the management practices used and their impact on the arable land. Observations found that many arable fields have been exposed to soil erosion, and the affected parts are no longer suitable for crop production unless the land is modified. Improper practices such as ploughing along the slope and poor cultivation practices were observed. As a result, farmers need to be educated on the best conservation practices that can be used to manage their arable land, reducing the risk of soil erosion and improving crop production.
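
SLEMSA has a simple multiplicative structure, Z = K × X × C, which the hedged sketch below illustrates. The factor values are placeholders chosen only to contrast two management practices; they are not calibrated Palapye figures, and the sub-models that produce K, X, and C are omitted.

```python
# Hedged sketch of the multiplicative SLEMSA structure, Z = K * X * C, where
# K is soil loss from a bare standard plot (t/ha/yr), X a topographic ratio,
# and C a crop/cover factor. All numbers are illustrative placeholders.
def slemsa_soil_loss(K, X, C):
    """Predicted soil loss Z in tonnes per hectare per year."""
    return K * X * C

fields = {
    "ploughed along slope, sparse cover": dict(K=60.0, X=1.8, C=0.9),
    "contour ploughed, good cover":       dict(K=60.0, X=1.2, C=0.3),
}
for name, f in fields.items():
    print(f"{name}: Z = {slemsa_soil_loss(**f):.1f} t/ha/yr")
```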

Keywords: soil and water conservation, soil erosion, SLEMSA, land degradation

Procedia PDF Downloads 392
927 Economic Assessment of the Fish Solar Tent Dryers

Authors: Collen Kawiya

Abstract:

In an effort to reduce post-harvest losses and improve the supply of quality fish products in Malawi, fish solar tent dryers have been designed in the southern part of Lake Malawi for processing small fish species under the Cultivate Africa's Future (CultiAF) project. This study was done to promote the adoption of fish solar tent dryers by the many small-scale fish processors in Malawi through an assessment of the economic viability of these dryers. Using the project's baseline survey data, a business model for a constructed, ready-to-use solar tent dryer was developed, and investment appraisal techniques were calculated along with a sensitivity analysis. The study also conducted a risk analysis using the Monte Carlo simulation technique, from which a probabilistic net present value was found. The investment appraisal results showed that the net present value was US$8,756.85, the internal rate of return was 62%, higher than the 16.32% cost of capital, and the payback period was 1.64 years. The sensitivity analysis showed that only two input variables influenced the net present value of the fish solar dryer investment: the dried fish selling prices, which correlated positively with the net present value, and the fresh fish buying prices, which correlated negatively with it. The risk analysis showed that the chance that fish processors will make a loss from this type of investment is 17.56%, and that there is only a 0.20 probability of experiencing a negative net present value. Lastly, the study found that the net present value of the fish solar tent dryer investment remains robust in spite of changes in the levels of investors' risk preferences. With these results, it is concluded that fish solar tent dryers in Malawi are an economically viable investment because they are able to improve returns in the fish processing activity. As such, fish processors should adopt them by investing their money to construct and use them.
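
The appraisal arithmetic is easy to sketch: discount cash flows at the cost of capital and estimate the loss probability by Monte Carlo over the two price variables the sensitivity analysis flagged. All cash flow figures below are hypothetical; the study's numbers (NPV US$8,756.85, IRR 62%, payback 1.64 years, 17.56% chance of loss) come from its own survey data.

```python
# Sketch of the appraisal logic: NPV at a 16.32% cost of capital plus a
# Monte Carlo loss probability over risky selling/buying prices.
# Outlay, prices, and volumes are assumed, not the study's data.
import numpy as np

def npv(rate, cashflows):
    """cashflows[0] is the (negative) initial outlay at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

rng = np.random.default_rng(1)
outlay, years, rate = -4000.0, 5, 0.1632
losses, trials = 0, 10_000
for _ in range(trials):
    sell = rng.normal(3.0, 0.5)            # US$/kg dried fish (assumed)
    buy = rng.normal(1.2, 0.3)             # US$/kg fresh fish (assumed)
    annual = (sell - buy) * 1500.0         # kg processed per year (assumed)
    if npv(rate, [outlay] + [annual] * years) < 0:
        losses += 1
print(f"P(NPV < 0) ~ {losses / trials:.2%}")
```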

Keywords: investment appraisal, risk analysis, sensitivity analysis, solar tent drying

Procedia PDF Downloads 264
926 Angiogenic and Immunomodulatory Properties and Phenotype of Mesenchymal Stromal Cells Can Be Regulated by Cytokine Treatment

Authors: Ekaterina Zubkova, Irina Beloglazova, Iurii Stafeev, Konsyantin Dergilev, Yelena Parfyonova, Mikhail Menshikov

Abstract:

Mesenchymal stromal cells from adipose tissue (MSC) are currently widely used in regenerative medicine to restore the function of damaged tissues, but this is significantly hampered by their heterogeneity. One of the modern approaches to overcoming this obstacle is the polarization of cell subpopulations into a specific phenotype under the influence of cytokines and other factors that activate receptors and signal transmission in cells. We polarized MSC with factors affecting inflammatory signaling and the functional properties of cells, followed by verification of their expression profile and their ability to affect the polarization of macrophages. RT-PCR evaluation showed that cells treated with LPS, interleukin-17, or tumor necrosis factor α (TNF-α) primarily express pro-inflammatory factors and cytokines, whereas after treatment with polyinosinic:polycytidylic acid or interleukin-4 (IL-4) they express anti-inflammatory factors and some pro-inflammatory factors. MSC polarized with pro-inflammatory cytokines showed a more robust pro-angiogenic effect in a fibrin gel bead 3D angiogenesis assay. Further, we evaluated possible paracrine effects of MSCs on the polarization of intact macrophages. Polarization efficiency was assessed by the expression of the M1/M2 phenotype markers CD80 and CD206. We showed that conditioned media from MSC preincubated in the presence of IL-4 cause an increase in CD206 expression similar to that observed in M2 macrophages. Conditioned media from MSC polarized in the presence of LPS or TNF-α increased the expression of the CD80 antigen in macrophages, similar to that observed in M1 macrophages. In other cases, a pronounced paracrine effect of MSC on the polarization of macrophages was not detected. Thus, our study showed that polarization of MSC along the pro-inflammatory or anti-inflammatory pathway yields cell subpopulations that have a multidirectional modulating effect on the polarization of macrophages. (RFBR grants 20-015-00405 and 18-015-00398.)

Keywords: angiogenesis, cytokines, mesenchymal, polarization, inflammation

Procedia PDF Downloads 159
925 Testing of Protective Coatings on Automotive Steel: A Correlation Between Salt Spray, Electrochemical Impedance Spectroscopy, and Linear Polarization Resistance Tests

Authors: Dhanashree Aole, V. Hariharan, Swati Surushe

Abstract:

Corrosion can cause serious and expensive damage to automobile components. Various proven techniques for controlling and preventing corrosion depend on the specific material to be protected. Electrochemical impedance spectroscopy (EIS) and salt spray tests are commonly used to assess the corrosion degradation mechanisms of coatings on metallic surfaces, while the only test which monitors the corrosion rate in real time is linear polarization resistance (LPR). In this study, electrochemical tests (EIS and LPR) and the salt spray test are reviewed to assess the corrosion resistance and durability of different coatings. The main objective of this study is to correlate the test results obtained using linear polarization resistance (LPR) and electrochemical impedance spectroscopy (EIS) with the results obtained using the standard salt spray test. Another objective of this work is to evaluate the performance of various coating systems for vehicle underbody application: CED, epoxy, powder coating, autophoretic, and Zn-trivalent coating. From this study, a promising correlation between the different corrosion testing techniques is noted. The most profound observation is that electrochemical tests give a quick estimation of corrosion resistance and can detect the degradation of coatings well before visible signs of damage appear. Furthermore, the corrosion resistances and salt spray lives of the coatings investigated were found to follow the order: CED > powder coating > autophoretic > epoxy coating > Zn-trivalent plating.
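
For context on how an LPR measurement becomes a corrosion rate, the Stern-Geary relation i_corr = B / Rp can be sketched as follows. The Tafel slopes and resistance values are typical textbook assumptions, not the coatings data from this study.

```python
# Sketch of converting an LPR measurement into a corrosion current density
# via the Stern-Geary relation, i_corr = B / Rp, with
# B = ba*bc / (2.303*(ba + bc)). Slopes and Rp values are textbook
# assumptions, not this study's measurements.
def corrosion_current_density(Rp_ohm_cm2, ba=0.12, bc=0.12):
    """i_corr in A/cm^2 from polarization resistance Rp (ohm*cm^2);
    ba, bc are anodic/cathodic Tafel slopes in V/decade."""
    B = (ba * bc) / (2.303 * (ba + bc))
    return B / Rp_ohm_cm2

# A well-performing coating shows a much larger Rp than bare steel.
for label, Rp in [("bare steel", 2.0e3), ("well-coated steel", 5.0e7)]:
    print(f"{label}: i_corr = {corrosion_current_density(Rp):.2e} A/cm2")
```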

Keywords: linear polarization resistance (LPR), electrochemical impedance spectroscopy (EIS), salt spray test, sacrificial and barrier coatings

Procedia PDF Downloads 513
924 Prediction of Pounding between Two SDOF Systems by Using a Link Element Based on Mathematical Relations and Suggestion of a New Equation for the Impact Damping Ratio

Authors: Seyed M. Khatami, H. Naderpour, R. Vahdani, R. C. Barros

Abstract:

Many previous studies have been carried out to calculate the impact force and the dissipated energy between two neighboring buildings during seismic excitation, when they collide with each other. Numerical studies are an important part of impact analysis, and several researchers have tried to simulate impact using different formulas. Estimation of the impact force and the dissipated energy depends significantly on several parameters of impact. The masses of the bodies, the stiffness of the spring, the coefficient of restitution, the damping ratio of the dashpot, and the impact velocity are the known and unknown parameters used to simulate the impact and measure the dissipated energy during collision. Collision is usually shown by a force-displacement hysteresis curve, and the enclosed area of the hysteresis loop gives the dissipated energy during impact. In this paper, the effect of using different types of impact models is investigated in order to calculate the impact force. To increase the accuracy of the impact model and to optimize the results of the simulations, a new damping equation is assumed and validated to get the best results for the impact force and dissipated energy, which demonstrates the accuracy of the suggested equation of motion in comparison with other formulas. This relation is called "n-m". Based on this mathematical relation, an initial value is selected for the mentioned coefficients and the kinetic energy loss is calculated. After each simulation, the kinetic energy loss and the energy dissipation are compared with each other. If they are equal, the selected parameters are correct; if not, the parameter constants are modified and a new analysis is performed. Finally, two unknown parameters are suggested to estimate the impact force and calculate the dissipated energy.
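
The calibration loop described above (simulate an impact, then compare dashpot-dissipated energy with kinetic energy loss) can be sketched with a classical linear spring-dashpot link standing in for the paper's "n-m" damping relation, which is not reproduced here. Masses, stiffness, damping, and velocities are assumed values.

```python
# Sketch of the comparison step: simulate one impact with a spring-dashpot
# link, integrate the dashpot dissipation, and compare it with the kinetic
# energy loss. A linear dashpot stands in for the paper's "n-m" relation.
m1, m2 = 1000.0, 1500.0      # kg, colliding bodies (assumed)
k, c = 2.0e6, 4.0e3          # N/m, N*s/m: link stiffness and damping (assumed)
v1, v2 = 2.0, -1.0           # m/s, approach velocities
x1, x2 = 0.0, 0.0            # gap initially closed
dt, E_diss = 1.0e-6, 0.0
KE0 = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2

while True:
    pen = x1 - x2                            # penetration of body 1 into body 2
    if pen < 0.0 and v1 <= v2:               # contact over, bodies separating
        break
    f = max(k * pen + c * (v1 - v2), 0.0)    # link force, compression only
    if f > 0.0:
        E_diss += c * (v1 - v2) ** 2 * dt    # dashpot (hysteresis) dissipation
    v1 += -f / m1 * dt
    v2 += f / m2 * dt
    x1 += v1 * dt
    x2 += v2 * dt

KE1 = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
print(f"dissipated {E_diss:.1f} J vs kinetic energy loss {KE0 - KE1:.1f} J")
```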

Keywords: impact force, dissipated energy, kinetic energy loss, damping relation

Procedia PDF Downloads 541
923 Computer-Aided Approach for Strut and Tie Model for Non-Flexural Elements

Authors: Mihaja Razafimbelo, Guillaume Herve-Secourgeon, Fabrice Gatuingt, Marina Bottoni, Tulio Honorio-De-Faria

Abstract:

The challenge of this research is to provide engineers with a robust, semi-automatic method for calculating optimal reinforcement for massive structural elements. In the absence of such a digital post-processing tool, design office engineers make intensive use of plate modelling, for which automatic post-processing is available. Plate models in massive areas, on the other hand, produce conservative results. In addition, the theoretical foundations of automatic post-processing tools for reinforcement are those of reinforced concrete beam sections. As long as there is no suitable alternative for automatic post-processing of plates, optimal modelling and a significant improvement in the constructability of massive areas cannot be expected. The strut-and-tie method is commonly used in civil engineering, but its result remains very dependent on the judgment of the calculation engineer. The tool developed here will support engineers in their choice of structure. The method implemented consists of defining a ground structure built on the basis of the principal stresses resulting from an elastic analysis of the structure, and then optimizing this structure according to the fully stressed design method. The first results yield a coherent initial network of connecting struts and ties, consistent with the cases encountered in the literature. The evolution of the tool will then make it possible to adapt the obtained latticework to the cracking states resulting from the loads applied during the life of the structure, including cyclic and dynamic loads. In addition, under the constructability constraint, a final reinforcement layout with an orthogonal arrangement and regulated spacing will be implemented in the tool.

Keywords: strut and tie, optimization, reinforcement, massive structure

Procedia PDF Downloads 136
922 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played a crucial role in causing the most recent financial crisis, in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and it may thus cause a loss of property in the end. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economic entity, national strength and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind the appearance of prosperity, and among these risks, the credit system is the most significant. Due to the long term and large balance of a mortgage, it is critical to monitor the risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a predictive model of the probability of delinquency. Through univariate analysis, the data are cleaned up, and through bivariate analysis, the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the analysis data from 2005 are split into two parts: 60% for model development and 40% for in-time model validation. The KS for model development is 31, and the KS for in-time validation is 31, indicating the model is stable. In addition, the model is further checked by out-of-time validation, which uses 40% of the 2006 data and gives a KS of 33. This indicates the model is still stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, the consumer price index, the unemployment rate, the inflation rate, etc. Data from 2005 to 2010 are used for model development and validation. Compared with the base model (without macroeconomic variables), KS is increased from 41 to 44, indicating that the macroeconomic variables can be used to improve the separation power of the model and make the prediction more accurate.
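
The KS statistic used throughout is the maximum gap between the cumulative score distributions of good and delinquent accounts, usually quoted on a 0-100 scale. The sketch below computes it on synthetic scores; the score distributions are assumptions, not the 300,000-account data set.

```python
# Sketch of the KS separation statistic used to validate the scorecard:
# KS = max |CDF_good(s) - CDF_bad(s)| over all score thresholds s.
# Scores below are synthetic, not the study's mortgage data.
import numpy as np

def ks_statistic(scores_good, scores_bad):
    thresholds = np.sort(np.concatenate([scores_good, scores_bad]))
    cdf_g = np.searchsorted(np.sort(scores_good), thresholds, side="right") / len(scores_good)
    cdf_b = np.searchsorted(np.sort(scores_bad), thresholds, side="right") / len(scores_bad)
    return 100.0 * np.max(np.abs(cdf_g - cdf_b))

rng = np.random.default_rng(7)
good = rng.normal(650, 60, 5000)           # non-delinquent account scores (assumed)
bad = rng.normal(600, 60, 500)             # delinquent account scores (assumed)
print(f"KS = {ks_statistic(good, bad):.0f}")
```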

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 215
921 The Effect Analysis of Monetary Instruments through Islamic Banking Financing Channel toward Economic Growth in Indonesia, Period January 2008-December 2015

Authors: Sobar M. Johari, Ida Putri Anjarsari

Abstract:

In the transmission of monetary instruments to the real sector of the economy, Bank Indonesia, as the monetary authority, has developed the Islamic Bank Indonesia Certificate (abbreviated as SBIS) as an instrument in Islamic open market operations. One of the monetary transmission channels takes place through the financing channel, from which the funds are used as a source of banking financing. This study aims to analyse the impact of Islamic monetary instruments on output, or economic growth. The data used in this research are taken from Bank Indonesia and the Central Board of Statistics for the period January 2008 to December 2015. The study employs the Granger causality test, a Vector Error Correction Model (VECM), Impulse Response Function (IRF) analysis, and Forecast Error Variance Decomposition (FEVD) as its analytical methods. The results show that, first, the transmission mechanism of the banking financing channel is not linked to output. Second, the VECM estimation results show that SBIS, PUAS, and FIN have a significant long-term impact on output; when there is a monetary shock, output or economic growth can be recovered and stabilized in the short term. The FEVD results show that Islamic banking financing contributes 1.33 percent to economic growth.
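
Purely as an illustration of the estimation pipeline named above, the sketch below fits a VECM with statsmodels after a Johansen rank selection. The synthetic DataFrame stands in for the Bank Indonesia / Central Board of Statistics series, which are not reproduced here.

```python
# Hedged sketch of the VECM step using statsmodels on synthetic monthly
# stand-ins for SBIS, PUAS, financing (FIN), and output. Lag order and
# deterministic terms are illustrative choices, not the study's.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(0)
n = 96                                      # Jan 2008 - Dec 2015, monthly
trend = np.cumsum(rng.normal(0.2, 1.0, n))  # shared stochastic trend
data = pd.DataFrame({
    "SBIS": trend + rng.normal(0, 1, n),
    "PUAS": trend + rng.normal(0, 1, n),
    "FIN": trend + rng.normal(0, 1, n),
    "OUTPUT": trend + rng.normal(0, 1, n),
})

rank = select_coint_rank(data, det_order=0, k_ar_diff=2)   # Johansen trace test
model = VECM(data, k_ar_diff=2, coint_rank=rank.rank, deterministic="co")
res = model.fit()
print(res.summary())
```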

Keywords: Islamic monetary instrument, Islamic banking financing channel, economic growth, Vector Error Correction Model (VECM)

Procedia PDF Downloads 263
920 Using the Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurately

Authors: Susan Diamond

Abstract:

Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training large neural network models is very resource intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring the hardware to setting it up with the right firmware and software installed and configured. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which is therefore slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable, and fault-tolerant manner. It supports a wide range of deep learning frameworks, such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skill set required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.

Keywords: deep learning, machine learning, cognitive computing, model training

Procedia PDF Downloads 199
919 Elucidating Microstructural Evolution Mechanisms in Tungsten via Layerwise Rolling in Additive Manufacturing: An Integrated Simulation and Experimental Approach

Authors: Sadman Durlov, Aditya Ganesh-Ram, Hamidreza Hekmatjou, Md Najmus Salehin, Nora Shayesteh Ameri

Abstract:

In the field of additive manufacturing, tungsten stands out for its exceptional resistance to high temperatures, making it an ideal candidate for use in extreme conditions. However, its inherent brittleness and vulnerability to thermal cracking pose significant challenges to its manufacturability. This study explores the microstructural evolution of tungsten processed through layer-wise rolling in laser powder bed fusion additive manufacturing, utilizing a comprehensive approach that combines advanced simulation techniques with empirical research. We aim to uncover the complex processes of plastic deformation and microstructural transformations, with a particular focus on the dynamics of grain size, boundary evolution, and phase distribution. Our methodology employs a combination of simulation and experimental data, allowing for a detailed comparison that elucidates the key mechanisms influencing microstructural alterations during the rolling process. This approach facilitates a deeper understanding of the material's behavior under additive manufacturing conditions, specifically in terms of deformation and recrystallization. The insights derived from this research not only deepen our theoretical knowledge but also provide actionable strategies for refining manufacturing parameters to improve the tungsten components' mechanical properties and functional performance. By integrating simulation with practical experimentation, this study significantly enhances the field of materials science, offering a robust framework for the development of durable materials suited for challenging operational environments. Our findings pave the way for optimizing additive manufacturing techniques and expanding the use of tungsten across various demanding sectors.

Keywords: additive manufacturing, layer wise rolling, refractory materials, in-situ microstructure modifications

Procedia PDF Downloads 47
918 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering

Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause

Abstract:

In recent years, object detection has gained much attention and is a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. Unfortunately, to the best of our knowledge, object detection under varying illumination with shadow consideration has not yet been well solved, and this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Under varying image-capturing conditions, algorithms need to consider a variety of possible environmental factors, as colour information, lighting, and shadows vary from image to image. Existing methods mostly fail to produce appropriate results due to variations in colour information, lighting effects, threshold specifications, histogram dependencies, and colour ranges. To overcome these limitations, we propose an object detection algorithm with pre-processing methods that reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges and without predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show promising results for the proposed approach in comparison with existing methods.
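
A generic sketch of the described pipeline follows: YCrCb conversion, illumination equalization on the luma channel only, parameter-free thresholding, then morphological clean-up and contours. The input file name, chroma-distance heuristic, and area threshold are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of the described pipeline: YCrCb conversion, illumination
# equalization on Y, Otsu thresholding (no fixed ranges), then erosion/
# dilation and contours. Details are illustrative, not the paper's method.
import cv2
import numpy as np

img = cv2.imread("woodstack.jpg")                   # hypothetical test image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])   # illumination equalization on Y

# Distance of each pixel's chroma from the image's median chroma; shadows
# mostly change Y, so thresholding chroma reduces shadow interference.
cr, cb = ycrcb[:, :, 1].astype(np.float32), ycrcb[:, :, 2].astype(np.float32)
dist = np.sqrt((cr - np.median(cr)) ** 2 + (cb - np.median(cb)) ** 2)
dist8 = cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(dist8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(cv2.erode(mask, kernel), kernel)  # morphological opening
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
objects = [c for c in contours if cv2.contourArea(c) > 500.0]
print("detected object regions:", len(objects))
```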

Keywords: image processing, illumination equalization, shadow filtering, object detection

Procedia PDF Downloads 206
917 Performance Evaluation of a Small Microturbine Cogeneration Functional Model

Authors: Jeni A. Popescu, Sorin G. Tomescu, Valeriu A. Vilag

Abstract:

The paper focuses on potential methods of increasing the performance of a microturbine by combining additional elements available for utilization in a cogeneration plant. The activity is carried out within the framework of a project aiming to develop, manufacture, and test a microturbine functional model with high potential for utilization in the energy industry. The main goal of the analysis is to determine the parameters of the fluid flow passing through each section of the turbine, based on limited data available in the literature for the targeted output power range or provided by experimental studies, starting from a reference cycle and considering different cycle options, including simple, intercooled, and recuperated options, in order to optimize the operation of a small cogeneration plant. The studied configurations operate under the same initial thermodynamic conditions and are based on a series of assumptions in terms of the individual performance of the components, pressure and velocity losses, compression ratios, and efficiencies. The thermodynamic analysis evaluates the expected performance of the microturbine cycle while providing a series of input data and limitations to be included in the development of the experimental plan. To simplify the calculations and to allow a clear estimation of the effect of heat transfer between fluids, the working fluid for all thermodynamic evolutions is, initially, air, with combustion modelled by simple heat addition to the system. The theoretical results, along with preliminary experimental results, are presented, aiming for a correlation in terms of microturbine performance.
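
The air-standard cycle comparison described above can be sketched as a simple versus recuperated Brayton cycle with heat addition in place of combustion. Component efficiencies, temperatures, and the pressure ratio below are assumed illustrative values, not the project's data.

```python
# Sketch of an air-standard cycle comparison: simple vs. recuperated Brayton
# cycle thermal efficiency, with combustion modelled as heat addition.
# All component efficiencies and temperatures are assumed values.
def brayton_efficiency(pr, t1=288.0, t3=1100.0, eta_c=0.80, eta_t=0.85,
                       recuperator_eff=0.0, gamma=1.4, cp=1005.0):
    """Thermal efficiency of a single-shaft cycle with optional recuperation."""
    tau = pr ** ((gamma - 1.0) / gamma)
    t2 = t1 * (1.0 + (tau - 1.0) / eta_c)           # compressor exit temperature
    t4 = t3 * (1.0 - eta_t * (1.0 - 1.0 / tau))     # turbine exit temperature
    w_net = cp * ((t3 - t4) - (t2 - t1))            # specific net work, J/kg
    t2r = t2 + recuperator_eff * (t4 - t2)          # preheated combustor inlet
    q_in = cp * (t3 - t2r)                          # heat addition per kg
    return w_net / q_in

for eps in (0.0, 0.8):
    print(f"recuperator eff {eps:.1f}: eta = {brayton_efficiency(4.0, recuperator_eff=eps):.3f}")
```

At the low pressure ratios typical of microturbines, the recuperated case shows the large efficiency gain that motivates considering these cycle options.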

Keywords: cogeneration, microturbine, performance, thermodynamic analysis

Procedia PDF Downloads 158
916 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitoring land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods are unable to indicate the continuous change occurring over a long period of time. To achieve the objective, the approach used here estimates a statistical model from a series of multispectral image data over a long period of time, assuming there is no considerable change during that time period, and then compares it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect a changed pixel from the probabilistic model estimated for the corresponding pixel. Changed pixels are detected assuming that the images have been co-registered prior to estimation; to minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are several challenges in this method. The first and foremost challenge is to obtain a sufficiently large number of datasets for multivariate distribution modelling, as a large number of images are always discarded due to cloud coverage. Due to imperfect modelling, there will be a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
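
The pixel-wise test can be sketched as follows: estimate a per-pixel multivariate normal model from a no-change stack, then flag change in a later image with a GLRT-style statistic (here the Mahalanobis distance against a chi-square threshold). Band counts, stack sizes, and the threshold are illustrative, and the 8-neighborhood refinement is omitted.

```python
# Pixel-wise sketch: per-pixel Gaussian model from a no-change stack, then a
# GLRT-style (Mahalanobis) change statistic on a new image. All sizes and
# thresholds are illustrative; synthetic data stands in for Sentinel-2/Landsat-8.
import numpy as np
from scipy.stats import chi2

bands, times, h, w = 4, 30, 64, 64
rng = np.random.default_rng(3)
stack = rng.normal(0.3, 0.02, (times, h, w, bands))   # stand-in for 2015-2018 data
new = rng.normal(0.3, 0.02, (h, w, bands))
new[20:30, 20:30] += 0.15                             # synthetic built-up change

mean = stack.mean(axis=0)                             # per-pixel mean vector
diff = stack - mean
cov = np.einsum("thwi,thwj->hwij", diff, diff) / (times - 1)
cov += 1e-6 * np.eye(bands)                           # regularize for inversion

d = new - mean
stat = np.einsum("hwi,hwij,hwj->hw", d, np.linalg.inv(cov), d)
changed = stat > chi2.ppf(0.999, df=bands)            # low false-alarm threshold
print("changed pixels:", int(changed.sum()))
```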

Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection

Procedia PDF Downloads 124
915 Analysis and Design of Inductive Power Transfer Systems for Automotive Battery Charging Applications

Authors: Wahab Ali Shah, Junjia He

Abstract:

Transferring electrical power without any wiring has been a dream since the late 19th century. There were some early advances in this area in connection with microwave systems. However, the subject has recently become very attractive due to practical systems. There are low-power applications, such as charging the batteries of contactless toothbrushes or implanted devices, and higher-power applications, such as charging the batteries of electric automobiles or buses. In the first group of applications, operating frequencies are in the microwave range, while the frequency is lower in high-power applications. In the latter, the concept is also called inductive power transfer. The aim of the paper is to give an overview of inductive power transfer for electric vehicles, with a special concentration on coil design and power converter simulation for static charging. Coil design is one of the most critical tasks and is very important for efficient and safe power transfer. Power converters are used on both sides of the system: the converter on the primary side generates a high-frequency voltage to excite the primary coil, while the purpose of the converter on the secondary side is to rectify the voltage transferred from the primary to charge the battery. In this paper, an inductive power transfer system is studied. Inductive power transfer is a promising technology with several possible applications. The operating principles of these systems are explained, and the components of the system are described. Finally, a single-phase 2 kW system was simulated, and the results are presented. The work presented in this paper is just an introduction to the concept. A reformed compensation network based on the traditional inductor-capacitor-inductor (LCL) topology is proposed to realize a robust reaction to the large coupling variations that are common in dynamic wireless charging applications. In the future, this type of compensation should be studied, and different compensation topologies should be compared at the same power level.
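
The basic sizing step behind any such compensation network is choosing the capacitance that resonates the coil at the operating frequency, f0 = 1 / (2π√(LC)). The coil inductance and frequency below are typical illustrative values, not the paper's 2 kW design; the paper's reformed LCL network adds a further inductor, which is not modelled here.

```python
# Sketch of series-resonance sizing for an IPT coil: choose C so that
# f0 = 1 / (2*pi*sqrt(L*C)) equals the switching frequency. Values are
# illustrative, not the paper's 2 kW design data.
import math

def resonant_capacitor(L_h, f0_hz):
    """Series compensation capacitance (F) for coil inductance L at f0."""
    return 1.0 / (L_h * (2.0 * math.pi * f0_hz) ** 2)

L_primary = 120e-6                    # H, typical IPT pad inductance (assumed)
f0 = 85e3                             # Hz, common EV wireless-charging band
C = resonant_capacitor(L_primary, f0)
print(f"C = {C * 1e9:.1f} nF for f0 = {f0 / 1e3:.0f} kHz")

# Check: the resulting resonant frequency should recover f0.
f_check = 1.0 / (2.0 * math.pi * math.sqrt(L_primary * C))
print(f"f_check = {f_check / 1e3:.1f} kHz")
```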

Keywords: coil design, contactless charging, electrical automobiles, inductive power transfer, operating frequency

Procedia PDF Downloads 243