Search results for: location-allocation models
4307 A Learning-Based EM Mixture Regression Algorithm
Authors: Yi-Cheng Tian, Miin-Shen Yang
Abstract:
The mixture likelihood approach to clustering is a popular clustering method, where the expectation-maximization (EM) algorithm is the most widely used mixture likelihood method. In the literature, the EM algorithm has been used for mixture regression models. However, these EM mixture regression algorithms are sensitive to initial values and require the number of clusters to be specified a priori. In this paper, to resolve these drawbacks, we construct a learning-based scheme for the EM mixture regression algorithm such that it is free of initializations and can automatically obtain an approximately optimal number of clusters. Some numerical examples and comparisons demonstrate the superiority and usefulness of the proposed learning-based EM mixture regression algorithm.
Keywords: clustering, EM algorithm, Gaussian mixture model, mixture regression model
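The EM iteration the paper builds on alternates posterior responsibilities (E-step) with weighted least-squares refits (M-step). Below is a minimal NumPy sketch of plain EM for a mixture of linear regressions; the random initialization and fixed k are exactly the drawbacks the paper's learning-based scheme removes, and the ridge term and variance floor are numerical guards added here for illustration, not part of any published algorithm.

```python
import numpy as np

def em_mixture_regression(X, y, k=2, n_iter=200, seed=0):
    """Plain EM for a k-component mixture of linear regressions (sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])        # design matrix with intercept
    beta = rng.normal(size=(k, d + 1))          # per-component coefficients
    sigma2 = np.ones(k)                         # per-component noise variances
    pi = np.full(k, 1.0 / k)                    # mixing proportions
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        resid = y[:, None] - Xb @ beta.T                                  # (n, k)
        dens = pi * np.exp(-0.5 * resid**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component
        for j in range(k):
            w = r[:, j]
            A = Xb.T @ (w[:, None] * Xb) + 1e-8 * np.eye(d + 1)  # ridge guard
            beta[j] = np.linalg.solve(A, Xb.T @ (w * y))
            sigma2[j] = max((w * (y - Xb @ beta[j])**2).sum() / (w.sum() + 1e-12),
                            1e-8)
        pi = r.mean(axis=0)
    return beta, sigma2, pi
```

A learning-based variant would anneal the number of components and discard those whose mixing proportion shrinks, rather than fixing k and the starting values as above.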
Procedia PDF Downloads 512
4306 Targeting TACI Signaling Enhances Immune Function and Halts Chronic Lymphocytic Leukemia Progression
Authors: Yong H Sheng, Beatriz Garcillán, Eden Whitlock, Yukli Freedman, SiLing Yang, M Arifur Rahman, Nicholas Weber, Fabienne Mackay
Abstract:
Chronic lymphocytic leukemia (CLL) is closely associated with immune dysfunction, yet the mechanisms underlying this immune deficiency remain poorly understood. Transmembrane Activator and CAML Interactor (TACI), a receptor known for its role in IL-10 regulation and autoimmunity, has, to the best of our knowledge, not been investigated in the context of anti-tumor immunity or its impact on CLL progression. This study addresses the gap by exploring the role of TACI in regulating CLL cells within the tumor microenvironment and its broader effects on disease progression and immune competence. We utilized the Eµ-TCL1 mouse model to generate CLL mice deficient in TACI and examined the consequences of TACI loss in adoptive transfer models over a five-week period. Comprehensive transcriptomic analysis, including RNA sequencing and microarray, was employed to determine TACI’s influence on the CLL gene expression profile. Additionally, we studied TACI’s direct role in CLL cell migration and immune modulation using patient-derived CLL cells in culture and Patient-Derived Xenograft (PDX) models. Our findings demonstrate that TACI signaling plays a pivotal role in promoting CLL progression and immune suppression. Loss of TACI signaling significantly inhibited CLL development and enhanced immune functionality. When TACI+/+ or TACI-/- TCL1 CLL cells were transferred into wild-type recipient mice, those receiving TACI-deficient cells showed reduced disease progression and a lower incidence of CLL. Mice with TACI-/- CLL cells exhibited normalized serum levels of the pro-inflammatory cytokines IL-6 and IL-10, restored proportions of T-cell subsets, and improved immune compartment function compared to counterparts with TACI+/+ CLL cells. Mechanistically, TACI-deficient CLL cells expressed significantly lower levels of IL-10, TNF, and inhibitory receptors such as PD-L1 and PD-L2.
These cells also displayed restored circulating immunoglobulin levels and responses to T cell-dependent antigens, highlighting a recovery of immune competence. Further mechanistic studies revealed that TACI signaling drives CLL cell migration and homing to the spleen, where these cells actively establish an immunosuppressive microenvironment that supports immune evasion and tumor growth. Patient-derived CLL cells and PDX models confirmed TACI’s direct role in enhancing CLL cell migration and fostering immune suppression, emphasizing its critical function in the tumor microenvironment. By disrupting TACI signaling, we observed a reduction in CLL-associated immune suppression and tumor progression, offering a promising therapeutic avenue. This study establishes, for the first time, that targeting TACI disrupts key mechanisms underlying CLL progression while preserving vital immune functions. Unlike existing treatments that often impair immunity and lead to infection-related complications, TACI inhibition offers the dual benefit of controlling disease and maintaining immune homeostasis. These findings provide a strong rationale for developing therapeutic strategies that inhibit TACI as a means to improve outcomes in CLL patients. Beyond its implications for CLL, this research underscores the broader importance of TACI in regulating immune-tumor interactions, paving the way for future studies into its role in other malignancies.
Keywords: chronic lymphocytic leukemia, TACI, IL-10, immune suppression
Procedia PDF Downloads 17
4305 New Standardized Framework for Developing Mobile Applications (Based On Real Case Studies and CMMI)
Authors: Ammar Khader Almasri
Abstract:
The software processes play a vital role in delivering a high-quality software system that meets the user’s needs. There are many software development models used by system developers, which can be grouped into two categories (traditional and new methodologies). Mobile applications, like other desktop applications, need an appropriate and well-working software development process. Nevertheless, mobile applications have different features that constrain their performance and efficiency, such as application size and mobile hardware features. This research aims to help developers use a standardized model for developing mobile applications.
Keywords: software development process, agile methods, mobile application development, traditional methods
Procedia PDF Downloads 389
4304 Travel Behaviour and Perceptions in Trips with a Ferry Connection
Authors: Trude Tørset, María Díez Gutiérrez
Abstract:
The west coast of Norway features numerous islands and fjords. Ferry services connect the roads where these features make road construction challenging. Currently, scientific effort is devoted to assessing potential ferry replacement projects along the European road E-39. The inconvenience of ferry dependency is imprecisely represented in transport models; thus, transport analyses of ferry replacement projects appear as guesstimates rather than reliable input to the decision-making processes for such costly projects. Trips that include ferry connections involve more inconvenient elements than just travel time and cost. The goal of this paper is to understand and explain the extra inconveniences associated with dependency on the ferry. The first approach is to identify the characteristics of ferry travelers and the features of their trips, as well as whether the ferry represents an obstacle for some specific trip types. To this end, a survey was conducted in 2011 on eight E-39 ferries and in 2013 on 18 ferries connecting different road categories. More than 20,000 passengers answered with their trip and socioeconomic characteristics. The travel patterns in the different ferry connections were compared. The analysis showed that the trip features differed based on the location of the ferry connections, yet independently of the road category. Additionally, the patterns were compared to the national travel survey to detect differences in travel patterns due to the use of ferry connections. The results showed that the share of commuting trips within the same travel time was lower if a ferry was part of the trip. The second approach is to determine how different travelers perceive the potential benefits of a ferry replacement project. In the 2011 survey, some of the questions were about the relevance of nine different benefits such a project might bring.
Travelers identified better access to public services and the job market as the most valuable benefits, followed by reduced trip planning. In 2016, a follow-up survey in some of the ferry connections was carried out in order to investigate variations in travelers’ perceptions. The growing interest in ferry replacement projects might make travelers more aware of the potential benefits these would bring to their daily lives. This paper describes the travel behaviour of travelers using a ferry connection as part of their trips, as well as the potential inconveniences associated with these trips. The findings might provide valuable input to the further development of transport models, concept evaluations and cost-benefit analysis methods.
Keywords: ferry connections, ferry trip, inconvenience costs, travel behaviour
Procedia PDF Downloads 229
4303 Modern Work Modules in Construction Practice
Authors: Robin Becker, Nane Roetmann, Manfred Helmus
Abstract:
Construction companies lack junior staff for construction management. According to a nationwide survey of students, however, the profession lacks attractiveness. The conflict between the traditional job profile and the current desires of junior staff for contemporary and flexible working models must be resolved. Increasing flexibility is essential for the future viability of small and medium-sized enterprises. The implementation of modern work modules can help here. The following report will present the validation results of the developed work modules in construction practice.
Keywords: modern construction management, construction industry, work modules, shortage of junior staff, sustainable personnel management, making construction management more attractive, working time model
Procedia PDF Downloads 89
4302 Modelling Agricultural Commodity Price Volatility with Markov-Switching Regression, Single Regime GARCH and Markov-Switching GARCH Models: Empirical Evidence from South Africa
Authors: Yegnanew A. Shiferaw
Abstract:
Background: commodity price volatility, originating from excessive commodity price fluctuation, has been a global problem, especially after the recent financial crises. Volatility is a measure of risk or uncertainty in financial analysis. It plays a vital role in risk management, portfolio management, and equity pricing. Objectives: the core objective of this paper is to examine the relationship between the prices of agricultural commodities and the oil price, gas price, coal price and exchange rate (USD/Rand). In addition, the paper tries to fit an appropriate model that best describes the log-return price volatility and to estimate Value-at-Risk and expected shortfall. Data and methods: the data used in this study are the daily returns of agricultural commodity prices from 2 January 2007 to 31 October 2016, namely: white maize, yellow maize, wheat, sunflower, soya, corn, and sorghum. The paper applies the three-state Markov-switching (MS) regression, the standard single-regime GARCH and the two-regime Markov-switching GARCH (MS-GARCH) models. Results: to choose the best-fit model, the log-likelihood function, Akaike information criterion (AIC), Bayesian information criterion (BIC) and deviance information criterion (DIC) are employed under three distributions for the innovations. The results indicate that: (i) the price of agricultural commodities was found to be significantly associated with the price of coal, the price of natural gas, the price of oil and the exchange rate; (ii) for all agricultural commodities except sunflower, k=3 had higher log-likelihood values and lower AIC and BIC values.
Thus, the three-state MS regression model outperformed the two-state MS regression model; (iii) MS-GARCH(1,1) with generalized error distribution (ged) innovations performs best for white maize and yellow maize; MS-GARCH(1,1) with Student-t distribution (std) innovations performs better for sorghum; MS-gjrGARCH(1,1) with ged innovations performs better for wheat, sunflower and soya; and MS-GARCH(1,1) with std innovations performs better for corn. In conclusion, this paper provides a practical guide for modelling agricultural commodity prices with MS regression and MS-GARCH processes, and can serve as a reference when modelling agricultural commodity prices.
Keywords: commodity prices, MS-GARCH model, MS regression model, South Africa, volatility
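As a baseline for the models compared above, a single-regime GARCH(1,1) can be fitted by maximizing the Gaussian likelihood over the variance recursion sigma2[t] = w + a*r[t-1]^2 + b*sigma2[t-1]. The sketch below is illustrative only (simulated returns, Nelder-Mead with a crude positivity/stationarity penalty); it is not the MS-GARCH estimation used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of GARCH(1,1):
    sigma2[t] = w + a*r[t-1]**2 + b*sigma2[t-1]."""
    w, a, b = params
    if w <= 0 or a < 0 or b < 0 or a + b >= 1:
        return np.inf                       # crude positivity/stationarity penalty
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                     # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = w + a * r[t - 1]**2 + b * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r**2 / sigma2)

def fit_garch11(r):
    """Maximum-likelihood fit via Nelder-Mead from a mildly persistent start."""
    res = minimize(garch11_neg_loglik, x0=[0.1 * r.var(), 0.05, 0.9],
                   args=(r,), method="Nelder-Mead")
    return res.x

# demo on simulated returns with known (w, a, b) = (0.1, 0.1, 0.8)
rng = np.random.default_rng(0)
n, w_true, a_true, b_true = 1500, 0.1, 0.1, 0.8
r = np.empty(n)
s2 = w_true / (1 - a_true - b_true)         # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = w_true + a_true * r[t]**2 + b_true * s2
w_hat, a_hat, b_hat = fit_garch11(r)
```

A Markov-switching extension would run one such recursion per regime and weight the likelihood by the filtered regime probabilities, which is what dedicated MS-GARCH packages implement.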
Procedia PDF Downloads 205
4301 Performance of Total Vector Error of an Estimated Phasor within Local Area Networks
Authors: Ahmed Abdolkhalig, Rastko Zivanovic
Abstract:
This paper evaluates the Total Vector Error of an estimated phasor, as defined in the IEEE C37.118 standard, under different medium access methods in Local Area Networks (LANs). Three different LAN models (CSMA/CD, CSMA/AMP, and Switched Ethernet) are evaluated. The Total Vector Error of the estimated phasor has been evaluated for the effect of the number of nodes under the standardized network bandwidth values defined in the IEC 61850-9-2 communication standard (i.e. 0.1, 1, and 10 Gbps).
Keywords: phasor, local area network, total vector error, IEEE C37.118, IEC 61850
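The TVE criterion itself is a one-line computation: the magnitude of the complex phasor error divided by the magnitude of the reference phasor, as IEEE C37.118 defines it. A small sketch follows; the 1% example values are illustrative and correspond to the standard's usual compliance limit.

```python
import numpy as np

def total_vector_error(x_est, x_ref):
    """TVE per IEEE C37.118:
    sqrt(((Xr_est - Xr)**2 + (Xi_est - Xi)**2) / (Xr**2 + Xi**2)),
    i.e. the complex phasor error magnitude relative to the reference magnitude."""
    return abs(x_est - x_ref) / abs(x_ref)

# a 1% magnitude error alone, or roughly a 0.573 degree phase error alone,
# each produce a TVE of about 1%
ref = 1.0 + 0.0j
print(total_vector_error(1.01 + 0.0j, ref))
print(total_vector_error(np.exp(1j * np.deg2rad(0.573)), ref))
```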
Procedia PDF Downloads 314
4300 Economic Evaluation of Degradation by Corrosion of an On-Grid Battery Energy Storage System: A Case Study in Algeria Territory
Authors: Fouzia Brihmat
Abstract:
Economic planning models, which are used to build microgrids and distributed energy resources (DER), are the current norm for expressing such confidence. These models often decide both short-term DER dispatch and long-term DER investments. This research investigates the most cost-effective hybrid (photovoltaic-diesel) renewable energy system (HRES), based on Total Net Present Cost (TNPC), in an Algerian Saharan area that has a high potential for solar irradiation and a production capacity of 1 GWh. Lead-acid batteries have been around much longer and are easier to understand, but have limited storage capacity. Lithium-ion batteries last longer and are lighter, but are generally more expensive. By combining the advantages of each chemistry, we produce cost-effective high-capacity battery banks that operate solely on AC coupling. The financial part of this research describes the corrosion process that occurs at the interface between the active material and the grid material of the positive plate of a lead-acid battery. The cost study for the HRES is completed with the assistance of the HOMER Pro MATLAB Link. Additionally, over the course of the project's 20 years, the system is simulated for each time step. The model takes into consideration the decline in solar efficiency, changes in battery storage levels over time, and rises in fuel prices above the rate of inflation. The trade-off is that the model is more precise, but the computation takes longer. We initially utilized the Optimizer to run the model without MultiYear in order to discover the best system architecture. The optimal system for the single-year scenario is the Danvest generator, with 760 kW, 200 kWh of lead-acid storage, and a somewhat lower COE of $0.309/kWh.
Different scenarios that account for fluctuations in the gasified biomass generator's production of electricity have been simulated, and various strategies to guarantee the balance between generation and consumption have been investigated. The technological optimization of the same system has been completed and is reviewed in a recent paper.
Keywords: battery, corrosion, diesel, economic planning optimization, hybrid energy system, lead-acid battery, multi-year planning, microgrid, price forecast, PV, total net present cost
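For context, HOMER's total net present cost is the total annualized cost divided by the capital recovery factor, CRF(i, N) = i(1+i)^N / ((1+i)^N - 1), and the COE is the annualized cost per kWh served. The helpers below sketch those relations only; the 8% discount rate used in the example is an illustrative assumption, not a value taken from the study.

```python
def capital_recovery_factor(i, n):
    """CRF(i, n) = i*(1+i)**n / ((1+i)**n - 1) for discount rate i over n years."""
    g = (1.0 + i) ** n
    return i * g / (g - 1.0)

def total_net_present_cost(annualized_cost, i, n):
    """HOMER-style TNPC: total annualized cost divided by the CRF."""
    return annualized_cost / capital_recovery_factor(i, n)

def cost_of_energy(annualized_cost, annual_energy_kwh):
    """Levelized COE: total annualized cost per kWh of energy served."""
    return annualized_cost / annual_energy_kwh

# example: an assumed 8% real discount rate over the project's 20 years
print(capital_recovery_factor(0.08, 20))
print(total_net_present_cost(100000.0, 0.08, 20))
```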
Procedia PDF Downloads 90
4299 Kinetic Modelling of Fermented Probiotic Beverage from Enzymatically Extracted Annona Muricata Fruit
Authors: Calister Wingang Makebe, Wilson Ambindei Agwanande, Emmanuel Jong Nso, P. Nisha
Abstract:
Traditional liquid-state fermentation of Annona muricata L. juice can result in fluctuating product quality and quantity due to difficulties in control and scale-up. This work describes a laboratory-scale batch fermentation process to produce a probiotic from enzymatically extracted Annona muricata L. juice, modeled using the Doehlert design with incubation time, temperature, and enzyme concentration as the independent extraction factors. It aimed at a better understanding of the traditional process as an initial step toward future optimization. Annona muricata L. juice was fermented with L. acidophilus (NCDC 291) (LA), L. casei (NCDC 17) (LC), and a blend of LA and LC (LCA) for 72 h at 37 °C. Experimental data were fitted to mathematical models (the Monod, Logistic and Luedeking-Piret models) using MATLAB software to describe biomass growth, sugar utilization, and organic acid production. The optimal fermentation time, based on cell viability, was 24 h for LC and 36 h for LA and LCA. The model was particularly effective in estimating biomass growth, reducing-sugar consumption, and lactic acid production. The values of the determination coefficient, R2, were 0.9946, 0.9913 and 0.9946, while the residual sums of squared errors, SSE, were 0.2876, 0.1738 and 0.1589 for LC, LA and LCA, respectively. The growth kinetic parameters included the maximum specific growth rate, µm, of 0.2876 h-1, 0.1738 h-1 and 0.1589 h-1, as well as the substrate saturation constant, Ks, of 9.0680 g/L, 9.9337 g/L and 9.0709 g/L, respectively, for LC, LA and LCA. For the stoichiometric parameters, the yield of biomass on utilized substrate (YXS) was 50.7932, 3.3940 and 61.0202, and the yield of product on utilized substrate (YPS) was 2.4524, 0.2307 and 0.7415 for LC, LA, and LCA, respectively. In addition, the maintenance energy parameter (ms) was 0.0128, 0.0001 and 0.0004 for LC, LA and LCA, respectively.
With the kinetic model proposed by Luedeking and Piret for the lactic acid production rate, the growth-associated and non-growth-associated coefficients were determined as 1.0028 and 0.0109, respectively. The model was demonstrated for batch growth of LA, LC, and LCA in Annona muricata L. juice. The present investigation validates the potential of an Annona muricata L.-based medium for the economical production of a probiotic beverage.
Keywords: L. acidophilus, L. casei, fermentation, modelling, kinetics
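The fitted structure can be reproduced qualitatively by integrating logistic biomass growth together with the Luedeking-Piret production law and a substrate balance with maintenance. The sketch below reuses the LA estimates reported above where available (µm = 0.1738 h-1, YXS = 3.394, ms = 0.0001, α = 1.0028, β = 0.0109); the carrying capacity, initial biomass, and initial sugar level are illustrative assumptions, and simple Euler stepping stands in for whatever solver the authors used in MATLAB.

```python
import numpy as np

def simulate_batch(mu_m=0.1738, xm=5.0, alpha=1.0028, beta=0.0109,
                   y_xs=3.394, m_s=0.0001, x0=0.1, s0=20.0,
                   t_end=36.0, dt=0.01):
    """Euler integration of logistic growth, Luedeking-Piret production,
    and a substrate balance with maintenance. Units: h and g/L (illustrative)."""
    x, p, s = x0, 0.0, s0
    for _ in range(int(t_end / dt)):
        dx = mu_m * x * (1.0 - x / xm)       # logistic biomass growth
        dp = alpha * dx + beta * x           # growth- + non-growth-associated acid
        ds = -(dx / y_xs + m_s * x)          # growth demand plus maintenance
        x += dx * dt
        p += dp * dt
        s = max(s + ds * dt, 0.0)            # substrate cannot go negative
    return x, p, s

# biomass, lactic acid, and residual sugar after the 36 h LA fermentation
x36, p36, s36 = simulate_batch()
print(x36, p36, s36)
```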
Procedia PDF Downloads 84
4298 Operator Splitting Scheme for the Inverse Nagumo Equation
Authors: Sharon-Yasotha Veerayah-Mcgregor, Valipuram Manoranjan
Abstract:
A backward or inverse problem is known to be ill-posed due to an instability that easily emerges with any slight change in the conditions of the problem. Therefore, only a limited number of numerical approaches are available for solving a backward problem. This paper considers the Nagumo equation, an equation that describes impulse propagation in nerve axons and that also models population growth with the Allee effect. A creative operator-splitting numerical scheme is constructed to solve the inverse Nagumo equation. Computational simulations are used to verify that this scheme is stable, accurate, and efficient.
Keywords: inverse/backward equation, operator-splitting, Nagumo equation, ill-posed, finite-difference
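The paper's inverse scheme is not reproduced here, but the operator-splitting idea can be illustrated on the forward Nagumo equation u_t = D u_xx + u(1-u)(u-a): advance the diffusion term and the cubic reaction term in separate sub-steps (Lie splitting). All parameter values below are illustrative assumptions, and the explicit diffusion sub-step requires dt*D/dx**2 <= 1/2 for stability.

```python
import numpy as np

def nagumo_split_step(u, dt, dx, D=1.0, a=0.25):
    """One Lie-splitting step for u_t = D*u_xx + u*(1-u)*(u-a):
    explicit diffusion sub-step, then forward-Euler reaction sub-step."""
    lap = np.empty_like(u)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    lap[0] = u[1] - u[0]                      # zero-flux boundaries
    lap[-1] = u[-2] - u[-1]
    v = u + dt * D * lap / dx**2              # diffusion sub-step
    return v + dt * v * (1.0 - v) * (v - a)   # reaction sub-step

# demo: a step profile relaxes toward a travelling front
dx, dt = 0.1, 0.004                           # dt*D/dx**2 = 0.4 < 0.5 (stable)
u = np.where(np.linspace(0.0, 10.0, 101) < 5.0, 1.0, 0.0)
for _ in range(200):
    u = nagumo_split_step(u, dt, dx)
```

Running this step backward in time is exactly where the ill-posedness appears, which is why the paper needs a specially constructed scheme rather than a naive reversal.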
Procedia PDF Downloads 101
4297 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: cost prediction, machine learning, project management, random forest, neural networks
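The Random Forest part of the methodology can be sketched with scikit-learn on synthetic stand-in data, since the project dataset is not public. The feature names, coefficients, and noise level below are invented for illustration; only the workflow (train/test split, fit, held-out R², feature importances) mirrors the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# synthetic stand-in for activity-level data; names and effects are invented
rng = np.random.default_rng(0)
n = 500
scope_change = rng.uniform(0.0, 1.0, n)      # fraction of scope changed
material_delay = rng.uniform(0.0, 30.0, n)   # days of material delivery delay
planned_cost = rng.uniform(10.0, 500.0, n)   # planned activity cost (k$)
X = np.column_stack([scope_change, material_delay, planned_cost])
y = (5.0 * scope_change + 0.3 * material_delay
     + 0.01 * planned_cost + rng.normal(0.0, 0.5, n))   # cost overrun (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)                 # held-out R^2
importances = model.feature_importances_     # relative importance of each driver
print(r2, importances)
```

The importance vector is what lets the study point at scope changes and material delays as key cost drivers; on real data those rankings, not the synthetic ones here, are the deliverable.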
Procedia PDF Downloads 63
4296 A CFD Analysis of Flow through a High-Pressure Natural Gas Pipeline with an Undeformed and Deformed Orifice Plate
Authors: R. Kiš, M. Malcho, M. Janovcová
Abstract:
This work presents a numerical analysis, using CFD methods, of natural gas flowing through a high-pressure pipeline and an orifice plate. The paper contains CFD calculations for the flow of natural gas in a pipe with different orifice plate geometries. One plate has the standard geometry, with no deformation, and the other is deformed by the action of the pressure differential. The paper shows the behaviour of the natural gas in the pipeline using the velocity profiles and pressure fields of the gas in both models, along with their differences. The research is motivated by the need to eliminate inaccuracies that may appear in natural gas flow measurement in the high-pressure pipelines of the gas industry and that are currently not covered by the relevant standard.
Keywords: orifice plate, high-pressure pipeline, natural gas, CFD analysis
Procedia PDF Downloads 388
4295 A Review on Stormwater Harvesting and Reuse
Authors: Fatema Akram, Mohammad G. Rasul, M. Masud K. Khan, M. Sharif I. I. Amir
Abstract:
Australia is a country of some 7.7 million square kilometres with a population of about 22.6 million. At present, water security is a major challenge for Australia. In some areas the use of water resources is approaching, and in some parts exceeding, the limits of sustainability. A focal point of proposed national water conservation programs is the recycling of both urban storm-water and treated wastewater. However, this is not yet widely practiced in Australia, and storm-water in particular is neglected. In Australia, only 4% of storm-water and rainwater is recycled, whereas less than 1% of reclaimed wastewater is reused within urban areas. Therefore, accurately monitoring, assessing and predicting the availability, quality and use of this precious resource is required for better management. As storm-water is usually of better quality than untreated sewage or industrial discharge, it has better public acceptance for recycling and reuse, particularly for non-potable uses such as irrigation and watering lawns and gardens. Existing storm-water recycling practice lags far behind research, and no robust technologies have been developed for this purpose. Therefore, there is a clear need for modern technologies for assessing the feasibility of storm-water harvesting and reuse. Numerical modelling has, in recent times, become a popular tool for this job. It includes the complex hydrological and hydraulic processes of the study area. The hydrologic model computes the storm-water quantity to design the system components, and the hydraulic model helps to route the flow through the storm-water infrastructure. Nowadays, a water quality module is incorporated with these models. Integration of a Geographic Information System (GIS) with these models provides the extra advantage of managing spatial information.
For the overall management of a storm-water harvesting project, a Decision Support System (DSS) plays an important role, incorporating a database with the model and GIS for the proper management of temporal information. Additionally, a DSS includes evaluation tools and a graphical user interface. This research aims to critically review and discuss all aspects of storm-water harvesting and reuse, such as the available guidelines for storm-water harvesting and reuse, public acceptance of water reuse, and the scope for, and recommendations on, future studies. In addition, this paper identifies and addresses the importance of modern technologies capable of the proper management of storm-water harvesting and reuse.
Keywords: storm-water management, storm-water harvesting and reuse, numerical modelling, geographic information system, decision support system, database
Procedia PDF Downloads 374
4294 Shape-Changing Structure: A Prototype for the Study of a Dynamic and Modular Structure
Authors: Annarita Zarrillo
Abstract:
This research is part of adaptive architecture, reflecting the evolution that the world of architectural design is going through. Today's architecture is no longer seen as a static system but, conversely, as a dynamic system that changes in response to the environment and the needs of users. One of the major forms of adaptivity is represented by kinetic structures. This study aims to underline the importance of experimentation on physical scale models for the study of dynamic structures and to present the case study of a modular kinetic structure designed through the use of parametric design software and created as a prototype in the laboratories of the Royal Danish Academy in Copenhagen.
Keywords: adaptive architecture, architectural application, kinetic structures, modular prototype
Procedia PDF Downloads 140
4293 Research on the Application of Flexible and Programmable Systems in Electronic Systems
Authors: Yang Xiaodong
Abstract:
This article explores the application and structural characteristics of flexible and programmable systems in electronic systems, with a focus on analyzing their advantages and architectural differences in dealing with complex environments. By introducing mathematical models and simulation experiments, the performance of dynamic module combination in flexible systems and fixed path selection in programmable systems in resource utilization and performance optimization was demonstrated. This article also discusses the mutual transformation between the two in practical applications and proposes a solution to improve system flexibility and performance through dynamic reconfiguration technology. This study provides theoretical reference for the design and optimization of flexible and programmable systems.
Keywords: flexibility, programmable, electronic systems, system architecture
Procedia PDF Downloads 15
4292 Winkler Springs for Embedded Beams Subjected to S-Waves
Authors: Franco Primo Soffietti, Diego Fernando Turello, Federico Pinto
Abstract:
Shear waves that propagate through the ground impose deformations that must be taken into account in the design and assessment of buried longitudinal structures such as tunnels, pipelines, and piles. Conventional engineering approaches for seismic evaluation often rely on a Euler-Bernoulli beam model supported by a Winkler foundation. This approach, however, falls short of capturing the distortions induced when the structure is subjected to shear waves. To overcome these limitations, the present work proposes an analytical solution that considers a Timoshenko beam and includes transverse and rotational springs. The ground springs are derived as closed-form analytical solutions of the equations of elasticity that include the seismic wavelength. These proposed springs extend the applicability of previous plane-strain models. By considering variations in displacements along the longitudinal direction, the presented approach ensures that the springs do not approach zero at low frequencies. This characteristic makes them suitable for assessing pseudo-static cases, which typically govern structural forces in kinematic interaction analyses.
The results obtained, validated against the existing literature and a 3D Finite Element model, reveal several key insights: i) the cutoff frequency significantly influences the transverse and rotational springs; ii) neglecting displacement variations along the structure axis (i.e., assuming plane-strain deformation) results in unrealistically low transverse springs, particularly for wavelengths shorter than the structure length; iii) disregarding lateral displacement components in the rotational springs, together with neglecting variations along the structure axis, leads to inaccurately low spring values, misrepresenting the interaction phenomena; iv) the transverse springs exhibit a notable drop at the resonance frequency, followed by increasing damping as the frequency rises; v) the rotational springs show minor frequency-dependent variations, with radiation damping occurring beyond the resonance frequencies, starting from negative values. This comprehensive analysis sheds light on the complex behavior of embedded longitudinal structures subjected to shear waves and provides valuable insights for their seismic assessment.
Keywords: shear waves, Timoshenko beams, Winkler springs, soil-structure interaction
Procedia PDF Downloads 64
4291 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
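As a hedged sketch of the Random Forest side of this approach, the example below trains a regressor on synthetic project data and reads off feature importances, mirroring how drivers such as scope changes and material delays could be identified. The feature names and the data-generating process are illustrative assumptions, not the study's dataset.

```python
# Sketch: predicting cost overruns with a Random Forest and reading off
# feature importances. Features and synthetic data are hypothetical
# stand-ins, not the case study's records.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical project-level features.
scope_changes = rng.integers(0, 10, n)      # number of scope changes
material_delay = rng.uniform(0, 30, n)      # days of material delay
crew_size = rng.integers(5, 50, n)          # irrelevant "noise" feature
# Overrun (%) driven mostly by scope changes and delays, plus noise.
overrun = 2.0 * scope_changes + 0.5 * material_delay + rng.normal(0, 2, n)

X = np.column_stack([scope_changes, material_delay, crew_size])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, overrun)

names = ["scope_changes", "material_delay", "crew_size"]
importances = dict(zip(names, model.feature_importances_))
print(importances)
```

In this toy setup the two informative features dominate the importance ranking, which is the mechanism the abstract relies on for surfacing key cost drivers and risk factors.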
Procedia PDF Downloads 42
4290 Improving the Technology of Assembly by Use of Computer Calculations
Authors: Mariya V. Yanyukina, Michael A. Bolotov
Abstract:
Assembling accuracy is the degree of accordance between the actual values of the parameters obtained during assembly and the values specified in the assembly drawings and technical specifications. However, assembling accuracy depends not only on the quality of the production process but also on the correctness of the assembly process. Therefore, preliminary calculations of assembly stages are carried out to verify the correspondence of real geometric parameters to their acceptable values. In the aviation industry, most calculations involve interacting dimensional chains, which greatly complicates the task. Solving such problems requires a special approach. The purpose of this article is to address the problem of improving the assembly technology of aviation units by means of computer calculations. One actual example of an assembly unit containing an interacting dimensional chain is the turbine wheel of a gas turbine engine. The dimensional chain of the turbine wheel is formed by the geometric parameters of the disk and the set of blades. The interaction consists in the formation of two chains. The first chain is formed by the dimensions that determine the location of the grooves for the installation of the blades, together with the dimensions of the blade roots. The second dimensional chain is formed by the dimensions of the airfoil shroud platform. The interaction of the dimensional chains of the turbine wheel is the interdependence of the first and second chains by means of power circuits formed by a plurality of middle parts of the turbine blades. The calculation of the turbine wheel's dimensional chain is timely because of the need to improve the assembly technology of this unit.
The task at hand contains geometric and mathematical components; therefore, its solution can be implemented following this algorithm: 1) research and analysis of production errors in geometric parameters; 2) development of a parametric model in the CAD system; 3) creation of a set of CAD models of details, taking into account actual or generalized distributions of errors in geometrical parameters; 4) a calculation model in the CAE system, loading various combinations of part models; 5) accumulation of statistics and analysis. The main task is to pre-simulate the assembly process by calculating the interacting dimensional chains. The article describes the approach to the solution from the point of view of mathematical statistics, implemented in the Matlab software package. Within the framework of the study, measurement data on the components of the turbine wheel (blades and disks) are available, on the basis of which it is expected that the assembly process of the unit will be optimized by solving the dimensional chains.
Keywords: accuracy, assembly, interacting dimension chains, turbine
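The statistical pre-simulation of a dimensional chain described above can be sketched as a Monte Carlo tolerance stack-up. The dimensions, tolerances, and normality assumption below are purely illustrative, not the measured turbine wheel data, and the study itself used Matlab rather than Python.

```python
# Minimal Monte Carlo sketch of one link of a dimensional chain: the closing
# gap between a blade root and its disk groove, accumulated from toleranced
# dimensions drawn from assumed normal distributions. Values are illustrative.
import random

random.seed(1)

def sample_dim(nominal, tol, sigma_ratio=3.0):
    """Draw one dimension; tolerance band treated as +/- sigma_ratio sigma."""
    return random.gauss(nominal, tol / sigma_ratio)

n_trials = 20000
gaps = []
for _ in range(n_trials):
    groove_width = sample_dim(12.00, 0.05)   # disk groove, mm (hypothetical)
    root_width = sample_dim(11.90, 0.04)     # blade root, mm (hypothetical)
    gaps.append(groove_width - root_width)

mean_gap = sum(gaps) / n_trials
out_of_spec = sum(g < 0.02 for g in gaps) / n_trials  # interference-risk rate
print(round(mean_gap, 3), round(out_of_spec, 4))
```

Accumulating such statistics over many part combinations is what step 5 of the algorithm refers to; the real calculation would use the measured error distributions of blades and disks.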
Procedia PDF Downloads 375
4289 Eradicating Rural Poverty in Nigeria through Entrepreneurship Education
Authors: Nwachukwu Ihiejeto Celestine
Abstract:
Rural poverty in Nigeria has been the bane of society, a cankerworm which has eaten deep into the fabric of Nigerian society. Different models and principles have been applied to eradicate it, such as Operation Feed the Nation, the Green Revolution, NAPEP, etc. Little or nothing has been done in the area of entrepreneurship education to tame this monster. It is on this basis that the author wants to x-ray the role entrepreneurship education, which studies “the process of identifying and bringing a vision to life”, could play in the eradication of rural poverty in Nigeria. This will go a long way in providing appropriate principles for poverty alleviation and eradication in Nigeria. Some selected states in the eastern geo-political region could be x-rayed in this circumstance. It is hoped that policy makers and others will find the work cogent in formulating and implementing policy decisions.
Keywords: poverty, entrepreneurship, education, Nigeria
Procedia PDF Downloads 470
4288 Application of ANN and Fuzzy Logic Algorithms for Runoff and Sediment Yield Modelling of Kal River, India
Authors: Mahesh Kothari, K. D. Gharde
Abstract:
ANN and fuzzy logic (FL) models were developed to predict the runoff and sediment yield for the catchment of the Kal river, India, using 21 years (1991 to 2011) of rainfall and other hydrological data (evaporation, temperature, and streamflow lagged by one and two days), with 7 years of data for sediment yield modelling. The ANN model performance improved as the number of input vectors increased. The fuzzy logic model performed with an R value of more than 0.95 during both the developmental and validation stages. Comparatively, the FL model was found to perform better than the ANN model in predicting runoff and sediment yield for the Kal river.
Keywords: transfer function, sigmoid, backpropagation, membership function, defuzzification
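The lagged-input construction mentioned above (inputs lagged by one and two days) can be sketched as follows. This is a hedged illustration on synthetic data, with a plain least-squares model standing in for the ANN; the 1991-2011 Kal river records are not reproduced here.

```python
# Sketch of building lagged input vectors [rain(t), rain(t-1), rain(t-2)]
# and scoring the fit with the correlation coefficient R, the performance
# criterion quoted in the abstract. Series are synthetic.
import numpy as np

rng = np.random.default_rng(2)
rain = rng.gamma(2.0, 5.0, 400)    # synthetic daily rainfall
runoff = 0.6 * rain + 0.3 * np.roll(rain, 1) + 0.1 * np.roll(rain, 2)
runoff = runoff + rng.normal(0, 1.0, 400)

# Input vectors with one- and two-day lags (plus an intercept column).
t = np.arange(2, 400)
X = np.column_stack([rain[t], rain[t - 1], rain[t - 2], np.ones(t.size)])
y = runoff[t]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef

R = np.corrcoef(y, pred)[0, 1]     # correlation between observed and predicted
print(round(R, 3))
```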
Procedia PDF Downloads 571
4287 The Applications and Effects of the Career Courses of Taiwanese College Students with LEGO® SERIOUS PLAY®
Authors: Payling Harn
Abstract:
LEGO® SERIOUS PLAY® is a kind of facilitated thinking and problem-solving workshop. Participants build symbolic and metaphorical brick models in response to tasks given by the facilitator and present these models to other participants. LEGO® SERIOUS PLAY® applies the positive psychological mechanisms of flow and positive emotions to help participants perceive self-experience and unknown facts and to increase happiness in life by building bricks and narrating stories. At present, LEGO® SERIOUS PLAY® is often utilized for facilitating professional identity and strategy development to assist workers in career development. The researcher desires to apply LEGO® SERIOUS PLAY® to the career courses of college students in order to promote their career ability. This study aimed to use the facilitative method of LEGO® SERIOUS PLAY® to develop career courses for college students, then explore the effects of the LEGO® SERIOUS PLAY® career courses on Taiwanese college students' positive and negative emotions, career adaptability, and career sense of hope. The researcher regarded strength as the core concept and used the facilitative mode of LEGO® SERIOUS PLAY® to develop the 8 weeks' career courses, including ‘emotion of college life’, ‘career highlights’, ‘career strengths’, ‘professional identity’, ‘business model’, ‘career coping’, ‘strength guiding principles’, ‘career visions’, and ‘career hope’. The researcher adopted a problem-oriented teaching method to give tasks according to the weekly theme and used the facilitative mode of LEGO® SERIOUS PLAY® to guide participants to respond to the tasks by building bricks. Participants then conducted group discussions and reports and wrote weekly reflection journals. Participants were 24 second-year college students who attended the LEGO® SERIOUS PLAY® career courses for 2 hours a week. The researcher used the ‘Career Adaptability Scale’ and the ‘Career Hope Scale’ to conduct the pre-test and post-test.
The tests were administered at two time points: one week before the courses started and one day after they ended. The researcher then adopted repeated-measures one-way ANOVA to analyze the data. The results revealed that the participants showed a significant immediate positive effect in career adaptability and career hope. The researcher hopes to construct the mode of LEGO® SERIOUS PLAY® career courses through this study and to make a substantial contribution to future career teaching and research on LEGO® SERIOUS PLAY®.
Keywords: LEGO® SERIOUS PLAY®, career courses, strength, positive and negative affect, career hope
Procedia PDF Downloads 255
4286 Experimental Study of Local Scour Depth around Cylindrical Bridge Pier
Authors: Mohammed T. Shukri
Abstract:
The failure of bridges due to excessive local scour during floods poses a challenging problem to hydraulic engineers. Bridge piers fail for many reasons, such as localized scour combined with general riverbed degradation. In this paper, we try to estimate the temporal variation of scour depth at a nonuniform cylindrical bridge pier by experimental work conducted in the hydraulic laboratories of the Gaziantep University Civil Engineering Department, on a flume 8.3 m long, 0.8 m wide, and 0.9 m deep. The experiments will be carried out on a 20 cm deep sediment layer having d50 = 0.4 mm. Three scaled bridge pier models of different shapes will be constructed in a 1.5 m test section of the channel.
Keywords: scour, local scour, bridge piers, scour depth
Procedia PDF Downloads 262
4285 Insulin Receptor Substrate-1 (IRS1) and Transcription Factor 7-Like 2 (TCF7L2) Gene Polymorphisms Associated with Type 2 Diabetes Mellitus in Eritreans
Authors: Mengistu G. Woldu, Hani Y. Zaki, Areeg Faggad, Badreldin E. Abdalla
Abstract:
Background: Type 2 diabetes mellitus (T2DM) is a complex, degenerative, and multi-factorial disease responsible for huge mortality and morbidity worldwide. Even though a relatively significant number of studies have been conducted on the genetics of this disease in the developed world, there is a huge information gap in the sub-Saharan Africa region in general and in Eritrea in particular. Objective: The principal aim of this study was to investigate the association of common variants of the Insulin Receptor Substrate 1 (IRS1) and Transcription Factor 7-Like 2 (TCF7L2) genes with T2DM in the Eritrean population. Method: In this cross-sectional case-control study, 200 T2DM patients and 112 non-diabetic subjects participated, and genotyping of the IRS1 (rs13431179, rs16822615, rs16822644, rs1801123) and TCF7L2 (rs7092484) tag SNPs was carried out using the PCR-RFLP method of analysis. Haplotype analyses were carried out using Plink version 1.07 and Haploview 4.2 software. Linkage disequilibrium (LD) and Hardy-Weinberg equilibrium (HWE) analyses were performed using the Plink software. All descriptive statistical data analyses were carried out using SPSS (version 20) software. Throughout the analysis, a p-value ≤0.05 was considered statistically significant. Result: A significant association was found between the rs13431179 SNP of the IRS1 gene and T2DM under the recessive model of inheritance (OR=9.00, 95%CI=1.17-69.07, p=0.035), and a marginally significant association was found in the genotypic model (OR=7.50, 95%CI=0.94-60.06, p=0.058). The rs7092484 SNP of the TCF7L2 gene also showed a markedly significant association with T2DM in the recessive (OR=3.61, 95%CI=1.70-7.67, p=0.001) and allelic (OR=1.80, 95%CI=1.23-2.62, p=0.002) models. Moreover, eight haplotypes of the IRS1 gene were found to have a significant association with T2DM (p=0.013 to 0.049).
Assessments of the interactions of the rs13431179 and rs7092484 genotypes with various parameters demonstrated that high-density lipoprotein (HDL), low-density lipoprotein (LDL), waist circumference (WC), and systolic blood pressure (SBP) form the best models for predicting T2DM onset. Furthermore, genotypes of the rs7092484 SNP showed a significant association with various atherogenic indexes (atherogenic index of plasma, LDL/HDL, and CHOL/HDL), and Eritreans carrying the GG or GA genotypes were predicted to be more susceptible to the onset of cardiovascular diseases. Conclusions: Results of this study suggest that IRS1 (rs13431179) and TCF7L2 (rs7092484) gene polymorphisms are associated with increased risk of T2DM in Eritreans.
Keywords: IRS1, SNP, TCF7L2, type 2 diabetes
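As a hedged illustration of how recessive-model odds ratios of the kind quoted above are derived, the sketch below computes an OR and a Wald 95% CI from genotype counts. The counts are hypothetical, not the Eritrean study's data.

```python
# Sketch: odds ratio and 95% CI for a recessive model (aa vs. Aa+AA) from
# genotype counts. The counts below are hypothetical illustrations.
import math

def recessive_or(case_counts, control_counts):
    """counts are (AA, Aa, aa); the recessive model compares aa vs. the rest."""
    a = case_counts[2]                       # "exposed" cases (aa)
    b = case_counts[0] + case_counts[1]      # unexposed cases (AA + Aa)
    c = control_counts[2]                    # exposed controls
    d = control_counts[0] + control_counts[1]
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # standard error of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

or_, ci = recessive_or(case_counts=(90, 80, 30), control_counts=(60, 44, 8))
print(round(or_, 2), tuple(round(x, 2) for x in ci))
```

A CI whose lower bound stays above 1 is what makes an association such as OR=3.61 (95%CI=1.70-7.67) statistically significant under this model.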
Procedia PDF Downloads 226
4284 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures
Authors: Rui Teixeira, Alan O’Connor, Maria Nogal
Abstract:
The statistical theory of extreme events is a topic of progressively growing interest in all fields of science and engineering. The changes currently experienced by the world, economic and environmental, have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used, although, in the case of exceedances of significant wave data (H_s), the two-parameter Weibull and the Exponential distribution, the latter being a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a certain threshold u. References that set the Generalised Pareto distribution as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. Comparisons of the application of the Generalised Pareto, the two-parameter Weibull, and the Exponential distribution are presented for different values of the threshold u.
Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for an efficient estimation of very low occurrence events.
Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data
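The POT workflow discussed above can be sketched in a few lines: keep the H_s exceedances over a threshold u and fit a Generalised Pareto distribution to them. The wave data below are synthetic (exponential tails), standing in for the Irish buoy records, and the single fixed threshold is a simplification of the study's threshold sensitivity analysis.

```python
# Sketch of Peak-Over-Threshold fitting with a Generalised Pareto distribution.
# Synthetic H_s data with an exponential tail; in that case the fitted GPD
# shape parameter should come out near zero (the Exponential special case).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
hs = rng.exponential(0.8, 5000) + 0.5   # synthetic significant wave heights (m)

u = 2.5                                 # threshold; in practice varied and checked
exceedances = hs[hs > u] - u

# floc=0: exceedances start at the threshold by construction.
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)
print(round(shape, 2), round(scale, 2), exceedances.size)
```

The number of retained points (`exceedances.size`) shrinks as u grows, which is exactly the fit-quality trade-off the abstract analyzes.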
Procedia PDF Downloads 274
4283 Effects of Bone Marrow Derived Mesenchymal Stem Cells (MSC) in Acute Respiratory Distress Syndrome (ARDS) Lung Remodeling
Authors: Diana Islam, Juan Fang, Vito Fanelli, Bing Han, Julie Khang, Jianfeng Wu, Arthur S. Slutsky, Haibo Zhang
Abstract:
Introduction: MSC delivery in preclinical models of ARDS has demonstrated significant improvements in lung function and recovery from acute injury. However, the role of MSC delivery in ARDS-associated pulmonary fibrosis is not well understood. Some animal studies using bleomycin-, asbestos-, and silica-induced pulmonary fibrosis show that MSC delivery can suppress fibrosis, while other animal studies using radiation-induced pulmonary fibrosis and liver and kidney fibrosis models show that MSC delivery can contribute to fibrosis. Hypothesis: The beneficial and deleterious effects of MSC in ARDS are modulated by the lung microenvironment at the time of MSC delivery. Methods: To induce ARDS, a two-hit mouse model of hydrochloric acid (HCl) aspiration (day 0) and mechanical ventilation (MV) (day 2) was used. HCl and injurious MV generated fibrosis within 14-28 days. 0.5x10⁶ mouse MSCs were delivered (via both intratracheal and intravenous routes) either in the active inflammatory phase (day 2) or during the remodeling phase (day 14) of ARDS (mouse fibroblasts or PBS were used as a control). Lung injury was assessed using an inflammation score and elastance measurement. Pulmonary fibrosis was assessed using a histological score, tissue collagen level, and collagen expression. In addition, the alveolar epithelial (E) and mesenchymal (M) marker expression profiles were also measured. All measurements were taken at days 2, 14, and 28. Results: MSC delivery 2 days after HCl exacerbated lung injury and fibrosis compared to HCl alone, while day 14 delivery showed protective effects. However, in the absence of HCl, MSC significantly reduced the injurious MV-induced fibrosis. HCl injury suppressed E markers and up-regulated M markers. MSC delivery 2 days after HCl further amplified M marker expression, indicating a role in myofibroblast proliferation/activation, while with day 14 delivery, E marker up-regulation was observed, indicating a role in epithelial restoration.
Conclusions: Early MSC delivery can be protective against injurious MV. Late MSC delivery during the repair phase may also aid in recovery. However, early MSC delivery during the exudative inflammatory phase of HCl-induced ARDS can result in pro-fibrotic profiles. It is critical to understand the interaction between MSC and the lung microenvironment before MSC-based therapies are utilized for ARDS.
Keywords: acute respiratory distress syndrome (ARDS), mesenchymal stem cells (MSC), hydrochloric acid (HCl), mechanical ventilation (MV)
Procedia PDF Downloads 672
4282 3D Modeling of Tunis Soft Soil Settlement Reinforced with Plastic Wastes
Authors: Aya Rezgui, Lasaad Ajam, Belgacem Jalleli
Abstract:
The Tunis soft soils present a difficult challenge as construction sites and for geotechnical works. Currently, different techniques are used to improve such soil properties, taking environmental considerations into account. One of the recent methods involves plastic wastes as a reinforcing material. The present study pertains to the development of a numerical model for predicting the behavior of Tunis soft soil (TSS) improved with recycled Monobloc chair wastes. 3D numerical models for unreinforced and reinforced TSS aim to evaluate settlement reduction and consolidation times under oedometer conditions.
Keywords: Tunis soft soil, settlement, plastic wastes, finite difference, FLAC3D modeling
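For orientation, the oedometer-type settlement the numerical model targets can be approximated by the classical primary consolidation formula. The layer thickness, void ratio, compression index, and stresses below are illustrative assumptions, not measured Tunis soft soil properties, and the FLAC3D model itself is far more detailed than this back-of-the-envelope check.

```python
# Hedged sketch: primary consolidation settlement of a normally consolidated
# soft clay layer, s = Cc*H/(1+e0) * log10((sigma0 + d_sigma)/sigma0).
# All parameter values are hypothetical.
import math

def consolidation_settlement(H, e0, Cc, sigma0, delta_sigma):
    """Primary consolidation settlement (m) of a normally consolidated layer."""
    return Cc * H / (1 + e0) * math.log10((sigma0 + delta_sigma) / sigma0)

s = consolidation_settlement(H=5.0, e0=1.2, Cc=0.45,
                             sigma0=50.0, delta_sigma=100.0)  # stresses in kPa
print(round(s, 3))   # metres
```

A simple hand calculation of this kind is a common sanity check against which the settlement reduction predicted by a reinforced numerical model can be compared.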
Procedia PDF Downloads 138
4281 Employing Operations Research at Universities to Build Management Systems
Authors: Abdallah A. Hlayel
Abstract:
Operations research (OR) has achieved good success in developing and applying scientific methods for problem solving and decision-making. By using OR techniques, we can enhance the use of computer decision support systems to achieve optimal management for institutions. OR applies comprehensive analysis, including all factors that affect the problem, and builds mathematical models to solve business or organizational problems. In addition, it improves decision-making and uses available resources efficiently. The adoption of OR by universities would definitely contribute to the development and enhancement of the performance of OR techniques. This paper provides an understanding of the structures, approaches, and models of OR in problem solving and decision-making.
Keywords: best candidates' method, decision making, decision support system, operations research
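As a hedged toy example of the kind of OR model a university decision-support system might embed, the sketch below allocates limited lab hours and budget across two course offerings to maximize the number of students served, via linear programming. The coefficients are invented for illustration.

```python
# Toy linear program: choose section counts x1, x2 to maximize students
# served, subject to lab-hour and budget constraints. Numbers are illustrative.
from scipy.optimize import linprog

# Maximize 30*x1 + 20*x2 (students served) -> minimize the negative.
c = [-30, -20]
A = [[2, 1],    # lab hours per section:    2*x1 + 1*x2 <= 100
     [4, 5]]    # budget units per section: 4*x1 + 5*x2 <= 240
b = [100, 240]

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
x1, x2 = res.x
print(round(x1, 1), round(x2, 1), round(-res.fun, 1))
```

Embedding a solver call like this behind an administrative interface is one concrete way a decision support system turns OR's "comprehensive analysis" into an actionable allocation.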
Procedia PDF Downloads 450
4280 Simulation Research of Diesel Aircraft Engine
Authors: Łukasz Grabowski, Michał Gęca, Mirosław Wendeker
Abstract:
This paper presents the simulation results of a new opposed-piston diesel engine to power a light aircraft. Created in AVL Boost, the model covers the entire charge passage, from the inlet up to the outlet, and represents fuel injection into the cylinders and combustion in the cylinders. The calculation uses the module for two-stroke engines. The model was built using sub-models available in this software, each complemented with parameters in line with the design premise. Since engine weight resulting from geometric dimensions is fundamental in aircraft engines, two stroke configurations (S1 and S2) were studied. For each of the values, selected operating conditions defined by crankshaft speed were calculated. The required power was achieved by changing the air-fuel ratio (AFR). Brake specific fuel consumption (BSFC) was also studied. For stroke S1, the BSFC was lowest at all three operating points. This difference is approximately 1-2%, which means higher overall engine efficiency, but the amount of fuel injected into the cylinders is larger by several mg for S1. The maximum cylinder pressure is lower for S2 because the compressor gear drive remained the same and the boost pressure was identical in both cases. Calculations for various values of boost pressure were the next stage of the study. In each calculation case, the amount of fuel was changed to achieve the required engine power. The intake system dimensions were also modified: the duct connecting the compressor and the air cooler was given a diameter D = 40 mm, equal to the diameter of the compressor outlet duct. The impact of duct length was also examined in order to reduce flow pulsation during the operating cycle. For the intake system geometry so selected, calculations were performed for various values of boost pressure, which was changed by modifying the gear driving the compressor.
The goal was to reach the required cruising power of N = 68 kW. Due to the mechanical power consumed by the compressor, a high pressure ratio results in worsened overall engine efficiency: the BSFC rises from 210 g/kWh to nearly 270 g/kWh, and the overall engine efficiency is reduced by about 8%. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
Keywords: aircraft, diesel, engine, simulation
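The BSFC figure discussed above follows directly from its definition, brake specific fuel consumption = fuel mass flow divided by brake power. The fuel flow values below are back-calculated illustrations chosen to land near the abstract's quoted 210 and 270 g/kWh at the N = 68 kW cruise point, not data from the AVL Boost model.

```python
# Hedged sketch of the BSFC definition: BSFC [g/kWh] = fuel flow / power.
# Fuel flow values are illustrative, not simulation outputs.
def bsfc_g_per_kwh(fuel_flow_kg_h, power_kw):
    """Brake specific fuel consumption in g/kWh."""
    return fuel_flow_kg_h * 1000.0 / power_kw

cruise = bsfc_g_per_kwh(fuel_flow_kg_h=14.3, power_kw=68.0)      # ~210 g/kWh
high_boost = bsfc_g_per_kwh(fuel_flow_kg_h=18.3, power_kw=68.0)  # ~269 g/kWh
print(round(cruise), round(high_boost))
```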
Procedia PDF Downloads 209
4279 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada
Authors: Bilel Chalghaf, Mathieu Varin
Abstract:
Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 individual tall tree species (> 17 m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For the individual tree crown segmentation, three canopy height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77).
With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR
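The semi-hierarchical idea described above can be sketched minimally: a first classifier separates broadleaf from conifer, then a per-type classifier assigns the species. The two-dimensional features, species names, and cluster layout below are synthetic stand-ins for the WorldView-3/LiDAR variables, and a real pipeline would tune each level separately, as the study did.

```python
# Toy two-level (semi-hierarchical) classifier: k-NN for tree type,
# then an SVM per type for the species. Data are synthetic clusters.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def make_crowns(center, species, n=40):
    """Synthetic 2-feature 'crowns' around a cluster center."""
    return rng.normal(center, 0.3, size=(n, 2)), [species] * n

Xs, ys, types = [], [], []
for center, sp, t in [((0, 0), "maple", "broadleaf"), ((0, 2), "oak", "broadleaf"),
                      ((3, 0), "fir", "conifer"), ((3, 2), "spruce", "conifer")]:
    X, y = make_crowns(center, sp)
    Xs.append(X); ys += y; types += [t] * len(y)
X = np.vstack(Xs); ys = np.array(ys); types = np.array(types)

level1 = KNeighborsClassifier(6).fit(X, types)            # level 1: tree type
level2 = {t: SVC().fit(X[types == t], ys[types == t])     # level 2: species
          for t in ("broadleaf", "conifer")}

def predict(x):
    t = level1.predict(x)[0]
    return level2[t].predict(x)[0]

print(predict([[0.1, 1.9]]))   # a point near the "oak" cluster
```

Splitting the problem this way is what lets each level use its own best model and variable subset, as in the k-NN/SVM combination reported above.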
Procedia PDF Downloads 138
4278 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. Training on the Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite enabled prediction of specific vulnerabilities such as OS Command Injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS Command Injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques, and CNN models with ensemble modelling techniques, did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges, such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
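The path-context representation mentioned above can be illustrated with a toy extractor built on Python's own `ast` module: for each pair of leaf identifiers, record the start token, the up-over-down path of AST node types through their lowest common ancestor, and the end token. This is a simplified sketch of the idea, not Code2Vec's exact extraction (which targets Java/C++ and is considerably more elaborate); `path_contexts` and `leaf_paths` are names invented here.

```python
# Toy sketch of Code2Vec-style path contexts: (leaf, AST-type path, leaf)
# for each pair of Name leaves in a parsed function.
import ast
import itertools

def leaf_paths(tree):
    """Yield (identifier, list of AST nodes from the root down to the leaf)."""
    def walk(node, path):
        path = path + [node]
        if isinstance(node, ast.Name):
            yield node.id, path
        for child in ast.iter_child_nodes(node):
            yield from walk(child, path)
    return walk(tree, [])

def path_contexts(source):
    """(start_id, up-over-down path of node-type names, end_id) per leaf pair."""
    leaves = list(leaf_paths(ast.parse(source)))
    out = []
    for (a, pa), (b, pb) in itertools.combinations(leaves, 2):
        i = 0                       # length of the shared ancestor prefix
        while i < min(len(pa), len(pb)) and pa[i] is pb[i]:
            i += 1
        up = [type(n).__name__ for n in reversed(pa[i:])]   # leaf a up to LCA
        down = [type(n).__name__ for n in pb[i:]]           # LCA down to leaf b
        out.append((a, tuple(up + [type(pa[i - 1]).__name__] + down), b))
    return out

contexts = path_contexts("def f(x):\n    return x + x\n")
print(contexts)
```

Bags of such (token, path, token) triples are what a Code2Vec-style model embeds and attends over to produce a function-level vector for the downstream vulnerability classifier.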
Procedia PDF Downloads 110