Search results for: numerical modeling
486 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia
Abstract:
Noise assessment methods are regularly used in the scope of Environmental Impact Assessments for planned projects to assess (predict) the expected noise emissions of these projects. Different noise assessment methods can be used. In recent years, we had the opportunity to collaborate in several noise assessment procedures in which noise assessments by different laboratories were performed simultaneously. We identified some significant differences in noise assessment results between laboratories in Slovenia. We estimate that, although good georeferenced input data for setting up acoustic models exist in Slovenia, there is no clear consensus on predictive noise assessment methods for planned projects. We analyzed the input data, methods, and results of predictive noise assessments for two planned industrial projects, each performed independently by two laboratories. We also analyzed the data, methods, and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, the acoustic models were validated by noise measurements of existing noise sources in the surroundings, but over varying durations. The acoustic characteristics of existing buildings were also not described identically, and the planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty range of 3 to 6 dBA. In contrast to predictive noise modelling, in the cases of collaborative noise modelling for the two existing noise sources, the possibility of performing validation noise measurements of the existing sources greatly increased the comparability of the modelling results. In both cases of collaborative noise modelling for the existing motorway and railway, the modelling results of the different laboratories were comparable: differences were below 5 dBA, the acceptable uncertainty set by the interlaboratory noise modelling organizer. The lessons learned from the study were: 1) predictive noise calculation using the formulae from the international standard SIST ISO 9613-2:1997 is not an appropriate method to predict the noise emissions of planned projects, since, due to the complexity of the procedure, they are not applied strictly; 2) noise measurements are an important tool to minimize noise assessment errors for planned projects and, in cases of predictive noise modelling, should be performed at least for validation of the acoustic model; 3) national guidelines should be drawn up on the appropriate data, methods, noise source digitalization, validation of the acoustic model, etc., in order to unify predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects.
Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines
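As a rough illustration of how laboratory-specific assumptions propagate into the receiver level, the following is a minimal sketch of the ISO 9613-2 point-source scheme (geometric divergence plus a few attenuation terms). All numerical values and the simplified ground/barrier handling are assumptions for illustration, not data from the study.

```python
import math

def iso9613_receiver_level(l_w, distance_m, alpha_db_per_km=2.0,
                           a_ground=1.5, a_barrier=0.0):
    """Simplified downwind octave-band level per ISO 9613-2.

    l_w: source sound power level [dB re 1 pW]
    distance_m: source-receiver distance [m]
    alpha_db_per_km: atmospheric absorption coefficient [dB/km]
    a_ground, a_barrier: ground and barrier attenuation terms [dB] (assumed)
    """
    a_div = 20.0 * math.log10(distance_m / 1.0) + 11.0   # geometric divergence
    a_atm = alpha_db_per_km * distance_m / 1000.0        # atmospheric absorption
    return l_w - (a_div + a_atm + a_ground + a_barrier)

# Two labs assuming different ground attenuation for the same planned source:
print(iso9613_receiver_level(l_w=105.0, distance_m=250.0, a_ground=1.5))  # ~44.0 dB
print(iso9613_receiver_level(l_w=105.0, distance_m=250.0, a_ground=4.8))  # ~40.7 dB
```

Even this toy case shows how a single divergent input assumption shifts the predicted level by several dB, in line with the interlaboratory differences reported above.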
Procedia PDF Downloads 233
485 Sustainability Impact Assessment of Construction Ecology to Engineering Systems and Climate Change
Authors: Moustafa Osman Mohammed
Abstract:
The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of the proposed models for optimizing life-cycle analysis into an explicit strategy for evaluation systems. The main categories, which almost inevitably introduce uncertainties, take up the composite structure model (CSM) as an environmental management system (EMS) in a practical science for evaluating small and medium-sized enterprises (SMEs). The model simplifies complex systems to reflect how a natural system's inputs, outputs, and outcomes influence the framework measures, and gives a maximum likelihood estimate of how elements are simulated over the composite structure. The traditional modeling knowledge is based on physical dynamic and static patterns of the parameters influencing the environment. It unifies methods to demonstrate how construction systems ecology is interrelated from a management perspective, where the procedure reflects the effects of engineering systems on ecology, ultimately unifying technologies over an extensive range beyond the impact of construction alone, e.g., energy systems. Sustainability broadens the socioeconomic parameters to a practice science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, the management decision process in governments or corporations can address policy for accomplishing strategic plans precisely. The management and engineering limitation focuses on autocatalytic control as a closed cellular system that naturally balances anthropogenic insertions or aggregates structural systems toward equilibrium as steady, stable conditions. Thereby, construction systems ecology incorporates an engineering and management scheme as a midpoint stage between biotic and abiotic components to predict construction impacts. The resulting theory of environmental obligation suggests either a procedure or a technique that is achieved in the sustainability impact of construction system ecology (SICSE), as a relative mitigation measure of deviation control, ultimately.
Keywords: sustainability, environmental impact assessment, environmental management, construction ecology
Procedia PDF Downloads 392
484 Enhancement of Mass Transport and Separation of Species in an Electroosmotic Flow by Distinct Oscillatory Signals
Authors: Carlos Teodoro, Oscar Bautista
Abstract:
In this work, we theoretically analyze the mass transport in a time-periodic electroosmotic flow through a parallel flat-plate microchannel under different periodic functions of the applied external electric field. The microchannel connects two reservoirs having different constant concentrations of an electro-neutral solute, and the zeta potential of the microchannel walls is assumed to be uniform. The governing equations that allow determining the mass transport in the microchannel are the Poisson-Boltzmann equation, the modified Navier-Stokes equations, in which the Debye-Hückel approximation is adopted (the zeta potential is less than 25 mV), and the species conservation equation. These equations are nondimensionalized, and four dimensionless parameters appear that control the mass transport phenomenon: an angular Reynolds number, the Schmidt and Péclet numbers, and an electrokinetic parameter representing the ratio of the half-height of the microchannel to the Debye length. To solve the mathematical model, the electric potential is first determined from the Poisson-Boltzmann equation, which allows determining the electric force for various periodic functions of the external electric field expressed as Fourier series. In particular, three different excitation waveforms of the external electric field are assumed: a) sawtooth, b) step, and c) a periodic irregular function. The periodic electric forces are substituted into the modified Navier-Stokes equations, and the hydrodynamic field is derived for each case of the electric force. From the obtained velocity fields, the species conservation equation is solved and the concentration fields are found. Numerical calculations were carried out for several binary systems in which two dilute species are transported in the presence of a carrier. It is observed that there are angular frequencies of the imposed external electric signal at which the total mass transport of each species is the same, independently of the molecular diffusion coefficient. These frequencies are called crossover frequencies and are obtained graphically at the intersection when the total mass transport is plotted against the imposed frequency. The crossover frequencies differ depending on the Schmidt number, the electrokinetic parameter, the angular Reynolds number, and the type of signal of the external electric field. It is demonstrated that the mass transport through the microchannel depends strongly on the modulation frequency of the particular applied alternating electric field. Possible extensions of the analysis to more complicated pulsation profiles are also outlined.
Keywords: electroosmotic flow, mass transport, oscillatory flow, species separation
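To make the governing relations concrete, the following is a minimal sketch of the steady-state limit described above: the Debye-Hückel potential across the channel and the resulting plug-like electroosmotic velocity profile. The parameter values are illustrative assumptions, not those of the paper, and the time-periodic forcing and species transport are omitted.

```python
import numpy as np

# Illustrative parameter values (assumed, not from the paper)
eps = 7.08e-10      # permittivity of water [F/m]
zeta = -0.02        # zeta potential [V] (|zeta| < 25 mV, Debye-Hueckel regime)
mu = 1.0e-3         # dynamic viscosity [Pa.s]
E = 1.0e4           # applied electric field [V/m]
H = 50e-6           # channel half-height [m]
kappa_H = 50.0      # electrokinetic parameter: half-height / Debye length

y = np.linspace(-H, H, 201)
psi = zeta * np.cosh(kappa_H * y / H) / np.cosh(kappa_H)       # Debye-Hueckel potential
u_hs = -eps * zeta * E / mu                                    # Helmholtz-Smoluchowski velocity
u = u_hs * (1.0 - np.cosh(kappa_H * y / H) / np.cosh(kappa_H)) # steady plug-like EOF profile

print(f"Helmholtz-Smoluchowski velocity: {u_hs*1e3:.3f} mm/s")
print(f"centerline velocity:             {u[100]*1e3:.3f} mm/s")
```

For a large electrokinetic parameter, the profile is essentially uniform outside thin wall layers, which is the baseline on which the paper's oscillatory forcing acts.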
Procedia PDF Downloads 215
483 Urban River as Living Infrastructure: Tidal Flooding and Sea Level Rise in a Working Waterway in Hampton Roads, Virginia
Authors: William Luke Hamel
Abstract:
Existing conceptions of urban flooding caused by tidal fluctuations and sea-level rise have been inadequately framed by metrics of resilience and methods of flow modeling. While a great deal of research has been devoted to the effects of urbanization on pluvial flooding, the kind of tidal flooding experienced by locations like Hampton Roads, Virginia, has not been adequately conceptualized as a result of human factors such as urbanization and gray infrastructure. Resilience to sea-level rise and its associated flooding has been pioneered in the region with the 2015 Norfolk Resilience Plan from 100 Resilient Cities as well as the 2016 Norfolk Vision 2100 plan, which envisions different patterns of land use for the city. Urban resilience still conceptualizes the city as having the ability to maintain an equilibrium in the face of disruptions. This economic and social equilibrium relies on the Elizabeth River, narrowly conceptualized. Intentionally or accidentally, the river was made into a piece of infrastructure. Its development was meant to serve the docks, shipyards, naval yards, and port infrastructure that give the region so much of its economic life. Inasmuch as it functions to permit the movement of cargo; the raising and lowering of ships to be repaired, commissioned, or decommissioned; or the provisioning of military vessels, the river as infrastructure is functioning properly. The idea that the infrastructure is malfunctioning when high tides and sea-level rise create flooding is predicated on the idea that the infrastructure is truly a human creation and can be controlled. The natural flooding cycles of an urban river, combined with the action of climate change and sea-level rise, are only abnormal insofar as they encroach on the development that first encroached on the river. The urban political ecology of water provides the ability to view the river as an infrastructural extension of urban networks while also calling for its emancipation from stationarity and human control. Understanding the river and the city as a hydrosocial territory, or as a socio-natural system, liberates both actors from the duality of the natural and the social while repositioning river flooding as a normal part of coexistence on a floodplain. This paper argues for the adoption of an urban political ecology lens in the analysis and governance of urban rivers like the Elizabeth River, as a departure from the equilibrium-seeking and stability metrics of urban resilience.
Keywords: urban flooding, political ecology, Elizabeth River, Hampton Roads
Procedia PDF Downloads 166
482 Pill-Box Dispenser as a Strategy for Therapeutic Management: A Qualitative Evaluation
Authors: Bruno R. Mendes, Francisco J. Caldeira, Rita S. Luís
Abstract:
Population ageing is directly correlated with an increase in medicine consumption. Beyond the latter and the polymedicated profile of the elderly, there is a need for pharmacotherapeutic monitoring due to cognitive and physical impairment. In this sense, the tracking, organization, and administration of medicines become a daily challenge, and the pill-box dispenser system a solution. The pill-box dispenser (system) consists of a small compartmentalized container for unit-dose organization, which means a container able to correlate the patient's prescribed dose regimen with the time schedule of intake. In many European countries, this system is part of the pharmacist's role in clinical pharmacy. Despite this simple solution, therapy compliance is only possible if the patient adheres to the system, so it is important to establish a qualitative and quantitative analysis of the patient's perception of the benefits and risks of the pill-box dispenser, as well as to identify the ideal system. The analysis was conducted through an observational study based on the application of a standardized questionnaire structured with the 5-level Likert numerical scale and previously validated on the population. The study was performed during a limited period of time on a randomized sample of 188 participants. The questionnaire consisted of 22 questions: 6 background measures and 16 specific measures. The standards for the final comparative analysis were obtained from the state of the art on the subject. The study carried out using the Likert scale yielded degrees of agreement and discordance between measures (sample vs. standard) of 56.25% and 43.75%, respectively. It was concluded that the pill-box dispenser has greater acceptance among a younger population, which was not the initial target of the system; however, this promises high adherence in the future. Additionally, it was noted that the cost associated with this service is not a limiting factor for its use. The pill-box dispenser system, as currently implemented, has an important weakness regarding the quality and effectiveness of the medicines, which is not understood by the patient, revealing a significant lack of literacy where medicines are concerned. The characteristics of an ideal system remain unchanged, which means that the size, appearance, and availability of information in the pill-box continue to be indispensable elements for compliance with the system. The pill-box dispenser remains unsuitable regarding container size and the type of treatment to which it applies. Despite that, it might become a future standard for clinical pharmacy, allowing a differentiation of the pharmacist's role, as well as a wider range of applications to other age groups and treatments.
Keywords: clinical pharmacy, medicines, patient safety, pill-box dispenser
Procedia PDF Downloads 196
481 Incentive-Based Motivation to Network with Coworkers: Strengthening Professional Networks via Online Social Networks
Authors: Jung Lee
Abstract:
The last decade has witnessed more people than ever before using social media and broadening their social circles. Social media users connect not only with their friends but also with professional acquaintances, primarily coworkers and clients; personal and professional social circles are mixed within the same social media platform. Considering the positive role of social media in facilitating communication and mutual understanding between individuals, we infer that social media interactions with coworkers could indeed benefit one's professional life. However, given privacy issues, sharing all personal details with one's coworkers is not necessarily the best practice. Should one connect with coworkers via social media? Will social media connections with coworkers eventually benefit one's long-term career? Will the benefit differ across cultures? To answer these questions, this study examines how social media can contribute to organizational communication by tracing the foundation of user motivation based on social capital theory, leader-member exchange (LMX) theory, and the expectancy theory of motivation. Although social media was originally designed for personal communication, users have shown intentions to extend social media use to professional communication, especially when a proper incentive is expected. To articulate the user motivation and the mechanism of the incentive expectation scheme, this study applies these three theories and identifies six antecedents and three moderators of social media use motivation, including social network flaunt, shared interest, and perceived social inclusion. It also hypothesizes that the moderating effects of those constructs differ significantly based on the relationship hierarchy among the workers. To validate the model, this study conducted a survey of 329 active social media users with acceptable levels of job experience. The analysis results confirm the specific roles of the three moderators in social media adoption for organizational communication. The present study contributes to the literature by developing a theoretical model of ambivalent employee perceptions about establishing social media connections with coworkers. This framework shows not only how both positive and negative expectations of social media connections with coworkers are formed based on the expectancy theory of motivation, but also how such expectations lead to behavioral intentions using the career success model. It also enhances understanding of how various relationships among employees can be influenced through social media use, and how such usage can potentially affect both performance and careers. Finally, it shows how cultural factors induced by social media use can influence relations among coworkers.
Keywords: the social network, workplace, social capital, motivation
Procedia PDF Downloads 123
480 Knowledge Creation and Diffusion Dynamics under Stable and Turbulent Environment for Organizational Performance Optimization
Authors: Jessica Gu, Yu Chen
Abstract:
Knowledge Management (KM) is undoubtedly crucial to organizational value creation, learning, and adaptation. Although the rapidly growing KM domain has been fueled with full-fledged methodologies and technologies, studies on KM evolution that bridge organizational performance and adaptation to the organizational environment are still rarely attempted. In particular, the creation (or generation) and diffusion (or sharing/exchange) of knowledge are the organization's primary concerns from a problem-solving perspective; however, the optimized distribution of knowledge creation and diffusion endeavors is still unknown to knowledge workers. This research proposes an agent-based model of knowledge creation and diffusion in an organization, aiming at elucidating how the intertwining knowledge flows at the microscopic level lead to optimized organizational performance at the macroscopic level through evolution, and exploring what exogenous interventions by the policy maker and endogenous adjustments by the knowledge workers can better cope with different environmental conditions. With the developed model, a series of simulation experiments are conducted. Both long-term steady-state and time-dependent developmental results are obtained on organizational performance, network and structure, social interaction and learning among individuals, knowledge audit and stocktaking, and the likelihood of knowledge workers choosing knowledge creation or diffusion. One of the interesting findings reveals a non-monotonic behavior of organizational performance under a turbulent environment but a monotonic behavior under a stable environment. Hence, whether the environmental condition is turbulent or stable, the most suitable exogenous KM policy and endogenous knowledge creation and diffusion choice adjustments can be identified for achieving the optimized organizational performance. Additional influential variables are further discussed, and future work directions are finally elaborated. The proposed agent-based model generates evidence on how knowledge workers strategically allocate effort between knowledge creation and diffusion, how bottom-up interactions among individuals lead to emergent structure and optimized performance, and how environmental conditions bring challenges to the organizational system. Meanwhile, it serves as a roadmap and offers valuable macro and long-term insights to policy makers without interrupting real organizational operations, incurring huge overhead costs, or introducing undesired panic to employees.
Keywords: knowledge creation, knowledge diffusion, agent-based modeling, organizational performance, decision making evolution
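As an illustration of the kind of agent-based model described, here is a deliberately toy sketch in which each knowledge worker either creates knowledge or diffuses it to a random peer, and environmental turbulence depreciates knowledge stocks. All rules and parameter values are hypothetical; the paper's actual model, networks, and policy interventions are far richer.

```python
import random

def simulate(n_agents=50, steps=200, p_create=0.5, turbulence=0.0, seed=1):
    """Toy agent-based model: each step, an agent either creates knowledge
    or diffuses it to a random peer; turbulence makes knowledge depreciate."""
    random.seed(seed)
    k = [1.0] * n_agents                         # knowledge stock per agent
    for _ in range(steps):
        for i in range(n_agents):
            if random.random() < p_create:
                k[i] += 0.1                      # knowledge creation
            else:
                j = random.randrange(n_agents)
                gap = k[i] - k[j]
                if gap > 0:
                    k[j] += 0.5 * gap            # diffusion to a less knowledgeable peer
        k = [x * (1.0 - turbulence) for x in k]  # environmental obsolescence
    return sum(k) / n_agents                     # organizational performance proxy

for p in (0.2, 0.5, 0.8):                        # scan the creation/diffusion mix
    print(p, round(simulate(p_create=p, turbulence=0.02), 2))
```

Scanning the creation/diffusion mix under different turbulence levels is the toy analogue of the policy-versus-environment experiments described in the abstract.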
Procedia PDF Downloads 237
479 Structural Equation Modelling Based Approach to Integrate Customers and Suppliers with Internal Practices for Lean Manufacturing Implementation in the Indian Context
Authors: Protik Basu, Indranil Ghosh, Pranab K. Dan
Abstract:
Lean management is an integrated socio-technical system for bringing about a competitive state in an organization. The purpose of this paper is to explore and integrate the role of customers and suppliers with the internal practices of the Indian manufacturing industries towards successful implementation of lean manufacturing (LM). An extensive literature survey was carried out. An attempt is made to build an exhaustive list of all the input manifests related to customers, suppliers, and internal practices necessary for LM implementation, coupled with a similarly exhaustive list of the benefits accrued from its successful implementation. A structural model is thus conceptualized and empirically validated on data from the Indian manufacturing sector. With the current impetus on developing the industrial sector, the Government of India recently introduced the Lean Manufacturing Competitiveness Scheme, which aims to increase competitiveness with the help of lean concepts. There is huge scope to enrich the Indian industries with the lean benefits, the implementation status being quite low. Hardly any survey-based empirical study in India has been found to integrate customers and suppliers with internal processes towards successful LM implementation. This empirical research was therefore carried out in the Indian manufacturing industries. The basic steps of the research methodology followed in this research are the identification of input and output manifest variables and latent constructs, model proposition and hypothesis development, development of the survey instrument, sampling and data collection, and model validation (exploratory factor analysis, confirmatory factor analysis, and structural equation modeling). The analysis reveals six key input constructs and three output constructs, indicating that these constructs should act in unison to maximize the benefits of implementing lean. The structural model presented in this paper may be treated as a guide to integrating customers and suppliers with internal practices to successfully implement lean. Integrating customers and suppliers with internal practices into a unified, coherent manufacturing system will lead to optimum utilization of resources. This work is one of the very first studies to offer a survey-based empirical analysis of the role of customers, suppliers, and internal practices of the Indian manufacturing sector towards effective lean implementation.
Keywords: customer management, internal manufacturing practices, lean benefits, lean implementation, lean manufacturing, structural model, supplier management
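Of the validation steps listed, the exploratory factor analysis stage can be sketched compactly. The snippet below runs an EFA with varimax rotation on hypothetical Likert-style survey responses; the data, the number of constructs, and the variable roles are invented for illustration, and the subsequent CFA/SEM stages are not shown.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical Likert-style responses: 120 firms x 12 manifest variables
# (e.g., supplier involvement, customer feedback, internal practice items)
latent = rng.normal(size=(120, 3))                 # three underlying constructs
loadings = rng.uniform(0.5, 0.9, size=(3, 12))
X = latent @ loadings + rng.normal(scale=0.4, size=(120, 12))

# Standardize, then extract rotated factors (the EFA step of the pipeline)
X_std = StandardScaler().fit_transform(X)
efa = FactorAnalysis(n_components=3, rotation="varimax").fit(X_std)
print(np.round(efa.components_.T, 2))              # manifest-by-factor loading matrix
```

In practice, the loading matrix is inspected to assign manifest variables to latent constructs before the confirmatory model is specified.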
Procedia PDF Downloads 176
478 Multi-Size Continuous Particle Separation on a Dielectrophoresis-Based Microfluidics Chip
Authors: Arash Dalili, Hamed Tahmouressi, Mina Hoorfar
Abstract:
Advances in lab-on-a-chip (LOC) devices have led to significant progress in the manipulation, separation, and isolation of particles and cells. Among the different active and passive particle manipulation methods, dielectrophoresis (DEP) has proven to be a versatile mechanism, as it is label-free, cost-effective, simple to operate, and has high manipulation efficiency. DEP has been applied to a wide range of biological and environmental applications. A popular form of DEP device performs continuous manipulation of particles with co-planar slanted electrodes, using a sheath flow to focus the particles onto one side of the microchannel. When particles enter the DEP manipulation zone, the negative DEP (nDEP) force generated by the slanted electrodes deflects them laterally towards the opposite side of the microchannel. The lateral displacement of the particles depends on multiple parameters, including the geometry of the electrodes; the width, length, and height of the microchannel; the size of the particles; and the throughput. In this study, COMSOL Multiphysics® modeling along with experimental studies is used to investigate the effects of the aforementioned parameters. The electric field between the electrodes and the induced DEP force on the particles are modeled in COMSOL Multiphysics®. The simulation model is used to show the effect of the DEP force on the particles and how the geometry of the electrodes (the width of the electrodes and the gap between them) plays a role in the manipulation of polystyrene microparticles. The simulation results show that increasing the electrode width up to a certain limit, which depends on the height of the channel, increases the induced DEP force. Also, decreasing the gap between the electrodes leads to a stronger DEP force. Based on these results, criteria for the fabrication of the electrodes were established, and soft lithography was used to fabricate interdigitated slanted electrodes and microchannels. Experimental studies were run to find the effects of the flow rate, the geometrical parameters of the microchannel (length, width, and height), and the electrodes' angle on the displacement of 5 µm, 10 µm, and 15 µm polystyrene particles. An empirical equation is developed to predict the displacement of the particles under different conditions. It is shown that displacement is greater for longer and shallower channels, lower flow rates, and bigger particles. On the other hand, the effect of the angle of the electrodes on the displacement of the particles was negligible. Based on the results, we have developed an optimum design (in terms of efficiency and throughput) for three-size separation of particles.
Keywords: COMSOL Multiphysics, dielectrophoresis, microfluidics, particle separation
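The frequency and size dependence underlying nDEP deflection can be illustrated with the time-averaged dipole approximation, F = 2πεₘr³ Re[K(ω)] ∇|E|². The sketch below evaluates the Clausius-Mossotti factor K(ω) and the resulting force for assumed polystyrene-in-water properties; the field gradient and material values are illustrative, not taken from the COMSOL model.

```python
import numpy as np

def cm_factor(f, eps_p, sig_p, eps_m, sig_m):
    """Complex Clausius-Mossotti factor at frequency f [Hz]."""
    w = 2 * np.pi * f
    ep = eps_p - 1j * sig_p / w        # complex permittivity of the particle
    em = eps_m - 1j * sig_m / w        # complex permittivity of the medium
    return (ep - em) / (ep + 2 * em)

eps0 = 8.854e-12
eps_m, sig_m = 78 * eps0, 1e-3         # aqueous medium (assumed values)
eps_p, sig_p = 2.5 * eps0, 2e-4        # polystyrene bead (assumed values)

r = 5e-6                               # particle radius [m]
grad_E2 = 1e13                         # gradient of |E|^2 [V^2/m^3] (assumed)
for f in (1e4, 1e5, 1e6):
    re_k = cm_factor(f, eps_p, sig_p, eps_m, sig_m).real
    F = 2 * np.pi * eps_m * r**3 * re_k * grad_E2   # time-averaged DEP force
    print(f"{f:.0e} Hz: Re[K] = {re_k:+.3f}, F = {F:.2e} N")
```

A negative Re[K] gives the repulsive (nDEP) force exploited by the slanted electrodes, and the r³ scaling is what makes size-based separation possible.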
Procedia PDF Downloads 183
477 Reducing Pressure Drop in Microscale Channel Using Constructal Theory
Authors: K. X. Cheng, A. L. Goh, K. T. Ooi
Abstract:
The effectiveness of microchannels in enhancing heat transfer has been demonstrated in the semiconductor industry. In order to tap the microscale heat transfer effects in macro geometries while overcoming cost and technological constraints, microscale passages were created in macro geometries machined using conventional fabrication methods. A cylindrical insert was placed within a pipe, and geometrical profiles were created on the outer surface of the insert to enhance heat transfer under steady-state single-phase liquid flow conditions. However, while heat transfer coefficient values above 10 kW/m²·K were achieved, the heat transfer enhancement was accompanied by an undesirable pressure drop increment. Therefore, this study aims to address the high pressure drop issue using Constructal theory, a universal design law for both animate and inanimate systems. Two designs based on Constructal theory were developed to study the effectiveness of Constructal features in reducing the pressure drop increment as compared to parallel channels, which are commonly found in microchannel fabrication. The hydrodynamic and heat transfer performance of the Tree insert and the Constructal fin (Cfin) insert was studied experimentally, and the underlying mechanisms were substantiated by numerical results. In technical terms, the objective is to achieve at least a comparable increment in both heat transfer coefficient and pressure drop, if not a higher increment in the former parameter. Results show that the Tree insert improved the heat transfer performance by more than 16 percent at low flow rates, as compared to the Tree-parallel insert. However, the heat transfer enhancement dropped to less than 5 percent at high Reynolds numbers, while the pressure drop increment stayed almost constant at 20 percent. This suggests that the Tree insert has better heat transfer performance in the low Reynolds number region. More importantly, the Cfin insert displayed improved heat transfer performance along with favourable hydrodynamic performance, as compared to the Cfin-parallel insert, at all flow rates in this study. At 2 L/min, the heat transfer enhancement was more than 30 percent, with a 20 percent pressure drop increment, as compared to the Cfin-parallel insert. Furthermore, a comparable increment in both heat transfer coefficient and pressure drop was observed at 8 L/min. In other words, the Cfin insert successfully achieved the objective of this study. Analysis of the results suggests that bifurcation of flows is effective in reducing the increment in pressure drop relative to the heat transfer enhancement. Optimising the geometries of the Constructal fins is therefore a potential future study towards achieving a bigger stride in energy efficiency at much lower cost.
Keywords: constructal theory, enhanced heat transfer, microchannel, pressure drop
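One way to read the reported percentages is as a simple ratio of heat transfer gain to pressure-drop penalty relative to the parallel-channel baseline. The sketch below computes such a ratio from the figures quoted in the abstract; the metric itself is an illustrative choice of ours, not the evaluation criterion used in the study.

```python
def performance_ratio(ht_gain_pct, dp_gain_pct):
    """Ratio of heat transfer enhancement to pressure-drop penalty,
    both relative to the parallel-channel baseline (> 1 is favorable)."""
    return (1 + ht_gain_pct / 100) / (1 + dp_gain_pct / 100)

# Figures reported in the abstract:
print(performance_ratio(30, 20))   # Cfin insert at 2 L/min  -> ~1.08
print(performance_ratio(16, 20))   # Tree insert, low flow   -> ~0.97
print(performance_ratio(5, 20))    # Tree insert, high Re    -> ~0.88
```

On this simple reading, only the Cfin insert delivers more enhancement than penalty, consistent with the abstract's conclusion.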
Procedia PDF Downloads 336
476 Processes and Application of Casting Simulation and Its Software
Authors: Surinder Pal, Ajay Gupta, Johny Khajuria
Abstract:
Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shuts, shrinkage porosity, and hard spots; and optimize the casting design to achieve the desired quality with high yield. The flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (such as thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on the process of modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is widely used to design equipment so that the final product will be as close to design specifications as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Every casting simulation package has its own strengths; Magma, for example, is best suited for crack simulation. The latest-generation software AutoCAST, developed at IIT Bombay, provides a host of functions to support method engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis. IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient solution and an advanced process tool, the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied.
Keywords: casting simulation software, simulation techniques, casting simulation, processes
Procedia PDF Downloads 474
475 New Recombinant Netrin-A Protein of Lucilia sericata Larvae by Bac-to-Bac Expression Vector System in Sf9 Insect Cells
Authors: Hamzeh Alipour, Masoumeh Bagheri, Abbasali Raz, Javad Dadgar Pakdel, Kourosh Azizi, Aboozar Soltani, Mohammad Djaefar Moemenbellah-Fard
Abstract:
Background: Maggot debridement therapy is an appropriate, effective, and controlled method that uses sterilized larvae of Lucilia sericata (L. sericata) to treat wounds. Netrin-A is an enzyme of the laminin family secreted from the salivary glands of L. sericata, with a central role in neural regeneration and angiogenesis. This study aimed at the production of a new recombinant Netrin-A protein of L. sericata larvae by the baculovirus expression vector system (BEVS) in Sf9 cells. Material and methods: In the first step, the gene structure was subjected to in silico studies, which included determination of antibacterial activity, prion formation risk, homology modeling, molecular docking analysis, and optimization of the recombinant protein. In the second step, the Netrin-A gene was cloned and amplified in the pTG19 vector. After digestion with the BamH1 and EcoR1 restriction enzymes, it was cloned into the pFastBac HTA vector. It was then transformed into DH10Bac competent cells, and the recombinant bacmid was subsequently transfected into insect Sf9 cells. The expressed recombinant Netrin-A was purified on Ni-NTA agarose. The protein was evaluated using SDS-PAGE and western blot, and its concentration was calculated with the Bradford assay method. Results: The bacmid vector structure with Netrin-A was successfully constructed and then expressed as Netrin-A protein in the Sf9 cell line. The molecular weight of this protein was 52 kDa, with 404 amino acids. In the in silico studies, we predicted that recombinant LSNetrin-A has antibacterial activity and no prion formation risk. The molecule has a high binding affinity to Neogenin and a lower affinity to the DCC-specific receptors. The signal peptide is located between amino acids 24 and 25. The concentration of the Netrin-A recombinant protein was calculated to be 48.8 μg/ml. It was confirmed that the gene characterized in our previous study encodes the L. sericata Netrin-A enzyme. Conclusions: The recombinant Netrin-A, a protein secreted in the L. sericata salivary glands, was successfully generated. Because L. sericata larvae are used in larval therapy, the findings of the present study could be useful to researchers in future studies on wound healing.
Keywords: blowfly, BEVS, gene, immature insect, recombinant protein, Sf9
Procedia PDF Downloads 91
474 Metabolic Profiling in Breast Cancer Applying Micro-Sampling of Biological Fluids and Analysis by Gas Chromatography – Mass Spectrometry
Authors: Mónica P. Cala, Juan S. Carreño, Roland J.W. Meesters
Abstract:
Recently, the collection of biological fluids on special filter papers has become a popular micro-sampling technique. In particular, the dried blood spot (DBS) micro-sampling technique has gained much attention and is currently applied in various life sciences research areas. As a result of this popularity, DBS are not only competing intensively with venous blood sampling but are at this moment widely applied in numerous bioanalytical assays, in particular in the screening of inherited metabolic diseases, pharmacokinetic modeling, and therapeutic drug monitoring. Recently, micro-sampling techniques have also been introduced in the "omics" areas, among them metabolomics. For a metabolic profiling study, we applied micro-sampling of biological fluids (blood and plasma) from healthy controls and from women with breast cancer. From blood samples, dried blood and plasma samples were prepared by spotting 8 µL of sample onto pre-cut 5-mm paper disks, followed by drying of the disks for 100 minutes. The dried disks were then extracted with 100 µL of methanol. From liquid blood and plasma samples, 40 µL were deproteinized with methanol, followed by centrifugation and collection of the supernatants. Supernatants and extracts were evaporated to dryness under nitrogen gas, and the residues were derivatized with O-methoxyamine and MSTFA. C17:0 methyl ester in heptane (10 ppm) was used as internal standard. Deconvolution and alignment of full-scan (m/z 50-500) MS data were done with AMDIS and SpectConnect (http://spectconnect.mit.edu) software, respectively. Statistical data analysis was done by Principal Component Analysis (PCA) using R software. The results obtained from our preliminary study indicate that the use of dried blood/plasma on paper disks could be a powerful new tool in metabolic profiling. Many of the metabolites observed in plasma (liquid/dried) were also positively identified in whole blood samples (liquid/dried). Whole blood could be a potential substitute matrix for plasma in metabolomic profiling studies, and micro-sampling techniques could likewise serve the collection of samples in clinical studies. It was concluded that the separation of the different sample methodologies (liquid vs. dried) as observed by PCA was due to the different sample treatment protocols applied. More experiments need to be done to confirm the observations, and a more rigorous validation of these micro-sampling techniques is needed. The novelty of our approach lies in the application of different biological fluid micro-sampling techniques for metabolic profiling.
Keywords: biofluids, breast cancer, metabolic profiling, micro-sampling
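The PCA step (performed in the study with R) can be sketched as follows on a hypothetical aligned GC-MS feature table; the sample counts, feature counts, and the injected treatment effect are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical aligned GC-MS feature table: 40 samples x 150 metabolite features,
# with a group offset mimicking a dried vs. liquid sample-treatment effect
X = rng.lognormal(mean=1.0, sigma=0.5, size=(40, 150))
X[:20, :30] *= 1.6                      # treatment-related intensity shift

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
labels = ["dried"] * 20 + ["liquid"] * 20
for lab in ("dried", "liquid"):
    idx = [i for i, l in enumerate(labels) if l == lab]
    print(lab, np.round(scores[idx].mean(axis=0), 2))   # group centroids on PC1/PC2
```

Separated group centroids in the score space correspond to the protocol-driven separation the authors observed between liquid and dried samples.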
Procedia PDF Downloads 410
473 Analysis of Urban Flooding in Wazirabad Catchment of Kabul City with Help of Geo-SWMM
Authors: Fazli Rahim Shinwari, Ulrich Dittmer
Abstract:
Like many megacities around the world, Kabul is facing severe problems due to the rising frequency of urban flooding. Since 2001, Kabul has experienced rapid population growth because of the repatriation of refugees and internal migration. Due to unplanned development, green areas inside the city and hilly areas within and around it have been converted into new housing towns, which has increased runoff. Trenches along the roadsides comprise the unplanned drainage network of the city, which carries the combined sewer flow. In the rainy season, overflow occurs, and after the streets become dry, the dust particles contaminate the air, a major cause of air pollution in Kabul city. In this study, a stormwater management model is introduced as a basis for a systematic approach to urban drainage planning in Kabul. For this purpose, Kabul city was delineated into 8 watersheds with the help of a one-meter-resolution LIDAR DEM. A stormwater management model was developed for the Wazirabad catchment using available data and literature values. Due to the lack of long-term meteorological data, the model was run only for the hourly rainfall data of a rain event that occurred in April 2016. The rain event from 1st to 3rd April, with a maximum intensity of 3 mm/hr, caused huge flooding in the Wazirabad catchment of Kabul city. The model estimated flooding at several points of the catchment; as actual measurements of flooding were not possible, the results were compared with information obtained from local people, Kabul Municipality, and the Capital Region Independent Development Authority. The model helped to identify areas where flooding occurred because of insufficient capacity of the drainage system, and areas where the main reason for flooding is blockage of the drainage canals. The model was then used for further analysis to find a sustainable solution to the problem. The option of constructing new canals was analyzed, and two new canals were proposed that would reduce the flooding frequency in the Wazirabad catchment of Kabul city. By developing a methodology to build a stormwater management model from digital data and information, the study fulfilled its primary objective, and a similar methodology can be used for other catchments of Kabul city to prepare emergency and long-term plans for the drainage system of the city.
Keywords: urban hydrology, storm water management, modeling, SWMM, GEO-SWMM, GIS, identification of flood vulnerable areas, urban flooding analysis, sustainable urban drainage
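The link between land-cover change and runoff that motivates the study can be illustrated with a back-of-envelope rational-method estimate, Q = 0.00278·C·i·A. SWMM itself uses more elaborate nonlinear-reservoir routing, so this is only a stand-in sketch, and the runoff coefficients and catchment area below are assumptions.

```python
def rational_peak_flow(c, i_mm_per_hr, area_ha):
    """Rational method: peak runoff Q [m^3/s] = 0.00278 * C * i * A,
    with intensity i in mm/h and area A in hectares."""
    return 0.00278 * c * i_mm_per_hr * area_ha

i = 3.0          # max rainfall intensity of the April 2016 event [mm/h]
area = 400.0     # hypothetical subcatchment area [ha]
print(rational_peak_flow(0.25, i, area))  # green/hilly cover (C ~ 0.25): ~0.83 m3/s
print(rational_peak_flow(0.80, i, area))  # dense unplanned housing (C ~ 0.80): ~2.67 m3/s
```

Tripling of the peak flow from imperviousness alone, even under a modest 3 mm/h event, is the kind of loading the undersized trench network cannot convey.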
Procedia PDF Downloads 151
472 Computational Modelling of pH-Responsive Nanovalves in Controlled-Release System
Authors: Tomilola J. Ajayi
Abstract:
A category of nanovalve system containing an α-cyclodextrin (α-CD) ring on a stalk tethered to the pores of mesoporous silica nanoparticles (MSN) is theoretically and computationally modelled. The nanovalve functions to control the opening and blocking of the MSN pores for an efficient targeted drug-release system. Modeling of the nanovalves is based on the interaction between α-CD and the stalk (p-anisidine) in relation to pH variation. Conformational analysis was carried out prior to the formation of the inclusion complex to find the global minimum of both the neutral and the protonated stalk. The B3LYP/6-311G**(d, p) basis set was employed to obtain all theoretically possible conformers of the stalk. Six conformers were taken into consideration, and the dihedral angle (θ) around the reference atom (N17) of the p-anisidine stalk was scanned from 0° to 360° at 5° intervals. The most stable conformer was obtained at a dihedral angle of 85.3° and was fully optimized at the B3LYP/6-311G**(d, p) level of theory. The most stable conformer obtained from the conformational analysis was used as the starting structure to create the inclusion complexes. Nine complexes were formed by moving the neutral guest into the α-CD cavity along the Z-axis in 1 Å steps, while keeping the distance between the dummy atom and the OMe oxygen atom on the stalk restricted. The dummy atom and the carbon atoms of the α-CD structure were equally restricted for orientation A (see Scheme 1). The structures generated at each step were optimized with the B3LYP/6-311G**(d, p) method to determine their energy minima. Protonation of the nitrogen atom on the stalk occurs at acidic pH, leading to an unsatisfactory host-guest interaction in the nanogate; hence there is dethreading. The high required interaction energy and the conformational change are theoretically established to drive the release of α-CD at a certain pH. The release was found to occur between pH 5 and 7, which agreed with reported experimental results. In this study, we applied the theoretical model to the prediction of the experimentally observed pH-responsive nanovalves, which enable the blocking and opening of mesoporous silica nanoparticle pores for a targeted drug-release system. Our results show that two major factors are responsible for the cargo release at acidic pH: the higher interaction energy needed for the complex/nanovalve to persist after protonation, and the conformational change upon protonation, together drive the release upon the slight pH change from 5 to 7.
Keywords: nanovalves, nanogate, mesoporous silica nanoparticles, cargo
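The post-processing of such a dihedral scan, locating the global minimum and weighting conformers, can be sketched as below; the scan energies here are a synthetic stand-in shaped to have a minimum near 85.3°, not the actual B3LYP results.

```python
import numpy as np

kT = 0.0019872 * 298.15           # k_B in kcal/mol/K times T: ~0.59 kcal/mol at 298 K

angles = np.arange(0, 360, 5)     # dihedral scan around N17, 5-degree steps
# Synthetic relative energies [kcal/mol] with a minimum placed near 85 degrees
energies = (2.0 * (1 - np.cos(np.radians(angles - 85.3)))
            + 0.3 * np.cos(np.radians(3 * angles)))

i_min = int(np.argmin(energies))
rel = energies - energies[i_min]             # energies relative to the global minimum
pop = np.exp(-rel / kT)
pop /= pop.sum()                             # Boltzmann populations over the scan grid

print(f"global minimum at {angles[i_min]} deg")
print(f"population of the minimum-energy conformer: {pop[i_min]:.2%}")
```

The same minimum-locating step is what identifies the 85.3° conformer used as the starting structure for building the inclusion complexes.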
Procedia PDF Downloads 122
471 Investigating the Competencies Required for Sustainable Entrepreneurship Development in Agricultural Higher Education
Authors: Ehsan Moradi, Parisa Paikhaste, Amir Alam Beigi, Seyedeh Somayeh Bathaei
Abstract:
The need for entrepreneurial sustainability is as important as entrepreneurship itself. By transferring competencies within a sustainable entrepreneurship framework, entrepreneurship education can make a significant contribution to the effectiveness of businesses, especially for start-up entrepreneurs. This study analyzes the competencies students require for the development of sustainable entrepreneurship. It is an applied causal study in nature and a field study in terms of data collection. The main purpose of this research project is to study and explain the dimensions of sustainable entrepreneurship competencies among agricultural students. The statistical population consists of 730 junior and senior undergraduate students of the Campus of Agriculture and Natural Resources, University of Tehran. The sample size was determined to be 120 using Cochran's formula, and the convenience sampling method was used. Face validity, structure validity, and diagnostic methods were used to evaluate the validity of the research tool, and Cronbach's alpha and composite reliability to evaluate its reliability. Structural equation modeling (SEM) with the confirmatory factor analysis (CFA) method was used to prepare a measurement model for data processing. The results showed that seven key dimensions shape sustainable entrepreneurial development competencies: systems thinking competence (STC), embracing diversity and interdisciplinarity (EDI), foresighted thinking (FTC), normative competence (NC), action competence (AC), interpersonal competence (IC), and strategic management competence (SMC). It was found that acquiring skills in SMC, by creating the ability to plan for sustainable entrepreneurship through the relevant mechanisms, can improve entrepreneurship among students who adopt a sustainability attitude. While increasing students' analytical ability regarding social and environmental needs and challenges and emphasizing curriculum updates, AC calls for more attention to the relationship between the curriculum and its content in the form of programs promoting an entrepreneurship culture. In the field of EDI, it was found that the success of entrepreneurs in terms of sustainability, and the business sustainability of start-up entrepreneurs, depends on their interdisciplinary thinking. It was also found that STC plays an important role in explaining the relationship between sustainability and entrepreneurship. Therefore, focusing on these competencies in agricultural education to train start-up entrepreneurs can lead to sustainable entrepreneurship in the agricultural higher education system.
Keywords: sustainable entrepreneurship, entrepreneurship education, competency, agricultural higher education
Procedia PDF Downloads 143
470 The Production of Reinforced Insulation Bricks out of the Concentration of Ganoderma lucidum Fungal Inoculums and Cement Paste
Authors: Jovie Esquivias Nicolas, Ron Aldrin Lontoc Austria, Crisabelle Belleza Bautista, Mariane Chiho Espinosa Bundalian, Owwen Kervy Del Rosario Castillo, Mary Angelyn Mercado Dela Cruz, Heinrich Theraja Recana De Luna, Chriscell Gipanao Eustaquio, Desiree Laine Lauz Gilbas, Jordan Ignacio Legaspi, Larah Denise David Madrid, Charles Linelle Malapote Mendoza, Hazel Maxine Manalad Reyes, Carl Justine Nabora Saberdo, Claire Mae Rendon Santos
Abstract:
In response to the global race to discover the next advanced sustainable material that will reduce our ecological footprint, the researchers aimed to create a masonry unit suitable for physical edifices and other construction facets. Different proven studies have concluded that mycelium, when dried, can be used as a robust and waterproof building material that can be grown into explicit forms, thus reducing processing requirements. Hypothesizing inclusive measures to attest to fungi's impressive structural qualities and absorbency, the researchers set out to perform comparative analyses in creating mycelium bricks from mushroom spores of G. lucidum. Three treatments were designed to identify the most suitable concentration of clay and substrate mixings. The substrate bags fixed with 30% clay and 70% mixings showed the highest frequencies of full occupation by fungal mycelia. Subsequently, sorted white portions from this treatment were settled in a thermoplastic mold and burnt. Three proportional concentrations of cultivated substrate and cement were also prepared to gather results on the variation in the weights of the bricks in the Water Absorption Test and the Durability Test. Fungal inoculums with cement solutions showed small to moderate decreases and increases in load. This indicates that the treatments did not show any significant difference in strength, efficiency, or absorption capacity. Each of the concentrations is equally valid and could be used to support the worldwide demand for bricks while also taking into consideration the recovery of our nature.
Keywords: mycelium, fungi, fungal mycelia, durability test, water absorption test
Procedia PDF Downloads 133
469 Development of Gully Erosion Prediction Model in Sokoto State, Nigeria, Using Remote Sensing and Geographical Information System Techniques
Authors: Nathaniel Bayode Eniolorunda, Murtala Abubakar Gada, Sheikh Danjuma Abubakar
Abstract:
The challenge of erosion in the study area is persistent, suggesting the need for a better understanding of the mechanisms that drive it. Thus, the study evolved a predictive erosion model (RUSLE_Sok), deploying Remote Sensing (RS) and Geographical Information System (GIS) tools. The nature and pattern of the factors of erosion were characterized, while soil losses were quantified. The factors' impacts were also measured, and the morphometry of gullies was described. Data on the five factors of RUSLE and the distances to settlements, rivers, and roads (K, R, LS, P, C, DS, DRd, and DRv) were combined and processed following standard RS and GIS algorithms. The Harmonized World Soil Database (HWSD), a Shuttle Radar Topography Mission (SRTM) image, Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), a Sentinel-2 image accessed and processed within the Google Earth Engine, the road network, and settlements were the data combined and calibrated into the factors for erosion modeling. A gully morphometric study was conducted at purposively selected sites. The factors of soil erosion showed low, moderate, and high patterns. Soil losses ranged from 0 to 32.81 tons/ha/year, classified into low (97.6%), moderate (0.2%), severe (1.1%), and very severe (1.05%) forms. The multiple regression analysis shows that the factors statistically significantly predicted soil loss, F(8, 153) = 55.663, p < .0005. Except for the C-factor, which has a negative coefficient, all factors were positive, with contributions in the order LS > C > R > P > DRv > K > DS > DRd. Gullies generally range from less than 100 m to about 3 km in length. The average minimum and maximum depths at gully heads are 0.6 and 1.2 m, while those at mid-stream are 1 and 1.9 m, respectively. The minimum downstream depth is 1.3 m, while the maximum is 4.7 m. Deeper gullies exist in proximity to rivers. With minimum and maximum gully elevation values ranging between 229 and 338 m and an average slope of about 3.2%, the study area is relatively flat. The study concluded that the major erosion influencers in the study area are topography and vegetation cover, and that RUSLE_Sok predicted soil loss more effectively than the ordinary RUSLE. The adoption of conservation measures such as tree planting and contour ploughing on sloping farmlands is recommended.
Keywords: RUSLE_Sok, Sokoto, google earth engine, sentinel-2, erosion
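The core overlay of such a model is the cell-wise RUSLE product A = R·K·LS·C·P. The sketch below applies it to small synthetic rasters and bins the result into severity classes; the factor ranges and class thresholds are illustrative assumptions, and the distance factors (DS, DRd, DRv) specific to RUSLE_Sok are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (100, 100)                      # toy raster grid

R  = rng.uniform(250, 450, shape)       # rainfall erosivity [MJ.mm/(ha.h.yr)]
K  = rng.uniform(0.05, 0.30, shape)     # soil erodibility
LS = rng.uniform(0.1, 4.0, shape)       # slope length-steepness
C  = rng.uniform(0.05, 0.6, shape)      # cover-management
P  = rng.uniform(0.5, 1.0, shape)       # support practice

A = R * K * LS * C * P                  # soil loss [t/ha/yr], cell by cell

# Hypothetical severity classes (thresholds are illustrative)
classes = np.digitize(A, bins=[10, 20, 30])   # 0 low, 1 moderate, 2 severe, 3 very severe
for name, c in zip(("low", "moderate", "severe", "very severe"), range(4)):
    print(f"{name:12s}: {np.mean(classes == c):.1%} of cells")
```

In a GIS workflow, the same per-cell multiplication is carried out on georeferenced factor rasters before the classified map is produced.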
Procedia PDF Downloads 73
468 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism
Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape
Abstract:
Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modelling was applied using machine learning algorithms to analyse motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15% compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism. These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.
Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders
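A minimal sketch of the kind of gait-variability comparison reported (e.g., the ~25% longer cycle time) is shown below on synthetic per-step cycle times; the distributions and the coefficient-of-variation metric are illustrative choices, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic per-step gait cycle times [s], as would be segmented from
# wearable kinematic streams (means chosen to echo the reported ~25% gap)
control = rng.normal(loc=1.00, scale=0.04, size=200)
asd     = rng.normal(loc=1.25, scale=0.10, size=200)   # longer and more variable

def cv(x):
    """Coefficient of variation, a common gait-variability metric."""
    return x.std(ddof=1) / x.mean()

print(f"control: mean {control.mean():.2f} s, CV {cv(control):.1%}")
print(f"ASD:     mean {asd.mean():.2f} s, CV {cv(asd):.1%}")
print(f"cycle-time increase: {(asd.mean()/control.mean() - 1):.0%}")
```

Group-level summaries like these typically feed the downstream machine-learning models as engineered features.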
Procedia PDF Downloads 21
467 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning
Authors: Yangzhi Li
Abstract:
Network transfer of information and performance customization is now a viable method of digital industrial production in the era of Industry 4.0. Robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and the increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to autonomous robot recognition, which include high-performance computing, physical system modeling, extensive sensor coordination, and dataset deep learning, have not yet been explored in intelligent construction, and the relevant transdisciplinary theory and practice research still has specific gaps. Optimizing high-performance computing and autonomous visual guidance and recognition technologies improves the robot's grasp of the scene and its capacity for autonomous operation. Intelligent vision guidance technology for industrial robots faces a serious issue with camera calibration, and the use of intelligent visual guidance and identification technologies in industrial production imposes strict accuracy requirements: visual recognition systems struggle with precision, which directly impacts the effectiveness and standard of industrial production and calls for strengthened research on positioning precision in recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts utilizing machine learning algorithms is proposed. This study identifies the position of target components by detecting the information at the boundaries and corners of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, operational processing systems assign them to the same coordinate system based on their locations and postures. Inclination detection on the RGB image and verification against the depth image are used to determine the component's current posture. Finally, a virtual environment model for the robot's obstacle-avoidance route is constructed using the point cloud information.
Keywords: robotic construction, robotic assembly, visual guidance, machine learning
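The boundary/corner/aspect-ratio step can be sketched as a bounding-box computation on a dense point cloud, as below; the synthetic plank-shaped cloud and the axis-aligned box are simplifying assumptions (a real pipeline would likely use an oriented box and learned detectors).

```python
import numpy as np

rng = np.random.default_rng(11)
# Synthetic dense point cloud of a plank-like component (meters), with sensor noise
cloud = rng.uniform([0, 0, 0], [1.2, 0.3, 0.05], size=(5000, 3))
cloud += rng.normal(scale=0.002, size=cloud.shape)

mins, maxs = cloud.min(axis=0), cloud.max(axis=0)
dims = np.sort(maxs - mins)[::-1]            # sorted edge lengths of the bounding box
aspect = dims[0] / dims[1]                   # length-to-width aspect ratio
centroid = cloud.mean(axis=0)                # position estimate for grasp planning

print(f"bounding box {dims.round(3)} m, aspect ratio {aspect:.2f}")
print(f"centroid {centroid.round(3)}")       # target position in the shared frame
```

Matching the measured aspect ratio against a catalogue of modular component dimensions is one simple way to decide which part the robot is looking at.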
Procedia PDF Downloads 86
466 A Reduced Ablation Model for Laser Cutting and Laser Drilling
Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz
Abstract:
In laser cutting as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape (the shape of the cut faces and the hole shape, respectively) approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulsed drilling of metals is identified, its underlying mechanism numerically implemented, tested, and clearly confirmed by comparison with experimental data. In detail, there is now a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. In particular, this simulation requires much less in the way of resources, such that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders; the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced, and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs. This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means. This means that new methods in manufacturing, such as self-optimization, can be executed much faster. However, the software's potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses up considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.
Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling
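The metamodeling idea — sample the reduced model over the process window once, then query a cheap surrogate in real time, as behind a slider interface — can be sketched as follows. The "reduced model" here is a stand-in function, and the parameter ranges and polynomial degree are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def reduced_model(power, speed):
    """Stand-in for the reduced ablation model: a quality criterion
    (say, kerf width in mm) for a laser power / feed speed pair."""
    return 0.2 * power / (speed + 0.5) + 0.05 * np.sqrt(power)

# Sample the reduced model over an assumed process window
powers = np.linspace(1.0, 6.0, 25)            # kW
speeds = np.linspace(0.5, 5.0, 25)            # m/min
PP, SS = np.meshgrid(powers, speeds)
X = np.column_stack([PP.ravel(), SS.ravel()])
y = reduced_model(PP, SS).ravel()

# Fit a cheap polynomial surrogate -- the metamodel behind the process map
meta = make_pipeline(PolynomialFeatures(degree=3), LinearRegression()).fit(X, y)

query = np.array([[4.2, 2.8]])                # slider position: 4.2 kW, 2.8 m/min
print(f"metamodel: {meta.predict(query)[0]:.4f}  exact: {reduced_model(4.2, 2.8):.4f}")
```

Once fitted, the surrogate can be evaluated thousands of times per second, which is what makes interactive process maps and numerical optimization over many parameters practical.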
Procedia PDF Downloads 213465 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and prone to overlooking important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.Keywords: cost prediction, machine learning, project management, random forest, neural networks
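A minimal sketch of the Random Forest part of such a workflow is shown below, using synthetic activity-level records with illustrative feature names (scope changes, material-delay days) rather than the study's dataset; the feature-importance printout mirrors how key cost drivers can be surfaced.

```python
# Sketch: predict cost overruns with a Random Forest and rank cost drivers.
# All data are synthetic stand-ins for real activity-level project records.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "planned_cost": rng.uniform(1e4, 1e6, n),
    "scope_changes": rng.poisson(2, n),
    "material_delay_days": rng.exponential(5, n),
    "crew_size": rng.integers(2, 30, n),
})
# Synthetic target: overrun grows with scope changes and material delays.
df["overrun"] = (0.04 * df.scope_changes + 0.01 * df.material_delay_days
                 + rng.normal(0, 0.02, n)) * df.planned_cost

X, y = df.drop(columns="overrun"), df["overrun"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 1))
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.2f}")   # importances highlight likely cost drivers
```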
Procedia PDF Downloads 51464 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose
Authors: Kumar Shashvat, Amol P. Bhondekar
Abstract:
Among the five senses, smell is the most evocative and the least understood; odor testing has long seemed mysterious, and odor data remain unfamiliar to most practitioners. The recognition and classification of odors is therefore an important problem: the ability to smell a product and predict whether it is still fit for use or has become unsuitable for consumption is well worth capturing in a model. The common industrial standard for such classification is color-based; however, odor can be a better discriminator than color, and an odor-based classifier incorporated into a machine would be highly useful. For cataloguing the odors of peas, trees, and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and are widely applied, but they cannot make effective use of unlabeled information. In such scenarios, generative approaches are more applicable, as they can handle problems in which the variability of possible input vectors is enormous. In machine learning, generative models are used either to model data directly or as an intermediate step in forming a conditional probability density function. Here, Linear Discriminant Analysis and the Naive Bayes classifier were used to classify the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule together with a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main advantage of generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variable. The instrument used for artificial odor sensing and classification is an electronic nose, a device designed to mimic the human sense of smell by analyzing individual chemicals or chemical mixtures. The experimental results were evaluated using the performance measures of accuracy, precision, and recall, and they show that Linear Discriminant Analysis outperformed the Naive Bayes classifier on the cashew dataset.Keywords: odor classification, generative models, naive bayes, linear discriminant analysis
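The reported comparison can be reproduced in outline with scikit-learn, as in the sketch below; the data are synthetic stand-ins for multi-channel electronic-nose readings, not the cashew measurements.

```python
# Sketch: compare two generative classifiers (LDA, Gaussian Naive Bayes) on
# synthetic 8-channel "e-nose" data, scored by accuracy, precision, recall.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(1)
n, channels = 300, 8          # samples per class, sensor channels
X = np.vstack([rng.normal(0.0, 1.0, (n, channels)),    # class 0: "fresh"
               rng.normal(0.8, 1.0, (n, channels))])   # class 1: "spoiled"
y = np.array([0] * n + [1] * n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

for clf in (LinearDiscriminantAnalysis(), GaussianNB()):
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(type(clf).__name__,
          f"acc={accuracy_score(y_te, pred):.2f}",
          f"prec={precision_score(y_te, pred):.2f}",
          f"rec={recall_score(y_te, pred):.2f}")
```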
Procedia PDF Downloads 387463 The Usage of Bridge Estimator for Hegy Seasonal Unit Root Tests
Authors: Huseyin Guler, Cigdem Kosar
Abstract:
The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important feature of many economic time series: some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have high variance. It is therefore very important to handle seasonality properly in seasonal macroeconomic data. Several methods exist to eliminate the impacts of seasonality in time series. One of them is filtering the data; however, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method is to use seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes, which can be modelled using seasonal dummies, but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it. It is not suitable to use seasonal dummies for modeling such seasonally non-stationary series; instead, it is necessary to take the seasonal difference if there are seasonal unit roots in the series. Different methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary first to choose the lag length and determine any deterministic components (i.e., a constant and trend), and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests. Recent studies show that Bridge estimators perform well in selecting the optimal lag length while differentiating nonstationary from stationary models for nonseasonal data. The advantage of this estimator is that it eliminates the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test for seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare its size and power with those of the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.Keywords: bridge estimators, HEGY test, model selection, seasonal unit root
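The two ingredients being combined can be sketched as follows, under one common textbook parameterization of the quarterly HEGY filters (sign conventions vary across sources): the auxiliary regressors are built from the series, and a bridge penalty sum(|beta|^q) with 0 < q < 1 is minimized jointly over the pi-coefficients and augmentation lags, shrinking redundant lags toward zero in a single step. The toy series, tuning constants, and smoothing constant are illustrative, not the paper's Monte Carlo design.

```python
# Sketch: quarterly HEGY regressors plus a bridge-penalized least-squares fit.
# One common sign convention is used; details vary in the literature.
import numpy as np
from scipy.optimize import minimize

def hegy_regressors(y):
    y1 = y[3:] + y[2:-1] + y[1:-2] + y[:-3]      # zero-frequency component
    y2 = -(y[3:] - y[2:-1] + y[1:-2] - y[:-3])   # semiannual (Nyquist)
    y3 = -(y[2:] - y[:-2])                       # annual frequency pair
    d4 = y[4:] - y[:-4]                          # seasonal (4th) difference
    return y1, y2, y3, d4

def bridge_obj(beta, X, z, lam=1.0, q=0.5, eps=1e-8):
    """RSS + bridge penalty; eps smooths |b|**q at zero for the optimizer."""
    r = z - X @ beta
    return r @ r + lam * np.sum((np.abs(beta) + eps) ** q)

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=200))              # toy nonstationary series
y1, y2, y3, d4 = hegy_regressors(y)
T = len(d4)
z = d4[1:]                                       # regressand
X = np.column_stack([y1[1:T], y2[1:T],           # pi1, pi2 regressors (t-1)
                     y3[1:T], y3[2:T + 1],       # pi3 (t-2), pi4 (t-1)
                     d4[:-1]])                   # one augmentation lag
fit = minimize(bridge_obj, np.zeros(X.shape[1]), args=(X, z),
               method="Nelder-Mead")
print(np.round(fit.x, 3))  # near-zero entries flag coefficients to drop
```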
Procedia PDF Downloads 339462 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks
Authors: Shin-Pin Tseng
Abstract:
Nowadays, numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing because they provide moderate security and relieve the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. To address this problem, a new SAC FO-CDMA network using a partial M-sequence (PMS) code is presented in this study. Because the proposed PMS code is derived from the M-sequence code, a system using the PMS code can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network to enhance the overall network performance. With system flexibility in mind, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler that broadcasts the transmitted optical signals to the input port of each optical receiver; the parameter N for the PMS code is the code length. In addition, the proposed SAC network uses superluminescent diodes (SLDs) as light sources, which saves considerable system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders according to the assigned PMS codewords. On the other hand, each optical receiver includes a 1×2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. To simplify the subsequent analysis, some assumptions were made. First, the unpolarized SLD has a flat power spectral density (PSD). Second, the received optical power at the input port of each optical receiver is the same. Third, all photodiodes in the proposed network have the same electrical properties. Fourth, transmitting '1' and '0' is equally probable. Subsequently, taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network performs about 25% better than networks using other codes at a BER of 10⁻⁹, because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes has an opportunity to be applied in next-generation optical networks.Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG
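As background to the code construction, the sketch below generates a (0,1) M-sequence with a linear-feedback shift register and verifies the fixed in-phase cross-correlation (IPCC) between its cyclic shifts, which is the property the proposed PMS codes are built around; the paper's partial M-sequence construction itself is not reproduced, and the short length-7 code is purely illustrative.

```python
# Sketch: LFSR-generated M-sequence and the fixed IPCC of its cyclic shifts.
import numpy as np

def m_sequence(taps, m):
    """0/1 M-sequence of length 2**m - 1 from Fibonacci LFSR feedback taps."""
    state, seq = [1] * m, []
    for _ in range(2**m - 1):
        seq.append(state[-1])                 # output the last register bit
        fb = 0
        for t in taps:                        # XOR the tapped positions
            fb ^= state[t - 1]
        state = [fb] + state[:-1]             # shift the register
    return np.array(seq)

seq = m_sequence(taps=(3, 1), m=3)            # x^3 + x + 1, length N = 7
codes = [np.roll(seq, k) for k in range(7)]   # code family = cyclic shifts
for i in range(7):
    for j in range(i + 1, 7):
        assert codes[i] @ codes[j] == 2       # fixed IPCC = 2**(m - 2)
print("weight:", seq.sum(), "| IPCC between any two distinct codes: 2")
```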
Procedia PDF Downloads 384461 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and prone to overlooking important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
Procedia PDF Downloads 36460 A Multi-Scale Study of Potential-Dependent Ammonia Synthesis on IrO₂ (110): DFT, 3D-RISM, and Microkinetic Modeling
Authors: Shih-Huang Pan, Tsuyoshi Miyazaki, Minoru Otani, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
Ammonia (NH₃) is crucial in renewable energy and agriculture, yet its traditional production via the Haber-Bosch process faces challenges due to the inherent inertness of nitrogen (N₂) and the need for high temperatures and pressures. The electrocatalytic nitrogen reduction (ENRR) presents a more sustainable option, functioning at ambient conditions. However, its advancement is limited by selectivity and efficiency challenges due to the competing hydrogen evolution reaction (HER). The critical roles of protonation of N-species and HER highlight the necessity of selecting optimal catalysts and solvents to enhance ENRR performance. Notably, transition metal oxides, with their adjustable electronic states and excellent chemical and thermal stability, have shown promising ENRR characteristics. In this study, we use density functional theory (DFT) methods to investigate the ENRR mechanisms on IrO₂ (110), a material known for its tunable electronic properties and exceptional chemical and thermal stability. Employing the constant electrode potential (CEP) model, where the electrode-electrolyte interface is treated as a polarizable continuum with implicit solvation, and adjusting electron counts to equalize work functions in the grand canonical ensemble, we further incorporate the advanced 3D Reference Interaction Site Model (3D-RISM) to accurately determine the ENRR limiting potential across various solvents and pH conditions. Our findings reveal that the limiting potential for ENRR on IrO₂ (110) is significantly more favorable than for HER, highlighting the efficiency of the IrO₂ catalyst for converting N₂ to NH₃. This is supported by the optimal *NH₃ desorption energy on IrO₂, which enhances the overall reaction efficiency. Microkinetic simulations further predict a promising NH₃ production rate, even at the solution's boiling point, reinforcing the catalytic viability of IrO₂ (110). This comprehensive approach provides an atomic-level understanding of the electrode-electrolyte interface in ENRR, demonstrating the practical application of IrO₂ in electrochemical catalysis. The findings provide a foundation for developing more efficient and selective catalytic strategies, potentially revolutionizing industrial NH₃ production.Keywords: density functional theory, electrocatalyst, nitrogen reduction reaction, electrochemistry
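The limiting-potential bookkeeping can be illustrated in a few lines: under a computational-electrode treatment, each proton-electron transfer step's free energy shifts by -eU, so the limiting potential is fixed by the most endergonic step. All ΔG values below are hypothetical placeholders, not the paper's DFT/3D-RISM results; they are chosen only so that the printed comparison matches the qualitative finding that ENRR is favored over HER.

```python
# Sketch: read a limiting potential off a free-energy diagram. The dG values
# (eV per H+ + e- transfer) are hypothetical, not the paper's results.
nrr_steps = {
    "*N2 -> *NNH":   0.85,
    "*NNH -> *NNH2": 0.40,
    "*NNH2 -> *N":  -0.30,
    "*N -> *NH":     0.20,
    "*NH -> *NH2":   0.35,
    "*NH2 -> *NH3":  0.55,
}
her_step = {"* + H+ + e- -> *H": 0.95}  # hypothetical HER step

def limiting_potential(steps: dict) -> float:
    """U_L = -max(dG)/e for a cathodic (reduction) pathway, in volts."""
    return -max(steps.values())

u_nrr, u_her = limiting_potential(nrr_steps), limiting_potential(her_step)
print(f"ENRR limiting potential: {u_nrr:.2f} V (illustrative)")
print(f"HER  limiting potential: {u_her:.2f} V (illustrative)")
# The less negative limiting potential marks the thermodynamically easier path.
print("ENRR favored" if u_nrr > u_her else "HER favored")
```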
Procedia PDF Downloads 17459 Impacts of Climate Change and Natural Gas Operations on the Hydrology of Northeastern BC, Canada: Quantifying the Water Budget for Coles Lake
Authors: Sina Abadzadesahraei, Stephen Déry, John Rex
Abstract:
Climate research has repeatedly identified strong associations between anthropogenic emissions of 'greenhouse gases' and observed increases in global mean surface air temperature over the past century. Studies have also demonstrated that the degree of warming varies regionally. Canada is not exempt from this situation, and evidence is mounting that climate change is beginning to cause diverse impacts in both environmental and socio-economic spheres of interest. For example, northeastern British Columbia (BC), whose climate is controlled by a combination of maritime, continental and arctic influences, is warming at a greater rate than the remainder of the province. There are indications that these changing conditions are already leading to shifting patterns in the region's hydrological cycle, and thus its available water resources. Coincident with these changes, northeastern BC is undergoing rapid development for oil and gas extraction: this depends largely on subsurface hydraulic fracturing ('fracking'), which uses enormous amounts of freshwater. While this industrial activity has made substantial contributions to regional and provincial economies, it is important to ensure that sufficient and sustainable water supplies are available for all those dependent on the resource, including ecological systems. This in turn demands a comprehensive understanding of how water in all its forms interacts with landscapes and the atmosphere, and of the potential impacts of changing climatic conditions on these processes. The aim of this study is therefore to characterize and quantify all components of the water budget in the small watershed of Coles Lake (141.8 km², 100 km north of Fort Nelson, BC), through a combination of field observations and numerical modelling. Baseline information will aid the assessment of the sustainability of current and future plans for freshwater extraction by the oil and gas industry, and will help to maintain the precarious balance between economic and environmental well-being. This project exemplifies interdisciplinary research, in that it not only examines the hydrology of the region but also investigates how natural gas operations and growth can affect water resources. A fruitful collaboration between academia, government and industry has therefore been established to fulfill the objectives of this research in a meaningful manner. This project aims to provide numerous benefits to BC communities, and its outcomes and detailed data can be a valuable asset to researchers examining the effect of climate change on water resources worldwide.Keywords: northeastern British Columbia, water resources, climate change, oil and gas extraction
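The bookkeeping behind such a water budget can be sketched simply: the change in storage equals precipitation minus evapotranspiration minus runoff minus withdrawals, with areal depths converted to volumes over the 141.8 km² watershed. All monthly values and the withdrawal figure below are hypothetical placeholders, not Coles Lake observations.

```python
# Sketch: monthly watershed water budget. Inputs are hypothetical depths (mm);
# only the watershed area (141.8 km^2) comes from the study description.
AREA_KM2 = 141.8

precip_mm = [30, 25, 28, 35, 50, 70, 80, 65, 45, 40, 35, 32]
et_mm     = [ 2,  3,  8, 20, 45, 70, 75, 55, 30, 10,  4,  2]
runoff_mm = [ 5,  5, 10, 40, 35, 20, 15, 12, 10,  8,  6,  5]

def mm_to_m3(depth_mm: float, area_km2: float) -> float:
    """Convert an areal depth (mm) to a volume (m^3) over the watershed."""
    return depth_mm / 1000.0 * area_km2 * 1e6

withdrawal_m3 = 50_000.0  # hypothetical monthly industrial withdrawal
for month, (p, et, q) in enumerate(zip(precip_mm, et_mm, runoff_mm), start=1):
    ds = mm_to_m3(p - et - q, AREA_KM2) - withdrawal_m3
    print(f"month {month:2d}: storage change {ds / 1e6:+.2f} million m^3")
```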
Procedia PDF Downloads 263458 Bibliometric Analysis of Risk Assessment of Inland Maritime Accidents in Bangladesh
Authors: Armana Huq, Wahidur Rahman, Sanwar Kader
Abstract:
Inland waterways in Bangladesh play an important role in providing comfortable and low-cost transportation, yet maritime accidents claim many lives and create unwanted hazards every year. This article presents a comprehensive review of inland waterway accidents in Bangladesh, together with a comparative study of international and local research on inland maritime accidents. Articles from the inland waterway domain are analyzed in depth to give a comprehensive overview of the nature of the academic work, the accident and risk management process, and the statistical analyses applied. It is found that empirical analysis based on available statistical data dominates the research domain. For this study, the major maritime accident-related works of the last four decades in Bangladesh (1981-2020) are analyzed to prepare a bibliometric analysis. A study of accidents involving passenger vessels during 1995-2005 indicates that the predominant causes of accidents in the inland waterways of Bangladesh were collision and adverse weather (77%), of which collision due to human error alone accounted for 56% of all accidents. Another study reports that collisions were the major cause of waterway accidents (60.3%) during 2005-2015; about 92% of these collisions occurred through direct contact with another vessel, and the remaining 8% through contact with permanent obstructions on waterway routes. An overall analysis of the last 25 years (1995-2019) again shows collision as the main accident type, accounting for about 50.3% of accidents, followed by cyclone or storm (17%), overloading (11.3%), physical failure (10.3%), excessive waves (5.1%), and others (6%). Very few notable works test or compare methods, propose new risk management methods, or address modeling and uncertainty treatment. The purpose of this paper is to provide an overview of the evolution of the marine accident research domain for the inland waterways of Bangladesh, and to introduce new ideas and methods to bridge the gap between the international and national inland maritime research domains, which can be a catalyst for a safer and more sustainable water transportation system in Bangladesh. A further objective is to guide national maritime authorities and international organizations in implementing risk management processes for shipping accident prevention in waterway areas.Keywords: inland waterways, safety, bibliometric analysis, risk management, accidents
Procedia PDF Downloads 181457 Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building
Authors: A. Schuchter, M. Promegger
Abstract:
The outbreak of the COVID-19 pandemic has shown us that online education is much more than just a convenient feature for teachers; it is an essential part of modern teaching. In online math teaching, it is common to use tools that share screens and compute mathematical examples while students watch the process. On the other hand, flipped classroom models are on the rise, with their focus on how students can gather knowledge by watching videos and on the teacher's use of technological tools for information transfer. This paper proposes a co-educational teaching approach for coding and engineering subjects with the help of drone-building to spark interest in technology and create a platform for knowledge transfer. The project combines aspects of mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure and rotation) and coding (computational thinking, block-based programming, JavaScript and Python) and makes use of collaborative shared 3D modeling with clara.io, through which students build up mathematical know-how. The instructor follows a problem-based learning approach and encourages students to find solutions in their own time and in their own way, which helps them develop new skills intuitively and boosts logically structured thinking. The collaborative aspect of working in groups will help the students develop communication skills as well as structural and computational thinking. Students are not just listeners as in traditional classroom settings, but play an active part in creating content together by compiling a Handbook of Knowledge (called “open book”) with examples and solutions. Before students start calculating, they have to write down all their ideas and working steps in full sentences so other students can easily follow their train of thought. In this way, students learn to formulate goals, solve problems, and create a ready-to-use product with the help of “reverse engineering”, cross-referencing and creative thinking. The work on drones gives the students the opportunity to create a real-life application with a practical purpose, while going through all stages of product development.Keywords: flipped classroom, co-creational education, coding, making, drones, co-education, ARCS-model, problem-based learning
Procedia PDF Downloads 119