Search results for: adaptive modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4903

373 Incentive-Based Motivation to Network with Coworkers: Strengthening Professional Networks via Online Social Networks

Authors: Jung Lee

Abstract:

The last decade has witnessed more people than ever before using social media and broadening their social circles. Social media users connect not only with their friends but also with professional acquaintances, primarily coworkers and clients; personal and professional social circles are mixed within the same social media platform. Considering the positive aspect of social media in facilitating communication and mutual understanding between individuals, we infer that social media interactions with co-workers could indeed benefit one’s professional life. However, given privacy issues, sharing all personal details with one’s co-workers is not necessarily the best practice. Should one connect with coworkers via social media? Will social media connections with coworkers eventually benefit one’s long-term career? Will the benefit differ across cultures? To answer these questions, this study examines how social media can contribute to organizational communication by tracing the foundation of user motivation based on social capital theory, leader-member exchange (LMX) theory and expectancy theory of motivation. Although social media was originally designed for personal communication, users have shown intentions to extend social media use for professional communication, especially when the proper incentive is expected. To articulate the user motivation and the mechanism of the incentive expectation scheme, this study applies these three theories and identifies six antecedents and three moderators of social media use motivation, including social network flaunt, shared interest, and perceived social inclusion. It also hypothesizes that the moderating effects of those constructs would significantly differ based on the relationship hierarchy among the workers. To validate the model, this study conducted a survey of 329 active social media users with acceptable levels of job experience. The analysis results confirm the specific roles of the three moderators in social media adoption for organizational communication. The present study contributes to the literature by developing a theoretical model of ambivalent employee perceptions about establishing social media connections with co-workers. This framework shows not only how both positive and negative expectations of social media connections with co-workers are formed based on expectancy theory of motivation, but also how such expectations lead to behavioral intentions using a career success model. It also enhances understanding of how various relationships among employees can be influenced through social media use and such usage can potentially affect both performance and careers. Finally, it shows how cultural factors induced by social media use can influence relations among coworkers.

Keywords: social network, workplace, social capital, motivation

Procedia PDF Downloads 123
372 Knowledge Creation and Diffusion Dynamics under Stable and Turbulent Environment for Organizational Performance Optimization

Authors: Jessica Gu, Yu Chen

Abstract:

Knowledge Management (KM) is undoubtedly crucial to organizational value creation, learning, and adaptation. Although the rapidly growing KM domain has been fueled with full-fledged methodologies and technologies, studies on KM evolution that bridge organizational performance and adaptation to the organizational environment are still rarely attempted. In particular, creation (or generation) and diffusion (or sharing/exchange) of knowledge are among the organization's primary concerns from a problem-solving perspective; however, the optimal distribution of knowledge creation and diffusion efforts is still unknown to knowledge workers. This research proposed an agent-based model of knowledge creation and diffusion in an organization, aiming to elucidate how intertwining knowledge flows at the microscopic level lead to optimized organizational performance at the macroscopic level through evolution, and to explore which exogenous interventions by the policy maker and endogenous adjustments by the knowledge workers can better cope with different environmental conditions. With the developed model, a series of simulation experiments are conducted. Both long-term steady-state and time-dependent developmental results on organizational performance, network and structure, social interaction and learning among individuals, knowledge audit and stocktaking, and the likelihood of choosing knowledge creation and diffusion by the knowledge workers are obtained. One of the interesting findings reveals non-monotonic organizational performance under a turbulent environment and monotonic organizational performance under a stable environment. Hence, whether the environmental condition is turbulent or stable, the most suitable exogenous KM policy and endogenous knowledge creation and diffusion choice adjustments can be identified for achieving optimized organizational performance. Additional influential variables are further discussed and future work directions are finally elaborated. The proposed agent-based model generates evidence on how knowledge workers strategically allocate effort between knowledge creation and diffusion, how bottom-up interactions among individuals lead to emergent structure and optimized performance, and how environmental conditions bring challenges to the organizational system. Meanwhile, it serves as a roadmap and offers valuable macro-level and long-term insights to policy makers without interrupting real organizational operations, incurring huge overhead costs, or introducing undesired panic among employees.
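
As a minimal illustration of the kind of agent-based mechanics described above (not the authors' actual model), the following Python sketch lets knowledge workers choose between creation and diffusion at each step, tracks an aggregate performance measure, and lets a turbulence parameter shift the environmental target; all parameter names and update rules are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_STEPS = 50, 200
TURBULENCE = 0.05          # probability per step that the environment shifts (assumed)

knowledge = rng.uniform(0.0, 1.0, N_AGENTS)   # each agent's knowledge stock
strategy = rng.uniform(0.3, 0.7, N_AGENTS)    # probability of choosing creation over diffusion
target = 1.0                                  # knowledge level currently rewarded by the environment

performance = []
for t in range(N_STEPS):
    if rng.random() < TURBULENCE:             # turbulent environment: the target shifts
        target = rng.uniform(0.5, 1.5)
    creators = rng.random(N_AGENTS) < strategy
    # creation: individual, noisy improvement; diffusion: move toward a random peer
    knowledge[creators] += rng.normal(0.02, 0.01, creators.sum())
    peers = rng.integers(0, N_AGENTS, N_AGENTS)
    knowledge[~creators] += 0.1 * (knowledge[peers[~creators]] - knowledge[~creators])
    # organizational performance: closeness of the workforce to the current target
    performance.append(-np.mean(np.abs(knowledge - target)))
    # endogenous adjustment: agents drift toward the strategy of the best performer
    best = np.argmin(np.abs(knowledge - target))
    strategy += 0.01 * (strategy[best] - strategy)

print(f"final mean performance: {np.mean(performance[-20:]):.3f}")
```

Running the loop with different TURBULENCE values is the kind of experiment that could reproduce, in miniature, the monotonic versus non-monotonic performance contrast described in the abstract.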

Keywords: knowledge creation, knowledge diffusion, agent-based modeling, organizational performance, decision making evolution

Procedia PDF Downloads 238
371 Structural Equation Modelling Based Approach to Integrate Customers and Suppliers with Internal Practices for Lean Manufacturing Implementation in the Indian Context

Authors: Protik Basu, Indranil Ghosh, Pranab K. Dan

Abstract:

Lean management is an integrated socio-technical system to bring about a competitive state in an organization. The purpose of this paper is to explore and integrate the role of customers and suppliers with the internal practices of the Indian manufacturing industries towards successful implementation of lean manufacturing (LM). An extensive literature survey is carried out. An attempt is made to build an exhaustive list of all the input manifests related to customers, suppliers and internal practices necessary for LM implementation, coupled with a similar exhaustive list of the benefits accrued from its successful implementation. A structural model is thus conceptualized, which is empirically validated based on data from the Indian manufacturing sector. With the current impetus on developing the industrial sector, the Government of India recently introduced the Lean Manufacturing Competitiveness Scheme that aims to increase competitiveness with the help of lean concepts. There is huge scope to enrich Indian industries with lean benefits, the implementation status being quite low. Hardly any survey-based empirical study in India has been found to integrate customers and suppliers with the internal processes towards successful LM implementation. This empirical research is thus carried out in the Indian manufacturing industries. The basic steps of the research methodology are the identification of input and output manifest variables and latent constructs, model proposition and hypotheses development, development of the survey instrument, sampling and data collection, and model validation (exploratory factor analysis, confirmatory factor analysis, and structural equation modeling). The analysis reveals six key input constructs and three output constructs, indicating that these constructs should act in unison to maximize the benefits of implementing lean. The structural model presented in this paper may be treated as a guide to integrating customers and suppliers with internal practices to successfully implement lean. Integrating customers and suppliers with internal practices into a unified, coherent manufacturing system will lead to an optimum utilization of resources. This work is one of the first survey-based empirical analyses of the role of customers, suppliers, and internal practices in the Indian manufacturing sector towards effective lean implementation.
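
For readers unfamiliar with the CFA/SEM step, a minimal sketch of a measurement-plus-structural model in the open-source semopy package (lavaan-style syntax) is shown below; the construct and indicator names and the survey file are placeholders, not the study's actual manifest variables.

```python
import pandas as pd
import semopy

# Measurement model (latent =~ indicators) and structural model (outcome ~ predictors)
# in semopy's lavaan-style syntax; all names below are illustrative placeholders.
model_desc = """
Customer =~ cust1 + cust2 + cust3
Supplier =~ sup1 + sup2 + sup3
Internal =~ int1 + int2 + int3
Benefits =~ ben1 + ben2 + ben3
Benefits ~ Customer + Supplier + Internal
"""

data = pd.read_csv("lean_survey.csv")   # hypothetical survey file, one row per respondent
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())                  # factor loadings, path coefficients, p-values
```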

Keywords: customer management, internal manufacturing practices, lean benefits, lean implementation, lean manufacturing, structural model, supplier management

Procedia PDF Downloads 178
370 CFD Simulation of Spacer Effect on Turbulent Mixing Phenomena in Sub Channels of Boiling Nuclear Assemblies

Authors: Shashi Kant Verma, S. L. Sinha, D. K. Chandraker

Abstract:

Numerical simulations of selected subchannel tracer (potassium nitrate) based experiments have been performed to study the capabilities of state-of-the-art Computational Fluid Dynamics (CFD) codes. The CFD methodology can be useful for investigating the spacer effect on turbulent mixing to predict turbulent flow behavior such as dimensionless mixing scalar distributions, radial velocity, and vortices in the nuclear fuel assembly. A Gibson and Launder (GL) Reynolds stress model (RSM) has been selected as the primary turbulence model for the simulation case as it has previously been found reasonably accurate in predicting flows inside rod bundles. As a comparison, the case is also simulated using a standard k-ε turbulence model that is widely used in industry. Despite being an isotropic turbulence model, it has also been used to model flow in rod bundles and reproduces lateral velocities fairly well after thorough mixing of the coolant. Both models have been solved numerically to obtain fully developed isothermal turbulent flow in a 30º segment of a 54-rod bundle. Numerical simulation has been carried out to study the natural mixing of a tracer (passive scalar) in order to characterize the growth of turbulent diffusion in the injected sub-channel and, afterwards, cross-mixing between adjacent sub-channels. The mixing with water has been numerically studied by means of steady-state CFD simulations with the commercial code STAR-CCM+. Flow enters the computational domain through mass inflow at the three subchannel faces. A turbulence intensity of 1% and a hydraulic diameter of 5.9 mm were used at the inlet. The passive scalar (potassium nitrate) is injected with a mass fraction of 5.536 ppm at subchannel 2 (upstream of the mixing section). Flow exited the domain through the pressure outlet boundary (0 Pa), and the reference pressure was 1 atm. Simulation results have been extracted at different locations of the mixing zone and downstream zone. The local mass fraction shows uniform mixing. The effect of the applied turbulence model is nearly negligible just before the outlet plane because the distributions look almost identical and the flow is fully developed. On the other hand, the dimensionless mixing scalar distributions change noticeably in quantitative terms, which is visible in the different scales of the colour bars.
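
For readers reproducing the post-processing, one common way to express the tracer field as a dimensionless mixing scalar is to normalize the local mass fraction between the injected and fully mixed values; the short sketch below shows this normalization on assumed probe values and is not part of the STAR-CCM+ workflow itself.

```python
import numpy as np

# tracer mass fractions extracted along a line probe at one axial location (assumed values, ppm)
mass_fraction = np.array([5.1, 4.6, 3.9, 2.8, 2.1, 1.9, 1.8, 1.8])

injected = 5.536                      # mass fraction injected at subchannel 2 (ppm)
fully_mixed = mass_fraction.mean()    # value after complete mixing over the probed region

# dimensionless mixing scalar: 1 at the injection condition, 0 when fully mixed
theta = (mass_fraction - fully_mixed) / (injected - fully_mixed)
print(np.round(theta, 3))
```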

Keywords: single-phase flow, turbulent mixing, tracer, sub channel analysis

Procedia PDF Downloads 207
369 Multi-Size Continuous Particle Separation on a Dielectrophoresis-Based Microfluidics Chip

Authors: Arash Dalili, Hamed Tahmouressi, Mina Hoorfar

Abstract:

Advances in lab-on-a-chip (LOC) devices have led to significant progress in the manipulation, separation, and isolation of particles and cells. Among the different active and passive particle manipulation methods, dielectrophoresis (DEP) has been proven to be a versatile mechanism as it is label-free, cost-effective, simple to operate, and has high manipulation efficiency. DEP has been applied for a wide range of biological and environmental applications. A popular form of DEP devices is the continuous manipulation of particles by using co-planar slanted electrodes, which utilizes a sheath flow to focus the particles into one side of the microchannel. When particles enter the DEP manipulation zone, the negative DEP (nDEP) force generated by the slanted electrodes deflects the particles laterally towards the opposite side of the microchannel. The lateral displacement of the particles is dependent on multiple parameters including the geometry of the electrodes, the width, length and height of the microchannel, the size of the particles and the throughput. In this study, COMSOL Multiphysics® modeling along with experimental studies are used to investigate the effect of the aforementioned parameters. The electric field between the electrodes and the induced DEP force on the particles are modelled by COMSOL Multiphysics®. The simulation model is used to show the effect of the DEP force on the particles, and how the geometry of the electrodes (width of the electrodes and the gap between them) plays a role in the manipulation of polystyrene microparticles. The simulation results show that increasing the electrode width up to a certain limit, which depends on the height of the channel, increases the induced DEP force. Also, decreasing the gap between the electrodes leads to a stronger DEP force. Based on these results, criteria for the fabrication of the electrodes were found, and soft lithography was used to fabricate interdigitated slanted electrodes and microchannels. Experimental studies were run to find the effect of the flow rate and geometrical parameters of the microchannel, such as length, width, and height, as well as the electrodes’ angle, on the displacement of 5 µm, 10 µm and 15 µm polystyrene particles. An empirical equation is developed to predict the displacement of the particles under different conditions. It is shown that particle displacement is greater for longer and shallower channels, lower flow rates, and bigger particles. On the other hand, the effect of the angle of the electrodes on the displacement of the particles was negligible. Based on the results, we have developed an optimum design (in terms of efficiency and throughput) for three-size separation of particles.
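
As background for the force model mentioned above, the time-averaged DEP force on a spherical particle is commonly written as F = 2π εₘ r³ Re[K(ω)] ∇|E_rms|², with K(ω) the Clausius-Mossotti factor. The sketch below evaluates this textbook expression for polystyrene beads in an aqueous buffer; the permittivities, conductivities, frequency, and field-gradient value are assumed typical numbers, not the COMSOL results of the study.

```python
import numpy as np

EPS0 = 8.854e-12                         # vacuum permittivity (F/m)

def clausius_mossotti(eps_p, sig_p, eps_m, sig_m, freq):
    """Complex Clausius-Mossotti factor K(w) for a homogeneous sphere."""
    w = 2 * np.pi * freq
    ep = eps_p * EPS0 - 1j * sig_p / w   # complex permittivity of the particle
    em = eps_m * EPS0 - 1j * sig_m / w   # complex permittivity of the medium
    return (ep - em) / (ep + 2 * em)

def dep_force(radius, eps_m, K, grad_E2):
    """Time-averaged DEP force: F = 2*pi*eps_m*r^3*Re[K]*grad|E_rms|^2."""
    return 2 * np.pi * eps_m * EPS0 * radius**3 * np.real(K) * grad_E2

# assumed typical values: polystyrene bead in a low-conductivity aqueous buffer at 1 MHz
K = clausius_mossotti(eps_p=2.55, sig_p=1e-4, eps_m=78.5, sig_m=0.1, freq=1e6)
for r in (2.5e-6, 5e-6, 7.5e-6):         # radii of 5, 10 and 15 um diameter particles
    F = dep_force(r, 78.5, K, grad_E2=1e13)   # grad|E|^2 in V^2/m^3 (assumed field gradient)
    print(f"r = {r*1e6:.1f} um  Re[K] = {K.real:+.2f}  F = {F:.2e} N")
```

With these assumed values Re[K] is negative, consistent with the nDEP deflection described in the abstract, and the cubic dependence on radius explains why larger particles are displaced further.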

Keywords: COMSOL Multiphysics, Dielectrophoresis, Microfluidics, Particle separation

Procedia PDF Downloads 186
368 Processes and Application of Casting Simulation and Its Software’s

Authors: Surinder Pal, Ajay Gupta, Johny Khajuria

Abstract:

Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shut, shrinkage porosity and hard spots; and optimize the casting design to achieve the desired quality with high yield. Flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (like thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on the process of modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is used widely to design equipment so that the final product will be as close to design specs as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Every casting simulation package has its own requirements and strengths; MAGMA, for example, is best suited for crack simulation. The latest-generation software AutoCAST, developed at IIT Bombay, provides a host of functions to support methods engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis. IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient solution for process design, an advanced tool that is the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied.

Keywords: casting simulation software, simulation techniques, casting simulation, processes

Procedia PDF Downloads 475
367 New Recombinant Netrin-a Protein of Lucilia Sericata Larvae by Bac to Bac Expression Vector System in Sf9 Insect Cell

Authors: Hamzeh Alipour, Masoumeh Bagheri, Abbasali Raz, Javad Dadgar Pakdel, Kourosh Azizi, Aboozar Soltani, Mohammad Djaefar Moemenbellah-Fard

Abstract:

Background: Maggot debridement therapy is an appropriate, effective, and controlled method using sterilized larvae of Lucilia sericata (L. sericata) to treat wounds. Netrin-A is an enzyme in the laminin family which is secreted from the salivary gland of L. sericata and plays a central role in neural regeneration and angiogenesis. This study aimed to produce a new recombinant Netrin-A protein of L. sericata larvae using the baculovirus expression vector system (BEVS) in Sf9 cells. Materials and methods: In the first step, the gene structure was subjected to in silico studies, which included determination of antibacterial activity, prion formation risk, homology modeling, molecular docking analysis, and optimization of the recombinant protein. In the second step, the Netrin-A gene was cloned and amplified in the pTG19 vector. After digestion with BamHI and EcoRI restriction enzymes, it was cloned into the pFastBac HTA vector. It was then transformed into DH10Bac competent cells, and the recombinant Bacmid was subsequently transfected into insect Sf9 cells. The expressed recombinant Netrin-A was then purified on Ni-NTA agarose. The protein was evaluated using SDS-PAGE and western blot. Finally, its concentration was calculated with the Bradford assay method. Results: The Bacmid vector structure with Netrin-A was successfully constructed and then expressed as Netrin-A protein in the Sf9 cell line. The molecular weight of this protein was 52 kDa with 404 amino acids. In the in silico studies, we predicted that recombinant LSNetrin-A has antibacterial activity and no prion formation risk. The molecule has a high binding affinity to Neogenin and a lower affinity to DCC-specific receptors. The signal peptide is located between amino acids 24 and 25. The concentration of the recombinant Netrin-A protein was calculated to be 48.8 μg/ml. It was confirmed that the gene characterized in our previous study codes for the L. sericata Netrin-A enzyme. Conclusions: The recombinant Netrin-A, a protein secreted in L. sericata salivary glands, was successfully generated. Because L. sericata larvae are used in larval therapy, the findings of the present study could be useful to researchers in future studies on wound healing.

Keywords: blowfly, BEVS, gene, immature insect, recombinant protein, Sf9

Procedia PDF Downloads 93
366 Metabolic Profiling in Breast Cancer Applying Micro-Sampling of Biological Fluids and Analysis by Gas Chromatography – Mass Spectrometry

Authors: Mónica P. Cala, Juan S. Carreño, Roland J.W. Meesters

Abstract:

Recently, collection of biological fluids on special filter papers has become a popular micro-sampling technique. In particular, the dried blood spot (DBS) micro-sampling technique has gained much attention and is currently applied in various life sciences research areas. As a result of this popularity, DBS not only competes with venous blood sampling but is now widely applied in numerous bioanalytical assays, in particular in the screening of inherited metabolic diseases, pharmacokinetic modeling, and therapeutic drug monitoring. Recently, micro-sampling techniques were also introduced in “omics” areas, including metabolomics. For a metabolic profiling study, we applied micro-sampling of biological fluids (blood and plasma) from healthy controls and from women with breast cancer. From blood samples, dried blood and plasma spots were prepared by spotting 8 µL of sample onto pre-cut 5-mm paper disks, followed by drying of the disks for 100 minutes. Dried disks were then extracted with 100 µL of methanol. From liquid blood and plasma samples, 40 µL were deproteinized with methanol, followed by centrifugation and collection of the supernatants. Supernatants and extracts were evaporated to dryness under nitrogen gas, and residues were derivatized with O-methoxyamine and MSTFA. C17:0 methyl ester in heptane (10 ppm) was used as internal standard. Deconvolution and alignment of full-scan (m/z 50-500) MS data were done with the AMDIS and SpectConnect (http://spectconnect.mit.edu) software, respectively. Statistical data analysis was done by Principal Component Analysis (PCA) using the R software. The results obtained from our preliminary study indicate that the use of dried blood/plasma on paper disks could be a powerful new tool in metabolic profiling. Many of the metabolites observed in plasma (liquid/dried) were also positively identified in whole blood samples (liquid/dried). Whole blood could be a potential substitute matrix for plasma in metabolomic profiling studies, and micro-sampling techniques could be used for the collection of samples in clinical studies. It was concluded that the separation of the different sample methodologies (liquid vs. dried) as observed by PCA was due to the different sample treatment protocols applied. More experiments need to be done to confirm the obtained observations, and a more rigorous validation of these micro-sampling techniques is needed. The novelty of our approach can be found in the application of different biological fluid micro-sampling techniques for metabolic profiling.
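
The study performed PCA in R; an equivalent minimal sketch in Python is shown below for readers who want to reproduce the score plot. The file name, metadata columns, and autoscaling choice are assumptions for illustration, not the study's actual data layout.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical table: rows = samples, columns = metabolite peak areas plus metadata
df = pd.read_csv("metabolite_matrix.csv")        # assumed file layout
meta_cols = ["sample_id", "group", "matrix"]     # e.g. control/cancer, liquid/dried
X = df.drop(columns=meta_cols).values

# autoscale (mean-center, unit variance) before PCA, as is common in metabolomics
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

print("explained variance ratio:", pca.explained_variance_ratio_)
out = df[meta_cols].copy()
out["PC1"] = scores[:, 0]
out["PC2"] = scores[:, 1]
print(out.head())                                # score coordinates per sample for plotting
```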

Keywords: biofluids, breast cancer, metabolic profiling, micro-sampling

Procedia PDF Downloads 411
365 Analysis of Urban Flooding in Wazirabad Catchment of Kabul City with Help of Geo-SWMM

Authors: Fazli Rahim Shinwari, Ulrich Dittmer

Abstract:

Like many megacities around the world, Kabul is facing severe problems due to the rising frequency of urban flooding. Since 2001, Kabul has been experiencing rapid population growth because of the repatriation of refugees and internal migration. Due to unplanned development, green areas inside the city and hilly areas within and around the city have been converted into new housing towns, which has increased runoff. Trenches along the roadside comprise the unplanned drainage network of the city that drains the combined sewer flow. In the rainy season, overflow occurs, and after the streets dry, dust particles contaminate the air, which is a major cause of air pollution in Kabul city. In this study, a stormwater management model is introduced as a basis for a systematic approach to urban drainage planning in Kabul. For this purpose, Kabul city is delineated into 8 watersheds with the help of a one-meter resolution LIDAR DEM. A stormwater management model is developed for the Wazirabad catchment using available data and literature values. Due to the lack of long-term meteorological data, the model is only run with hourly rainfall data of a rain event that occurred in April 2016. The rain event from 1st to 3rd April, with a maximum intensity of 3 mm/hr, caused severe flooding in the Wazirabad catchment of Kabul city. The model estimated flooding at some points of the catchment; as actual measurement of flooding was not possible, results were compared with information obtained from local people, Kabul Municipality, and the Capital Region Independent Development Authority. The model helped to identify areas where flooding occurred because of insufficient drainage capacity and areas where the main reason for flooding is blockage in the drainage canals. The model was used for further analysis to find a sustainable solution to the problem. The option to construct new canals was analyzed, and two new canals were proposed that will reduce the flooding frequency in the Wazirabad catchment of Kabul city. By developing a methodology to build a stormwater management model from digital data and information, the study fulfilled its primary objective, and a similar methodology can be used for other catchments of Kabul city to prepare emergency and long-term plans for the city's drainage system.
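
The study drives SWMM from a GIS-built model (Geo-SWMM); for readers who want to run such a model programmatically, a minimal sketch using the open-source pyswmm wrapper is shown below. The input file name and node identifier are hypothetical placeholders, not artifacts of the study.

```python
from pyswmm import Simulation, Nodes

# "wazirabad.inp" stands in for a SWMM input file exported from the GIS model (hypothetical name)
with Simulation("wazirabad.inp") as sim:
    nodes = Nodes(sim)
    junction = nodes["J_outfall"]              # hypothetical junction id to monitor
    worst_flooding = 0.0
    for _ in sim:                              # step through the rainfall event
        worst_flooding = max(worst_flooding, junction.flooding)
    print(f"peak flooding rate at monitored node: {worst_flooding:.3f} (model flow units)")
```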

Keywords: urban hydrology, storm water management, modeling, SWMM, GEO-SWMM, GIS, identification of flood vulnerable areas, urban flooding analysis, sustainable urban drainage

Procedia PDF Downloads 153
364 Computational Modelling of pH-Responsive Nanovalves in Controlled-Release System

Authors: Tomilola J. Ajayi

Abstract:

A category of nanovalve system containing an α-cyclodextrin (α-CD) ring on a stalk tethered to the pores of mesoporous silica nanoparticles (MSN) is theoretically and computationally modelled. This functions to control the opening and blocking of the MSN pores for an efficient targeted drug release system. Modeling of the nanovalves is based on the interaction between α-CD and the stalk (p-anisidine) in relation to pH variation. Conformational analysis was carried out prior to the formation of the inclusion complex, to find the global minimum of both the neutral and protonated stalk. The B3LYP/6-311G**(d, p) level of theory was employed to obtain all theoretically possible conformers of the stalk. Six conformers were taken into consideration, and the dihedral angle (θ) around the reference atom (N17) of the p-anisidine stalk was scanned from 0° to 360° at 5° intervals. The most stable conformer was obtained at a dihedral angle of 85.3° and was fully optimized at the B3LYP/6-311G**(d, p) level of theory. The most stable conformer obtained from conformational analysis was used as the starting structure to create the inclusion complexes. Nine complexes were formed by moving the neutral guest into the α-CD cavity along the Z-axis in 1 Å steps while keeping the distance between the dummy atom and the OMe oxygen atom on the stalk restricted. The dummy atom and the carbon atoms on the α-CD structure were equally restricted for orientation A (see Scheme 1). The generated structures at each step were optimized with the B3LYP/6-311G**(d, p) method to determine their energy minima. Protonation of the nitrogen atom on the stalk occurs at acidic pH, leading to unsatisfactory host-guest interaction in the nanogate; hence there is dethreading. High required interaction energy and conformational change are theoretically established to drive the release of α-CD at a certain pH. The release was found to occur between pH 5 and 7, which agreed with reported experimental results. In this study, we applied the theoretical model to predict the experimentally observed pH-responsive nanovalve behavior, which enables blocking and opening of the mesoporous silica nanoparticle pores for a targeted drug release system. Our results show that two major factors are responsible for cargo release at acidic pH: the higher interaction energy needed for the complex/nanovalve to persist after protonation, together with the conformational change upon protonation, drives the release under a slight pH change between 5 and 7.

Keywords: nanovalves, nanogate, mesoporous silica nanoparticles, cargo

Procedia PDF Downloads 123
363 Analytical and Numerical Studies on the Behavior of a Freezing Soil Layer

Authors: X. Li, Y. Liu, H. Wong, B. Pardoen, A. Fabbri, F. McGregor, E. Liu

Abstract:

The target of this paper is to investigate how saturated poroelastic soils subject to freezing temperatures behave and how different boundary conditions can intervene and affect the thermo-hydro-mechanical (THM) responses, based on a particular but classical configuration of a finite homogeneous soil layer studied by Terzaghi. The essential relations of the constitutive behavior of a freezing soil are first recalled: ice crystal-liquid water thermodynamic equilibrium, hydromechanical constitutive equations, momentum balance, water mass balance, and the thermal diffusion equation, in the general non-linear case where material parameters are state-dependent. The system of equations is first linearized, assuming all material parameters to be constant, in particular the permeability to liquid water, which in reality depends on the ice content. Two analytical solutions based on the classical Laplace transform are then developed, accounting for two different sets of boundary conditions. Afterwards, the general non-linear equations with state-dependent parameters are solved using the commercial finite element code COMSOL to obtain numerical results. The validity of this numerical modeling is partially verified using the analytical solution in the limiting case of state-independent parameters. Comparison between the results given by the linearized analytical solutions and the non-linear numerical model reveals that the above-mentioned linear computation will always underestimate the liquid pore pressure and displacement, whatever the hydraulic boundary conditions are. In the non-linear model, the faster growth of ice crystals and the accompanying reduction of the permeability of the freezing soil layer lead to a longer duration of the depressurization of liquid water and slower settlement in the case where the ground surface is swiftly covered by a thin layer of ice, as well as a larger global liquid pressure and swelling in the case of an impermeable ground surface. Nonetheless, the analytical solutions based on linearized equations give a correct order-of-magnitude estimate, especially at moderate temperature variations, and remain a useful tool for preliminary design checks.
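
To make the role of the hydraulic boundary condition concrete, the sketch below solves only the linearized, constant-parameter limit of the problem, a Terzaghi-type diffusion equation for the excess liquid pressure, with an explicit finite-difference scheme and compares a drained and an impermeable ground surface. Geometry, consolidation coefficient, and initial pressure are illustrative values, and the freezing source term of the full model is omitted.

```python
import numpy as np

def solve_layer(drained_surface, c_v=1e-7, H=1.0, p0=100e3, t_end=5e6, nz=51):
    """Explicit FD solution of dp/dt = c_v * d2p/dz2 on a layer of thickness H.

    The bottom (z = H) is impermeable in both cases; the top is either drained
    (p = 0) or impermeable (dp/dz = 0). p0 is the initial excess liquid pressure.
    """
    dz = H / (nz - 1)
    dt = 0.4 * dz**2 / c_v                  # below the stability limit of the explicit scheme
    p = np.full(nz, p0, dtype=float)
    for _ in range(int(t_end / dt)):
        lap = (p[2:] - 2 * p[1:-1] + p[:-2]) / dz**2
        p[1:-1] += c_v * dt * lap
        p[-1] = p[-2]                       # impermeable bottom: zero pressure gradient
        p[0] = 0.0 if drained_surface else p[1]
    return p

for drained in (True, False):
    p = solve_layer(drained)
    label = "drained surface    " if drained else "impermeable surface"
    print(f"{label}: mid-layer excess pressure = {p[len(p)//2] / 1e3:7.2f} kPa")
```

With a drained surface the initial excess pressure dissipates over time, whereas with an impermeable surface it cannot drain at all, which mirrors the larger global liquid pressure reported for the impermeable case.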

Keywords: chemical potential, cryosuction, Laplace transform, multiphysics coupling, phase transformation, thermodynamic equilibrium

Procedia PDF Downloads 80
362 Investigation of Mangrove Area Effects on Hydrodynamic Conditions of a Tidal Dominant Strait Near the Strait of Hormuz

Authors: Maryam Hajibaba, Mohsen Soltanpour, Mehrnoosh Abbasian, S. Abbas Haghshenas

Abstract:

This paper aims to evaluate the main role of mangrove forests on the unique hydrodynamic characteristics of the Khuran Strait (KS) in the Persian Gulf. Investigation of the hydrodynamic conditions of the KS is vital for predicting and estimating sedimentation and erosion throughout the protected areas north of Qeshm Island. The KS (or Tang-e-Khuran) is located between Qeshm Island and the Iranian mainland and has a minimum width of approximately two kilometers. The hydrodynamics of the strait are dominated by strong tidal currents of up to 2 m/s. The bathymetry of the area is dynamic and complicated because 1) strong currents exist in the area, which lead to apparent sand dune movements in the middle and southern parts of the strait, and 2) a vast area with mangrove coverage exists next to the narrowest part of the strait. This is why ordinary modeling schemes with normal mesh resolutions are not capable of high-accuracy estimation of current fields in the KS. A comprehensive set of measurements was carried out to investigate the hydrodynamics and morpho-dynamics of the study area, including 1) vertical current profiling at six stations, 2) directional wave measurements at four stations, 3) water level measurements at six stations, 4) wind measurements at one station, and 5) sediment grab sampling at 100 locations. Additionally, a set of periodic hydrographic surveys was included in the program. The numerical simulation was carried out using the Delft3D-FLOW module. Model calibration was done by comparing water levels and depth-averaged current velocities against available observational data. The results clearly indicate that observations and simulations only agree if a realistic representation of the mangrove area is captured by the model bathymetry data. After generating the grid using RGFGRID and QUICKIN, the flow model was driven with water level time series at the open boundaries. With the available field data, the key role of the mangrove area in the hydrodynamics of the study area can be studied. The results show that including the accurate geometry of the mangrove area and considering its sponge-like behavior are the key aspects through which a realistic current field can be simulated in the KS.

Keywords: Khuran Strait, Persian Gulf, tide, current, Delft3D

Procedia PDF Downloads 210
361 Investigate the Competencies Required for Sustainable Entrepreneurship Development in Agricultural Higher Education

Authors: Ehsan Moradi, Parisa Paikhaste, Amir Alam Beigi, Seyedeh Somayeh Bathaei

Abstract:

The need for entrepreneurial sustainability is as important as the entrepreneurship category itself. By transferring competencies in a sustainable entrepreneurship framework, entrepreneurship education can make a significant contribution to the effectiveness of businesses, especially for start-up entrepreneurs. This study analyzes the essential competencies of students in the development of sustainable entrepreneurship. In nature, it is an applied causal study; in terms of data collection, it is a field study. The main purpose of this research project is to study and explain the dimensions of sustainability entrepreneurship competencies among agricultural students. The statistical population consists of 730 junior and senior undergraduate students of the Campus of Agriculture and Natural Resources, University of Tehran. The sample size was determined to be 120 using Cochran's formula, and the convenience sampling method was used. Face validity, construct validity, and diagnostic methods were used to evaluate the validity of the research tool, and Cronbach's alpha and composite reliability were used to evaluate its reliability. Structural equation modeling (SEM) with the confirmatory factor analysis (CFA) method was used to prepare a measurement model for data processing. The results showed that seven key dimensions play a role in shaping sustainable entrepreneurial development competencies: systems thinking competence (STC), embracing diversity and interdisciplinarity (EDI), foresighted thinking (FTC), normative competence (NC), action competence (AC), interpersonal competence (IC), and strategic management competence (SMC). It was found that acquiring SMC skills, by building students' ability to plan for sustainable entrepreneurship through the relevant mechanisms, can improve students' entrepreneurship through the adoption of a sustainability attitude. Regarding AC, in addition to increasing students' analytical ability concerning social and environmental needs and challenges and emphasizing curriculum updates, more attention should be paid to the relationship between the curriculum and its content in the form of entrepreneurship culture promotion programs. In the field of EDI, it was found that the success of entrepreneurs in terms of sustainability, and the business sustainability of start-up entrepreneurs, depend on their interdisciplinary thinking. It was also found that STC plays an important role in explaining the relationship between sustainability and entrepreneurship. Therefore, focusing on these competencies in agricultural education to train start-up entrepreneurs can lead to sustainable entrepreneurship in the agricultural higher education system.

Keywords: sustainable entrepreneurship, entrepreneurship education, competency, agricultural higher education

Procedia PDF Downloads 144
360 Development of Gully Erosion Prediction Model in Sokoto State, Nigeria, using Remote Sensing and Geographical Information System Techniques

Authors: Nathaniel Bayode Eniolorunda, Murtala Abubakar Gada, Sheikh Danjuma Abubakar

Abstract:

The challenge of erosion in the study area is persistent, suggesting the need for a better understanding of the mechanisms that drive it. Thus, the study developed a predictive erosion model (RUSLE_Sok), deploying Remote Sensing (RS) and Geographical Information System (GIS) tools. The nature and pattern of the factors of erosion were characterized, while soil losses were quantified. Factors’ impacts were also measured, and the morphometry of gullies was described. Data on the five factors of RUSLE and distances to settlements, rivers and roads (K, R, LS, P, C, DS, DRd and DRv) were combined and processed following standard RS and GIS algorithms. The Harmonized World Soil Database (HWSD), a Shuttle Radar Topography Mission (SRTM) image, Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), a Sentinel-2 image accessed and processed within Google Earth Engine, the road network, and settlements were the data combined and calibrated into the factors for erosion modeling. A gully morphometric study was conducted at some purposively selected sites. The soil erosion factors showed low, moderate, and high patterns. Soil losses ranged from 0 to 32.81 tons/ha/year, classified into low (97.6%), moderate (0.2%), severe (1.1%) and very severe (1.05%) forms. The multiple regression analysis shows that the factors statistically significantly predicted soil loss, F(8, 153) = 55.663, p < .0005. Except for the C-factor, which had a negative coefficient, all other factors were positive, with contributions in the order LS>C>R>P>DRv>K>DS>DRd. Gullies generally range from less than 100 m to about 3 km in length. Average minimum and maximum depths at gully heads are 0.6 and 1.2 m, while those at mid-stream are 1 and 1.9 m, respectively. The minimum downstream depth is 1.3 m, while the maximum is 4.7 m. Deeper gullies exist in proximity to rivers. With minimum and maximum gully elevation values ranging between 229 and 338 m and an average slope of about 3.2%, the study area is relatively flat. The study concluded that the major erosion influencers in the study area are topography and vegetation cover and that RUSLE_Sok predicted soil loss more effectively than the ordinary RUSLE. The adoption of conservation measures such as tree planting and contour ploughing on sloping farmlands was recommended.
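
At its core, RUSLE is a cell-by-cell overlay A = R · K · LS · C · P on co-registered rasters. The sketch below reproduces that overlay and the severity classification on synthetic arrays; grid size, value ranges, and class breaks are assumptions for illustration, not the Sokoto inputs or the RUSLE_Sok calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (400, 400)                       # small illustrative grid, not the actual rasters

# co-registered factor rasters (assumed value ranges typical of RUSLE inputs)
R = rng.uniform(300, 600, shape)         # rainfall erosivity
K = rng.uniform(0.05, 0.35, shape)       # soil erodibility
LS = rng.uniform(0.1, 4.0, shape)        # slope length and steepness
C = rng.uniform(0.05, 0.6, shape)        # cover management
P = rng.uniform(0.5, 1.0, shape)         # support practice

A = R * K * LS * C * P                   # annual soil loss (t/ha/year)

# classify into severity classes (class breaks are assumptions for illustration)
bins = [0, 10, 20, 30, np.inf]
labels = ["low", "moderate", "severe", "very severe"]
counts = np.histogram(A, bins=bins)[0]
for lab, c in zip(labels, counts):
    print(f"{lab:12s}: {100 * c / A.size:5.1f} % of pixels")
```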

Keywords: RUSLE_Sok, Sokoto, google earth engine, sentinel-2, erosion

Procedia PDF Downloads 75
359 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism

Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape

Abstract:

Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modelling was applied using machine learning algorithms to analyse motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15% compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism. These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.
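
As an illustration of how gait metrics such as cycle time and its variability can be derived from the wearable's kinematic stream, the sketch below applies peak detection to a synthetic vertical-acceleration signal; the sampling rate, detection thresholds, and signal itself are assumptions, not the devices or data of the study.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 100.0                                    # sampling rate of the wearable (Hz, assumed)

# synthetic vertical acceleration with one peak per gait cycle (stand-in for sensor data)
t = np.arange(0, 30, 1 / FS)
nominal_cycle = 1.1                           # seconds per gait cycle
signal = np.sin(2 * np.pi * t / nominal_cycle)
signal += 0.05 * np.random.default_rng(0).normal(size=t.size)

# heel-strike-like events: peaks separated by at least half a nominal cycle
peaks, _ = find_peaks(signal, height=0.5, distance=int(0.5 * nominal_cycle * FS))
cycle_times = np.diff(peaks) / FS             # duration of each gait cycle in seconds

print(f"mean gait cycle time        : {cycle_times.mean():.3f} s")
print(f"cycle time variability (CV) : {100 * cycle_times.std() / cycle_times.mean():.1f} %")
```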

Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders

Procedia PDF Downloads 23
358 Global Modeling of Drill String Dragging and Buckling in 3D Curvilinear Bore-Holes

Authors: Valery Gulyayev, Sergey Glazunov, Elena Andrusenko, Nataliya Shlyun

Abstract:

Enhancement of technology and techniques for drilling deep directed oil and gas bore-wells is of essential industrial significance because these wells make it possible to increase productivity and output. Generally, they are used for drilling in hard and shale formations, which is why their drivage processes are accompanied by emergency and failure effects. As corroborated by practice, the principal drilling drawback occurring in the drivage of long curvilinear bore-wells is conditioned by the need to overcome substantial force hindrances caused by the simultaneous action of gravity, contact, and friction forces. Primarily, these forces depend on the type of technological regime, drill string stiffness, bore-hole tortuosity, and its length. They can lead to the Eulerian buckling of the drill string and its sticking. To predict and exclude these states, special mathematical models and methods of computer simulation should play a dominant role. At the same time, one might note that these mechanical phenomena are very complex and only simplified approaches (‘soft string drag and torque models’) are used for their analysis. Considering that the cost of directed wells now increases substantially with the complexity of their geometry and the enlargement of their lengths, it can be concluded that the price of mistakes in drill string behavior simulation through the use of simplified approaches can be very high, and so the problem of correct software elaboration is very urgent. This paper deals with the problem of simulating the regimes of drilling deep curvilinear bore-wells with prescribed imperfect geometrical trajectories of their axial lines. On the basis of the theory of curvilinear flexible elastic rods, methods of differential geometry, and numerical analysis methods, the 3D ‘stiff-string drag and torque model’ of the drill string bending and the appropriate software are elaborated for the simulation of the tripping in and out regimes and drilling operations. It is shown by the computer calculations that the contact and friction forces can be calculated and regulated, providing predesigned trouble-free modes of operation. The elaborated mathematical models and software can be used for the prognostication of emergency situations and their exclusion at the stages of drilling process design and realization.
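
For contrast with the stiff-string formulation developed in the paper, the sketch below implements the classical soft-string drag recursion mentioned above, in which the axial force is marched element by element along a prescribed well path. The survey, unit weight, and friction factor are assumed illustrative values, and the simplification (no bending stiffness) is exactly what the paper argues against for long curvilinear wells.

```python
import numpy as np

def soft_string_drag(md, inc, azi, w=0.3, mu=0.25, f_bottom=0.0):
    """March the axial force from the bit to the surface while tripping out.

    md  : measured depth of survey stations (m), increasing downward
    inc : inclination at the stations (rad); azi : azimuth at the stations (rad)
    w   : buoyed weight per unit length (kN/m); mu : friction factor (assumed)
    """
    f = f_bottom
    for i in range(len(md) - 1, 0, -1):          # bottom element first
        dl = md[i] - md[i - 1]
        inc_m = 0.5 * (inc[i] + inc[i - 1])      # mean inclination of the element
        d_inc = inc[i] - inc[i - 1]
        d_azi = azi[i] - azi[i - 1]
        # lateral contact force of the element against the bore-hole wall
        n = np.hypot(f * d_azi * np.sin(inc_m), f * d_inc + w * dl * np.sin(inc_m))
        f = f + w * dl * np.cos(inc_m) + mu * n  # friction adds to the hook load when tripping out
    return f

# simple illustrative well path: vertical section building to 60 degrees inclination
md = np.linspace(0.0, 3000.0, 301)
inc = np.clip((md - 1000.0) / 2000.0, 0.0, 1.0) * np.radians(60.0)
azi = np.zeros_like(md)
print(f"hook load while tripping out: {soft_string_drag(md, inc, azi):.1f} kN")
```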

Keywords: curvilinear drilling, drill string tripping in and out, contact forces, resistance forces

Procedia PDF Downloads 146
357 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning

Authors: Yangzhi Li

Abstract:

Network transfer of information and performance customization is now a viable method of digital industrial production in the era of Industry 4.0. Robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technologies related to robot autonomous recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and dataset deep learning, have not been fully explored for intelligent construction. Relevant transdisciplinary theory and practice research still has specific gaps. Optimizing high-performance computing and autonomous recognition visual guidance technologies improves the robot's grasp of the scene and capacity for autonomous operation. Intelligent vision guidance technology for industrial robots has a serious issue with camera calibration, and the use of intelligent visual guidance and identification technologies for industrial robots in industrial production has strict accuracy requirements; visual recognition systems therefore face precision challenges. This directly impacts the effectiveness and standard of industrial production, necessitating strengthened study of positioning precision in visual guidance and recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts utilizing machine learning algorithms is proposed. This study will identify the position of target components by detecting the information at the boundary and corner of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, operational processing systems assign them to the same coordinate system based on their locations and postures. The RGB image's inclination detection and the depth image's verification will be used to determine the component's present posture. Finally, a virtual environment model for the robot's obstacle-avoidance route will be constructed using the point cloud information.
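
To make the point-cloud step concrete, the sketch below shows one plain way to estimate a segmented component's position and aspect ratio: a PCA-based oriented bounding box computed with numpy on an N-by-3 point array. The synthetic brick-shaped cloud stands in for depth-camera data, and this is only an illustrative baseline, not the study's pipeline.

```python
import numpy as np

def component_pose_and_aspect(points):
    """Estimate centroid, principal axes and aspect ratio of a segmented component.

    points : (N, 3) array of XYZ coordinates from the depth sensor (here synthetic).
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # principal axes via PCA of the point cloud (right singular vectors of the centered data)
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    extents = centered @ axes.T
    size = extents.max(axis=0) - extents.min(axis=0)   # oriented bounding-box dimensions
    aspect_ratio = size.max() / size.min()
    return centroid, axes, size, aspect_ratio

# synthetic brick-like component, rotated and translated in the scene
rng = np.random.default_rng(0)
brick = rng.uniform([-0.2, -0.05, -0.03], [0.2, 0.05, 0.03], size=(5000, 3))
angle = np.radians(30.0)
rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                [np.sin(angle),  np.cos(angle), 0.0],
                [0.0,            0.0,           1.0]])
scene_points = brick @ rot.T + np.array([1.0, 0.5, 0.2])

centroid, _, size, ar = component_pose_and_aspect(scene_points)
print("centroid:", np.round(centroid, 3), " obb size:", np.round(size, 3), " aspect ratio:", round(ar, 2))
```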

Keywords: robotic construction, robotic assembly, visual guidance, machine learning

Procedia PDF Downloads 86
356 A Reduced Ablation Model for Laser Cutting and Laser Drilling

Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz

Abstract:

In laser cutting as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape that forms (the shape of the cut faces or the hole shape, respectively) approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, its underlying mechanism numerically implemented, tested and clearly confirmed by comparison with experimental data. In detail, there is now a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires far fewer resources, such that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders – the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs. This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means. This means that new methods in manufacturing such as self-optimization can be executed much faster. However, the software’s potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses up considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.
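
A minimal sketch of the metamodeling idea is given below: a fast model is sampled over the process window and a Gaussian-process surrogate is fitted to produce a dense process map and locate its optimum. The analytic stand-in function, parameter names, and ranges are assumptions; they replace, for illustration only, the reduced ablation model and quality criteria described above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def reduced_model(power, speed):
    """Stand-in for the fast reduced ablation model: returns a scalar quality criterion."""
    return np.exp(-((power - 3.0) ** 2 + (speed - 2.0) ** 2)) + 0.02 * power * speed

# sample the fast model over the process window (laser power in kW, cutting speed in m/min)
rng = np.random.default_rng(0)
X = rng.uniform([1.0, 0.5], [5.0, 4.0], size=(80, 2))
y = reduced_model(X[:, 0], X[:, 1])

# fit the metamodel (surrogate) to the sampled data
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[1.0, 1.0]),
                              normalize_y=True).fit(X, y)

# evaluate a dense process map from the surrogate and report its optimum
grid = np.stack(np.meshgrid(np.linspace(1, 5, 60), np.linspace(0.5, 4, 60)), axis=-1).reshape(-1, 2)
pred = gp.predict(grid)
best = grid[np.argmax(pred)]
print(f"predicted optimum: power = {best[0]:.2f} kW, speed = {best[1]:.2f} m/min")
```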

Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling

Procedia PDF Downloads 214
355 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
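
A minimal sketch of the Random Forest step on a hypothetical activity-level table is shown below; the file name, feature columns, and target are placeholders, not the case study's schema, and the feature-importance printout only illustrates how candidate cost drivers such as scope changes and material delays could be surfaced.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# hypothetical activity-level dataset; column names are placeholders, not the study's schema
df = pd.read_csv("activity_costs.csv")
features = ["planned_cost", "planned_duration", "scope_changes", "material_delay_days", "crew_size"]
X, y = df[features], df["actual_cost_overrun"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
rf = RandomForestRegressor(n_estimators=300, random_state=42).fit(X_train, y_train)

print(f"MAE on held-out activities: {mean_absolute_error(y_test, rf.predict(X_test)):.2f}")
# feature importances point to candidate cost drivers and risk factors
for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:22s} {imp:.3f}")
```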

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 54
354 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose

Authors: Kumar Shashvat, Amol P. Bhondekar

Abstract:

In the midst of the five senses, odor is the most reminiscent and least understood. Odor testing has remained mysterious and odor data elusive to most practitioners. The problem of odor recognition and classification is important to solve. The ability to smell and predict whether an item is of further use or has become undesirable for consumption is valuable; imitating this ability in a model is therefore of interest. The general industrial standard for this classification is color-based; however, odor can be a better classifier than color and, if incorporated into a machine, would be highly useful. For cataloging the odor of peas, trees, and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications but are incapable of making effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they are able to handle problems such as set-ups where the variability in the range of possible input vectors is enormous. Generative models are used in machine learning either to model data directly or as an intermediate step to forming a conditional probability density function. The algorithms or models Linear Discriminant Analysis and Naive Bayes Classifier have been used for classification of the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main characteristic of generative models is that they make stronger assumptions about the data, specifically about the distribution of predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is the electronic nose. This device is designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated with the performance measures accuracy, precision, and recall. The experimental results show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
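
A minimal sketch of the LDA versus Naive Bayes comparison is shown below, using synthetic sensor responses in place of the electronic-nose cashew dataset; the number of sensors, class definitions, and evaluation split are assumptions, while the metrics match those named in the abstract.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# synthetic stand-in for e-nose data: 8 sensor responses per sample, 2 odor classes
rng = np.random.default_rng(0)
fresh = rng.normal(0.0, 1.0, size=(150, 8))
spoiled = rng.normal(0.8, 1.2, size=(150, 8))
X = np.vstack([fresh, spoiled])
y = np.array([0] * 150 + [1] * 150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)
for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("Naive Bayes", GaussianNB())]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:12s} accuracy={accuracy_score(y_te, pred):.2f} "
          f"precision={precision_score(y_te, pred):.2f} recall={recall_score(y_te, pred):.2f}")
```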

Keywords: odor classification, generative models, naive bayes, linear discriminant analysis

Procedia PDF Downloads 387
353 The Usage of Bridge Estimator for Hegy Seasonal Unit Root Tests

Authors: Huseyin Guler, Cigdem Kosar

Abstract:

The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important factor for many economic time series. Some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to eliminate seasonality for seasonal macroeconomic data. There are some methods to eliminate the impacts of seasonality in time series. One of them is filtering the data. However, this method leads to undesired consequences in unit root tests, especially if the data is generated by a stochastic seasonal process. Another method to eliminate seasonality is using seasonal dummy variables. Some seasonal patterns may result from stationary seasonal processes, which can be modelled using seasonal dummies; but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it. It is not suitable to use seasonal dummies for modeling such seasonally non-stationary series; instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different alternative methods are proposed in the literature to test seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary to choose the lag length and determine any deterministic components (i.e., a constant and trend) first, and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests. Recent studies show that Bridge estimators are good in selecting optimal lag length while differentiating nonstationary versus stationary models for nonseasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, and this leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test seasonal unit roots in a HEGY model. A Monte-Carlo experiment is done to determine the efficiency of this approach and compare the size and power of this method with the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to gains in size and power over the HEGY test.
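
For readers unfamiliar with the HEGY setup, the sketch below builds the standard quarterly HEGY auxiliary regressors and runs the test regression by OLS on a simulated series with a seasonal unit root; the Bridge-based lag selection proposed in the paper is not shown, and the lag length and deterministic terms here are arbitrary illustrative choices.

```python
import numpy as np
import statsmodels.api as sm

def hegy_design(y, n_lags=4):
    """Quarterly HEGY regression: D4 y_t on y1, y2, y3 terms plus lagged D4 y."""
    y = np.asarray(y, dtype=float)
    y1 = y + np.roll(y, 1) + np.roll(y, 2) + np.roll(y, 3)      # (1+L+L^2+L^3) y_t
    y2 = -(y - np.roll(y, 1) + np.roll(y, 2) - np.roll(y, 3))   # -(1-L+L^2-L^3) y_t
    y3 = -(y - np.roll(y, 2))                                   # -(1-L^2) y_t
    d4 = y - np.roll(y, 4)                                      # seasonal difference
    rows = range(4 + n_lags + 2, len(y))                        # skip wrapped-around values
    X = np.column_stack(
        [y1[[t - 1 for t in rows]], y2[[t - 1 for t in rows]],
         y3[[t - 2 for t in rows]], y3[[t - 1 for t in rows]]]
        + [d4[[t - j for t in rows]] for j in range(1, n_lags + 1)]
    )
    return d4[list(rows)], sm.add_constant(X)

# simulated quarterly series with a seasonal unit root: y_t = y_{t-4} + e_t
rng = np.random.default_rng(0)
e = rng.normal(size=200)
y = np.zeros(200)
for t in range(4, 200):
    y[t] = y[t - 4] + e[t]

dep, X = hegy_design(y)
res = sm.OLS(dep, X).fit()
print(res.tvalues[1:5])   # t-statistics for pi1..pi4, to be compared with HEGY critical values
```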

Keywords: bridge estimators, HEGY test, model selection, seasonal unit root

Procedia PDF Downloads 340
352 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
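A minimal sketch of the two model families discussed in this abstract is given below; the feature matrix, target, and hyperparameters are synthetic placeholders rather than the case-study data.

```python
# Sketch only: Random Forest vs. a simple neural network on hypothetical
# project features; the real case-study dataset is not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 5))                    # e.g. scope changes, delivery delays, labor rates
y = 0.1 * X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.05, size=n)  # synthetic overrun %

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
nn = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("RF MAE:", mean_absolute_error(y_te, rf.predict(X_te)))
print("NN MAE:", mean_absolute_error(y_te, nn.predict(X_te)))
print("RF feature importances:", rf.feature_importances_)  # points to key cost drivers
```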

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 38
351 A Multi-Scale Study of Potential-Dependent Ammonia Synthesis on IrO₂ (110): DFT, 3D-RISM, and Microkinetic Modeling

Authors: Shih-Huang Pan, Tsuyoshi Miyazaki, Minoru Otani, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang

Abstract:

Ammonia (NH₃) is crucial in renewable energy and agriculture, yet its traditional production via the Haber-Bosch process faces challenges due to the inherent inertness of nitrogen (N₂) and the need for high temperatures and pressures. The electrocatalytic nitrogen reduction reaction (ENRR) presents a more sustainable option, functioning at ambient conditions. However, its advancement is limited by selectivity and efficiency challenges due to the competing hydrogen evolution reaction (HER). The critical roles of protonation of N-species and HER highlight the necessity of selecting optimal catalysts and solvents to enhance ENRR performance. Notably, transition metal oxides, with their adjustable electronic states and excellent chemical and thermal stability, have shown promising ENRR characteristics. In this study, we use density functional theory (DFT) methods to investigate the ENRR mechanisms on IrO₂ (110), a material known for its tunable electronic properties and exceptional chemical and thermal stability. Employing the constant electrode potential (CEP) model, where the electrode-electrolyte interface is treated as a polarizable continuum with implicit solvation, and adjusting electron counts to equalize work functions in the grand canonical ensemble, we further incorporate the advanced 3D Reference Interaction Site Model (3D-RISM) to accurately determine the ENRR limiting potential across various solvents and pH conditions. Our findings reveal that the limiting potential for ENRR on IrO₂ (110) is significantly more favorable than for HER, highlighting the efficiency of the IrO₂ catalyst for converting N₂ to NH₃. This is supported by the optimal *NH₃ desorption energy on IrO₂, which enhances the overall reaction efficiency. Microkinetic simulations further predict a promising NH₃ production rate, even at the solution's boiling point, reinforcing the catalytic viability of IrO₂ (110). This comprehensive approach provides an atomic-level understanding of the electrode-electrolyte interface in ENRR, demonstrating the practical application of IrO₂ in electrochemical catalysis. The findings provide a foundation for developing more efficient and selective catalytic strategies, potentially revolutionizing industrial NH₃ production.
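The sketch below illustrates, in schematic form, how a limiting potential is commonly read off from the free-energy changes of the proton-coupled electron-transfer steps; the reaction steps and free-energy values are placeholders and are not the DFT/3D-RISM results of this study.

```python
# Schematic only: hypothetical free-energy changes (eV) of the proton-coupled
# electron-transfer steps at U = 0 V; not the study's computed values.
dG_steps_eV = {
    "*N2 -> *NNH":        1.10,
    "*NNH -> *NNH2":      0.45,
    "*NNH2 -> *N + NH3": -0.20,
    "*N -> *NH":          0.35,
    "*NH -> *NH2":        0.25,
    "*NH2 -> *NH3":       0.30,
}

# In the computational-hydrogen-electrode picture, each reduction step transfers
# one (H+ + e-), so an applied potential U shifts its free energy by +eU; a step
# becomes downhill once U <= -dG/e. The least favorable step sets the limit.
worst_step = max(dG_steps_eV, key=dG_steps_eV.get)
U_limiting = -dG_steps_eV[worst_step]          # in volts
print(f"Potential-determining step: {worst_step}, limiting potential U_L = {U_limiting:.2f} V")
```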

Keywords: density functional theory, electrocatalyst, nitrogen reduction reaction, electrochemistry

Procedia PDF Downloads 20
350 Bibliometric Analysis of Risk Assessment of Inland Maritime Accidents in Bangladesh

Authors: Armana Huq, Wahidur Rahman, Sanwar Kader

Abstract:

Inland waterways in Bangladesh play an important role in providing comfortable and low-cost transportation. However, maritime accidents take many lives and create unwanted hazards every year. This article presents a comprehensive review of inland waterway accidents in Bangladesh, together with a comparative study between international and local inland research on maritime accidents. Articles from the inland waterway domain are analyzed in depth to give a comprehensive overview of the nature of the academic work, the accident and risk management processes, and the statistical analyses used. It is found that empirical analysis based on the available statistical data dominates the research domain. For this study, major maritime accident-related works of the last four decades in Bangladesh (1981-2020) are analyzed to prepare a bibliometric analysis. A study of maritime accidents involving passenger vessels during 1995-2005 indicates that the predominant causes of accidents in the inland waterways of Bangladesh are collision and adverse weather (77%), of which collisions due to human error alone account for 56% of all accidents. Another study reports that collisions were the major cause of waterway accidents (60.3%) during 2005-2015. About 92% of these collisions occurred through direct contact with another vessel; the remaining 8% occurred through contact with permanent obstructions along waterway routes. The overall analysis of another study covering the last 25 years (1995-2019) shows that collision is one of the main accident types, accounting for about 50.3% of accidents. The other accident types are cyclone or storm (17%), overload (11.3%), physical failure (10.3%), excessive waves (5.1%), and others (6%). Very few notable works test or compare methods, propose new methods for risk management or modeling, or treat uncertainty. The purpose of this paper is to provide an overview of the evolution of the marine accident research domain regarding the inland waterways of Bangladesh and to introduce new ideas and methods that bridge the gap between the international and national inland maritime work domains, which can be a catalyst for a safer and more sustainable water transportation system in Bangladesh. Another fundamental objective of this paper is to guide national maritime authorities and international organizations in implementing risk management processes for shipping accident prevention in waterway areas.

Keywords: inland waterways, safety, bibliometric analysis, risk management, accidents

Procedia PDF Downloads 182
349 Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building

Authors: A. Schuchter, M. Promegger

Abstract:

The outbreak of the COVID-19 pandemic has shown us that online education is so much more than just a cool feature for teachers – it is an essential part of modern teaching. In online math teaching, it is common to use tools to share screens, compute and calculate mathematical examples, while the students can watch the process. On the other hand, flipped classroom models are on the rise, with their focus on how students can gather knowledge by watching videos and on the teacher’s use of technological tools for information transfer. This paper proposes a co-educational teaching approach for coding and engineering subjects with the help of drone-building to spark interest in technology and create a platform for knowledge transfer. The project combines aspects from mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure and rotation) and coding (computational thinking, block-based programming, JavaScript and Python) and makes use of collaborative-shared 3D Modeling with clara.io, where students create mathematics knowhow. The instructor follows a problem-based learning approach and encourages their students to find solutions in their own time and in their own way, which will help them develop new skills intuitively and boost logically structured thinking. The collaborative aspect of working in groups will help the students develop communication skills as well as structural and computational thinking. Students are not just listeners as in traditional classroom settings, but play an active part in creating content together by compiling a Handbook of Knowledge (called “open book”) with examples and solutions. Before students start calculating, they have to write down all their ideas and working steps in full sentences so other students can easily follow their train of thought. Therefore, students will learn to formulate goals, solve problems, and create a ready-to use product with the help of “reverse engineering”, cross-referencing and creative thinking. The work on drones gives the students the opportunity to create a real-life application with a practical purpose, while going through all stages of product development.

Keywords: flipped classroom, co-creational education, coding, making, drones, co-education, ARCS-model, problem-based learning

Procedia PDF Downloads 120
348 Disparities in Language Competence and Conflict: The Moderating Role of Cultural Intelligence in Intercultural Interactions

Authors: Catherine Peyrols Wu

Abstract:

Intercultural interactions are becoming increasingly common in organizations and life. These interactions are often the stage for miscommunication and conflict. In management research, these problems are commonly attributed to cultural differences in values and interactional norms. As a result, the notion that intercultural competence can minimize these challenges is widely accepted. Cultural differences, however, are not the only source of challenge during intercultural interactions. The need to rely on a lingua franca, a common language between people who have different mother tongues, is another important one. In theory, a lingua franca can improve communication and ease coordination. In practice, however, disparities in people's ability and confidence to communicate in the language can exacerbate tensions and generate inefficiencies. In this study, we draw on power theory to develop a model of disparities in language competence and conflict in a multicultural work context. Specifically, we hypothesized that differences in language competence between interaction partners would be positively related to conflict, such that people would report greater conflict with partners who have more dissimilar levels of language competence and less conflict with partners with more similar levels of language competence. Furthermore, we proposed that cultural intelligence (CQ), an intercultural competence that denotes an individual's capability to be effective in intercultural situations, would weaken the relationship between disparities in language competence and conflict, such that people would report less conflict with partners who have more dissimilar levels of language competence when the interaction partner has high CQ and more conflict when the partner has low CQ. We tested this model with a sample of 135 undergraduate students working in multicultural teams for 13 weeks. We used a round-robin design to examine conflict in 646 dyads nested within 21 teams. Results of analyses using social relations modeling provided support for our hypotheses. Specifically, we found that in intercultural dyads with large disparities in language competence, partners with the lower level of language competence reported higher levels of interpersonal conflict. However, this relationship disappeared when the partner with higher language competence was also high in CQ. These findings suggest that communication in a lingua franca can be a source of conflict in intercultural collaboration when partners differ in their level of language competence, and that CQ can alleviate these effects during collaboration with partners who have relatively lower levels of language competence. Theoretically, this study underscores the benefits of CQ as a complement to language competence for intercultural effectiveness. Practically, these results further attest to the benefits of investing resources to develop language competence and CQ in employees engaged in multicultural work.
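The core moderation hypothesis can be sketched as an interaction term in a regression, as below; the study itself used social relations modeling for the round-robin dyadic data, and the variables and data here are synthetic illustrations only.

```python
# Sketch of the moderation hypothesis only (conflict ~ disparity * partner CQ);
# data and variable names are synthetic, not the study's dyadic dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 600
df = pd.DataFrame({
    "language_disparity": rng.uniform(0, 1, n),   # |difference in language competence|
    "partner_cq": rng.uniform(1, 7, n),           # partner's cultural intelligence
})
# Hypothesized pattern: disparity raises conflict, partner CQ weakens that effect.
df["conflict"] = (0.8 * df.language_disparity
                  - 0.1 * df.partner_cq * df.language_disparity
                  + rng.normal(scale=0.3, size=n))

model = smf.ols("conflict ~ language_disparity * partner_cq", data=df).fit()
print(model.params)   # a negative interaction term is the predicted moderation
```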

Keywords: cultural intelligence, intercultural interactions, language competence, multicultural teamwork

Procedia PDF Downloads 165
347 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components

Authors: Najeh Lakhoua

Abstract:

Introduction: Scientific developments and techniques for the systemic approach have generated several names for it: systems analysis and structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach that organizes knowledge, creates a universal design language, and controls complex sets. System analysis is structured sequentially in steps: observation of the system by various observers from various aspects, analysis of interactions and regulatory chains, modeling that takes into account the evolution of the system, and simulation and real tests in order to reach a consensus. The system approach thus allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis of Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods have been proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted in this paper to contribute to the system analysis of an Unmanned Aerial Vehicle is based on the use of SADT. We present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls, and communications). Results: In this part, we present the application of the SADT method to the functional analysis of the UAV components. This SADT model is composed exclusively of actigrams. It starts with the main function 'To analyse the UAV components'. This function is then broken into sub-functions, and this process is developed until the last decomposition level has been reached (levels A1, A2, A3 and A4). Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certitude which model is the right one or, at least, the best. This kind of model allows users sufficient freedom in its construction, and so the subjective factor introduces an additional dimension to its validation. That is why the validation step as a whole necessitates the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis to Unmanned Aerial Vehicle components, based on the SADT method (Structured Analysis and Design Technique). This functional analysis demonstrated the usefulness of the SADT method and its ability to describe complex dynamic systems.
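As one possible way to record such a decomposition in code, the sketch below stores an actigram hierarchy as a nested structure; the particular A1-A4 decomposition shown is an assumption for illustration, not the authors' SADT model.

```python
# Illustrative data structure only: one way to record an SADT actigram hierarchy
# (A0 decomposed into A1-A4) for the UAV components named in the abstract.
sadt_model = {
    "A0": {
        "function": "Analyse the UAV components",
        "children": {
            "A1": {"function": "Analyse body, power supply and platform"},
            "A2": {"function": "Analyse computing, sensors and actuators"},
            "A3": {"function": "Analyse software and loop principles"},
            "A4": {"function": "Analyse flight controls and communications"},
        },
    }
}

def print_actigrams(node, indent=0):
    """Walk the hierarchy and print each actigram with its decomposition level."""
    for code, box in node.items():
        print("  " * indent + f"{code}: {box['function']}")
        print_actigrams(box.get("children", {}), indent + 1)

print_actigrams(sadt_model)
```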

Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture

Procedia PDF Downloads 204
346 Applying the Quad Model to Estimate the Implicit Self-Esteem of Patients with Depressive Disorders: Comparing the Psychometric Properties with the Implicit Association Test Effect

Authors: Yi-Tung Lin

Abstract:

Researchers commonly assess implicit self-esteem with the Implicit Association Test (IAT). The IAT's measure, often referred to as the IAT effect, indicates the strength of automatic preferences for the self relative to others, which is often considered an index of implicit self-esteem. However, according to dual-process theory, the IAT does not rely entirely on automatic processes; it is also influenced by controlled processes. The present study, therefore, analyzed IAT data with the Quad model, separating four processes underlying IAT performance: the likelihood that the automatic association is activated by the stimulus in a trial (AC); that a correct response is discriminated in the trial (D); that the automatic bias is overcome in favor of a deliberate response (OB); and that, when the association is not activated and the individual fails to discriminate a correct answer, a guessing or response bias drives the response (G). The AC and G processes are automatic, while the D and OB processes are controlled. The AC parameter is considered the strength of the association activated by the stimulus, which reflects what implicit measures of social cognition aim to assess. The stronger the automatic association between self and positive valence, the more likely it is to be activated by a relevant stimulus. Therefore, the AC parameter was used as the index of implicit self-esteem in the present study. Meanwhile, the relationship between implicit self-esteem and depression has not been fully investigated. In the cognitive theory of depression, it is assumed that the negative self-schema is crucial in depression. From this point of view, implicit self-esteem should be negatively associated with depression. However, results across empirical studies are inconsistent. The aims of the present study were to examine the psychometric properties of the AC parameter (i.e., test-retest reliability and its correlations with explicit self-esteem and depression) and to compare them with those of the IAT effect. In the present study, 105 patients with depressive disorders completed the Rosenberg Self-Esteem Scale, the Beck Depression Inventory-II, and the IAT at pretest. After at least 3 weeks, the participants completed the second IAT. The data were analyzed with the latent-trait multinomial processing tree model (latent-trait MPT) using the TreeBUGS package in R. The results showed that the latent-trait MPT had a satisfactory model fit. The test-retest reliabilities of the AC parameter and the IAT effect were medium (r = .43, p < .0001) and small (r = .29, p < .01), respectively. Only the AC parameter showed a significant correlation with explicit self-esteem (r = .19, p < .05). Neither of the two indexes was correlated with depression. Collectively, the AC parameter was a more satisfactory index of implicit self-esteem than the IAT effect. The present study also supports findings that implicit self-esteem is not correlated with depression.
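A simplified sketch of the Quad model's response equations is given below; the coding of G as the probability of a correct guess and the parameter values are illustrative simplifications (applications typically code G as a bias toward a particular response key), and the study itself estimated the model with the latent-trait MPT approach in TreeBUGS.

```python
# Simplified sketch of Quad model response probabilities; parameters are placeholders.
def p_correct_compatible(AC, D, OB, G):
    # On compatible trials the activated association and the detected response
    # both point to the correct key (OB does not enter); otherwise it is a guess.
    return AC * D + AC * (1 - D) + (1 - AC) * D + (1 - AC) * (1 - D) * G

def p_correct_incompatible(AC, D, OB, G):
    # The activated association points to the wrong key, so a correct response
    # requires overcoming the bias (OB) or detection without activation.
    return AC * D * OB + (1 - AC) * D + (1 - AC) * (1 - D) * G

params = dict(AC=0.25, D=0.85, OB=0.60, G=0.50)   # hypothetical estimates
print(p_correct_compatible(**params), p_correct_incompatible(**params))
```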

Keywords: cognitive modeling, implicit association test, implicit self-esteem, quad model

Procedia PDF Downloads 127
345 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada

Authors: Bilel Chalghaf, Mathieu Varin

Abstract:

Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR), have the potential to overcome the limitations of aerial imagery. To date, few studies have used that data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 individual tall tree species (> 17 m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For the individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77). With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
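The modeling step described above can be sketched as a feature-selection-plus-classifier pipeline, as below; the synthetic features and labels stand in for the 313 WorldView-3/LiDAR object variables and 11 species, which are not reproduced here.

```python
# Sketch only: variable selection followed by three of the classifiers compared
# in the study, on synthetic stand-ins for the object-level features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 313))        # 313 object-level variables (synthetic)
y = rng.integers(0, 11, size=400)      # 11 tree species (synthetic labels)

models = {
    "SVM": SVC(kernel="rbf", C=10),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=16), clf)
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```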

Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR

Procedia PDF Downloads 133
344 Spatial Mapping of Variations in Groundwater of Taluka Islamkot Thar Using GIS and Field Data

Authors: Imran Aziz Tunio

Abstract:

Islamkot is an underdeveloped sub-district (Taluka) in the Tharparkar district, Sindh province, Pakistan, located between latitude 24°25'19.79"N to 24°47'59.92"N and longitude 70° 1'13.95"E to 70°32'15.11"E. Islamkot has an arid desert climate, and the region is generally devoid of perennial rivers, canals, and streams. It is highly dependent on rainfall, which is not considered a reliable surface water source, so groundwater has been the only key source of water for many centuries. To assess the groundwater potential, an electrical resistivity survey (ERS) was conducted in Islamkot Taluka. Groundwater investigations comprising 128 vertical electrical soundings (VES) were carried out to determine the groundwater potential and to obtain layered resistivity parameters qualitatively and quantitatively. The PASI Model 16 GL-N resistivity meter was used with a Schlumberger electrode configuration, with half current-electrode spacing (AB/2) ranging from 1.5 to 100 m and potential-electrode spacing (MN/2) from 0.5 to 10 m. The data were acquired with a maximum current-electrode spacing of 200 m. Data processing for the delineation of dune sand aquifers involved data inversion, and the interpretation of the inversion results was aided by forward modeling. The measured geo-electrical parameters were examined with Interpex IX1D software, and apparent resistivity curves and synthetic layered model parameters were mapped in the ArcGIS environment using the Inverse Distance Weighting (IDW) interpolation technique. Qualitative interpretation of the VES data shows that the number of geo-electrical layers in the area varies from three to four, with different resistivity values detected. Of the 128 VES model curves, 42 are three-layered and 86 are four-layered. The resistivity of the first subsurface layer (loose surface sand) varies from 16.13 Ωm to 3353.3 Ωm with thickness from 0.046 m to 17.52 m. The resistivity of the second subsurface layer (semi-consolidated sand) varies from 1.10 Ωm to 7442.8 Ωm with thickness from 0.30 m to 56.27 m. The resistivity of the third subsurface layer (consolidated sand) varies from 0.00001 Ωm to 3190.8 Ωm with thickness from 3.26 m to 86.66 m. The resistivity of the fourth subsurface layer (silt and clay) varies from 0.0013 Ωm to 16264 Ωm with thickness from 13.50 m to 87.68 m. The Dar Zarrouk parameters range as follows: longitudinal unit conductance S from 0.00024 to 19.91 mho; transverse unit resistance T from 7.34 to 40080.63 Ωm²; longitudinal resistivity RS from 1.22 to 3137.10 Ωm; and transverse resistivity RT from 5.84 to 3138.54 Ωm. The ERS data and Dar Zarrouk parameters were mapped, revealing that the study area has groundwater potential in the subsurface.
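The Dar Zarrouk parameters reported above follow directly from the layered resistivity model; the sketch below computes them from an illustrative set of layer thicknesses and resistivities, which are placeholders rather than measurements from the Islamkot VES stations.

```python
# Sketch only: Dar Zarrouk parameters from illustrative layer thicknesses (m)
# and resistivities (ohm-m); values are placeholders, not field measurements.
def dar_zarrouk(thicknesses, resistivities):
    H = sum(thicknesses)                                          # total thickness
    S = sum(h / r for h, r in zip(thicknesses, resistivities))    # longitudinal unit conductance (mho)
    T = sum(h * r for h, r in zip(thicknesses, resistivities))    # transverse unit resistance (ohm-m^2)
    RS = H / S                                                    # average longitudinal resistivity
    RT = T / H                                                    # average transverse resistivity
    return S, T, RS, RT

h = [5.0, 20.0, 40.0]        # e.g. loose sand, semi-consolidated sand, consolidated sand
rho = [300.0, 80.0, 25.0]
S, T, RS, RT = dar_zarrouk(h, rho)
print(f"S={S:.3f} mho, T={T:.1f} ohm-m^2, RS={RS:.1f} ohm-m, RT={RT:.1f} ohm-m")
```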

Keywords: electrical resistivity survey, GIS & RS, groundwater potential, environmental assessment, VES

Procedia PDF Downloads 110