Search results for: generalised linear model
15665 A Deep Learning Approach to Detect Complete Safety Equipment for Construction Workers Based on YOLOv7
Authors: Shariful Islam, Sharun Akter Khushbu, S. M. Shaqib, Shahriar Sultan Ramit
Abstract:
In the construction sector, ensuring worker safety is of the utmost significance. In this study, a deep learning-based technique is presented for identifying safety gear worn by construction workers, such as helmets, goggles, jackets, gloves, and footwear. The suggested method precisely locates these safety items using the YOLOv7 (You Only Look Once) object detection algorithm. The dataset utilized in this work consists of labeled images split into training, testing, and validation sets. Each image has bounding box labels that indicate where the safety equipment is located within the image. The model is trained to identify and categorize the safety equipment based on the labeled dataset through an iterative training approach; a custom dataset was used for training. Our trained model performed admirably, with good precision, recall, and F1-score for safety equipment recognition. The model's evaluation also produced encouraging results, with a mAP@0.5 score of 87.7%. The model performs effectively, making it possible to quickly identify safety equipment violations on building sites. A thorough evaluation of the outcomes reveals the model's advantages and points out potential areas for development. By offering an automatic and trustworthy method for safety equipment detection, this research contributes to the fields of computer vision and workplace safety. The proposed deep learning-based approach will increase safety compliance and reduce the risk of accidents in the construction industry.
Keywords: deep learning, safety equipment detection, YOLOv7, computer vision, workplace safety
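The mAP@0.5 metric reported above hinges on an intersection-over-union (IoU) test between predicted and ground-truth boxes. A minimal sketch of that test (the box coordinates are illustrative, not from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, truth, thresh=0.5):
    """A detection counts toward mAP@0.5 when its IoU with the ground truth reaches 0.5."""
    return iou(pred, truth) >= thresh
```

Averaging precision over recall levels for each class, with this 0.5 threshold deciding matches, yields the reported mAP@0.5.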
Procedia PDF Downloads 68
15664 Modeling Driving Distraction Considering Psychological-Physical Constraints
Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang
Abstract:
Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from physical motion constraints under distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, model accuracy is not very satisfying, due to a lack of modeling of the cognitive mechanism underlying the distraction. This study models driving distraction based on the Queueing Network-Model Human Processor (QN-MHP), utilizing the queuing structure of the model to perform task invocation and switching for distracted operation and control of the vehicle. Under the QN-MHP assumption about the cognitive sub-network, server F is a structural bottleneck: later information must wait for earlier information to leave server F before it can be processed there. Therefore, the waiting time for task switching needs to be calculated. Since the QN-MHP model has different information processing paths for auditory and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task must pass through the visual perception sub-network, and their stimuli are asynchronous, which is known as stimulus onset asynchrony (SOA); this must be accounted for when calculating the waiting time for task switching. For auditory distraction, the auditory distraction task and the driving task do not need to compete for the server resources of the perceptual sub-network, and their stimuli can be treated as synchronized without considering a time difference in receiving the stimuli. Following the Theory of Planned Behavior (TPB) for drivers, this study uses risk entropy as the decision criterion for driver task switching.
A logistic regression model with risk entropy as the independent variable is used to determine whether the driver performs a distraction task, explaining the relationship between perceived risk and distraction. Furthermore, to model a driver's perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which then drives the classical Intelligent Driver Model (IDM). The proposed driving distraction model integrates the psychological cognitive process of a driver with physical motion characteristics, resulting in both high accuracy and interpretability. This paper uses 773 segments of distracted car-following from the Shanghai Naturalistic Driving Study (SH-NDS) data to classify the patterns of distracted behavior on different road facilities and obtains three types of distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that the model can effectively simulate distracted car-following behavior of different patterns on various roadway facilities, and its performance is better than the traditional IDM with distraction-related error terms. The proposed model overcomes the limitations of physical-constraints-based models in replicating dangerous driving behaviors and the internal characteristics of individual drivers. Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios.
Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints
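The logistic-regression step described above can be sketched with synthetic data; the pattern below (drivers engaging the distraction task at low risk entropy) and all coefficients are illustrative assumptions, not the calibrated SH-NDS model:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit P(distraction | risk entropy x) = sigmoid(w*x + b) by gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Synthetic pattern (assumed): distraction tasks are engaged when risk entropy is LOW.
random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(200)]   # normalized risk entropy
ys = [1 if x < 0.4 else 0 for x in xs]                # 1 = performs distraction task
w, b = fit_logistic(xs, ys)
```

With this pattern the fitted slope comes out negative: predicted distraction probability falls as perceived risk rises, which is the qualitative relationship the abstract describes.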
Procedia PDF Downloads 91
15663 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong
Abstract:
Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural networks that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally must be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout a deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate each filter response with a scalar weight that corresponds to the standard deviation of the random filter. This allows a large number of random filters to be used at the cost of one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is required for the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition
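The layer described above, fixed multi-scale random kernels with one trainable scalar each, can be sketched in NumPy as follows; the filter sizes, the crop-and-sum merge rule, and the image size are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_filter_bank(sizes):
    """Fixed random kernels of varying sizes; they are never trained."""
    return [rng.standard_normal((k, k)) for k in sizes]

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D correlation."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def layer_forward(image, filters, scalars):
    """Each fixed-filter response is scaled by its single trainable scalar;
    responses are cropped to a common size and summed (one possible merge)."""
    responses = [s * conv2d_valid(image, f) for f, s in zip(filters, scalars)]
    m = min(r.shape[0] for r in responses)
    return sum(r[:m, :m] for r in responses)

image = rng.standard_normal((16, 16))
filters = random_filter_bank([3, 5, 7])   # multi-scale random kernels
scalars = np.ones(len(filters))           # the ONLY trainable parameters
out = layer_forward(image, filters, scalars)
```

The key property the abstract exploits is visible here: three filters of sizes 3, 5, and 7 contribute only three trainable unknowns, regardless of kernel size.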
Procedia PDF Downloads 290
15662 Physical Education Effect on Sports Science Analysis Technology
Authors: Peter Adly Hamdy Fahmy
Abstract:
The aim of the study was to examine the effects of a physical education program on student learning by comparing the teaching of personal and social responsibility (TPSR) combined with a sports education model against TPSR combined with a traditional teaching model, with learning outcomes involving sports self-efficacy, athletic performance, enthusiasm for sport, group cohesion, sense of responsibility, and game performance. The participants were 3 secondary school physical education teachers and 6 physical education classes totaling 133 students, with 75 students in the experimental group and 58 in the control group; each teacher taught both an experimental class and a control class for 16 weeks. The research methods included surveys, interviews, and focus group meetings. Research instruments included the Personal and Social Responsibility Questionnaire, Sports Enthusiasm Scale, Group Cohesion Scale, Sports Self-Efficacy Scale, and Game Performance Assessment Tool. Multivariate analyses of covariance and repeated-measures ANOVA were used to examine differences in student learning outcomes between the TPSR sports education model and the TPSR traditional teaching model. The research findings are as follows: 1) The TPSR sports education model can improve students' learning outcomes, including sports self-efficacy, game performance, sports enthusiasm, team cohesion, group awareness, and responsibility. 2) The traditional teaching model with TPSR could improve student learning outcomes, including sports self-efficacy, responsibility, and game performance. 3) The sports education model with TPSR could improve learning outcomes more than the traditional teaching model with TPSR, including sports self-efficacy, sports enthusiasm, responsibility, and game performance.
4) Based on qualitative data on teachers' and students' learning experience, the physical education model with TPSR significantly improves learning motivation, group interaction, and sense of play. The results suggest that physical education with TPSR could further improve learning outcomes in the physical education program. On the other hand, the hybrid curriculum projects TPSR - Physical Education and TPSR - Traditional Education are good curriculum projects for moral character education that can be used in school physical education.
Keywords: approach competencies, physical education, teachers employment, graduate, physical education and sport sciences, SWOT analysis, character education, sport season, game performance, sport competence
Procedia PDF Downloads 46
15661 Experimentally Validated Analytical Model for Thermal Analysis of Multi-Stage Depressed Collector
Authors: Vishant Gahlaut, A Mercy Latha, Sanjay Kumar Ghosh
Abstract:
Multi-stage depressed collectors (MDCs) are used as an efficiency enhancement technique in traveling wave tubes (TWTs): the high-energy electron beam, after its interaction with the RF signal, gets velocity-sorted and collected at the various depressed electrodes of the MDC. The ultimate goal is to identify an optimum thermal management scheme (cooling mechanism) that can extract heat efficiently from the electrodes. Careful thermal analysis incorporating the cooling mechanism is required to ensure that the maximum temperature does not exceed safe limits. A simple analytical model for quick prediction of the thermal behavior has been developed. The model has been developed for the worst-case un-modulated DC condition, in which all the thermal power is dissipated in the last electrode (typically, the fourth electrode in the case of a four-stage depressed collector). It considers the thermal contact resistances at the various braze joints, accounting for practical non-uniformities. Analytical results obtained from the model have been validated against simulated and experimental results.
Keywords: multi-stage depressed collector, TWTs, thermal contact resistance, thermal management
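The core idea of such an analytical model, a series path of conduction and braze-contact resistances between the dissipating electrode and the heat sink, can be sketched as follows; all material values, geometry, and the contact resistance below are illustrative assumptions, not the paper's data:

```python
def conduction_resistance(thickness_m, conductivity_w_mk, area_m2):
    """1-D conduction resistance R = L / (k * A), in K/W."""
    return thickness_m / (conductivity_w_mk * area_m2)

def peak_temperature(power_w, sink_temp_c, resistances_k_per_w):
    """Worst-case electrode temperature for a series thermal path:
    T_max = T_sink + P * sum(R_i)."""
    return sink_temp_c + power_w * sum(resistances_k_per_w)

# Illustrative path: copper electrode wall -> braze joint (contact) -> ceramic spacer.
r_copper = conduction_resistance(0.004, 390.0, 1.0e-4)    # ~0.103 K/W
r_braze_contact = 0.05                                    # assumed braze-joint value, K/W
r_ceramic = conduction_resistance(0.002, 170.0, 1.0e-4)   # ~0.118 K/W
t_max = peak_temperature(250.0, 40.0, [r_copper, r_braze_contact, r_ceramic])
```

Adding the braze-joint contact resistances to the pure conduction terms, as the abstract emphasizes, raises the predicted peak temperature noticeably over an idealized joint-free path.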
Procedia PDF Downloads 224
15660 Dosimetric Application of α-Al2O3:C for Food Irradiation Using TA-OSL
Authors: A. Soni, D. R. Mishra, D. K. Koul
Abstract:
α-Al2O3:C has been reported to have deeper traps at 600°C and 900°C, respectively. These traps have been reported to be accessed at relatively lower temperatures (122°C and 322°C, respectively) using thermally assisted OSL (TA-OSL). In this work, the dose response of α-Al2O3:C was studied in the dose range of 10 Gy to 10 kGy for its application in food irradiation in the low (up to 1 kGy) and medium (1 to 10 kGy) dose ranges. The TOL (thermo-optically stimulated luminescence) measurements were carried out on a Risø TL/OSL system (TL-DA-15) with blue light-emitting diodes (λ = 470 ± 30 nm) as the stimulation source, with the power level set at 90% of the maximum stimulation intensity for the blue LEDs (40 mW/cm²). The observations were carried out on a commercial α-Al2O3:C phosphor. The TOL experiments were carried out with 300 active channels and 1 inactive channel. Using these settings, the sample is subjected to linear thermal heating and constant optical stimulation. The detection filter used in all observations was a Hoya U-340 (λp ~ 340 nm, FWHM ~ 80 nm). Irradiation of the samples was carried out using a 90Sr/90Y β-source housed in the system. A heating rate of 2°C/s was preferred for the TL measurements so as to reduce the temperature lag between the heater plate and the samples. To study the dose response of the deep traps of α-Al2O3:C, samples were irradiated with various doses ranging from 10 Gy to 10 kGy. For each dose, three samples were irradiated. In order to record the TA-OSL, TL was initially recorded up to a temperature of 400°C to deplete the signal due to the 185°C main dosimetry TL peak in α-Al2O3:C, which is also associated with the basic OSL traps. After the TL readout, the sample was subsequently subjected to the TOL measurement. As a result, two well-defined TA-OSL peaks, at 121°C and at 232°C, occur in both the time and temperature domains; these are distinct from the main dosimetric TL peak, which occurs at ~185°C.
The linearity of the integrated TOL signal has been measured as a function of absorbed dose and found to be linear up to 10 kGy. Thus, it can be used for the low and intermediate dose ranges required for food irradiation. The deep energy level defects of the α-Al2O3:C phosphor can be accessed using the TOL section of the Risø reader system.
Keywords: α-Al2O3:C, deep traps, food irradiation, TA-OSL
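A calibration check of the kind described, fitting the integrated TA-OSL signal against absorbed dose and verifying linearity, might look like the following sketch; the readings are synthetic with an assumed sensitivity and noise level, not the measured data:

```python
import numpy as np

doses = np.array([10, 100, 1000, 5000, 10000], dtype=float)      # Gy, spanning 10 Gy - 10 kGy
rng = np.random.default_rng(1)
signal = 3.2 * doses * (1 + rng.normal(0, 0.01, doses.size))     # synthetic, ~1% noise

# Fit signal = a*dose + b and quantify linearity via R^2.
a, b = np.polyfit(doses, signal, 1)
pred = a * doses + b
ss_res = np.sum((signal - pred) ** 2)
ss_tot = np.sum((signal - signal.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

An R² very close to 1 over the full range is what justifies using the integrated signal as a dose proxy for both the low and medium food-irradiation ranges.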
Procedia PDF Downloads 300
15659 Cell-free Bioconversion of n-Octane to n-Octanol via a Heterogeneous and Bio-Catalytic Approach
Authors: Shanna Swart, Caryn Fenner, Athanasios Kotsiopoulos, Susan Harrison
Abstract:
Linear alkanes are produced as by-products of the increasing use of gas-to-liquid fuel technologies for synthetic fuel production and offer great potential for value addition. Their current use as low-value fuels and solvents does not maximize this potential. Therefore, attention has been drawn towards direct activation of these aliphatic alkanes to more useful products such as alcohols, aldehydes, carboxylic acids, and derivatives. Cytochrome P450 monooxygenases (P450s) can be used for activation of these aliphatic alkanes in whole-cell or cell-free systems. Limitations of whole-cell systems include reduced mass transfer, stability issues, and possible side reactions. Since P450 systems are little studied as cell-free systems, they form the focus of this study. Challenges of a cell-free system include co-factor regeneration, substrate availability, and enzyme stability. Enzyme immobilization offers a positive outlook on this dilemma, as it may enhance the stability of the enzyme. In the present study, two different P450s (CYP153A6 and CYP102A1), as well as the accessory enzymes required for electron transfer (ferredoxin and ferredoxin reductase) and co-factor regeneration (glucose dehydrogenase), have been expressed in E. coli and purified by metal affinity chromatography. Glucose dehydrogenase (GDH) was used as a model enzyme to assess the potential of various enzyme immobilization strategies, including surface attachment on MagReSyn® microspheres with various functionalities and on electrospun nanofibers; self-assembly-based methods forming cross-linked enzymes (CLEs), cross-linked enzyme aggregates (CLEAs), and spherezymes; and entrapment in a sol-gel. The nanofibers were synthesized by electrospinning, which required the building of an electrospinning machine. The nanofiber morphology has been analyzed by SEM, and binding will be further verified by FT-IR.
Covalent attachment-based methods showed limitations: only ferredoxin reductase and GDH retained activity after immobilization, which was largely attributed to insufficient electron transfer and to inactivation caused by the crosslinkers (60% and 90% relative activity loss for the free enzyme when using 0.5% glutaraldehyde and glutaraldehyde/ethylenediamine (1:1 v/v), respectively). So far, initial experiments with GDH have shown the most potential when the enzyme is immobilized via its His-tag onto the surface of MagReSyn® microspheres functionalized with Ni-NTA. It was found that crude GDH could be simultaneously purified and immobilized with sufficient activity retention. Immobilized pure and crude GDH could be recycled 9 and 10 times, respectively, with approximately 10% activity remaining. The immobilized GDH was also more stable than the free enzyme after storage for 14 days at 4°C. This immobilization strategy will also be applied to the P450s and optimized with regard to enzyme loading and immobilization time, as well as characterized and compared with the free enzymes. It is anticipated that the proposed immobilization set-up will offer enhanced enzyme stability (as well as reusability and easy recovery) and minimal mass transfer limitation, with continuous co-factor regeneration and minimal enzyme leaching. All of this provides a positive outlook for this robust multi-enzyme system for efficient activation of linear alkanes, as well as for the immobilization of multiple enzymes, including multimeric enzymes, for different bio-catalytic applications beyond alkane activation.
Keywords: alkane activation, cytochrome P450 monooxygenase, enzyme catalysis, enzyme immobilization
Procedia PDF Downloads 227
15658 Hydro-Mechanical Forming of AZ31 Sheet
Authors: Yong-Nam Kwon
Abstract:
In the present study, we designed a hydro-mechanical forming process in which AZ31 sheet is first drawn into a preform and then gas blow formed to the accurate final geometry. To judge the formability enhancement of AZ31 sheet, the model geometry was taken from a practical automotive part of considerable depth with complicated curvatures, for which single-sheet forming had been proven unable to give a successful part. Experimentally, we succeeded in making the model part to accurate dimensions. The optimum forming conditions for the respective forming steps are considered the most important technical features of this hydro-mechanical process and are discussed in detail. The effort to avoid detrimental abnormal grain growth is also presented and discussed with a view to practical application.
Keywords: hydro-mechanical forming, AZ31, abnormal grain growth, model geometry
Procedia PDF Downloads 512
15657 Using of the Fractal Dimensions for the Analysis of Hyperkinetic Movements in the Parkinson's Disease
Authors: Sadegh Marzban, Mohamad Sobhan Sheikh Andalibi, Farnaz Ghassemi, Farzad Towhidkhah
Abstract:
Parkinson's disease (PD), which is characterized by tremor at rest, rigidity, akinesia or bradykinesia, and postural instability, affects the quality of life of the individuals involved. The concept of a fractal is most often associated with irregular geometric objects that display self-similarity. The fractal dimension (FD) can be used to quantify the complexity and self-similarity of an object such as tremor. In this work, we aim to propose a new method for evaluating hyperkinetic movements such as tremor, using the FD and other correlated parameters, in patients suffering from PD. In this study, we used the tremor data from PhysioNet. The database consists of fourteen participants diagnosed with PD, including six patients with high-amplitude tremor and eight with low-amplitude tremor. We tried to extract features from the data that can distinguish between patients before and after medication. We selected fractal dimensions, including the correlation dimension, box dimension, and information dimension. The Lilliefors test was used to test for normality. A paired t-test or the Wilcoxon signed-rank test was then used to find differences between patients before and after medication, depending on whether normality was detected or not. In addition, two-way ANOVA was used to investigate the possible association between the therapeutic effects and the features extracted from the tremor. Only one of the extracted features showed significant differences between patients before and after medication. According to the results, the correlation dimension was significantly different before and after the patients' medication (p=0.009). Two-way ANOVA also demonstrated significant differences only in the medication effect (p=0.033); no significant differences were found between subjects (p=0.34) or in the interaction (p=0.97). The most striking result that emerged from the data is that the correlation dimension could quantify medication treatment based on tremor.
This study has provided a technique to evaluate a non-linear measure for quantifying medication effects, namely the correlation dimension. Furthermore, this study supports the idea that fractal dimension analysis yields additional information compared with conventional spectral measures in the detection of poor-prognosis patients.
Keywords: correlation dimension, non-linear measure, Parkinson’s disease, tremor
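Of the fractal dimensions mentioned, the box-counting dimension is the simplest to sketch. The snippet below estimates it for a known test set (points on a straight line, whose dimension is ≈ 1) rather than real tremor data; the scale range is an illustrative choice:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting dimension of a 2-D point set from the slope
    of log N(eps) versus log(1/eps), where N(eps) is the number of occupied boxes."""
    counts = []
    for eps in scales:
        boxes = set(map(tuple, np.floor(points / eps).astype(int)))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

# Sanity check on a known set: points along a straight line have dimension ~1.
t = np.linspace(0.0, 1.0, 20000)
line = np.column_stack([t, 0.5 * t])
dim = box_counting_dimension(line, scales=[0.1, 0.05, 0.025, 0.0125, 0.00625])
```

For a tremor recording one would instead embed the accelerometer signal in a delay-coordinate space before applying such an estimator; the correlation dimension used in the study follows the same log-log slope idea but counts point pairs within a radius instead of occupied boxes.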
Procedia PDF Downloads 244
15656 A West Coast Estuarine Case Study: A Predictive Approach to Monitor Estuarine Eutrophication
Authors: Vedant Janapaty
Abstract:
Estuaries are wetlands where fresh water from streams mixes with salt water from the sea. Also known as the "kidneys of our planet", they are extremely productive environments that filter pollutants, absorb floods from sea level rise, and shelter a unique ecosystem. However, eutrophication and loss of native species are ailing our wetlands. There is a lack of uniform data collection and sparse research on correlations between satellite data and in situ measurements. Remote sensing (RS) has shown great promise in environmental monitoring. This project attempts to use satellite data and correlate the derived metrics with in situ observations collected at five estuaries. Satellite images were processed in Python to calculate 7 spectral indices (SIs), and average SI values were calculated per month over 23 years. Publicly available data from 6 sites at ELK were used to obtain 10 parameters (OPs), whose average values were likewise calculated per month over 23 years. Linear correlations between the 7 SIs and 10 OPs were computed and found to be inadequate (correlation = 1 to 64%). Fourier transform analysis was then performed on the 7 SIs: dominant frequencies and amplitudes were extracted, and a machine learning (ML) model was trained, validated, and tested for the 10 OPs. Better correlations were observed between SIs and OPs with certain time delays (0-, 3-, 4-, and 6-month delays), and ML was performed again. The OPs saw improved R² values in the range of 0.2 to 0.93. This approach can be used to obtain periodic analyses of overall wetland health from satellite indices. It shows that remote sensing can be used to develop correlations with critical parameters that measure eutrophication in situ, and can be used by practitioners to easily monitor wetland health.
Keywords: estuary, remote sensing, machine learning, Fourier transform
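The Fourier-transform feature extraction described above, pulling the dominant frequency and amplitude from 23 years of monthly index values, can be sketched as follows; the synthetic annual cycle stands in for a real spectral index series:

```python
import numpy as np

def dominant_cycle(series, dt_months=1.0):
    """Return (period_in_months, amplitude) of the strongest non-DC Fourier component."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                            # drop the DC term
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=dt_months)
    k = 1 + np.argmax(np.abs(spectrum[1:]))     # skip bin 0 explicitly
    amplitude = 2.0 * np.abs(spectrum[k]) / x.size
    return 1.0 / freqs[k], amplitude

# Synthetic monthly index with an annual cycle, like 23 years of one SI.
months = np.arange(23 * 12)
si = 0.3 + 0.12 * np.sin(2 * np.pi * months / 12.0)
period, amp = dominant_cycle(si)
```

Features like `period` and `amp` (one pair per index, possibly at several lags) are the kind of compact inputs an ML model can regress against the in situ parameters.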
Procedia PDF Downloads 104
15655 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation
Authors: W. Meron Mebrahtu, R. Absi
Abstract:
Velocity distribution in turbulent open-channel flows is organized in a complex manner, due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models suitable for engineering applications. However, predictions remain inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. The aim of this work is therefore to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplification of the general transport equations and an accurate representation of the eddy viscosity. A wide rectangular open channel seems suitable to begin the study; the other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profile: one from the Reynolds-averaged Navier-Stokes (RANS) equations and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for the eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl's eddy viscosity model and the Van Driest mixing length gives more precise results.
For the log layer and outer region, a mixing length equation derived from von Karman's similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for the eddy viscosity is used. This method yields more accurate velocity profiles with a single value of the damping coefficient that is valid under different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
Keywords: accuracy, eddy viscosity, sewers, velocity profile
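The near-wall part of the approach, integrating the velocity gradient obtained from a mixing-length eddy viscosity with Van Driest damping, can be sketched numerically; κ = 0.41 and A⁺ = 26 are standard textbook values, not necessarily those calibrated in the paper:

```python
import math

def velocity_profile(y_plus_max, n=20000, kappa=0.41, a_plus=26.0):
    """Integrate du+/dy+ = 2 / (1 + sqrt(1 + 4 l+^2)), which follows from
    1 = (1 + nu_t+) du+/dy+ with nu_t+ = l+^2 du+/dy+, using the Van Driest
    mixing length l+ = kappa * y+ * (1 - exp(-y+/A+)); trapezoidal rule."""
    dy = y_plus_max / n
    u = 0.0
    prev = 1.0                      # gradient at the wall (l+ = 0 there)
    for i in range(1, n + 1):
        y = i * dy
        l = kappa * y * (1.0 - math.exp(-y / a_plus))
        grad = 2.0 / (1.0 + math.sqrt(1.0 + 4.0 * l * l))
        u += 0.5 * (prev + grad) * dy
        prev = grad
    return u

u100 = velocity_profile(100.0)              # should sit near the log law
log_law = math.log(100.0) / 0.41 + 5.0      # (1/kappa) ln y+ + B, with B ~ 5
```

The profile reproduces u⁺ ≈ y⁺ in the viscous sublayer and blends smoothly into the logarithmic region, which is the behavior the abstract reports for the Prandtl/Van Driest combination.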
Procedia PDF Downloads 112
15654 Online Learning for Modern Business Models: Theoretical Considerations and Algorithms
Authors: Marian Sorin Ionescu, Olivia Negoita, Cosmin Dobrin
Abstract:
This scientific communication reports on and discusses learning models adaptable to modern business problems and to models specific to digital concepts and paradigms. In the PAC (probably approximately correct) learning model, the learning process begins by receiving a batch of learning examples, which is used to acquire a hypothesis; once training is complete, this hypothesis is used to predict new operational examples. For complex business problems, many candidate models should be introduced and evaluated to estimate the induced results, so that the totality of the results can be used to develop a predictive rule that anticipates the choice of new models. By contrast, in online learning processes there is no separation between the learning (training) phase and the prediction phase. Every time a business model is approached, a test example is considered from the outset until the prediction of a model deemed correct from the point of view of the business decision. After a part of the business model is chosen, the label with the logical value "true" becomes known. Some of the business models are then used as learning (training) examples, which helps to improve the prediction mechanisms for future business models.
Keywords: machine learning, business models, convex analysis, online learning
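The predict-then-learn loop described above is exactly the shape of the classic online perceptron: predict on each arriving example, observe the revealed label, and update the hypothesis only on a mistake. A sketch on a synthetic linearly separable stream (the data and features are illustrative, not a real business dataset):

```python
def perceptron_online(stream):
    """Online learning: no separate training phase; the hypothesis (w, b)
    is updated immediately whenever the revealed label shows a mistake."""
    w = [0.0, 0.0]
    b = 0.0
    mistakes = 0
    for x, label in stream:                    # label in {-1, +1}, revealed after predicting
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
        if pred != label:                      # mistake-driven update
            mistakes += 1
            w[0] += label * x[0]
            w[1] += label * x[1]
            b += label
    return w, b, mistakes

# Linearly separable stream: label = sign(x0 - x1); repeated to simulate recurring cases.
stream = [((2.0, 1.0), 1), ((1.0, 3.0), -1), ((3.0, 0.5), 1),
          ((0.5, 2.0), -1), ((2.5, 1.5), 1), ((1.0, 2.5), -1)] * 5
w, b, mistakes = perceptron_online(stream)
```

On separable data the number of mistakes is bounded regardless of stream length, which is the sense in which the online hypothesis keeps improving as past decisions are fed back as training examples.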
Procedia PDF Downloads 140
15653 Dynamics of the Coupled Fitzhugh-Rinzel Neurons
Authors: Sanjeev Kumar Sharma, Arnab Mondal, Ranjit Kumar Upadhyay
Abstract:
Excitable cells often produce different oscillatory activities that help us understand the transmission and processing of signals in the neural system. We consider a FitzHugh-Rinzel (FH-R) model and study the different dynamics of the model, treating the parameter c as the predominant parameter. The model exhibits different types of neuronal responses, such as regular spiking, mixed-mode bursting oscillations (MMBOs), elliptic bursting, etc. Based on the bifurcation diagram, we consider three regimes (MMBOs, elliptic bursting, and the quiescent state). An analytical treatment of the occurrence of the supercritical Hopf bifurcation is presented. Further, we extend our study to a network of one hundred neurons with bi-directional synaptic coupling between them. In this article, we investigate the alternation of spiking propagation and bursting phenomena in uncoupled and coupled FH-R neurons. We show that a complete graph of heterogeneous, desynchronized neurons can exhibit different types of bursting oscillations for certain coupling strengths. For higher coupling strengths, all the neurons in the network show complete synchronization.
Keywords: excitable neuron model, spiking-bursting, stability and bifurcation, synchronization networks
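A single FH-R neuron can be integrated directly; a forward-Euler sketch with typical textbook parameter values (not necessarily those of the paper) shows the fast spiking, with the slow variable y available to shape bursting over longer timescales:

```python
def simulate_fhr(t_end=200.0, dt=0.01, I=0.5, a=0.7, b=0.8,
                 delta=0.08, mu=0.0001, c=-0.775, d=1.0):
    """Forward-Euler integration of the FitzHugh-Rinzel equations:
        v' = v - v^3/3 - w + y + I
        w' = delta * (a + v - b*w)
        y' = mu * (c - v - d*y)      (y is the slow bursting variable)
    Parameter values are common textbook choices, assumed for illustration."""
    v, w, y = -1.0, 0.0, 0.0
    vs = []
    for _ in range(int(t_end / dt)):
        dv = v - v**3 / 3.0 - w + y + I
        dw = delta * (a + v - b * w)
        dy = mu * (c - v - d * y)
        v += dt * dv
        w += dt * dw
        y += dt * dy
        vs.append(v)
    return vs

vs = simulate_fhr()   # repetitive spiking on this timescale (y barely moves)
```

Increasing μ lets y drift enough to carry the fast subsystem in and out of its spiking regime, which is the mechanism behind the bursting regimes the abstract classifies; coupling many such units through a synaptic current term yields the network experiments.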
Procedia PDF Downloads 128
15652 Environmental Resilience in Sustainability Outcomes of Spatial-Economic Model Structure on the Topology of Construction Ecology
Authors: Moustafa Osman Mohammed
Abstract:
The resilience and sustainability of construction ecology are essential to the world's socio-economic development. Environmental resilience is crucial in relating construction ecology to the topology of a spatial-economic model. Sustainability of the spatial-economic model gives attention to green business in order to comply with the Earth's system of naturally exchanged ecosystem patterns. Systems ecology exhibits consistent, periodic cycles that preserve energy and material flows in the Earth's system. When the model structure influences the communication of internal and external features in system networks, it postulates the valence of the first-level spatial outcomes (i.e., project compatibility success). These instrumentalities are dependent on second-level outcomes (i.e., participant security satisfaction). The outcomes of the model are based on measuring database efficiency from 2015 to 2025. The model topology reflects the state of the art in value-orientation impact and corresponds to the complexity of sustainability issues (e.g., build a consistent database necessary to approach the spatial structure; construct the spatial-economic model; develop a set of sustainability indicators associated with the model; allow quantification of social, economic, and environmental impact; use value orientation as a set of important sustainability policy measures), and demonstrates environmental resilience. The model manages and develops schemes from the perspective of multiple pollutant sources through input-output criteria. These criteria evaluate the effects of external insertions by conducting Monte Carlo simulations and analyses using matrices in a unique spatial structure. The balance, or "equilibrium patterns", such as collective biosphere features, has a composite index of the distributed feedback flows. These feedback flows have a dynamic structure with physical and chemical properties for gradual prolongation of incremental patterns.
While these structures are argued from systems ecology, static loads are not decisive from an artistic/architectural perspective. The popularity of system resilience within systems structures related to ecology has not been achieved without generating confusion and vagueness. However, this topic remains relevant for forecasting future scenarios in which industrial regions will need to keep dealing with the impact of relative environmental deviations. The model attempts to unify the analytic and analogical structure of urban environments using database software to integrate sustainability outcomes, with the process based on the systems topology of construction ecology.
Keywords: system ecology, construction ecology, industrial ecology, spatial-economic model, systems topology
Procedia PDF Downloads 19
15651 Development of a Coupled Thermal-Mechanical-Biological Model to Simulate Impacts of Temperature on Waste Stabilization at a Landfill in Quebec, Canada
Authors: Simran Kaur, Paul J. Van Geel
Abstract:
A coupled Thermal-Mechanical-Biological (TMB) model was developed to analyze the impact of temperature on waste stabilization at a Municipal Solid Waste (MSW) landfill in Quebec, Canada, using COMSOL Multiphysics, a finite element-based software. For waste placed in landfills in northern climates during winter months, it can take months or even years before the waste approaches ideal temperatures for biodegradation to occur. Therefore, the proposed model links biodegradation-induced strain in MSW to waste temperatures and the corresponding heat generation rates resulting from anaerobic degradation. This provides a link between the thermal-biological and mechanical behavior of MSW. The thermal properties of MSW are further linked to density, which is tracked and updated in the mechanical component of the model, providing a mechanical-thermal link. The settlement of MSW is modelled based on the concept of viscoelasticity. The specific viscoelastic model used is a single Kelvin–Voigt body, in which the finite element response is controlled by the elastic material parameters: Young's modulus and Poisson's ratio. The numerical model was validated with 10 years of temperature and settlement data collected from a landfill in Ste. Sophie, Quebec. The coupled TMB modelling framework, which simulates waste lifts as they are placed progressively in the landfill, allows for the optimization of several thermal and mechanical parameters throughout the depth of the waste profile and helps to better understand the temperature dependence of MSW stabilization. The model is able to illustrate how waste placed in the winter months can delay biodegradation-induced settlement and the generation of landfill gas. A delay in waste stabilization will affect the utilization of the approved airspace prior to the placement of a final cover and will affect post-closure maintenance.
The model provides a valuable tool to assess different waste placement strategies in order to increase airspace utilization within landfills operating under different climates, in addition to understanding conditions for increased gas generation for recovery as a green and renewable energy source.
Keywords: coupled model, finite element modeling, landfill, municipal solid waste, waste stabilization
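The creep response of the single Kelvin–Voigt body underlying the settlement component can be sketched in a few lines. This is an illustrative simplification of the viscoelastic concept, not the authors' COMSOL implementation; the load, stiffness, and viscosity values below are hypothetical:

```python
import math

def kelvin_voigt_strain(sigma, E, eta, t):
    """Creep strain of a Kelvin-Voigt body under constant stress sigma:
    strain(t) = (sigma / E) * (1 - exp(-E * t / eta)).
    The strain grows toward the elastic limit sigma / E as t -> infinity."""
    return (sigma / E) * (1.0 - math.exp(-E * t / eta))

# Hypothetical parameters: 100 kPa overburden, E = 2000 kPa, eta = 1e7 kPa*day
strain_1yr = kelvin_voigt_strain(100.0, 2000.0, 1e7, 365.0)
```

The exponential term shows why early settlement is slow when biodegradation (and hence the effective viscosity) is delayed by cold waste temperatures.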
Procedia PDF Downloads 132
15650 A Post-Occupancy Evaluation of LEED-Certified Residential Communities Using Structural Equation Modeling
Authors: Mohsen Goodarzi, George Berghorn
Abstract:
Despite the rapid growth in the number of green building and community development projects, the long-term performance of these projects has not yet been sufficiently evaluated from the users' point of view. This is partially due to the lack of post-occupancy evaluation tools available for this type of project. In this study, a post-construction evaluation model is developed to evaluate the relationship between the perceived performance and the satisfaction of residents in LEED-certified residential buildings and communities. To develop this evaluation model, a primary five-factor model was developed based on existing models and residential satisfaction theories. Each factor of the model included several measures adopted from LEED certification systems such as LEED-BD+C New Construction, LEED-BD+C Multifamily Midrise, and LEED-ND, as well as the UC Berkeley Center for the Built Environment survey tool. The model included four predictor variables (factors): perceived building performance (8 measures), perceived infrastructure performance (9 measures), perceived neighborhood design (6 measures), and perceived economic performance (4 measures); and one dependent variable (factor), residential satisfaction (6 measures). An online survey was then conducted to collect data from residents of LEED-certified residential communities (n=192), and the validity of the model was tested through Confirmatory Factor Analysis (CFA). After modifying the CFA model, 26 of the initial 33 measures were retained and entered into a Structural Equation Model (SEM) to find the relationships between perceived building performance, infrastructure performance, neighborhood design, economic performance, and residential satisfaction.
The results of the SEM showed that perceived building performance was the most influential factor in determining residential satisfaction in LEED-certified communities, followed by perceived neighborhood design. On the other hand, perceived infrastructure performance and perceived economic performance did not show any significant relationship with residential satisfaction in these communities. This study can benefit green building researchers by providing a model for evaluating the long-term performance of these projects. It can also help green building practitioners determine priorities for future residential development projects.
Keywords: green building, residential satisfaction, perceived performance, confirmatory factor analysis, structural equation modeling
Procedia PDF Downloads 239
15649 3D Geomechanical Model the Best Solution of the 21st Century for Perforation's Problems
Authors: Luis Guiliana, Andrea Osorio
Abstract:
A lack of understanding of reservoir geomechanical conditions may cause operational problems that cost the industry billions of dollars per year. Drilling operations at the Ceuta Field, Area 2 South, Maracaibo Lake, have been very expensive due to problems associated with drilling. The principal objective of this investigation is to develop a 3D geomechanical model of this area in order to optimize future drilling in the field. For this purpose, a 1D geomechanical model was built first, following the Mechanical Earth Model (MEM) workflow, which consists of the following steps: 1) data auditing, 2) analysis of drilling events and the structural model, 3) mechanical stratigraphy, 4) overburden stress, 5) pore pressure, 6) rock mechanical properties, 7) horizontal stresses, 8) direction of the horizontal stresses, and 9) wellbore stability. The 3D MEM was developed from the geostatistical model of the Eocene C-SUP VLG-3676 reservoir and the 1D MEM, and the geomechanical grid was embedded with these data. The analysis showed that the problems in the examined wells were mainly due to wellbore stability issues. It was determined that the stress regime changes as the stratigraphic column deepens: normal to strike-slip in the Middle and Lower Miocene, and strike-slip to reverse in the Eocene. Accordingly, at the Eocene level, the most advantageous drilling direction is parallel to the maximum horizontal stress (157°). The 3D MEM provides a three-dimensional visualization of variations in rock mechanical properties, stresses, and operational windows (mud weights and pressures). This will facilitate the optimization of future drilling in the area, including zones without any geomechanical information.
Keywords: geomechanics, MEM, drilling, stress
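The overburden stress step of the MEM workflow (step 4) amounts to integrating a density log over depth. A minimal sketch of that integration, using a hypothetical two-layer density profile rather than the Ceuta Field logs, might look like:

```python
def overburden_stress(depths_m, densities_kgm3, g=9.81):
    """Vertical (overburden) stress in MPa at the deepest point, obtained by
    trapezoidal integration of a density log: sigma_v = integral(rho * g dz)."""
    sigma = 0.0  # Pa
    for (z0, rho0), (z1, rho1) in zip(
            zip(depths_m, densities_kgm3), zip(depths_m[1:], densities_kgm3[1:])):
        sigma += 0.5 * (rho0 + rho1) * g * (z1 - z0)  # trapezoid per interval
    return sigma / 1e6  # Pa -> MPa

# Hypothetical column: density increasing with compaction from 2100 to 2500 kg/m3
depths = [0.0, 1000.0, 3000.0]
rhos = [2100.0, 2300.0, 2500.0]
sv = overburden_stress(depths, rhos)  # ≈ 68.7 MPa
```

The same trapezoidal pass over a real wireline density log gives the overburden profile that the pore pressure and horizontal stress steps build on.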
Procedia PDF Downloads 273
15648 Simulation-Based Control Module for Offshore Single Point Mooring System
Authors: Daehyun Baek, Seungmin Lee, Minju Kim, Jangik Park, Hyeong-Soon Moon
Abstract:
A Single Point Mooring (SPM) is a mooring buoy facility installed off the coast near an oil and gas terminal that cannot berth FPSOs or large oil tankers with deep drafts due to geometrical limitations. Loading and unloading of crude oil and gas through a subsea pipeline can be carried out between the mooring buoy, ships, and onshore facilities. An SPM is a standalone offshore system that must withstand harsh marine conditions such as high winds and currents. It is therefore required to have high stability, reliability, and durability. It is also an integrated system consisting of power management, high-pressure valve control, sophisticated hardware/software, and a long-distance communication system. In order to secure the required functions of the SPM system, a simulation model for the integrated SPM system was developed using MATLAB Simulink and the Stateflow tool. The developed model comprises the hydraulic system configuration for opening and closing the Pipeline End Manifold (PLEM) valves and the control system logic. To verify the functions of the model, an integrated simulation model for the overall SPM system was also developed by considering the handshaking variables between the individual systems. In addition to the dynamic model, a self-diagnostic function to detect system failure was configured, which enables the SPM system itself to alert users once a failure signal arises. The SPM system can be controlled and monitored remotely by an HMI system, enabled by a communication environment built between the SPM system and the HMI system.
Keywords: HMI system, mooring buoy, Simulink simulation model, single point mooring, Stateflow
Procedia PDF Downloads 417
15647 Child Care Policy in Kazakhstan: A New Model
Authors: Dina Maratovna Aikenova
Abstract:
Child care policy must be a priority area for public authorities in any country. This study investigates child care policy in Kazakhstan in light of the current situation of children and the relevant laws. The results show that Kazakhstan's policy in this sphere needs a more systematic model, including state economic and social measures, parental involvement, and the role of non-governmental organizations.
Keywords: children, Kazakhstan, policy, vulnerability
Procedia PDF Downloads 484
15646 Preliminary Study of Human Reliability of Control in Case of Fire Based on the Decision Processes and Stress Model of Human in a Fire
Authors: Seung-Un Chae, Heung-Yul Kim, Sa-Kil Kim
Abstract:
This paper presents the findings of a preliminary study on human control performance in case of fire. The relationship between human control and human decision-making is studied through a model of human decision processes and stress in a fire. Human behavior shapes the decision process during a fire incident. The decision process comprises six individual perceptual stages: recognition, validation, definition, evaluation, commitment, and reassessment. Humans may then experience stress while seeking an optimal decision for their activity. This paper explores problems in human control processes and stresses in a catastrophic situation. The future approach will therefore focus on reducing stress and ambiguous, irrelevant information.
Keywords: human reliability, decision processes, stress model, fire
Procedia PDF Downloads 986
15645 Effect of Outliers in Assessing Significant Wave Heights Through a Time-Dependent GEV Model
Authors: F. Calderón-Vega, A. D. García-Soto, C. Mösso
Abstract:
Recorded significant wave heights sometimes exhibit large, uncommon values (outliers) that can be associated with extreme phenomena such as hurricanes and cold fronts. In this study, some extremely large wave heights recorded by NOAA buoys (National Data Buoy Center, noaa.gov) are used to investigate their effect on the prediction of future wave heights associated with given return periods. Extreme waves are predicted through a time-dependent model based on the generalized extreme value (GEV) distribution. It is found that the outliers do affect the estimated wave heights. It is concluded that a detailed inspection of outliers is warranted to determine whether they are real recorded values, since this will impact the definition of design wave heights for coastal protection purposes.
Keywords: GEV model, non-stationary, seasonality, outliers
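A stationary simplification of the return-level calculation can illustrate how a single outlier pulls predictions upward. The sketch below uses a method-of-moments Gumbel fit (the GEV with zero shape parameter) rather than the paper's time-dependent GEV model, and the annual-maximum wave heights are hypothetical:

```python
import math
import statistics

def gumbel_return_level(sample, return_period_years):
    """Fit a Gumbel distribution (GEV with zero shape) by the method of
    moments and return the value exceeded on average once per return period."""
    mean, std = statistics.fmean(sample), statistics.stdev(sample)
    scale = std * math.sqrt(6.0) / math.pi
    loc = mean - 0.5772156649 * scale            # Euler-Mascheroni constant
    p = 1.0 - 1.0 / return_period_years          # non-exceedance probability
    return loc - scale * math.log(-math.log(p))  # Gumbel quantile function

# Hypothetical annual-maximum significant wave heights (m); 7.9 m is the outlier
hs = [4.1, 5.3, 3.8, 6.2, 4.9, 5.7, 4.4, 7.9, 5.1, 4.6]
h50 = gumbel_return_level(hs, 50)  # 50-year design wave height
```

Refitting with and without the 7.9 m record shows directly how much a single suspect value moves the design wave height, which is the inspection the abstract recommends.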
Procedia PDF Downloads 195
15644 Exchange Rate Forecasting by Econometric Models
Authors: Zahid Ahmad, Nosheen Imran, Nauman Ali, Farah Amir
Abstract:
The objective of the study is to forecast the US dollar–Pakistani rupee exchange rate using time series models. For this purpose, daily US–Pakistan exchange rates for the period from January 1, 2007 to June 2, 2017 are employed. The data are divided into in-sample and out-of-sample sets, where the in-sample data are used to estimate the models and the out-of-sample data are used to evaluate the exchange rate forecasts. The ADF and PP tests are used to make the time series stationary. To forecast the exchange rate, ARIMA and GARCH models are applied. Among the different Autoregressive Integrated Moving Average (ARIMA) models, the best model is selected on the basis of model selection criteria. Due to volatility clustering and the ARCH effect, a GARCH(1,1) model is also applied. The results of the analysis showed that ARIMA(0,1,1) and GARCH(1,1) are the most suitable models to forecast the future exchange rate. Further, the GARCH(1,1) model captured the volatility, with a non-constant conditional variance in the exchange rate, and showed good forecasting performance. This study is useful for researchers, policymakers, and businesses for making decisions through accurate and timely forecasting of the exchange rate, and helps them in devising their policies.
Keywords: exchange rate, ARIMA, GARCH, PAK/USD
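The conditional variance recursion at the heart of GARCH(1,1) is compact enough to sketch directly. The parameter values and returns below are hypothetical, not the estimates from the study:

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) model:
    sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}.
    The recursion is seeded at the unconditional (long-run) variance."""
    long_run = omega / (1.0 - alpha - beta)  # requires alpha + beta < 1
    sigma2 = [long_run]
    for eps in returns[:-1]:
        sigma2.append(omega + alpha * eps ** 2 + beta * sigma2[-1])
    return sigma2

# Hypothetical daily log-returns (%) of an exchange rate; -1.2 is a shock
r = [0.1, -0.4, 0.9, -1.2, 0.3, 0.05, -0.2]
path = garch11_variance(r, omega=0.02, alpha=0.1, beta=0.85)
```

The alpha term makes the variance jump after the large return and the beta term makes that jump decay gradually, which is exactly the volatility clustering that motivates the GARCH(1,1) choice in the abstract.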
Procedia PDF Downloads 561
15643 Organizational Commitment in Islamic Boarding School: The Implementation of Organizational Behavior Integrative Model
Authors: Siswoyo Haryono
Abstract:
Purpose – The fundamental goal of this research is to see whether the integrative organizational behavior model can be used effectively in Islamic boarding schools. This paper also seeks to assess the effect of Islamic organizational culture, leadership, and spiritual intelligence on teachers' organizational commitment in Islamic boarding schools. The goal of the mediation analysis is to see whether the Islamic work ethic has a more significant effect on teachers' organizational commitment than the direct effects of Islamic organizational culture, leadership, and Islamic spiritual intelligence. Design/methodology/approach – A questionnaire survey was used to obtain data from teachers at Islamic boarding schools. This study used the AMOS technique for structural equation modeling to evaluate the expected direct effects; to test the hypothesized indirect effects, the Sobel test was employed. Findings – Islamic organizational culture, Islamic leadership, and Islamic spiritual intelligence significantly affect the Islamic work ethic, and the Islamic work ethic in turn has a significant impact on organizational commitment. The findings of the mediation study reveal that Islamic organizational culture, leadership, and spiritual intelligence influence organizational commitment through the Islamic work ethic. The total effect analysis shows that the most effective path to increasing teachers' organizational commitment is Islamic leadership – Islamic work ethic – organizational commitment. Originality/value – This study evaluates the Integrative Model of Organizational Behavior by Colquitt (2016) as applied in Islamic boarding schools. The model consists of contemporary leadership and individual characteristics as antecedents. The mediating variables of the model consist of individual mechanisms such as trust, justice, and ethics. Individual performance and organizational commitment are the model's outcomes.
These variables, however, do not represent the Islamic viewpoint as a whole. As a result, this study aims to assess the role of Islamic principles in the model. The study employs reliability and validity tests to obtain reliable and valid measures. The findings revealed that the evaluated model is proven to improve organizational commitment at Islamic boarding schools.
Keywords: Islamic leadership, Islamic spiritual intelligence, Islamic work ethic, organizational commitment, Islamic boarding school
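The Sobel test used for the indirect effects reduces to a single formula: the indirect effect a·b divided by its approximate standard error. A sketch with hypothetical path estimates (not the study's AMOS output):

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test statistic for the indirect effect a*b, where a is the
    predictor -> mediator path and b is the mediator -> outcome path:
    z = (a * b) / sqrt(b^2 * se_a^2 + a^2 * se_b^2)."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical paths: leadership -> work ethic (a), work ethic -> commitment (b)
z = sobel_z(a=0.45, se_a=0.10, b=0.38, se_b=0.09)
significant = abs(z) > 1.96  # two-tailed test at the 5% level
```

A |z| above 1.96 supports the mediation hypothesis that the abstract tests for each antecedent through the Islamic work ethic.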
Procedia PDF Downloads 161
15642 The Role of Financial Development and Institutional Quality in Promoting Sustainable Development through Tourism Management
Authors: Hashim Zameer
Abstract:
Effective tourism management plays a vital role in promoting sustainability and supporting ecosystems. A common principle practiced over the years is "first pollute, then clean," indicating that countries need financial resources to promote sustainability. Financial development and tourism management both seem very important for promoting sustainable development; however, without institutional support, it is very difficult to succeed. In this context, it is significant to explore how institutional quality, tourism development, and financial development could promote sustainable development. Past research has not explored the role of tourism development in sustainable development, and the roles of financial development, natural resources, and institutional quality in sustainable development have also been ignored. In this regard, this paper investigates the role of tourism development, natural resources, financial development, and institutional quality in sustainable development in China. The study used time-series data from 2000–2021 and employed a Bayesian linear regression model because it is suitable for small data sets. The robustness of the findings was checked using a quantile regression approach. The results reveal that an increase in tourism expenditure stimulates the economy, creates jobs, encourages cultural exchange, and supports sustainability initiatives. Moreover, financial development and institutional quality have a positive effect on sustainable development. However, reliance on natural resources can result in negative economic, social, and environmental outcomes, highlighting the need for resource diversification and management to reinforce sustainable development. These results highlight the significance of financial development, strong institutions, sustainable tourism, and careful utilization of natural resources for long-term sustainability.
The study holds vital insights for policy formulation to promote sustainable tourism.
Keywords: sustainability, tourism development, financial development, institutional quality
Procedia PDF Downloads 83
15641 The Best Prediction Data Mining Model for Breast Cancer Probability in Women Residents in Kabul
Authors: Mina Jafari, Kobra Hamraee, Saied Hossein Hosseini
Abstract:
The prediction of breast cancer is one of the challenges in medicine. In this paper, we collected 528 records of women living in Kabul, including demographic, lifestyle, diet, and pregnancy data. There are many classification algorithms for breast cancer prediction, and we tried to find the best model with the most accurate results and the lowest error rate. We evaluated several common supervised data mining algorithms to find the best model for predicting breast cancer among Afghan women living in Kabul, with the mammography result as the target variable. For evaluating these algorithms, we used cross-validation, which is a reliable method for measuring the performance of models. After comparing the error rates and accuracy of three models (Decision Tree, Naïve Bayes, and Rule Induction), the Decision Tree, with an accuracy of 94.06% and an error rate of 15%, was found to be the best model for predicting breast cancer based on the health care records.
Keywords: decision tree, breast cancer, probability, data mining
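The core step of the winning decision tree is the impurity-minimizing split. The sketch below implements a single-feature stump using Gini impurity; the feature values and labels are hypothetical toy data, not the Kabul records:

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum of squared class shares."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """One-feature decision stump: the threshold minimising weighted Gini
    impurity of the two resulting partitions (values <= thr vs. values > thr)."""
    best = (None, float("inf"))
    for thr in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= thr]
        right = [l for v, l in zip(values, labels) if v > thr]
        if not left or not right:
            continue  # a split must leave records on both sides
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (thr, score)
    return best

# Hypothetical feature (e.g., age) with a binary diagnosis label
ages = [25, 30, 35, 42, 48, 55, 60, 67]
label = [0, 0, 0, 0, 1, 1, 1, 1]
thr, impurity = best_split(ages, label)
```

A full decision tree applies this search recursively over all features; cross-validation then repeats the fit on held-out folds to give the error rates compared in the abstract.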
Procedia PDF Downloads 138
15640 The Patterns Designation by the Inspiration from Flower at Suan Sunandha Palace
Authors: Nawaporn Srisarankullawong
Abstract:
This research concerns creating designs inspired by the flowers once planted at Suan Sunandha Palace. The researcher investigated the history of Suan Sunandha Palace and the flowers planted in the palace's garden, in order to use this research to create new designs in the future. The objectives are as follows: 1. To study the shapes and patterns of the flowers in Suan Sunandha Palace, in order to select a few of them as models for creating new designs. 2. To create flower designs from the flowers in Suan Sunandha Palace by using current photographs of the flowers once planted inside the palace, and by using Adobe Illustrator and Adobe Photoshop to create the patterns and models. Results: the researcher selected three types of flowers as pattern models: Allamanda, Orchid, and Flamingo Plant. The details of the flowers were reduced to show simplicity, and pattern models were created for further use. The three flowers thus yielded three pattern models, which were developed into six patterns using universal artistic techniques, so the patterns created are modern and can be used for further decoration.
Keywords: pattern design, Suan Sunandha Palace, flower patterns, visual arts and design
Procedia PDF Downloads 374
15639 Study on Flexible Diaphragm In-Plane Model of Irregular Multi-Storey Industrial Plant
Authors: Cheng-Hao Jiang, Mu-Xuan Tao
Abstract:
The rigid diaphragm model may cause errors in the calculation of internal forces because it neglects the in-plane deformation of the diaphragm. This paper therefore studies the effects of different in-plane diaphragm models (an in-plane rigid model and an in-plane flexible model) on the seismic performance of structures. Taking an actual industrial plant as an example, the seismic performance of the structure is predicted using different floor diaphragm models, and the analysis errors caused by the different in-plane models, including deformation errors and internal force errors, are calculated. Furthermore, the influence of the aspect ratio on the analysis errors is investigated. Finally, the rationality of the design code is evaluated by assessing the analysis errors of structural models whose floors were classified as rigid according to the code's criterion. It is found that different floor models may cause great differences in the distribution of structural internal forces, and that the current code may underestimate the influence of the floor in-plane effect.
Keywords: industrial plant, diaphragm, calculation error, code rationality
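The contrast between the two diaphragm idealizations can be shown with a toy storey: a rigid diaphragm distributes storey shear to the frames in proportion to their lateral stiffness, while a fully flexible one distributes it by tributary width. All numbers below are hypothetical, chosen only to illustrate why the two models disagree on internal forces:

```python
def rigid_distribution(total_shear, stiffnesses):
    """Rigid diaphragm: storey shear shared in proportion to frame stiffness."""
    k_sum = sum(stiffnesses)
    return [total_shear * k / k_sum for k in stiffnesses]

def flexible_distribution(total_shear, tributary_widths):
    """Fully flexible diaphragm: shear shared by tributary width instead."""
    w_sum = sum(tributary_widths)
    return [total_shear * w / w_sum for w in tributary_widths]

# Hypothetical three-frame storey: a stiff central core between two soft frames
V = 1000.0  # kN storey shear
rigid = rigid_distribution(V, [10.0, 80.0, 10.0])
flex = flexible_distribution(V, [7.5, 15.0, 7.5])
```

The stiff core attracts 80% of the shear under the rigid assumption but only 50% under the flexible one; an actual floor falls between the two, which is the error the paper quantifies.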
Procedia PDF Downloads 140
15638 Using Jumping Particle Swarm Optimization for Optimal Operation of Pump in Water Distribution Networks
Authors: R. Rajabpour, N. Talebbeydokhti, M. H. Ahmadi
Abstract:
Carefully scheduling the operation of pumps can result in significant energy savings. Schedules can be defined either implicitly, in terms of other elements of the network such as tank levels, or explicitly, by specifying the times during which each pump is on or off. In this study, two new explicit representations based on time-controlled triggers were analyzed, in which the maximum number of pump switches is established beforehand and the schedule may contain fewer switches than the maximum. The optimal operation of the pumping stations was determined using a Jumping Particle Swarm Optimization (JPSO) algorithm to achieve the minimum energy cost. The model integrates the JPSO optimizer and the EPANET hydraulic network solver. The optimal pump operation schedule of the VanZyl water distribution system was determined using the proposed model and compared with those from genetic and ant colony algorithms. The results indicate that the proposed model utilizing the JPSO algorithm outperformed the others and is a versatile management model for the operation of real-world water distribution systems.
Keywords: JPSO, operation, optimization, water distribution system
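A much-simplified jumping PSO for a binary on/off pump schedule can convey the idea: particles copy bits from their personal-best and global-best schedules and occasionally "jump" by flipping a random bit. This sketch optimizes a toy 24-hour tariff with a crude feasibility penalty in place of the study's EPANET hydraulic solver; all parameters are hypothetical:

```python
import random

random.seed(42)

def cost(schedule, tariff):
    """Energy cost of a 24-hour on/off schedule, plus a heavy penalty when the
    pump runs fewer than 8 hours (a toy stand-in for hydraulic feasibility)."""
    penalty = 100.0 * max(0, 8 - sum(schedule))
    return sum(s * t for s, t in zip(schedule, tariff)) + penalty

def jpso(tariff, n_particles=20, iters=200, jump_p=0.05):
    """Toy jumping PSO over binary schedules."""
    particles = [[random.randint(0, 1) for _ in range(24)] for _ in range(n_particles)]
    pbest = [p[:] for p in particles]
    gbest = min(particles, key=lambda p: cost(p, tariff))[:]
    for _ in range(iters):
        for i, p in enumerate(particles):
            for j in range(24):
                r = random.random()
                if r < jump_p:
                    p[j] ^= 1            # jump: flip the bit at random
                elif r < 0.5:
                    p[j] = pbest[i][j]   # attraction to personal best
                else:
                    p[j] = gbest[j]      # attraction to global best
            if cost(p, tariff) < cost(pbest[i], tariff):
                pbest[i] = p[:]
            if cost(p, tariff) < cost(gbest, tariff):
                gbest = p[:]
    return gbest

# Hypothetical tariff: cheap at night (hours 0-7), expensive during the day
tariff = [1.0] * 8 + [3.0] * 16
best = jpso(tariff)
```

The optimizer is pushed toward running the required 8 hours entirely in the cheap night window; in the real model, EPANET supplies the feasibility check and energy use that the penalty term only caricatures here.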
Procedia PDF Downloads 245
15637 Chemometric Regression Analysis of Radical Scavenging Ability of Kombucha Fermented Kefir-Like Products
Authors: Strahinja Kovacevic, Milica Karadzic Banjac, Jasmina Vitas, Stefan Vukmanovic, Radomir Malbasa, Lidija Jevric, Sanja Podunavac-Kuzmanovic
Abstract:
The present study deals with chemometric regression analysis of the quality parameters and radical scavenging ability of kombucha-fermented kefir-like products obtained with winter savory (WS), peppermint (P), stinging nettle (SN), and wild thyme (WT) tea kombucha inoculums. Each analyzed sample was described by milk fat content (MF, %), total unsaturated fatty acids content (TUFA, %), monounsaturated fatty acids content (MUFA, %), polyunsaturated fatty acids content (PUFA, %), radical scavenging activity against DPPH and hydroxyl radicals (RSA-DPPH, % and RSA-OH, %), and pH values measured every hour from the start to the end of fermentation. The aim of the regression analysis was to establish chemometric models that can predict the radical scavenging ability (RSA-DPPH, % and RSA-OH, %) of the samples by correlating it with the MF, TUFA, MUFA, and PUFA content and the pH value at the beginning, middle, and end of the fermentation process, which lasted between 11 and 17 hours, until a pH value of 4.5 was reached. The analysis was carried out by applying univariate linear regression (ULR) and multiple linear regression (MLR) methods to the raw data and to data standardized by the min-max normalization method. The obtained models were characterized by very limited predictive power (poor cross-validation parameters) and weak statistical characteristics. Based on the conducted analysis, it can be concluded that the resulting radical scavenging ability cannot be precisely predicted only on the basis of MF, TUFA, MUFA, and PUFA content and pH values; other quality parameters should therefore be considered and included in further modeling. This study is based upon work from the project "Kombucha beverages production using alternative substrates from the territory of the Autonomous Province of Vojvodina" (142-451-2400/2019-03), supported by the Provincial Secretariat for Higher Education and Scientific Research of AP Vojvodina.
Keywords: chemometrics, regression analysis, kombucha, quality control
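The min-max normalization and a univariate least-squares fit used in the ULR step are simple to sketch. The PUFA and RSA values below are hypothetical, not the study's measurements:

```python
def min_max(xs):
    """Min-max normalisation of a variable to the [0, 1] range."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def ulr(x, y):
    """Univariate least-squares fit y = b0 + b1 * x; returns (b0, b1)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    return my - b1 * mx, b1

# Hypothetical PUFA content (%) vs. DPPH radical scavenging activity (%)
pufa = [2.1, 2.8, 3.4, 3.9, 4.5]
rsa = [41.0, 44.5, 47.0, 49.8, 52.9]
b0, b1 = ulr(min_max(pufa), rsa)
```

MLR extends the same normal-equation idea to several predictors at once, and cross-validation (refitting with each sample left out in turn) supplies the predictive-power diagnostics that the abstract reports as poor.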
Procedia PDF Downloads 142
15636 The Effect of Artificial Intelligence on Physical Education Analysis and Sports Science
Authors: Peter Adly Hamdy Fahmy
Abstract:
The aim of the study was to examine the effects of a physical education program on student learning by comparing the teaching of personal and social responsibility (TPSR) combined with a sport education model against TPSR combined with a traditional teaching model, with learning outcomes including sports self-efficacy, athletic performance, enthusiasm for sport, group cohesion, sense of responsibility, and game performance. The participants were 3 secondary school physical education teachers and 6 physical education classes, with 133 student participants: 75 in the experimental group and 58 in the control group; each teacher taught both an experimental class and a control class for 16 weeks. The research methods included surveys, interviews, and focus group meetings. Research instruments included the Personal and Social Responsibility Questionnaire, Sports Enthusiasm Scale, Group Cohesion Scale, Sports Self-Efficacy Scale, and Game Performance Assessment Tool. Multivariate analyses of covariance and repeated-measures ANOVA were used to examine differences in student learning outcomes between TPSR combined with the sport education model and TPSR combined with the traditional teaching model. The research findings are as follows: 1) The sport education model with TPSR can improve students' learning outcomes, including sports self-efficacy, game performance, sports enthusiasm, team cohesion, group awareness, and responsibility. 2) The traditional teaching model with TPSR can improve student learning outcomes, including sports self-efficacy, responsibility, and game performance. 3) The sport education model with TPSR can improve learning outcomes more than the traditional teaching model with TPSR, including sports self-efficacy, sports enthusiasm, responsibility, and game performance.
4) Based on qualitative data on teachers' and students' learning experiences, the sport education model with TPSR significantly improves learning motivation, group interaction, and sense of play. The results suggest that the sport education model with TPSR could further improve learning outcomes in physical education programs. On the other hand, the hybrid TPSR–sport education and TPSR–traditional teaching curriculum projects are good curriculum projects for moral character education that can be used in school physical education.
Keywords: approach competencies, physical education, teachers employment, graduates, physical education and sport sciences, SWOT analysis, character education, sport season, game performance, sport competence
Procedia PDF Downloads 59