Search results for: factor models
10384 Relationship of Epidermal Growth Factor Receptor Gene Mutations and Serum Levels of Ligands in Non-Small Cell Lung Carcinoma Patients
Authors: Abdolamir Allameh, Seyyed Mortaza Haghgoo, Adnan Khosravi, Esmaeil Mortaz, Mihan Pourabdollah-Toutkaboni, Sharareh Seifi
Abstract:
Non-Small Cell Lung Carcinoma (NSCLC) is associated with a number of gene mutations in the epidermal growth factor receptor (EGFR). The prognostic significance of mutations in exons 19 and 21, together with serum levels of EGFR, amphiregulin (AR), and transforming growth factor-alpha (TGF-α), is implicated in diagnosis and treatment. The aim of this study was to examine the relationship of EGFR mutations in selected exons with the expression of relevant ligands in sera of NSCLC patients. For this, a group of NSCLC patients (n=98) referred to the hospital for lung surgery, with a mean age of 59±10.5 (M/F: 75/23), was enrolled. A blood specimen was collected from each patient. In addition, formalin-fixed, paraffin-embedded tissues were processed for DNA extraction. Gene mutations in exons 19 and 21 were detected by direct sequencing following DNA amplification by polymerase chain reaction (PCR). Serum levels of EGFR, AR, and TGF-α were measured by ELISA. The results of our study show that EGFR mutations were present in 37% of Iranian NSCLC patients. The most frequently identified mutations were deletions in exon 19 (72.2%) and substitutions in exon 21 (27.8%). The most frequently identified alteration, considered a rare mutation, was the E872K mutation in exon 21, found in 90% (9 out of 10) of cases. The EGFR mutation detected in exon 21 was significantly (P<0.05) correlated with serum levels of EGFR and TGF-α. Furthermore, increased serum AR (>3 pg/ml) and TGF-α (>10.5 pg/ml) were associated with shorter overall survival (P<0.05). The results clearly showed a close relationship between EGFR mutations and serum EGFR and serum TGF-α. Increased serum EGFR was associated with TGF-α and AR and linked to poor prognosis of NSCLC. These findings have implications for clinical decision-making related to EGFR tyrosine kinase inhibitors (TKIs).
Keywords: lung cancer, Iranian patients, epidermal growth factor, mutation, prognosis
Procedia PDF Downloads 80
10383 Structure of Turbulence Flow in the Wire-Wrapped Fuel Assemblies of BREST-OD-300
Authors: Dmitry V. Fomichev, Vladimir I. Solonin
Abstract:
In this paper, an experimental and numerical study of the hydrodynamic characteristics of the air coolant flow in a test wire-wrapped assembly is presented. The test assembly has 37 rods, which are geometrically similar to the real fuel pins of the BREST-OD-300 fuel assemblies. An open-loop air test facility installed at the “Nuclear Power Plants and Installations” department of BMSTU was used to obtain the experimental data. The measured distribution of static pressure along the height of the near-wall region of the test assembly, as well as the velocity and temperature distributions of the coolant flow in the test sections, provides new insight into how the turbulent flow structure forms in wire-wrapped fuel assemblies. Numerical simulations of the turbulent flow were performed using ANSYS Fluent 14.5. Several turbulence models were considered, including the standard and RNG k-ε models and the k-ω SST model. The simulation results based on these turbulence models agree well with the experimental data and support a detailed analysis of the flow characteristics.
Keywords: wire-spaced fuel assembly, turbulent flow structure, computational fluid dynamics
Procedia PDF Downloads 459
10382 Development and Validation of the University of Mindanao Needs Assessment Scale (UMNAS) for College Students
Authors: Ryan Dale B. Elnar
Abstract:
This study developed a multidimensional needs assessment scale for college students called the University of Mindanao Needs Assessment Scale (UMNAS). Although there are context-specific instruments measuring the needs of clinical and non-clinical samples, the literature reveals no standardized scale measuring the needs of college students; thus, a four-phase item development process was initiated to support its content validity. A pyramid model of college needs comprising seven broad facets (spiritual-moral, intrapersonal, socio-personal, psycho-emotional, cognitive, physical, and sexual) was deconstructed through an FGD sample to support the literature review. Using various construct validity procedures, the model was further tested on a total of 881 Filipino college students. The results of the study provide evidence for the reliability and validity of the UMNAS. The reliability indices range from .929 to .933. Exploratory and confirmatory factor analyses revealed a one-factor, six-dimensional instrument for measuring the needs of college students. Using multivariate regression analysis, year level and course were found to predict students' needs. Content analysis attested to the usefulness of the instrument in diagnosing students' personal and academic issues and concerns in conjunction with other measures. The norming process included 1728 students from the different colleges of the University of Mindanao. Further validation is recommended to establish a national norm for the instrument.
Keywords: needs assessment scale, validity, factor analysis, college students
Procedia PDF Downloads 441
10381 Silver Grating for Strong and Reproducible SERS Response
Authors: Y. Kalachyova, O. Lyutakov, V. Svorcik
Abstract:
One of the most significant obstacles to the application of surface-enhanced Raman spectroscopy (SERS) is the poor reproducibility of SERS-active substrates: SERS intensity can vary from one substrate to another, and even across a single substrate surface. High enhancement of the near-field intensity is the key factor for realizing ultrasensitive SERS. A SERS substrate can be prepared by introducing a highly ordered metal array, where light focusing is achieved through the excitation of surface plasmon-polaritons (SPPs). In this work, we report the preparation of silver nanostructures with plasmon absorption peaks tuned by the metal arrangement. Excimer laser modification of poly(methyl methacrylate) followed by silver evaporation is proposed as an effective route to reproducible and efficient SPP-based SERS substrates. Theoretical and experimental studies were performed to optimize the structure parameters for effective SPP excitation. It was found that a narrow range of grating periodicity and metal thickness exists in which SPPs can be most efficiently excited. Although a SERS response was almost always achieved, the enhancement factor was found to vary strongly with the efficiency of SPP excitation. When the structure parameters were set to the optimum for SPP excitation, the SERS enhancement factor increased up to fourfold. Theoretical and experimental investigation of SPP excitation on the two-dimensional periodic silver array was performed with the aim of making the SERS response as high as possible.
Keywords: grating, nanostructures, plasmon-polaritons, SERS
Procedia PDF Downloads 268
10380 Advances in Design Decision Support Tools for Early-Stage Energy-Efficient Architectural Design: A Review
Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari
Abstract:
The main driving forces behind the movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation through prescriptive methods or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as in their usability by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, efforts have been made in recent years to develop design decision support systems, which are often based on artificial intelligence. The literature lists numerous requirements and steps for designing and developing a decision support system (DSS) suited to the early stages of energy-efficient architectural design, consisting of combinations of different methods in an integrated package. While various review studies have been conducted on each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and on their integration towards specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or the introduction of a comprehensive framework for developing models compliant with the IDP. Information resources such as Science Direct and Google Scholar were searched using specific keywords, and the results are divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed. The research shows a movement towards Multi-Level of Development (MOD) models, well combined with the early stages of integrated design (schematic design and design development), which are heuristic, hybrid, and meta-simulation-based and rely on big real-world data (such as building energy management system data or web data). Obtaining, using, and combining these data with simulation data to create models that handle higher uncertainty, are more dynamic and more sensitive to context and culture, and can generate economical, energy-efficient design scenarios using local data (in harmony with circular economy principles) are important research areas in this field. The results of this study form a roadmap for researchers and developers of these tools.
Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency
Procedia PDF Downloads 162
10379 Production of Biodiesel Using Brine Waste as a Heterogeneous Catalyst
Authors: Hilary Rutto, Linda Sibali
Abstract:
In these modern times, we constantly search for new and innovative technologies to lift the burden of our extreme energy demand. The overall purpose of biofuel production research is to find an alternative energy source to replace the conventional use of fossil fuels such as liquid petroleum products. This experiment looks at the basis of biodiesel production with regard to alternative catalysts that can be used to produce biodiesel. The key factors addressed during the experiments are temperature variation, catalyst addition to the overall reaction, methanol-to-oil ratio, and the impact of agitation on the reaction. Brine samples sourced from nearby plants were evaluated and tested thoroughly, and the key characteristics of these brine samples were analysed to verify their use as a possible catalyst in biodiesel production. A one-factor-at-a-time experimental approach was used, and the recycling and reuse characteristics of the heterogeneous catalyst were evaluated.
Keywords: brine sludge, heterogeneous catalyst, biodiesel, one factor
Procedia PDF Downloads 171
10378 Spatial Differentiation of Elderly Care Facilities in Mountainous Cities: A Case Study of Chongqing
Abstract:
In this study, a web crawler was used to collect POI sample data from the 38 districts and counties of Chongqing in 2022, and ArcGIS was used for coordinate and projection conversion and for data visualization. Kernel density analysis and spatial correlation analysis were used to explore the spatial distribution characteristics of elderly care facilities in Chongqing, and K-means cluster analysis was carried out with GeoDa to study the spatial concentration of elderly care resources in the 38 districts and counties. Finally, the driving forces of the spatial differentiation of elderly care facilities across the districts and counties of Chongqing were studied using the geographical detector method. The results show that: (1) In terms of spatial distribution structure, the distribution of elderly care facilities in Chongqing is unbalanced, showing a pattern of 'large dispersion and small agglomeration', with a prominent asymmetric pattern of 'dense in the west, sparse in the east; dense in the north, sparse in the south'. (2) In terms of the spatial matching between elderly care resources and the elderly population, there is weak coordination between the input of elderly care resources and the distribution of the elderly population at the county level in Chongqing. (3) The geographical detector results show that the dominant single-factor influences are the size of the elderly population, public financial revenue, and district/county GDP. The influence of the factors on the spatial distribution of elderly care facilities is not a simple superposition; instead, it shows nonlinear enhancement or two-factor enhancement effects. It is therefore necessary to strengthen the synergy between pairs of factors and promote the synergistic effect of multiple factors.
Keywords: aging, elderly care facilities, spatial differentiation, geographical detector, driving force analysis, mountain city
Procedia PDF Downloads 38
10377 An Optimized Method for Calculating the Linear and Nonlinear Response of SDOF System Subjected to an Arbitrary Base Excitation
Authors: Hossein Kabir, Mojtaba Sadeghi
Abstract:
Finding the linear and nonlinear responses of a typical single-degree-of-freedom (SDOF) system is generally regarded as a time-consuming process. This study proposes modifications to the well-known Newmark method in order to make it more time-efficient and more accurate by updating the system in its nonlinear state. The efficacy of the presented method is demonstrated by applying three base excitations (the Tabas 1978, El Centro 1940, and Mexico City/SCT 1985 earthquakes) to an SDOF system to compute the strength reduction factor, yield pseudo-acceleration, and ductility factor.
Keywords: single-degree-of-freedom system (SDOF), linear acceleration method, nonlinear excited system, equivalent displacement method, equivalent energy method
Procedia PDF Downloads 320
10376 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection
Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew
Abstract:
Detecting email spam is a very important task in the era of digital technology, one that needs effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable, and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique provides interpretable explanations for specific email classifications to help users understand the model's decision-making process. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows the creation of simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out the key elements that drive classification results, thus reducing the opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that improves users' understanding of categorization decisions. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our testing results show that our model surpasses all other models, achieving an accuracy of 96.59% and a precision of 99.12%.
Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression
Procedia PDF Downloads 47
10375 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices
Authors: S. Srinivasan, E. Cretu
Abstract:
The information-flow (e.g., block-diagram or signal-flow-graph) paradigm for the design and simulation of microelectromechanical (MEMS)-based systems makes it easy to model MEMS devices using causal transfer functions and to interface them with electronic subsystems for fast system-level exploration of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal-flow modeling. Moreover, models of fundamental components acting as building blocks (e.g., gap-varying MEMS capacitor structures) depend not only on the component but also on the specific excitation mode (e.g., voltage or charge actuation). In contrast, the energy-flow modeling paradigm in terms of generalized across-through variables offers an acausal perspective, separating clearly the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work to develop a MEMS library containing parameterized fundamental building blocks (area- and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation, and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and geometrical nonlinearities and can be used for both small- and large-signal analyses, including the numerical computation of pull-in voltages (stability loss). The Simscape behavioral modeling language was used for the implementation of reduced-order macro models, which have the advantage of a seamless interface with Simulink blocks for creating hybrid information/energy-flow system models. Test-bench simulations of the library models compare favorably both with analytical results and with more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronics integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.
Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape
Procedia PDF Downloads 136
10374 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. Among the available SFS models, the deconvolution model has proven successful in LES of engineering and geophysical flows. Nevertheless, a thorough investigation of how sub-filter-scale dynamics and filter anisotropy affect SFS modeling accuracy is still lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES. First, the accuracy of direct deconvolution models (DDM) is evaluated with respect to sub-filter-scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of the FGR becomes evident as it plays a critical role in controlling errors for precise SFS stress prediction. When the FGR is set to 1, the DDM models struggle to faithfully reconstruct the SFS stress due to inadequate resolution of the SFS dynamics. Prediction accuracy improves when the FGR is set to 2, leading to accurate reconstruction of the SFS stress, except for cases involving the Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Furthermore, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. Using the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined in LES filters. The results emphasize the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. Notably high correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models; however, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
Procedia PDF Downloads 76
10373 Engagement Analysis Using DAiSEE Dataset
Authors: Naman Solanki, Souraj Mondal
Abstract:
With the world moving towards online communication, the volume of stored video has exploded in the past few years. Consequently, it has become crucial to analyse participants' engagement levels in online communication videos. Engagement prediction of people in videos can be useful in many domains, like education, client meetings, dating, etc. Video-level or frame-level prediction of engagement for a user involves the development of robust models that can capture facial micro-expressions efficiently. For the development of an engagement prediction model, it is necessary to have a widely accepted standard dataset for engagement analysis. DAiSEE is one such dataset; it consists of in-the-wild data and has gold-standard annotations for engagement prediction. Earlier research using the DAiSEE dataset involved training and testing standard models like CNN-based models, but the results were not satisfactory by industry standards. In this paper, a multi-level classification approach is introduced to create a more robust model for engagement analysis using the DAiSEE dataset. This approach recorded testing accuracies of 0.638, 0.7728, 0.8195, and 0.866 for predicting boredom level, engagement level, confusion level, and frustration level, respectively.
Keywords: computer vision, engagement prediction, deep learning, multi-level classification
Procedia PDF Downloads 114
10372 The Effects of Transformational Leadership on Process Innovation through Knowledge Sharing
Authors: Sawsan J. Al-Husseini, Talib A. Dosa
Abstract:
Transformational leadership has been identified as the most important factor affecting innovation and knowledge sharing; it leads to increased goal-directed behavior exhibited by followers and thus to enhanced performance and innovation for the organization. However, there is a lack of models linking transformational leadership, knowledge sharing, and process innovation within higher education (HE) institutions in developing countries in general, and in Iraq in particular. This research aims to examine the mediating role of knowledge sharing in the relationship between transformational leadership and process innovation. A quantitative approach was taken, and 254 usable questionnaires were collected from public HE institutions in Iraq. Structural equation modelling with AMOS 22 was used to analyze the causal relationships among the factors. The research found that knowledge sharing plays a pivotal role in the relationship between transformational leadership and process innovation, and that transformational leadership is well suited to an educational context, promoting knowledge sharing activities and influencing process innovation in public HE in Iraq. The research developed guidelines for researchers as well as leaders and provided evidence to support the use of transformational leadership to increase process innovation within the HE environment in developing countries, particularly Iraq.
Keywords: transformational leadership, knowledge sharing, process innovation, structural equation modelling, developing countries
Procedia PDF Downloads 336
10371 Classroom Incivility Behaviours among Medical Students: A Comparative Study in Pakistan
Authors: Manal Rauf
Abstract:
Medical colleges produce the trained medical practitioners who serve in the public and private sectors. A prime responsibility of the teaching faculty is to inculcate the required work ethic in students by serving as role models for them. It is an observed fact that classroom incivility behaviours create friction in achieving these targets. The present study aimed to identify the classroom incivility behaviours observed by teachers and students of public and private medical colleges as per Glasser's Choice Theory, to compare the two sectors, and to investigate the strategies adopted by teachers of both sectors to control undesired classroom behaviours. Findings revealed a significant difference between teacher and student incivility behaviours. Public sector teachers identified survival as a strong factor behind uncivil behaviours, whereas private sector teachers considered power the main antecedent of incivility. Teachers of both sectors need to use verbal as well as non-verbal immediacy to achieve a healthy learning environment.
Keywords: classroom incivility behaviour, Glasser choice theory, Mehrabian immediacy theory
Procedia PDF Downloads 239
10370 User Acceptance Criteria for Digital Libraries
Authors: Yu-Ming Wang, Jia-Hong Jian
Abstract:
The Internet and digital publication technologies have brought dramatic impacts on how people collect, organize, disseminate, access, store, and use information. More and more governments, schools, and organizations spend huge sums to develop digital libraries. A digital library can be regarded as a web extension of a traditional physical library. Through digital libraries, people can search diverse publications, locate knowledge resources, and borrow or buy publications. People can gain knowledge, and students or employees can complete their reports, by using digital libraries. Since considerable funds and energy have been invested in implementing digital libraries, it is important to understand the evaluative criteria from the users' viewpoint in order to enhance user acceptance. This study develops a list of user acceptance criteria for digital libraries. An initial criteria list was developed based on previously validated instruments related to digital libraries. Data were collected on user experiences of digital libraries. Exploratory and confirmatory factor analyses were adopted to purify the criteria list, and its reliability and validity were tested. After validating the criteria list, a user survey was conducted to collect the comparative importance of the criteria. The analytic hierarchy process (AHP) method was then utilized to derive the importance of each criterion. The results of this study contribute to an understanding of the criteria, and their relative importance, that users apply when evaluating digital libraries.
Keywords: digital library, user acceptance, analytic hierarchy process, factor analysis
Procedia PDF Downloads 253
10369 Prediction of Soil Liquefaction by Using UBC3D-PLM Model in PLAXIS
Authors: A. Daftari, W. Kudla
Abstract:
Liquefaction is a phenomenon in which the strength and stiffness of a soil are reduced by earthquake shaking or other rapid cyclic loading. Liquefaction and related phenomena have been responsible for huge amounts of damage in historical earthquakes around the world. Modelling of soil behaviour is the main step in the soil liquefaction prediction process. Nowadays, several constitutive models for sand have been presented; nevertheless, only some of them can capture this mechanism. One of the most useful models in this respect is the UBCSAND model. In this research, the capability of this model is assessed using the PLAXIS software. Real data from the 1987 Superstition Hills earthquake in the Imperial Valley were used. The simulation results of the UBC3D-PLM model show trends resembling the recorded response.
Keywords: liquefaction, PLAXIS, pore-water pressure, UBC3D-PLM
Procedia PDF Downloads 310
10368 Building Information Management in Context of Urban Spaces, Analysis of Current Use and Possibilities
Authors: Lucie Jirotková, Daniel Macek, Andrea Palazzo, Veronika Malinová
Abstract:
Currently, the implementation of 3D models in the construction industry is gaining popularity. Countries around the world are developing their own modelling standards and implementing the use of 3D models in their permitting processes. Another theme that needs to be addressed is public building spaces and their subsequent maintenance, for which the BIM methodology lends itself directly. A significant benefit of implementing Building Information Management is the transfer of information. The 3D model contains not only the spatial representation of element shapes but also various parameters assigned to the individual elements, which are easily traceable, mainly because they are all stored in one place in the BIM model. However, it is important to keep the data in the models up to date to achieve usability of the model throughout the life cycle of the building. It is now becoming standard practice to use BIM models in the construction of buildings; however, the buildings' surroundings are very often neglected. Especially in large-scale development projects, the public space around buildings is often handed over to municipalities, which obtain ownership and are in charge of its maintenance. A 3D model of the building surroundings would include both the above-ground visible elements of the development and the underground parts, such as the technological facilities of water features, electricity lines for public lighting, etc. The paper shows the possibilities of such a model as an information basis for the handover of premises, subsequent maintenance, and decision-making. The attributes and spatial representation of the individual elements make the model a reliable foundation for the creation of 'Smart Cities'. The paper analyses the current use of the BIM methodology and presents state-of-the-art possibilities for its development.
Keywords: BIM model, urban space, BIM methodology, facility management
Procedia PDF Downloads 124
10367 Evaluating Robustness of Conceptual Rainfall-Runoff Models under Climate Variability in Northern Tunisia
Authors: H. Dakhlaoui, D. Ruelland, Y. Tramblay, Z. Bargaoui
Abstract:
To evaluate the impact of climate change on water resources at the catchment scale, not only future climate projections are necessary, but also robust rainfall-runoff models that remain fairly reliable under changing climate conditions. This study aims at assessing the robustness of three conceptual rainfall-runoff models (GR4j, HBV, and IHACRES) on five basins in Northern Tunisia under long-term climate variability. Their robustness was evaluated using a differential split-sample test based on a climate classification of the observation period with respect to both precipitation and temperature conditions. The studied catchments are situated in a region where climate change is likely to have significant impacts on runoff, and they already suffer from scarcity of water resources. They cover the main hydrographical basins of Northern Tunisia (High Medjerda, Zouaraâ, Ichkeul, and Cap Bon), which produce the majority of surface water resources in Tunisia. The streamflow regime of the basins can be considered natural, since these basins are located upstream from storage dams and in areas where withdrawals are negligible. A 30-year common period (1970‒2000) was considered in order to capture a wide spread of hydro-climatic conditions. Calibration was based on the Kling-Gupta efficiency (KGE) criterion, while model transferability was evaluated using the Nash-Sutcliffe efficiency criterion and the volume error. The three hydrological models showed similar behaviour under climate variability. The models simulate the runoff pattern better when transferred toward wetter periods than toward drier periods. The limits of transferability lie beyond a -20% change in precipitation and a +1.5 °C change in temperature relative to the calibration period. The deterioration of model robustness could in part be explained by the climate dependency of some parameters.
Keywords: rainfall-runoff modelling, hydro-climate variability, model robustness, uncertainty, Tunisia
Procedia PDF Downloads 292
10366 Investigation of Geothermal Gradient of the Niger Delta from Recent Studies
Authors: Adedapo Jepson Olumide, Kurowska Ewa, K. Schoeneich, Ikpokonte A. Enoch
Abstract:
In this paper, subsurface temperatures measured from continuous temperature logs were used to determine the geothermal gradient of the Niger Delta sedimentary basin. The measured temperatures were corrected to true subsurface temperatures by applying the American Association of Petroleum Geologists (AAPG) correction factor, the borehole temperature correction factor with La Max's correction factor, and the Zeta Utilities borehole correction factor. The geothermal gradient in this basin ranges from 1.2 °C to 7.56 °C per 100 m. Six geothermal anomaly centres were observed at depth in the southern parts of the Abakaliki anticlinorium around the Onitsha, Ihiala, and Umuaha areas and named A1 to A6, while two more centres appeared at depths of 3500 m and 4000 m, named A7 and A8, respectively. Anomaly A1 marks the southern end of the Abakaliki anticlinorium and extends southwards; anomalies A2 to A5 are associated with a NW-SE structural alignment of the Calabar hinge line, with structures marking the edge of the Niger Delta basin against the basement block of the Oban massif. Anomaly A6 is located in the south-eastern offshore part of the basin, while A7 and A8 are located in the south-western offshore part. At the average exploratory depth of 3500 m, the geothermal gradients for anomalies A1 to A8 are 6.5, 1.75, 7.5, 1.25, 6.5, 5.5, 6, and 2.25 °C per 100 m, respectively. The anomaly A8 area may yield higher thermal values at depths greater than 3500 m. These results show that the anomaly areas A1, A3, A5, A6, and A7 are potentially prospective and explorable for geothermal energy using abandoned oil wells in the study area. Anomalies A1, A3, A5, and A6 occur in areas where drilled boreholes were not exploitable for oil and gas, while the remaining areas, where wells are exploitable, show no geothermal anomaly. Geothermal energy is environmentally friendly, clean, and renewable.
Keywords: temperature logs, geothermal gradient anomalies, alternative energy, Niger delta basin
Procedia PDF Downloads 279
10365 Count of Trees in East Africa with Deep Learning
Authors: Nubwimana Rachel, Mugabowindekwe Maurice
Abstract:
Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques; deep learning, however, makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models such as YOLOv7, SSD, and UNET, along with generative adversarial networks to generate synthetic samples for training, and other augmentation techniques, including random resized crop, AutoAugment, and linear contrast enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, UNET demonstrated the best performance, with a validation loss of 0.1211, a validation accuracy of 0.9509, and a validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the UNET model, and represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.
Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization
Procedia PDF Downloads 71
10364 The Classical Conditioning Effect of Animated Spokes-Characters
Authors: Chia-Ching Tsai, Ting-Hsiu Chen
Abstract:
This paper adopted a 2×2 factorial design. One factor was the experimental versus control condition; the other was the type of animated spokes-character, with two levels: expert and attractive. In the study, we used the control versus experimental conditioning and the type of animated spokes-character as independent variables, and brand attitude as the dependent variable, to examine the conditioning effect of the type of animated spokes-character on brand attitude. A total of 123 subjects participated in the experiment. The results showed that in the conditioning group the animated spokes-character had a significantly superior product endorsement effect compared with the non-conditioning group, while the type of animated spokes-character had no significant impact on brand attitude.
Keywords: classical conditioning, animated spokes-character, brand attitude, factorial design
Procedia PDF Downloads 273
10363 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators
Authors: Guenther Schuh, Michael Riesener, Frederic Diels
Abstract:
Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. Often, some of the functional requirements remain unknown until late stages of product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. First approaches exist for combined, hybrid models comprising deterministic-normative methods like the Stage-Gate process and empirical-adaptive development methods like SCRUM on the project management level. However, the question of which development scopes are preferably realized with empirical-adaptive versus deterministic-normative approaches has remained almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors like a company's technology ability, the prototype manufacturability, and the potential solution space, as well as external factors like market accuracy, relevance, and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First, each internal and external factor is rated in terms of its importance for the overall development task. Second, each requirement is evaluated, for every internal and external factor, according to its suitability for empirical-adaptive development. Finally, the totals of the internal and external sides are combined into the Agile-Indicator. Thus, the Agile-Indicator constitutes a company-specific and application-related criterion by which development scopes can be allocated to empirical-adaptive or deterministic-normative approaches. In a last step, this indicator is used for a specific clustering of development scopes by applying the fuzzy c-means (FCM) clustering algorithm, as sketched below. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that the market leaves uncertain into empirical-adaptive or deterministic-normative development scopes.
Keywords: agile, highly iterative development, agile-indicator, product development
Procedia PDF Downloads 246
10362 Analyzing Safety Incidents Using the Fatigue Risk Index Calculator as an Indicator of Fatigue within a UK Rail Franchise
Authors: Michael Scott Evans, Andrew Smith
Abstract:
The feeling of fatigue at work could potentially have devastating consequences. The aim of this study was to investigate whether the Fatigue Risk Index (FRI) calculator, the well-established objective indicator of fatigue used by the rail industry, is an effective indicator of safety incidents in which fatigue could have been a contributing factor. The study received ethics approval from Cardiff University's Ethics Committee (EC.16.06.14.4547). A total of 901 safety incidents were recorded from a single British rail franchise between 1 June 2010 and 31 December 2016 in the Safety Management Information System (SMIS). The safety incident types identified as ones in which fatigue could have been a contributing factor were: Signal Passed at Danger (SPAD), Train Protection & Warning System (TPWS) activation, Automatic Warning System (AWS) slow to cancel, failed to call, and station overrun. For the 901 recorded safety incidents, the scheduling system CrewPlan was used to extract the Fatigue Index (FI) score and Risk Index (RI) score of all train drivers on the day of the safety incident. Only the working rosters of 64.2% (N = 578) of the drivers (550 men and 28 women), ranging in age from 24 to 65 years (M = 47.13, SD = 7.30), were accessible for analysis. Analysis of all 578 train drivers who were involved in safety incidents revealed that 99.8% (N = 577) of Fatigue Index (FI) scores fell within or below the identified guideline threshold of 45, and 97.9% (N = 566) of Risk Index (RI) scores fell below the 1.6 threshold. Their scores therefore represent good practice within the rail industry. These findings seem to indicate that the FRI calculator used by this British rail franchise was not an effective indicator of safety incidents in which fatigue could have been a contributing factor, as only 0.2% of FI scores and 2.1% of RI scores in such incidents exceeded the thresholds. Further research is needed to determine whether other contributing factors could provide a better indication of why such a large proportion of train drivers who were involved in safety incidents, in which fatigue could have been a contributing factor, had such low FI and RI scores.
Keywords: fatigue risk index calculator, objective indicator of fatigue, rail industry, safety incident
Procedia PDF Downloads 181
10361 Integration of Technology into Nursing Education: A Collaboration between College of Nursing and University Research Center
Authors: Lori Lioce, Gary Maddux, Norven Goddard, Ishella Fogle, Bernard Schroer
Abstract:
This paper presents the integration of technologies into nursing education. The collaborative effort includes the College of Nursing (CoN) at the University of Alabama in Huntsville (UAH) and the UAH Systems Management and Production Center (SMAP). The faculty at the CoN conducts needs assessments to identify education and training requirements. A team of CoN faculty and SMAP engineers then prioritizes these requirements and establishes improvement/development teams. The development teams consist of nurses, who evaluate the models and provide feedback, and of undergraduate engineering students and their senior staff mentors from SMAP. The SMAP engineering staff develops and creates the physical models using 3D printing, silicone molds, and specialized molding mixtures and techniques. The collaboration has focused on developing teaching and training (clinical) simulators. In addition, the onset of the Covid-19 pandemic has intensified this relationship, as 3D modeling shifted to supplying personal protective equipment (PPE) to local health care providers. A secondary collaboration has been introducing students to clinical benchmarking through the UAH Center for Management and Economic Research. As a result of these successful collaborations, the Model Exchange & Development of Nursing & Engineering Technology (MEDNET) has been established. MEDNET seeks to extend and expand the linkage between engineering and nursing to K-12 schools, technical schools, and medical facilities in the region, giving them access to the resources available from the CoN and SMAP. For example, stereolithography (STL) files of the 3D printed models, along with the specifications to fabricate the models, are available on the MEDNET website. Ten 3D printed models have been developed and are currently in use by the CoN. The following additional training simulators are currently under development: 1) suture pads, 2) gelatin wound models, and 3) printed wound tattoos. Specification sheets have been written for these simulators that describe their use, fabrication procedures, and parts lists; these specifications are available for viewing and download on MEDNET. Included in this paper are 1) descriptions of the CoN, SMAP, and MEDNET, 2) the collaborative process used in product improvement/development, 3) 3D printed models of training and teaching simulators, 4) training simulators under development with specification sheets, 5) family care practice benchmarking, 6) integrating the simulators into the nursing curriculum, 7) utilizing MEDNET as a pandemic response, and 8) conclusions and lessons learned.
Keywords: 3D printing, nursing education, simulation, trainers
Procedia PDF Downloads 122
10360 Deep Learning for Recommender System: Principles, Methods and Evaluation
Authors: Basiliyos Tilahun Betru, Charles Awono Onana, Bernabe Batchakui
Abstract:
Recommender systems have become increasingly popular in recent years and are utilized in numerous areas. Nowadays, many web services provide a wealth of information to users, and recommender systems have been developed as a critical element of these web applications to predict user preferences and provide meaningful recommendations. Given the advantage of deep learning in modeling different types of data, and because user preferences change dynamically, a deep model can better understand user demand and further improve the quality of recommendations. In this paper, deep neural network models for recommender systems are evaluated. Most deep neural network models for recommender systems focus on the classical collaborative filtering user-item setting. Deep learning models have demonstrated that high-level features of complex data can be learned instead of relying on metadata, which can significantly improve the accuracy of recommendations. Even though deep learning has had a great impact in various areas, its application to recommender systems has not been fully exploited, and many improvements can still be made in both collaborative and content-based approaches while considering different contextual factors.
Keywords: big data, decision making, deep learning, recommender system
Procedia PDF Downloads 478
10359 Models, Resources and Activities of Project Scheduling Problems
Authors: Jorge A. Ruiz-Vanoye, Ocotlán Díaz-Parra, Alejandro Fuentes-Penna, José J. Hernández-Flores, Edith Olaco Garcia
Abstract:
The Project Scheduling Problem (PSP) is a generic name given to a whole class of problems in which the best timing, resources, and costs for project scheduling must be determined. The PSP is an application area related to project management. This paper aims to be a guide to understanding the PSP by presenting a survey of its general parameters: the resources (those elements that carry out the activities of a project) and the activities (sets of operations or tasks of a person or organization), together with the mathematical models of the main PSP variants and the algorithms used to solve them. Project scheduling is an important task in project management. The PSP has attracted researchers from the automotive industry, steel manufacturing, medical research, pharmaceutical research, telecommunications, the aviation industry, software development, manufacturing management, innovation and technology management, the construction industry, government project management, financial services, machine scheduling, transportation management, and other fields. Project managers need to finish a project with the minimum cost and the maximum quality.
Keywords: PSP, combinatorial optimization problems, project management, manufacturing management, technology management
Procedia PDF Downloads 418
10358 Critical Factors Affecting the Implementation of Total Quality Management in the Construction Industry in U. A. E.
Authors: Firas Mohamad Al-Sabek
Abstract:
The purpose of this paper is to examine the most critical and important factors affecting the implementation of Total Quality Management (TQM) in the construction industry in the United Arab Emirates. It also examines the project outcome most affected by implementing TQM. A framework is also proposed based on the literature. The method used in this paper is a quantitative study: a survey of 15 questions was created and distributed to a sample of 60 respondents in a construction company in Abu Dhabi to examine the most critical factor affecting the implementation of TQM, as well as the project outcome most affected by implementing TQM. The survey showed that management commitment is the most important factor in implementing TQM in a construction company. It also showed that project cost is the outcome most affected by the implementation of TQM. Management commitment is very important for implementing TQM in any company; if management loses interest in quality, then everyone in the organization will do so. The success of TQM depends mostly on the top of the pyramid. Moreover, cost is reduced and money is saved when the project team implements TQM, whereas if no quality measures are present within the team, the project will suffer commercial failure. Based on the literature, more factors can be examined and added to the model. In addition, more construction companies could be surveyed in order to obtain more accurate results, and this study could be conducted outside the United Arab Emirates for further enhancement.
Keywords: construction project, total quality management, management commitment, cost, theoretical framework
Procedia PDF Downloads 426
10357 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress
Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin
Abstract:
Stress causes deleterious effects at the physical, psychological, and organizational levels, which highlights the need for effective coping strategies to deal with it. Several coping models exist, but they do not integrate the different strategies in a coherent way, nor do they take into account recent research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from a review of the scientific literature on coping, from a qualitative study carried out among workers with low or high levels of stress, and from an analysis of clinical cases. The model allows one to understand under what circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes specific strategies for controllable situations (Modification of the Situation and Resignation-Disempowerment), specific strategies for non-controllable situations (Acceptance and Stubborn Relentlessness), as well as so-called general strategies (Wellbeing and Avoidance). This study presents the process of developing and validating an instrument to measure coping strategies based on this model. An initial pool of items was generated from the conceptual definitions, and three expert judges validated the content. Of these, 18 items were selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used to validate the questionnaire. Concerning the reliability of the instrument, the indices observed for inter-rater agreement (Krippendorff's alpha) and internal consistency (Cronbach's alpha) are satisfactory. To evaluate construct validity, a confirmatory factor analysis using MPlus supports the existence of a six-factor model; the results of this analysis also suggest that this configuration is superior to alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrate the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess strategies for coping with stress and thus prevent mental health issues.
Keywords: acceptance, coping strategies, stress, validation process
Procedia PDF Downloads 339
10356 Aerodynamic Heating Analysis of Hypersonic Flow over Blunt-Nosed Bodies Using Computational Fluid Dynamics
Authors: Aakash Chhunchha, Assma Begum
Abstract:
The qualitative aspects of hypersonic flow over a range of blunt bodies have been extensively analyzed in the past. It is well known that the curvature of a body's geometry in the sonic region predominantly dictates the bow shock shape and its standoff distance from the body, while the surface pressure distribution depends both on the sonic region and on the local body shape. The present study extends this work by analyzing the hypersonic flow characteristics over several blunt-nosed bodies using modern computational fluid dynamics (CFD) tools to determine the shock shape and its effect on the heat flux around the body. Four blunt-nosed models with cylindrical afterbodies were analyzed for a flow at a Mach number of 10, corresponding to standard atmospheric conditions at an altitude of 50 km. The nose radii of curvature of the models range from a hemispherical nose to a flat nose. The numerical models and the supplementary convergence techniques implemented for the CFD analysis are thoroughly described. The flow contours are presented, highlighting the key characteristics of the shock wave shape, the shock standoff distance, and the shift of the sonic point on the shock. The variation of heat flux due to the different shock detachments of the various models is comprehensively discussed. It is observed that the blunter the nose, the farther the shock stands from the body and, consequently, the lower the surface heating at the nose. The results obtained from the CFD analyses are compared with approximate theoretical engineering correlations, and overall, a satisfactory agreement is observed.
Keywords: aero-thermodynamics, blunt-nosed bodies, computational fluid dynamics (CFD), hypersonic flow
Procedia PDF Downloads 143
10355 Urinalysis by Surface-Enhanced Raman Spectroscopy on Gold Nanoparticles for Different Diseases
Authors: Leonardo C. Pacheco-Londoño, Nataly J. Galan-Freyle, Lisandro Pacheco-Lugo, Antonio Acosta, Elkin Navarro, Gustavo Aroca-Martínez, Karin Rondón-Payares, Samuel P. Hernández-Rivera
Abstract:
At the Life Science Research Center (LSRC) of the University Simon Bolivar, one of our focuses is the diagnosis and prognosis of different diseases, and we have been implementing the use of gold nanoparticles (Au-NPs) for various biomedical applications. In this case, Au-NPs were used for surface-enhanced Raman spectroscopy (SERS) in the diagnostics of different diseases, such as lupus nephritis (LN), hypertension (H), preeclampsia (PC), and others. The following methodology is proposed for the diagnosis of each disease. First, good SERS signals of the different metabolites were obtained from mixtures of urine samples and Au-NPs. Second, PLS-DA models based on the SERS spectra were able to differentiate between sick and healthy patients for each disease. Finally, the sensitivity and specificity of the different models were determined to be on the order of 0.9. In addition, a second methodology was developed using machine learning models trained on the data from all the diseases; as a result, a discriminant spectral map of the diseases was generated. These studies were made possible by joint research between two university research centers and two health sector entities, and the patient samples were treated with ethical rigor and patient consent.
Keywords: SERS, Raman, PLS-DA, diseases
Procedia PDF Downloads 141