Search results for: fixed smeared crack model

8004 Role of ICT and Wage Inequality in Organization

Authors: Shoji Katagiri

Abstract:

This study deals with wage inequality in organizations and shows the relationship between ICT and wages in an organization. To do so, we incorporate ICT factors into our model of the organization: the efficiencies of Enterprise Resource Planning (ERP), Computer Assisted Design/Computer Assisted Manufacturing (CAD/CAM), and NETWORK. Improvements in these ICT factors decrease the learning cost of solving problems pertaining to the hierarchy in an organization. An improvement in NETWORK increases wage inequality among workers and decreases it among managers and entrepreneurs. Improvements in CAD/CAM and ERP increase wage inequality among all agents and partially increase it between the agents in the hierarchy.

Keywords: endogenous economic growth, ICT, inequality, capital accumulation

Procedia PDF Downloads 236
8003 Predicting Daily Patient Hospital Visits Using Machine Learning

Authors: Shreya Goyal

Abstract:

The study aims to build user-friendly software to understand patient arrival patterns and to predict the number of patients who will visit a particular health facility over a given period using a machine learning algorithm, the Support Vector Machine (SVM). Accurate prediction of patient arrivals allows hospitals to operate more effectively, providing timely and efficient care while optimizing resources and improving the patient experience. It allows for better allocation of staff, equipment, and other resources: if a surge in patients is projected, additional staff or resources can be allocated to handle the influx, preventing bottlenecks or delays in care. Understanding arrival patterns can also help streamline processes to minimize waiting times and ensure timely access to care for patients in need. Another advantage of this software is that it supports adherence to strict data protection regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States: because the software reads data locally from the machine, the hospital does not have to share the data with any third party or upload it to the cloud. The data needs to be arranged in a particular format, and the software will read it and provide meaningful output. Keeping patient data within the hospital's local systems reduces the risk of unauthorized access or breaches associated with transmitting data over networks or storing it on external servers, helping maintain the confidentiality and integrity of sensitive patient information. Historical patient data is used in this study. The input variables used to train the model include patient age, time of day, day of the week, seasonal variations, and local events. The algorithm uses a supervised learning method to optimize the objective function: it stores the value of each local minimum found per iteration and, at the end, compares all the local minima to find the global minimum. The strength of this study is the transfer function used to calculate the number of patients. The model has an output accuracy of >95%. The method proposed in this study could be used for better planning of personnel and medical resources.
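
As a rough illustration of the approach described, the following sketch trains a support vector regressor on the kinds of input variables the abstract lists. The feature set, synthetic data, and hyperparameters are assumptions for illustration; they are not the study's actual dataset or tuning.

```python
# A minimal sketch of the described approach using scikit-learn's SVR.
# The feature names and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
# Assumed input variables: mean patient age, hour of day, day of week,
# season index, and a local-event flag (per the abstract).
X = np.column_stack([
    rng.uniform(20, 80, n),   # mean patient age
    rng.integers(0, 24, n),   # hour of day
    rng.integers(0, 7, n),    # day of week
    rng.integers(0, 4, n),    # season
    rng.integers(0, 2, n),    # local event flag
])
y = 50 + 0.3 * X[:, 0] + 5 * X[:, 4] + rng.normal(0, 5, n)  # daily visits

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:400], y[:400])
print("Predicted visits:", model.predict(X[400:405]).round())
```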

Keywords: machine learning, SVM, HIPAA, data

Procedia PDF Downloads 47
8002 Developing a Model for the Lexical Analysis of Key Works of Children's Literature

Authors: Leigha Inman

Abstract:

One of the most cutting-edge interdisciplinary topics in the social sciences is the application of insights from the humanities to traditionally social-scientific disciplines such as education studies. This paper proposes such a topic. It has often been observed that children enjoy literature, and the role of reading in the development of reading ability is an important area of research. However, the role of vocabulary in reading development has long been neglected. This paper reports an investigation into the number of words found in key works of children's literature and attempts to correlate that figure with the years elapsed since publication of each work. Pedagogical implications are discussed.

Keywords: educational pedagogy, young learners, vocabulary teaching, reading development

Procedia PDF Downloads 100
8001 Challenges, Practices, and Opportunities of Knowledge Management in Industrial Research Institutes: Lessons Learned from Flanders Make

Authors: Zhenmin Tao, Jasper De Smet, Koen Laurijssen, Jeroen Stuyts, Sonja Sioncke

Abstract:

Today, the quality of knowledge management (KM) has become one of the underpinning factors in the success of an organization, as it determines the effectiveness with which the organization's knowledge is capitalized. Overall, KM in an organization consists of five aspects: (knowledge) creation, validation, presentation, distribution, and application. KM in research institutes is considered a cornerstone, as their activities cover all five aspects. Furthermore, KM in a research institute helps the steering committee envision the future roadmap, identify knowledge gaps, and make decisions on future research directions. KM is even more challenging in industrial research institutes. From a technical perspective, technology advancement in the past decades calls for a combination of breadth and depth in expertise, which poses challenges in talent acquisition and, therefore, knowledge creation. From a regulatory perspective, the strict intellectual property protection required by industry collaborators and/or the contractual agreements made with funding authorities form extra barriers to knowledge validation, presentation, and distribution. From a management perspective, seamless KM activities are only guaranteed by interdisciplinary talents that combine technical background knowledge, management skills, and leadership, let alone international vision. From a financial perspective, the long feedback period of new knowledge, together with massive upfront investment costs and low reusability of fixed assets, leads to a low return on research capital (RORC) that jeopardizes KM practice. In this study, we address the challenges, practices, and opportunities of KM in Flanders Make, a leading European research institute specialized in the manufacturing industry. In particular, the analyses encompass an internal KM project involving functions ranging from management to technical domain experts. This wide range of functions provides comprehensive empirical evidence on the challenges and practices with respect to the abovementioned KM aspects. We ground our analysis in the critical dimensions of KM: individuals, socio-organizational processes, and technology. The analyses proceed in three steps. First, we lay the foundation and define the environment of this study by outlining the KM roles played by the different functions in Flanders Make. Second, we zoom in on the CoreLab MotionS, where the KM project is located; given the technical domains covered by MotionS products, the KM challenges are addressed with respect to the five KM aspects and the three critical dimensions. Third, by detailing the objectives, practices, results, and limitations of the MotionS KM project, we derive the practices and opportunities from the execution of KM with respect to the challenges addressed in the second step. The results of this study are twofold. First, a KM framework that consolidates past knowledge is developed; a library based on this framework can therefore 1) provide an overview of past research output, 2) accelerate ongoing research activities, and 3) inform future research projects. Second, the challenges in KM at both the individual level (actions) and the socio-organizational level (e.g., interactions between individuals) are identified. On this basis, suggestions and guidelines for KM in the context of industrial research institutes are provided. Finally, the results of this study are reflected against the findings in the existing literature.

Keywords: technical knowledge management framework, industrial research institutes, individual knowledge management, socio-organizational knowledge management

Procedia PDF Downloads 89
8000 Modeling the Current and Future Distribution of Anthus Pratensis under Climate Change

Authors: Zahira Belkacemi

Abstract:

One of the most important tools in conservation biology is information on the geographic distribution of species and the variables determining those patterns. In this study, we used maximum-entropy niche modeling (Maxent) to predict the current and future distribution of Anthus pratensis using climatic variables. The results showed that the species would not be highly affected by climate change in shifting its distribution; however, the results should be refined by taking into account other predictors. The NATURA 2000 protected sites are projected to be 42% efficient in protecting the species.

Keywords: anthus pratensis, climate change, Europe, species distribution model

Procedia PDF Downloads 111
7999 Petrogenetic Model of Formation of Orthoclase Gabbro of the Dzirula Crystalline Massif, the Caucasus

Authors: David Shengelia, Tamara Tsutsunava, Manana Togonidze, Giorgi Chichinadze, Giorgi Beridze

Abstract:

The orthoclase gabbro intrusive is exposed in the eastern part of the Dzirula crystalline massif of the Central Transcaucasian microcontinent. It intruded the Baikalian quartz-diorite gneisses as a stock-like body. The intrusive is characterized by heterogeneity of rock composition: variability of mineral content and irregular distribution of rock-forming minerals. The rocks are represented by pyroxenites, gabbro-pyroxenites, and gabbros of different composition: K-feldspar, pyroxene-hornblende, and biotite-bearing varieties. Scientific views on the genesis and age of the orthoclase gabbro intrusive differ considerably. Based on long-term petrogeochemical and geochronological investigations of this intrusive of such extraordinary composition, the authors came to the following conclusions. According to geological and geophysical data, horizontal tectonic layering of the Earth's crust of the Central Transcaucasian microcontinent took place during the Saurian orogeny, and it is precisely this fact that explains the formation of the orthoclase gabbro intrusive. During the tectonic doubling of the crust of this microcontinent, thick tectonic nappes of mafic and sialic layers overlapped the sialic basement (the 'inversion' layer). The initial magma of the intrusive was of high-temperature basite-ultrabasite composition, whose crystallization products are pyroxenites and gabbro-pyroxenites. Petrochemical data attest to the magma's formation in the upper mantle and partially in the 'crustal asthenolayer'. Then a newly formed, overheated dry magma with phenocrysts of clinopyroxene and basic plagioclase intruded into the 'inversion' layer. From the new medium it was enriched in volatile components, causing selective melting and, as a result, the formation of leucocratic quartz-feldspar material. At the same time, intensive transformation of pyroxene to hornblende was going on in the basic magma. The basic magma partially mixed with the newly formed acid magma. These different magmas intruded first into the allochthonous basite layer, without significantly transforming it, and then into the upper sialic layer, where they crystallized at a depth of 7-10 km. By petrochemical data, the newly formed leucocratic granite magma belongs to the S-type granites, while the above-mentioned mixed magma belongs to the H (hybrid) type. During the final stage of magmatic processes, the gabbroic rocks were impregnated with high-temperature feldspar-bearing material, forming anorthoclase or orthoclase. Thus, the so-called 'orthoclase gabbro' includes rocks of three genetic groups: 1. the protolith of the gabbroic intrusive; 2. a hybrid rock, K-feldspar gabbro; and 3. a leucocratic quartz-feldspar-bearing rock. Petrochemical and geochemical data obtained from the hybrid gabbro and from the intrusive protolith differ from each other. To identify the petrogenetic model of the orthoclase gabbro intrusive's formation, LA-ICP-MS U-Pb zircon dating was conducted on all three genetic types of gabbro. The zircon age of the protolith (mean 221.4±1.9 Ma) and of the hybrid K-feldspar gabbro (mean 221.9±2.2 Ma) records the crystallization time of the intrusive, whereas the zircon age of the quartz-feldspar-bearing rocks (mean 323±2.9 Ma), as well as the inherited ages (323±9, 329±8.3, 332±10, and 335±11 Ma) of the hybrid K-feldspar gabbro, corresponds to the formation age of the Late Variscan granitoids widespread in the Dzirula crystalline massif.

Keywords: The Caucasus, isotope dating, orthoclase-bearing gabbro, petrogenetic model

Procedia PDF Downloads 326
7998 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

Authors: Libor Zachoval, Daire O Broin, Oisin Cawley

Abstract:

E-learning platforms, such as Blackboard, have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms, which could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), many additional types of data can be captured, which opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome, for they can hinder the knowledge development of a learner. Behaviours that hinder knowledge development also raise ambiguity about a learner's knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing, as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to model the knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course, for example to the course content (certain sections of learning material may be shown not to help many learners master the intended learning outcomes) or the course design (such as the type and duration of feedback).
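
The three behaviours lend themselves to simple rule-based detection over logged attempts. The sketch below is a hedged illustration: the flattened record layout is an assumption, since real xAPI statements would be retrieved from an LRS and aggregated per learner first.

```python
# A hedged sketch of flagging the three behaviours from flattened
# attempt records; the record layout is an assumption for illustration.
def flag_behaviours(attempts):
    """attempts: chronological list of test attempts, each a dict with
    'passed' (bool) and 'answers' mapping question_id -> list of the
    option ids the learner tried, in order."""
    flags = set()
    for attempt in attempts:
        # Behaviour 1: trying each option of a multiple-choice question
        # (assume 4 options per question here).
        if any(len(set(tried)) >= 4 for tried in attempt["answers"].values()):
            flags.add("tried_every_option")
        # Behaviour 2: the same single option selected for every question.
        finals = [tried[-1] for tried in attempt["answers"].values()]
        if len(finals) > 1 and len(set(finals)) == 1:
            flags.add("same_option_for_all")
    # Behaviour 3: repeating the test at least 4 times before passing,
    # instead of revisiting the learning material.
    fails_before_pass = next(
        (i for i, a in enumerate(attempts) if a["passed"]), len(attempts))
    if fails_before_pass >= 4:
        flags.add("repeated_tests_before_pass")
    return flags

attempts = [{"passed": False, "answers": {"q1": ["a", "b", "c", "d"]}}] * 4 \
         + [{"passed": True, "answers": {"q1": ["c"]}}]
print(flag_behaviours(attempts))
```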

Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI

Procedia PDF Downloads 96
7997 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study presents a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, using a comprehensive dataset spanning January 1, 2015, to December 31, 2023. This era, marked by sizable volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate key macroeconomic indicators, including interest rates, inflation rates, GDP growth, and unemployment rates, into our model. The GCN learns the relational patterns among the financial instruments, which are represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling the model to grasp the complex network of influences governing market movements. Complementing this, the LSTM is trained on sequences of the spatial-temporal representation learned by the GCN, enriched with historical price and volume data, letting it capture and predict temporal market trends accurately. In a comprehensive evaluation across the stock market and cryptocurrency datasets, the GCN-LSTM algorithm showed superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day price movements. The RMSE was 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, on directional market movements the model achieved an accuracy of 78%, significantly outperforming the benchmark models, which averaged 65%; this degree of accuracy is instrumental for strategies that predict the direction of price moves. This study demonstrates the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to improve investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
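
A minimal sketch of such a GCN-LSTM fusion is shown below, with the graph convolution written directly as A_hat @ X @ W rather than via a graph library. All dimensions, the toy adjacency matrix, and the prediction head are illustrative assumptions, not the paper's architecture.

```python
# A minimal PyTorch sketch of a GCN-LSTM fusion; sizes and data are toy
# assumptions, not the paper's actual architecture or dataset.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (T, N, F) node features per time step;
        # a_hat: (N, N) normalized adjacency with self-loops.
        return torch.relu(self.lin(a_hat @ x))

class GCNLSTM(nn.Module):
    def __init__(self, n_feats, hidden=32):
        super().__init__()
        self.gcn = GCNLayer(n_feats, hidden)
        self.lstm = nn.LSTM(hidden, hidden)   # (seq, batch, feature)
        self.head = nn.Linear(hidden, 1)      # next-day return per node

    def forward(self, x, a_hat):
        h = self.gcn(x, a_hat)                # (T, N, hidden)
        out, _ = self.lstm(h)                 # temporal modelling per node
        return self.head(out[-1])             # (N, 1)

# Toy usage: 5 assets, 4 features (open, close, volume, macro), 30 days.
N, F, T = 5, 4, 30
a = torch.eye(N) + 0.2 * torch.rand(N, N)    # assumed co-movement graph
a_hat = a / a.sum(dim=1, keepdim=True)       # simple row normalization
x = torch.randn(T, N, F)
print(GCNLSTM(F)(x, a_hat).shape)            # torch.Size([5, 1])
```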

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 26
7996 Fusion of MOLA-based DEMs and HiRISE Images for Large-Scale Mars Mapping

Authors: Ahmed F. Elaksher, Islam Omar

Abstract:

In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images. The MOLA data were interpolated using the kriging technique. Corresponding tie points were then digitized from both datasets and employed to co-register the two datasets using GIS analysis tools. Different transformation models, including the affine and projective models, were used with different sets and distributions of tie points. Additionally, we evaluated the use of the MOLA elevations in co-registering the MOLA and HiRISE datasets. The planimetric RMSEs achieved for each model are reported. The results suggest the use of 3D-to-2D transformation models.
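
As an illustration of one of the transformation models compared, the sketch below fits a 2D affine transform to digitized tie points by least squares and reports a planimetric RMSE. The tie-point coordinates are made up for the example.

```python
# A small sketch of estimating a 2D affine transform from tie points by
# least squares; the tie-point values below are illustrative only.
import numpy as np

def fit_affine(src, dst):
    """src, dst: (n, 2) arrays of tie points. Returns the 2x3 matrix A
    such that dst ~ [x, y, 1] @ A.T, solved by least squares."""
    n = src.shape[0]
    G = np.hstack([src, np.ones((n, 1))])        # (n, 3) design matrix
    A, *_ = np.linalg.lstsq(G, dst, rcond=None)  # (3, 2) solution
    return A.T                                   # (2, 3)

src = np.array([[10.0, 20.0], [200.0, 30.0], [50.0, 180.0], [220.0, 210.0]])
dst = src * 1.01 + np.array([5.0, -3.0])         # synthetic shift and scale
A = fit_affine(src, dst)
resid = dst - np.hstack([src, np.ones((4, 1))]) @ A.T
print("planimetric RMSE:", np.sqrt((resid ** 2).mean()))
```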

Keywords: photogrammetry, Mars, MOLA, HiRISE

Procedia PDF Downloads 52
7995 Response of a Bridge Crane during an Earthquake

Authors: F. Fekak, A. Gravouil, M. Brun, B. Depale

Abstract:

During an earthquake, a bridge crane may be subjected to multiple impacts between crane wheels and rail. In order to model such phenomena, a time-history dynamic analysis with a multi-scale approach is performed. The high frequency aspect of the impacts between wheels and rails is taken into account by a Lagrange explicit event-capturing algorithm based on a velocity-impulse formulation to resolve contacts and impacts. An implicit temporal scheme is used for the rest of the structure. The numerical coupling between the implicit and the explicit schemes is achieved with a heterogeneous asynchronous time-integrator.

Keywords: bridge crane, earthquake, dynamic analysis, explicit, implicit, impact

Procedia PDF Downloads 278
7994 Blood Glucose Measurement and Analysis: Methodology

Authors: I. M. Abd Rahim, H. Abdul Rahim, R. Ghazali

Abstract:

Numerous non-invasive blood glucose measurement techniques have been developed by researchers, and near-infrared (NIR) spectroscopy is currently among the most promising. However, there is some disagreement on the optimal wavelength range to use as the reference for the glucose substance in blood. This paper focuses on the experimental data collection technique and on the method used to analyze the data gained from the experiment. The selection of a suitable linear or non-linear model structure is essential in a prediction system, as the system developed needs to be conceivably accurate.
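
The contrast between linear and non-linear model structures can be illustrated with a small sketch like the following, which cross-validates both on synthetic absorbance features. The wavelength count, data, and models are assumptions; the paper's own measurement protocol and model selection are not reproduced here.

```python
# A hedged sketch contrasting a linear and a non-linear model structure
# for glucose prediction from NIR absorbance features (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))                 # absorbance at 6 wavelengths
y = 90 + 8 * X[:, 0] + 3 * X[:, 1] ** 2 + rng.normal(0, 2, 120)  # mg/dL

for name, model in [("linear", LinearRegression()),
                    ("non-linear (SVR)", SVR(kernel="rbf", C=50))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:18s} mean CV R^2 = {r2:.3f}")
```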

Keywords: linear, near-infrared (NIR), non-invasive, non-linear, prediction system

Procedia PDF Downloads 434
7993 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection

Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément

Abstract:

The need for single-cell detection and analysis techniques has increased in the past decades because the heterogeneity of individual living cells increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for the sensitive detection of target analytes and for monitoring their variation fall into two main types. The first is based on the identification of molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the second is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example, the dielectrophoresis technique, microfluidic micropost-based chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, a new technique is presented based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false-positive signal arising from the downward pressure that all (including non-target) cells exert on the aptamers, but also to stabilize the aptamer in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, an LOD of 13 cells (with 5.4 μL of cell suspension) was estimated. The nanosupported cell technology using redox-labeled aptasensors has since been pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area, and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling over millions of electrodes, pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with an LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to the challenge of large-scale implementation. Here, the introduced nanopillar array technology, combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM), is perfectly suited for such implementation. By combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.

Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars

Procedia PDF Downloads 78
7992 Heat and Mass Transfer of an Oscillating Flow in a Porous Channel with Chemical Reaction

Authors: Zahra Neffah, Henda Kahalerras

Abstract:

A numerical study is conducted of a parallel-plate porous channel subjected to an oscillating flow with an exothermic chemical reaction on its walls. The flow field in the porous region is modeled by the Darcy–Brinkman–Forchheimer model, and the finite volume method is used to solve the governing equations. The effects of the modified Frank-Kamenetskii (FKm) and Damköhler (Dm) numbers, the amplitude of oscillation (A), and the Strouhal number (St) are examined. The main results show an increase in heat and mass transfer rates with A and St, and a decrease with FKm and Dm.
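
For reference, a standard form of the Darcy–Brinkman–Forchheimer momentum equation is given below; the notation follows common usage, since the abstract does not reproduce its own (ε is porosity, K permeability, μ_eff the effective viscosity, and C_F the Forchheimer inertial coefficient):

```latex
\frac{\rho}{\varepsilon}\left(\frac{\partial \mathbf{u}}{\partial t}
  + \frac{(\mathbf{u}\cdot\nabla)\,\mathbf{u}}{\varepsilon}\right)
  = -\nabla p + \mu_{\mathrm{eff}}\,\nabla^{2}\mathbf{u}
  - \frac{\mu}{K}\,\mathbf{u}
  - \frac{\rho\,C_F}{\sqrt{K}}\,\lvert\mathbf{u}\rvert\,\mathbf{u}
```

The Darcy term μu/K and the Forchheimer term ρC_F|u|u/√K act only in the porous region; outside it they vanish, recovering the clear-fluid Navier–Stokes equations.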

Keywords: chemical reaction, heat and mass transfer, oscillating flow, porous channel

Procedia PDF Downloads 389
7991 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture

Authors: Charbel Aoun, Loic Lagadec

Abstract:

A sensor network (SN) operates in two phases: (1) observation/measurement, i.e., the accumulation of gathered data at each sensor node; and (2) transfer of the collected data to a processing center (e.g., fusion servers) within the SN. An underwater sensor network can therefore be defined as a sensor network deployed underwater to monitor underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between these components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards implementing this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian response). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time and illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, stakeholder perspectives, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that the design activity of complex systems can be improved through the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step, via models and simulation, to consolidate the system design.

Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS

Procedia PDF Downloads 151
7990 Jet Impingement Heat Transfer on a Rib-Roughened Flat Plate

Authors: A. H. Alenezi

Abstract:

Cooling by impinging jets is known to provide significantly high local and average heat transfer coefficients, which makes it widely used in industrial cooling systems. The heat transfer characteristics of a jet impinging on a rib-roughened flat plate have been investigated numerically. This paper set out to investigate the effect of rib height on the heat transfer rate. Since the flow needs enough spacing after passing the rib to allow reattachment, especially at high Reynolds numbers, this study focuses on finding the optimum rib height that maximizes the heat transfer rate downstream of the rib. The investigation employs a round nozzle with a hydraulic diameter (Dh) of 13.5 mm, a jet-to-target distance (H/D) of 4, a rib location of 1.5D, and jet angles of 45˚ and 90˚ at Re = 10,000.

Keywords: jet impingement, CFD, turbulence model, heat transfer

Procedia PDF Downloads 322
7989 A CM-Based Model for 802.11 Networks Security Policies Enforcement

Authors: Karl Mabiala Dondia, Jing Ma

Abstract:

In recent years, networks based on the 802.11 standards have seen prolific deployment. The reason for this massive acceptance of the technology by both home users and corporations is assuredly the "plug-and-play" nature of the technology and its mobility. The lack of physical containment, due to the inherent nature of the wireless medium, makes maintenance very challenging from a security standpoint. This study examines, via continuous monitoring, various predictable threats that 802.11 networks can face, how and where each attack may be executed, and how to effectively defend against them. The key goal is to identify the key components of an effective wireless security policy.

Keywords: wireless LAN, IEEE 802.11 standards, continuous monitoring, security policy

Procedia PDF Downloads 351
7988 A Method for Calculating Dew Point Temperature in the Humidity Test

Authors: Wu Sa, Zhang Qian, Li Qi, Wang Ye

Abstract:

Current humidity tests do not use the dew point temperature as a control parameter. This paper selects wet- and dry-bulb thermometers to measure the vapor pressure and introduces several saturation vapor pressure formulas that are easily calculated on the controller. A dew point temperature calculation model is then established to obtain the relationship between the dew point temperature and the vapor pressure. Finally, a check against 100 sample groups in the range 0-100 ℃ from the "Psychrometric Handbook" shows that the average error is small. This formula can be applied to calculate the dew point temperature in humidity tests.
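
A minimal sketch of such a calculation is given below. The Magnus approximation is used here as an assumed example of a saturation vapor pressure formula that is easily calculated on a controller; the paper's exact formula choices may differ.

```python
# A minimal sketch of a dew point calculation of the kind described,
# using the Magnus approximation (an assumption; the paper does not
# specify which saturation vapor pressure formula it adopts).
import math

A, B, C = 6.112, 17.62, 243.12  # Magnus constants over water (hPa, -, degC)

def saturation_vp(t_c):
    """Saturation vapor pressure in hPa at temperature t_c (deg C)."""
    return A * math.exp(B * t_c / (C + t_c))

def dew_point(vapor_pressure_hpa):
    """Invert the Magnus formula to get the dew point in deg C."""
    g = math.log(vapor_pressure_hpa / A)
    return C * g / (B - g)

# Example: measured vapor pressure of 15 hPa gives a dew point of ~13 degC.
print(round(dew_point(15.0), 2), "degC")
```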

Keywords: dew point temperature, psychrometric handbook, saturation vapor pressure, wet and dry bulb thermometer

Procedia PDF Downloads 458
7987 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task

Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes

Abstract:

For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing - both oral and written - is an important factor. Language disturbances might serve as a strong indicator of an underlying neurodegenerative disorder like AD. However, the current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, our aim is to describe and test differences between cognitively healthy and impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables are mainly related to pause times, because the number, length, and location of pauses have proven to be an important indicator of the cognitive complexity of a process. Method: Participants enrolled in our research were chosen on the basis of a number of basic criteria necessary to collect reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span, and the Edinburgh Handedness Inventory (EHI). A questionnaire was also used to collect socio-demographic information (age, gender, education) as well as details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words, and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences. Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and time-stamp keystroke activity to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause characteristics (length, number, distribution, location, etc.) and revision characteristics (number, type, operation, embeddedness, location, etc.). As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analysis already showed some promising results concerning pause times before, within, and after words. For all variables, mixed-effects models were used that included participants as a random effect and MMSE scores, GDS scores, and word categories (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables did not show an interaction effect between participant group (cognitively impaired or healthy elderly) and word category. However, pause times within words did show an interaction effect, which indicates that pause times within certain word categories differ significantly between patients and healthy elderly.
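
For readers unfamiliar with this kind of analysis, the following is a hedged sketch of a mixed-effects model of the shape described, written in Python with statsmodels (the study's own tooling is not stated in the abstract). The column names and synthetic data are illustrative assumptions.

```python
# A sketch of a mixed-effects model with a random intercept per
# participant and MMSE, GDS, and word category as fixed effects.
# The data below are synthetic stand-ins for the logged pause times.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "pause_ms": rng.lognormal(6.5, 0.4, n),            # pause before word
    "mmse": rng.integers(18, 30, n),                    # MMSE score
    "gds": rng.integers(0, 10, n),                      # GDS score
    "word_cat": rng.choice(["noun", "determiner"], n),  # word category
    "participant": rng.integers(0, 26, n),              # 26 subjects
})

model = smf.mixedlm("np.log(pause_ms) ~ mmse + gds + word_cat",
                    df, groups=df["participant"])
print(model.fit().summary())
```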

Keywords: Alzheimer's disease, keystroke logging, matching, writing process

Procedia PDF Downloads 341
7986 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models

Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri

Abstract:

Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on its appeal, which includes the routes between the city and the airport as well as the facilities for reaching them. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination like an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, an age greater than 55, and other factors are important for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations generating the most trips to the airport were identified. The most prominent district was Tehran's District 2, with 23 trips, and the most popular mode from that location was an online taxi, with 12 trips. Then, the variables significant in distinguishing the choice of access mode to the airport were investigated for all systems. Here, the most crucial factor is the time it takes to get to the airport, followed by the method's user-friendliness as a component of passenger preference. It was also demonstrated that improving public transportation travel times reduces the market share of private transportation, including taxicabs. Based on the responses of personal and semi-public vehicle users, passengers' willingness to approach the airport via public transportation was explored in order to enhance present techniques and develop new strategies for providing the most efficient modes of transportation. Using the binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change.
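
The binary model mentioned at the end can be sketched as a logistic regression over assumed predictors, as below. The variable names, synthetic data, and coefficients are illustrative, not the study's estimates.

```python
# A hedged sketch of a binary model for airport access mode choice;
# predictors and data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 356  # matches the number of questionnaires collected
df = pd.DataFrame({
    "access_time": rng.uniform(15, 90, n),   # minutes to the airport
    "cost": rng.uniform(2, 30, n),           # trip cost
    "business": rng.integers(0, 2, n),       # business trip flag
    "bags": rng.integers(0, 3, n),           # suitcases per person
})
# Synthetic choice: 1 = public transport, 0 = private vehicle/taxi.
logit_p = 1.5 - 0.03 * df.access_time - 0.6 * df.bags + 0.4 * df.business
df["public"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("public ~ access_time + cost + bags + business", df).fit()
print(fit.params)  # sign/size of each effect on choosing public transport
```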

Keywords: multimodal transportation, demand modeling, travel behavior, statistical models

Procedia PDF Downloads 141
7985 A Case Study of Typhoon Tracks: Insights from the Interaction between Typhoon Hinnamnor and Ocean Currents in 2022

Authors: Wei-Kuo Soong

Abstract:

The forecasting of typhoon tracks remains a formidable challenge, primarily attributable to the paucity of observational data over the open sea and the intricate influence of weather systems at varying scales. This study investigates the case of Typhoon Hinnamnor in 2022, examining its trajectory and intensity fluctuations in relation to its interaction with a concurrent tropical cyclone and with sea surface temperatures (SST). The Weather Research and Forecasting (WRF) model is used to simulate and analyze the interaction between Typhoon Hinnamnor and these environmental factors, shedding light on the mechanisms driving typhoon development and enhancing forecasting capabilities.

Keywords: typhoon, sea surface temperature, forecasting, WRF

Procedia PDF Downloads 31
7984 Experimental Quantification of the Intra-Tow Resin Storage Evolution during RTM Injection

Authors: Mathieu Imbert, Sebastien Comas-Cardona, Emmanuelle Abisset-Chavanne, David Prono

Abstract:

Short-cycle-time Resin Transfer Molding (RTM) applications are of great interest for the mass production of automotive and aeronautical lightweight structural parts. During the RTM process, the two components of a resin are mixed on-line and injected into the cavity of a mold where a fibrous preform has been placed. Injection and polymerization occur simultaneously in the preform, inducing evolutions of temperature, degree of cure, and viscosity that in turn affect flow and curing. In order to adjust the processing conditions and reduce the cycle time, it is therefore essential to understand and quantify the physical mechanisms occurring in the part during injection. In a previous study, a dual-scale simulation tool was developed to help determine the optimum injection parameters. This tool finely tracks the distribution of the resin and the evolution of its properties during reactive injections with on-line mixing. Tows and channels of the fibrous material are considered separately to deal with the consequences of the dual-scale morphology of continuous-fiber textiles. The simulation tool reproduces the unsaturated area at the flow front generated by the tow/channel difference in permeability. Resin “storage” in the tows after saturation is also taken into account, as it may significantly affect the distribution and evolution of temperature, degree of cure, and viscosity in the part during reactive injections. The aim of the current study is to understand and quantify, through experiments, the “storage” evolution in the tows in order to adjust and validate the numerical tool. The study is based on four experimental repeats conducted on three different types of textiles: a unidirectional Non-Crimp Fabric (NCF), a triaxial NCF, and a satin weave. Model fluids, dyes, and image analysis are used to study, quantitatively, the resin flow in the saturated area of the samples. The textile characteristics affecting the resin “storage” evolution in the tows are also analyzed. Finally, fully coupled on-line mixing reactive injections are conducted to validate the numerical model.

Keywords: experimental, on-line mixing, high-speed RTM process, dual-scale flow

Procedia PDF Downloads 146
7983 A Study of Different Retail Models That Penetrates South African Townships

Authors: Beaula M. Kruger, Silindisipho T. Belot

Abstract:

Small informal retailers are considered one of the most important features of developing countries around the world. These small informal retailers form part of local communities in South African townships and are estimated to number more than 100,000 across the country. The township economic landscape in South Africa has changed over time. The traditional small informal retailers in South African townships have faced numerous challenges of increasing competition: a growing number of local retail shops and foreign-owned shops. There is evidence that personal and disposable income has increased among black African consumers in South Africa. Historically, people residing in townships were restricted to informal retail shops; however, this has changed due to the growing number of formal large retail chains entering the township market. The larger retail chains are aware of the improved income levels of middle-income township residents, and as a result they have followed strategies such as (1) retail format development, (2) a diversification growth strategy, (3) a market penetration growth strategy, and (4) market expansion. This research carried out a comparative analysis of the different retail models developed by Pick n Pay, Spar, and Shoprite. The research methodology employed for this study was qualitative in nature and made use of a case study to conduct a comparative analysis between the larger retailers. A questionnaire was also designed to obtain data from existing smaller retailers. The study found that larger retailers have developed smaller retail formats to compete with the traditional smaller retailers operating in South African townships. Only one out of the two large retailers offers entrepreneurs a franchise model. One of the big retailers offers the opportunity to employ between 15 and 20 employees, while for the others this is subject to the outcome of a feasibility study. The responses obtained from the entrepreneurs in the townships were mixed: while some saw the larger retailers' presence as having a negative impact by increasing competition, others saw them as a means to obtain a variety of products. This research found that the retail model most beneficial to both the bigger retailers and existing and new entrepreneurs is that of Pick n Pay; the other retail format models are more beneficial to the bigger retailers than to new and existing entrepreneurs.

Keywords: Pick n Pay, retailers, Shoprite, Spar, townships

Procedia PDF Downloads 168
7982 Comparative Performance Analysis of Nonlinearity Cancellation Techniques for MOS-C Realization in Integrator Circuits

Authors: Hasan Çiçekli, Ahmet Gökçen, Uğur Çam

Abstract:

In this paper, a comparative performance analysis of the four most commonly used nonlinearity cancellation techniques for realizing a passive resistor with MOS transistors is presented. The comparison is carried out on an integrator circuit that sequentially employs an op-amp, an OTRA, and an ICCII as the active element. All of the circuits are implemented with MOS-C realization and simulated in PSPICE using 0.35 µm process TSMC MOSIS model parameters. With MOS-C realization, the circuits become electronically tunable and fully integrable, which is very important in IC design. The output waveforms, frequency responses, THD analysis results, and features of the nonlinearity cancellation techniques are also given.

Keywords: integrator circuits, MOS-C realization, nonlinearity cancellation, tuneable resistors

Procedia PDF Downloads 504
7981 Matlab/Simulink Simulation of Solar Energy Storage System

Authors: Mustafa A. Al-Refai

Abstract:

This paper investigates energy storage technologies that can potentially enhance the use of solar energy. Water electrolysis systems are seen as the principal means of producing large amounts of hydrogen in the future. Starting from the analysis of models of the system components, a complete simulation model was realized in the Matlab-Simulink environment. Results of the numerical simulations are provided. The operation of the electrolyzer and photovoltaic array combination is verified at various insolation levels. It is pointed out that the solar cell arrays and electrolyzers produce the expected results under continuously varying solar energy inputs.

Keywords: electrolyzer, simulink, solar energy, storage system

Procedia PDF Downloads 397
7980 Simulation of the Evacuation of Ships Carrying Dangerous Goods from Tsunami

Authors: Yoshinori Matsuura, Saori Iwanaga

Abstract:

The Great East Japan Earthquake occurred at 14:46 on Friday, March 11, 2011. It was the most powerful earthquake known to have hit Japan, and it triggered extremely destructive tsunami waves of up to 40.5 meters in height. We focus on the evacuation of ships from a tsunami and analyze it using multi-agent simulation in order to prepare for a coming earthquake. We developed a simulation model of ships that set sail from the port in order to evacuate from the tsunami, taking into account ships carrying dangerous goods.

Keywords: ship evacuation, multi-agent simulation, tsunami

Procedia PDF Downloads 432
7979 Process Modeling in an Aeronautics Context

Authors: Sophie Lemoussu, Jean-Charles Chaudemar, Robertus A. Vingerhoeds

Abstract:

Many innovative projects exist in the field of aeronautics, each addressing specific areas such as weight reduction, increased autonomy, or reduction of CO2 emissions. In many cases, such innovative developments are carried out by very small enterprises (VSEs) or small and medium-sized enterprises (SMEs). A good example concerns airships, which are being studied as a real alternative for passenger and cargo transportation. Today, no international regulations propose a precise and sufficiently detailed framework for the development and certification of airships. The absence of such a regulatory framework requires very close contact with regulatory instances. However, VSEs/SMEs do not always have sufficient resources and internal knowledge to handle this complexity and to discuss these issues. This poses an additional challenge for those VSEs/SMEs, in particular those that have system integration responsibilities and must provide all the necessary evidence to demonstrate their ability to design, produce, and operate airships with the expected level of safety and reliability. The main objective of this research is to provide a methodological framework enabling VSEs/SMEs with limited resources to organize the development of airships while taking into account the constraints of safety, cost, time, and performance. This paper contributes to this problem by proposing a Model-Based Systems Engineering approach. Through a comprehensive modeling of the development processes, the regulatory constraints, existing best practices, etc., a good image can be obtained of the process landscape that may influence the development of airships. To this effect, not only is the necessary regulatory information taken on board, but other international standards and norms on systems engineering and project management are also modeled and taken into account. In a next step, the model can be used to analyze the specific situation of a given development, derive critical paths for the development, identify possible conflicts between the norms, standards, and regulatory expectations, and identify areas where not enough information is available. Once critical paths are known, optimization approaches and decision support techniques can be applied to better support VSEs/SMEs in their innovative developments. This paper reports on the adopted modeling approach, the retained modeling languages, and how they all fit together.

Keywords: aeronautics, certification, process modeling, project management, regulation, SME, systems engineering, VSE

Procedia PDF Downloads 136
7996 Hierarchy and Weight of Influence Factors on Labor Productivity in the Construction Industry of Nepal

Authors: Shraddha Palikhe, Sunkuk Kim

Abstract:

The construction industry is the most labor-intensive industry in Nepal. Construction is clearly a major sector, and any productivity enhancement activity in it will have a positive impact on the overall national economy. Previous studies have stated that Nepal has poor labor productivity compared with other South Asian countries. Though considerable research has been done on productivity factors in other countries, no study has addressed labor productivity issues in Nepal. Therefore, the main objective of this study is to identify and rank the factors behind poor labor productivity. A questionnaire survey was chosen as the method of study, drawing on thirty experts involved in the construction industry, such as architects, civil engineers, project engineers, and site engineers. The survey, conducted in Nepal, identified the major factors impacting construction labor productivity. The Analytic Hierarchy Process (AHP) was used to understand the underlying relationships among the factors, which were categorized into five groups, namely (1) the labor-management group, (2) the material-management group, (3) the human-labor group, (4) the technological group, and (5) the external group, and divided into 33 sub-factors. AHP was used to establish the relative importance of the criteria: it makes pairwise comparisons of relative importance between hierarchy elements grouped by labor productivity decision criteria. Respondents were asked to answer based on their experience of construction works. On the basis of the responses, the weights of all the factors were calculated and ranked, and the AHP results were tabulated by weight and rank of the influence factors. The AHP model consists of five main criteria and 33 sub-criteria. Among the five main criteria, the highest weight, 26.15%, is assigned to the human-labor group, followed by 23.01% for the technological group, 22.97% for the labor-management group, 17.61% for the material-management group, and 10.25% for the external group. Among the 33 sub-criteria, the most influential factors for poor productivity in Nepal are lack of monetary incentive (20.53%) in the human-labor group, unsafe working conditions (17.55%) in the technological group, lack of leadership (18.43%) in the labor-management group, unavailability of tools on site (25.03%) in the material-management group, and strikes (35.01%) in the external group. The results show that the AHP model and its criteria are helpful in describing the current state of labor productivity. It is essential to consider these influence factors to improve labor productivity in the construction industry of Nepal.
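
A minimal sketch of the AHP weight computation follows: the priority vector of a pairwise comparison matrix is taken as its principal eigenvector, with a consistency check. The 3x3 matrix is a made-up example, not the survey's data.

```python
# A minimal sketch of AHP priority weights via the principal eigenvector
# of a reciprocal pairwise comparison matrix; the matrix is illustrative.
import numpy as np

def ahp_weights(M):
    """Principal eigenvector of a reciprocal comparison matrix,
    normalized to sum to 1."""
    vals, vecs = np.linalg.eig(M)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# "Criterion A is 3x as important as B and 5x as important as C", etc.
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = ahp_weights(M)
print(w.round(3))  # approximately [0.648 0.230 0.122]

# Consistency ratio check (random index 0.58 for n=3); CR < 0.1 is
# conventionally taken as acceptable consistency of the judgments.
lam = np.real(np.linalg.eigvals(M)).max()
ci = (lam - 3) / (3 - 1)
print("CR =", round(ci / 0.58, 3))
```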

Keywords: construction, hierarchical analysis, influence factors, labor productivity

Procedia PDF Downloads 379
7977 The Evaluation of the Performance of Different Filtering Approaches in Tracking Problem and the Effect of Noise Variance

Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri

Abstract:

The performance of different filtering approaches depends on the modeling of the dynamical system and on the algorithm structure. For modeling and smoothing the data, the evaluation of the posterior distribution in each filtering approach should be chosen carefully. In this paper, different filtering approaches, namely the Kalman filter, EKF, UKF, and EKS, and the RTS smoother are simulated on several trajectory-tracking problems, and the accuracy and limitations of these approaches are explained. The probability of the model under the different filters is then compared, and finally the effect of the noise variance on the estimation is described with simulation results.
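
As a concrete baseline, the sketch below implements the linear Kalman filter with an RTS smoothing pass on a 1D constant-velocity tracking toy problem; the EKF/UKF variants discussed in the paper replace the fixed transition and measurement models with linearized or sigma-point propagation. The matrices and noise levels are illustrative assumptions.

```python
# A compact sketch of the linear Kalman filter with an RTS smoothing
# pass for a 1D constant-velocity tracking toy problem.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity transition
H = np.array([[1.0, 0.0]])               # position-only measurement
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[1.0]])                    # measurement noise variance

def kalman_rts(zs):
    x, P = np.zeros(2), np.eye(2)
    xs, Ps, xps, Pps = [], [], [], []
    for z in zs:                                  # forward filter
        xp, Pp = F @ x, F @ P @ F.T + Q           # predict
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + (K @ (z - H @ xp)).ravel()       # update
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    for k in range(len(zs) - 2, -1, -1):          # RTS backward pass
        G = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        xs[k] = xs[k] + G @ (xs[k + 1] - xps[k + 1])
        Ps[k] = Ps[k] + G @ (Ps[k + 1] - Pps[k + 1]) @ G.T
    return np.array(xs)

zs = np.arange(20) + np.random.default_rng(4).normal(0, 1.0, 20)
print(kalman_rts(zs)[-1])  # smoothed [position, velocity] at final step
```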

Keywords: Gaussian approximation, Kalman smoother, parameter estimation, noise variance

Procedia PDF Downloads 407
7976 Optimization of the Aerodynamic Performances of an Unmanned Aerial Vehicle

Authors: Fares Senouci, Bachir Imine

Abstract:

This document provides a numerical and experimental optimization of the aerodynamic performance of a drone equipped with three types of horizontal stabilizers. To find the optimal configuration, an experimental and numerical study was conducted on three parameters: the geometry of the stabilizer (horizontal or inverted-V form), the position of the horizontal stabilizer (up or down), and the landing gear position (closed or open). The results show that the up position of the stabilizer with respect to the horizontal plane of the fuselage provides better aerodynamic performance, and that the landing gear increases the lift in the zone of stability, that is to say, where the flow is not separated.

Keywords: aerodynamics, drag, lift, turbulence model, wind tunnel

Procedia PDF Downloads 230
7975 Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R

Authors: Jaya Mathew

Abstract:

Many organizations face the challenge of how to analyze and build machine learning models on their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology to benefit from parallelization and remote computing, or R Services on premises or in the cloud, users can leverage the power of R at scale without moving their data.

Keywords: predictive maintenance, machine learning, big data, cloud-based, on-premises solution, R

Procedia PDF Downloads 353