Search results for: lexical and numerical error-recognition tasks
2979 Evolution of Deformation in the Southern Central Tunisian Atlas: Parameters and Modelling
Authors: Mohamed Sadok Bensalem, Soulef Amamria, Khaled Lazzez, Mohamed Ghanmi
Abstract:
The southern-central Tunisian Atlas presents a typical example of an external zone. It occupies a particular position in the North African chains: firstly, it is the eastern limit of the Atlassic structures; secondly, it forms the boundary between the belt structures to the north and the stable Saharan platform to the south. The study of the evolution of deformation is based on several methods, both classical and numerical. The principal parameters controlling the genesis of folds in the southern-central Tunisian Atlas are the reactivation of pre-existing faults during the later compressive phase, the evolution of the decollement level, and the relation between thin- and thick-skinned deformation. One of the main characteristics of the southern-central Tunisian Atlas is the variation of the directions of the belt structures: the NE-SW direction, named the Atlassic direction in Tunisia; the NW-SE direction carried along the Gafsa fault (the eastern limit of the southern Atlassic accident); and the E-W direction defined in the southern Tunisian Atlas. This variation of direction is the result of an important variation of deformation during the different tectonic phases. A classical modelling of the Jebel El Kebar anticline, based on the throw of the pre-existing faults and their reactivation during the compressive phases, shows the importance of the extensional deformation, particularly during the Aptian-Albian period, compared with that of the later compression (Alpine phases). A numerical modelling, based on the software Rampe E.M. 1.5.0 and applied to the anticline of Jebel Orbata, confirms the interpretation of a “fault-related fold” with a decollement level within the Triassic successions. The other important parameter in the evolution of deformation is the vertical migration of the decollement level: the shallower the decollement level lies in the recent series, the more the deformation is accentuated.
The evolution of deformation is marked by the development of a duplex structure in Jebel At Taghli (eastern limit of Jebel Orbata). Consequently, the evolution of deformation is proportional to the depth of the decollement level; the most important deformation occurs in the higher successions and is thus associated with thin-skinned deformation, the decollement level permitting the passive transfer of deformation into the cover.
Keywords: evolution of deformation, pre-existing faults, decollement level, thin-skinned
Procedia PDF Downloads 126
2978 Can Empowering Women Farmers Reduce Household Food Insecurity? Evidence from Malawi
Authors: Christopher Manyamba
Abstract:
Women in Malawi perform between 50 and 70 percent of all agricultural tasks, and yet the majority remain food insecure. The aim of this paper is to build on existing mixed evidence indicating that empowering women in agriculture is conducive to improving food security. The Women's Empowerment in Agriculture Index (WEAI) is used to provide evidence on the relationship between women's empowerment in agriculture and household food security. A multinomial logistic regression is applied to the WEAI components and the Household Hunger Scale. The overall results show that the WEAI can be used to determine household food insecurity; however, it has to be contextually adapted. Asset ownership, credit, group membership, and leisure time are positively associated with food security. Contrary to other literature, empowerment in having control and decisions over income shows a negative association with household food security. These results could potentially better inform public, private, and civil society stakeholders' dialogues in creating the most effective and sustainable interventions to help women attain long-term food security.
Keywords: food security, gender, empowerment, agriculture index, framework for African food security, household hunger scale
Procedia PDF Downloads 368
2977 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization
Authors: Yihao Kuang, Bowen Ding
Abstract:
With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. Therefore, in recent years, it has been a trend for knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve the inference effect and interpretability. The Proximal Policy Optimization (PPO) algorithm constrains each policy update to a neighbourhood of the current policy. This allows extensive updates of the policy parameters while limiting the update step to maintain training stability. This characteristic enables PPO to converge to improved policies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO can reuse previously collected experience data for training, enhancing sample utilization. This means that even with limited resources, PPO can train efficiently on reinforcement learning tasks. Based on these characteristics, this paper aims to obtain better and more efficient inference results by introducing PPO into knowledge inference technology.
Keywords: reinforcement learning, PPO, knowledge inference, supervised learning
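The update constraint described above is PPO's clipped surrogate objective, which can be illustrated with a minimal sketch (a generic illustration of the standard PPO objective, not the authors' knowledge-inference implementation; the function name and sample values are invented):

```python
def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """Per-sample clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A).

    `ratio` is pi_new(a|s) / pi_old(a|s); clipping it bounds how far a single
    gradient step can move the policy, which is what stabilizes training.
    """
    clipped = max(1.0 - epsilon, min(ratio, 1.0 + epsilon))
    return min(ratio * advantage, clipped * advantage)

# A large ratio with a positive advantage is clipped at 1 + epsilon ...
print(ppo_clip_objective(1.5, 1.0))   # 1.2
# ... and a small ratio with a negative advantage takes the pessimistic clipped value.
print(ppo_clip_objective(0.5, -1.0))  # -0.8
```

In a full implementation this per-sample value is averaged over a batch and maximized by gradient ascent over several epochs on the same collected trajectories, which is where the sample reuse mentioned above comes from.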
Procedia PDF Downloads 67
2976 Reliability-Based Life-Cycle Cost Model for Engineering Systems
Authors: Reza Lotfalian, Sudarshan Martins, Peter Radziszewski
Abstract:
The effect of reliability on life-cycle cost, including the initial and maintenance costs of a system, is studied. The failure probability of a component is used to calculate the average maintenance cost over the operation cycle of the component. The standard deviation of the life-cycle cost is also calculated, as an error measure for the average life-cycle cost. As a numerical example, the model is used to study the average life-cycle cost of an electric motor.
Keywords: initial cost, life-cycle cost, maintenance cost, reliability
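The core relationship described above, an expected maintenance cost derived from a component failure probability, can be sketched as follows (an illustrative simplification assuming independent failures per operation cycle, not the authors' model; the motor figures are invented):

```python
def expected_life_cycle_cost(initial_cost, repair_cost, failure_prob, n_cycles):
    """Expected life-cycle cost = initial cost + expected maintenance cost over
    n operation cycles, treating each cycle's failure as an independent
    Bernoulli event with probability `failure_prob`."""
    expected_maintenance = n_cycles * failure_prob * repair_cost
    return initial_cost + expected_maintenance

def life_cycle_cost_std(repair_cost, failure_prob, n_cycles):
    """Standard deviation of the maintenance cost (binomial variance), usable
    as an error measure for the average life-cycle cost."""
    variance = n_cycles * failure_prob * (1.0 - failure_prob) * repair_cost ** 2
    return variance ** 0.5

# Invented electric-motor example: $500 purchase, $120 per repair,
# 2% failure chance per cycle, 100 cycles: 500 + 100 * 0.02 * 120 = 740.
print(expected_life_cycle_cost(500.0, 120.0, 0.02, 100))  # 740.0
```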
Procedia PDF Downloads 605
2975 Numerical Study of Flow around Flat Tube between Parallel Walls
Authors: Hamidreza Bayat, Arash Mirabdolah Lavasani, Meysam Bolhasani, Sajad Moosavi
Abstract:
Flow around a flat tube is studied numerically. The Reynolds number is defined based on the equivalent circular tube and is varied in the range of 100 to 300. The equations are solved using the finite volume method, and the results are presented in the form of drag and lift coefficients. The results show that the drag coefficient of the flat tube is up to 66% lower than that of a circular tube with equivalent diameter. In addition, by increasing l/D from 1 to 2, the drag coefficient of the flat tube decreases by about 14-27%.
Keywords: laminar flow, flat-tube, drag coefficient, cross-flow, heat exchanger
Procedia PDF Downloads 503
2974 An Application of a Machine Monitoring by Using the Internet of Things to Improve a Preventive Maintenance: Case Study of an Automated Plastic Granule-Packing Machine
Authors: Anek Apipatkul, Paphakorn Pitayachaval
Abstract:
Preventive maintenance is a standardized procedure to control and prevent risky problems affecting production in order to increase work efficiency. Machine monitoring also works routinely to collect data for scheduling the maintenance period. This paper presents the application of machine monitoring using the Internet of Things (IoT) and a lean technique in order to manage the complex maintenance tasks of an automated plastic granule-packing machine. To organize the preventive maintenance, machine monitoring was applied through several processes: defining a clear scope for the machine, establishing standards for maintenance work, applying a just-in-time (JIT) technique for timely delivery of maintenance work, solving problems on the floor, and improving the inspection process. The result has shown that wasted time was reduced and machines have been operated as scheduled. Furthermore, the efficiency of the scheduled maintenance period was increased by 95%.
Keywords: internet of things, preventive maintenance, machine monitoring, lean technique
Procedia PDF Downloads 102
2973 Electrospray Plume Characterisation of a Single Source Cone-Jet for Micro-Electronic Cooling
Authors: M. J. Gibbons, A. J. Robinson
Abstract:
Increasing expectations on small-form-factor electronics to be more compact while delivering higher performance have driven conventional cooling technologies to a thermal management threshold. An emerging solution to this problem is electrospray (ES) cooling. ES cooling enables two-phase cooling by utilising Coulomb forces for energy-efficient fluid atomization. The generated charged droplets are accelerated to the grounded target surface by the applied electric field and the gravitational force. While in transit, the like-charged droplets promote plume dispersion and inhibit droplet coalescence. If the electric field is increased in the cone-jet regime, a subsequent increase in the plume spray angle has been shown. Droplet segregation in the spray plume has been observed, with primary droplets in the plume core and satellite droplets positioned on the periphery of the plume. This segregation is facilitated by inertial and electrostatic effects, a result corroborated by numerous authors. These satellite droplets are usually more densely charged and move at a lower velocity relative to the spray core due to the radial decay of the electric field. Previous experimental research by Gomez and Tang has shown that the number of droplets deposited on the periphery can be up to twice that of the spray core. This result has been substantiated by numerical models derived by Wilhelm et al., Oh et al., and Yang et al. Yang et al. showed from their numerical model that by varying the extractor potential, the dispersion radius of the plume also varies proportionally. This research aims to investigate this dispersion density and the role it plays in the local heat transfer coefficient profile (h) of ES cooling. This will be carried out for different extractor-target separation heights (H2), working fluid flow rates (Q), and extractor applied potentials (V2).
The plume dispersion will be recorded by spraying a 25 µm thick, Joule-heated steel foil and recording the thermal footprint of the ES plume with a FLIR A-40 thermal imaging camera. The recorded results will then be analysed by in-house developed MATLAB code.
Keywords: electronic cooling, electrospray, electrospray plume dispersion, spray cooling
Procedia PDF Downloads 397
2972 Scorbot-ER 4U Using Forward Kinematics Modelling and Analysis
Authors: D. Maneetham, L. Sivhour
Abstract:
Robotic arm manipulators are widely used to accomplish many kinds of tasks. SCORBOT-ER 4u is a 5-degree-of-freedom (DOF) vertical articulated educational robotic arm, and all of its joints are revolute. It is specifically designed to perform pick-and-place tasks with its gripper. The pick-and-place task consists of consideration of the end-effector coordinate of the robotic arm and the desired position coordinate in its workspace. This paper describes forward kinematics modeling and analysis of the robotic end-effector motion through joint space. The kinematics problem is defined by the transformation from the Cartesian space to the joint space. The Denavit-Hartenberg (D-H) model is used to model the robotic links and joints with a 4x4 homogeneous matrix. The forward kinematics model is developed and simulated in MATLAB, and the mathematical model is validated using the Robotics Toolbox in MATLAB. By this method, the end-effector coordinates can be obtained for this robotic arm and for other arms of similar type. The software development for the SCORBOT-ER 4u is also described here. PC- and EtherCAT-based control technology from Beckhoff is used to control the arm to execute the pick-and-place task.
Keywords: forward kinematics, D-H model, robotic toolbox, PC- and EtherCAT-based control
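The 4x4 homogeneous transform built from the four D-H parameters, and the chaining of such transforms into a forward kinematics model, can be sketched as follows (standard D-H convention; a generic two-link illustration, not the SCORBOT-ER 4u parameter table):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """4x4 homogeneous transform between consecutive links from the four
    Denavit-Hartenberg parameters (standard convention)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """Multiply two 4x4 matrices; chaining the link transforms gives the
    end-effector pose relative to the base frame."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Two-link planar example (d = 0, alpha = 0, unit link lengths):
# joint angles +90 deg and -90 deg place the end effector at (1, 1).
T = mat_mul(dh_matrix(math.pi / 2, 0, 1.0, 0), dh_matrix(-math.pi / 2, 0, 1.0, 0))
print(round(T[0][3], 6), round(T[1][3], 6))  # 1.0 1.0
```

For the real arm, one transform per joint (five for a 5-DOF arm) would be multiplied in sequence with the measured D-H parameter table.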
Procedia PDF Downloads 179
2971 A Novel Antenna Design for Telemedicine Applications
Authors: Amar Partap Singh Pharwaha, Shweta Rani
Abstract:
To develop a reliable and cost-effective communication platform for telemedicine applications, a novel antenna design is presented using the bacterial foraging optimization (BFO) technique. The proposed antenna geometry is achieved by etching a modified Koch-curve fractal shape at the edges and a square slot at the center of the radiating element of a patch antenna. It has been found that the new antenna achieves a 43.79% size reduction and better resonating characteristics than the original patch. Representative results for both simulations and numerical validations are reported in order to assess the effectiveness of the developed methodology.
Keywords: BFO, electrical permittivity, fractals, Koch curve
Procedia PDF Downloads 506
2970 Dynamic Log Parsing and Intelligent Anomaly Detection Method Combining Retrieval Augmented Generation and Prompt Engineering
Authors: Liu Linxin
Abstract:
As system complexity increases, log parsing and anomaly detection become more and more important in ensuring system stability. However, traditional methods often face the problems of insufficient adaptability and decreasing accuracy when dealing with rapidly changing log contents and unknown domains. To this end, this paper proposes LogRAG, an approach that combines retrieval-augmented generation (RAG) with prompt engineering for large language models, applied to log analysis tasks to achieve dynamic parsing of logs and intelligent anomaly detection. By combining real-time information retrieval and prompt optimisation, this study significantly improves the adaptive capability of log analysis and the interpretability of the results. Experimental results show that the method performs well on several public datasets, especially in the absence of training data, and significantly outperforms traditional methods. This paper provides a technical path for log parsing and anomaly detection, demonstrating significant theoretical value and application potential.
Keywords: log parsing, anomaly detection, retrieval-augmented generation, prompt engineering, LLMs
Procedia PDF Downloads 29
2969 A Numerical Study on the Influence of CO2 Dilution on Combustion Characteristics of a Turbulent Diffusion Flame
Authors: Yasaman Tohidi, Rouzbeh Riazi, Shidvash Vakilipour, Masoud Mohammadi
Abstract:
The objective of the present study is to numerically investigate the effect of replacing N2 with CO2 in the air stream on the flame characteristics of a CH4 turbulent diffusion flame. The open-source Field Operation and Manipulation (OpenFOAM) toolbox has been used as the computational tool. In this regard, the laminar flamelet and modified k-ε models have been utilized as the combustion and turbulence models, respectively. The results reveal that the presence of CO2 in the air stream changes the flame shape and the maximum flame temperature. Also, CO2 dilution causes an increment in the CO mass fraction.
Keywords: CH4 diffusion flame, CO2 dilution, OpenFOAM, turbulent flame
Procedia PDF Downloads 276
2968 An Efficient and Provably Secure Three-Factor Authentication Scheme with Key Agreement
Authors: Mohan Ramasundaram, Amutha Prabakar Muniyandi
Abstract:
Remote user authentication is one of the important tasks for any kind of remote server application. Several remote authentication schemes have been proposed by researchers for Telecare Medicine Information Systems (TMIS). Most of the existing techniques have limitations: they are vulnerable to various kinds of attacks, lack functionality, leak information, provide no perfect forward secrecy, or are otherwise ineffective. Authentication is a user verification mechanism that allows a user to access the resources of a server. Nowadays, most remote authentication protocols use two-factor authentication. We have made a survey of several remote authentication schemes using three factors, and this survey shows that most of the schemes are inefficient and subject to several attacks. Our experimental evaluation shows that the proposed scheme is very secure against various known attacks, including the replay attack and the man-in-the-middle attack. Furthermore, analysis of the communication cost and computational cost of the proposed scheme against related schemes shows that our proposed scheme is efficient.
Keywords: Telecare Medicine Information System, elliptic curve cryptography, three-factor, biometric, random oracle
Procedia PDF Downloads 219
2967 A Comparative Study of k-NN and MLP-NN Classifiers Using GA-kNN Based Feature Selection Method for Wood Recognition System
Authors: Uswah Khairuddin, Rubiyah Yusof, Nenny Ruthfalydia Rosli
Abstract:
This paper presents a comparative study between the k-Nearest Neighbour (k-NN) and Multi-Layer Perceptron Neural Network (MLP-NN) classifiers, using a Genetic Algorithm (GA) as the feature selector, for a wood recognition system. The features have been extracted from the images using the Grey Level Co-occurrence Matrix (GLCM). The use of GA-based feature selection is mainly to ensure that the database used for training the wood species pattern classifier consists of only optimized features. The feature selection process is aimed at selecting only the most discriminating features of the wood species, to reduce confusion for the pattern classifier. This feature selection approach maintains the ‘good’ features, those that maximize the inter-class distance and minimize the intra-class distance. A wrapper GA is used with the k-NN classifier as the fitness evaluator (GA-kNN). The results show that k-NN is the best choice of classifier because it uses a very simple distance calculation algorithm, and classification tasks can be done in a short time with good classification accuracy.
Keywords: feature selection, genetic algorithm, optimization, wood recognition system
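The wrapper idea, scoring a candidate feature subset by the accuracy of the k-NN classifier itself, can be sketched as follows (a minimal leave-one-out 1-NN fitness function for a GA chromosome; the toy data are invented and the GLCM feature extraction is omitted):

```python
import math

def knn_loocv_accuracy(X, y, mask):
    """Leave-one-out 1-NN accuracy using only the features selected by `mask`
    (a 0/1 chromosome). In a wrapper GA, this score is the chromosome's fitness."""
    active = [j for j, bit in enumerate(mask) if bit]
    if not active:
        return 0.0
    correct = 0
    for i in range(len(X)):
        best_d, best_label = float("inf"), None
        for k in range(len(X)):
            if k == i:
                continue
            d = math.sqrt(sum((X[i][j] - X[k][j]) ** 2 for j in active))
            if d < best_d:
                best_d, best_label = d, y[k]
        correct += best_label == y[i]
    return correct / len(X)

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 5.0], [0.1, -3.0], [1.0, 4.0], [1.1, -2.0]]
y = [0, 0, 1, 1]
print(knn_loocv_accuracy(X, y, [1, 0]))  # 1.0
```

A GA would evolve a population of such masks, keeping those with higher leave-one-out accuracy, so the selected subset is tuned to the classifier that will ultimately use it.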
Procedia PDF Downloads 545
2966 A Time-Reducible Approach to Compute Determinant |I-X|
Authors: Wang Xingbo
Abstract:
Computation of determinants of the form |I-X| is primary and fundamental because it can help to compute many other determinants. This article puts forward a time-reducible approach to compute the determinant |I-X|. The approach is derived from Newton's identities, and its time complexity is no more than that of computing the eigenvalues of the square matrix X. Mathematical deductions and a numerical example are presented in detail for the approach. By comparison with classical approaches, the new approach is shown to be superior, and it naturally reduces the computational time as the efficiency of computing the eigenvalues of the square matrix improves.
Keywords: algorithm, determinant, computation, eigenvalue, time complexity
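The eigenvalue connection can be made concrete: det(I - X) = ∏(1 - λi), and Newton's identities recover the elementary symmetric polynomials e_k of the eigenvalues from the power sums p_k = tr(X^k). A minimal sketch of this route (my own illustration of the idea, not the article's algorithm):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def det_I_minus_X(X):
    """det(I - X) via Newton's identities: the power sums p_k = tr(X^k) yield
    the elementary symmetric polynomials e_k of the eigenvalues, and
    det(I - X) = prod(1 - lambda_i) = sum_k (-1)^k e_k."""
    n = len(X)
    p, P = [], X
    for _ in range(n):
        p.append(trace(P))      # p[k-1] = tr(X^k)
        P = mat_mul(P, X)
    e = [1.0]                   # e_0 = 1
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)         # Newton's identity: k*e_k = sum
    return sum((-1) ** k * e[k] for k in range(n + 1))

X = [[2.0, 1.0],
     [0.0, 3.0]]   # eigenvalues 2 and 3, so det(I - X) = (1-2)(1-3) = 2
print(det_I_minus_X(X))  # 2.0
```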
Procedia PDF Downloads 415
2965 Risk Measure from Investment in Finance by Value at Risk
Authors: Mohammed El-Arbi Khalfallah, Mohamed Lakhdar Hadji
Abstract:
Managing and controlling risk is a research topic in the world of finance. When facing a risky situation, stakeholders need to make comparisons according to their positions and actions, and financial institutions must take measures of a particular market risk and credit risk. In this work, we study a model of risk measure in finance: Value at Risk (VaR), which is a tool for measuring an entity's risk exposure. We explain the concept of value at risk and its average and tail variants, and describe the three methods for computing it: the parametric method, the historical method, and the Monte Carlo numerical method. Finally, we briefly describe the advantages and disadvantages of the three methods for computing value at risk.
Keywords: average value at risk, conditional value at risk, tail value at risk, value at risk
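The three computation methods can be sketched as follows (illustrative one-asset versions with invented parameters, not the paper's models; returns are fractional and VaR is reported as a positive loss):

```python
import random
import statistics

def var_historical(returns, alpha=0.95):
    """Historical VaR: the loss exceeded only in the worst (1 - alpha)
    fraction of past returns."""
    losses = sorted(-r for r in returns)
    idx = min(int(alpha * len(losses)), len(losses) - 1)
    return losses[idx]

def var_parametric(returns, alpha=0.95, z=1.645):
    """Parametric (variance-covariance) VaR under a normality assumption;
    z is the standard-normal quantile for alpha (1.645 for 95%)."""
    mu = statistics.mean(returns)
    sigma = statistics.pstdev(returns)
    return -(mu - z * sigma)

def var_monte_carlo(mu, sigma, alpha=0.95, n=100_000, seed=7):
    """Monte Carlo VaR: simulate returns from an assumed model, then take the
    empirical quantile exactly as in the historical method."""
    rng = random.Random(seed)
    return var_historical([rng.gauss(mu, sigma) for _ in range(n)], alpha)

# Daily returns with mean 0 and std 1%: the 95% parametric VaR is 1.645%.
print(round(var_parametric([0.01, -0.01]), 5))  # 0.01645
```

The conditional and tail variants named in the keywords average the losses beyond this quantile instead of reading off a single one.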
Procedia PDF Downloads 441
2964 Risk and Reliability Based Probabilistic Structural Analysis of Railroad Subgrade Using Finite Element Analysis
Authors: Asif Arshid, Ying Huang, Denver Tolliver
Abstract:
The Finite Element (FE) method, coupled with ever-increasing computational power, has substantially advanced the reliability of deterministic three-dimensional structural analyses of structures with uniform material properties. However, a railway trackbed is made up of a diverse group of materials including steel, wood, rock, and soil, and each material has its own varying levels of heterogeneity and imperfection. The application of probabilistic methods to trackbed structural analysis that incorporate material and geometric variability remains deeply underexplored. The authors developed and validated a 3-dimensional FE-based numerical trackbed model, and in this study they investigated the influence of variability in the Young's modulus and thicknesses of the granular layers (ballast and subgrade) on the reliability index (β-index) of the subgrade layer. The influence of these factors is accounted for by changing their coefficients of variation (COV) while keeping their means constant. These variations are formulated using a Gaussian normal distribution. Two failure mechanisms in the subgrade, namely progressive shear failure and excessive plastic deformation, are examined. Preliminary results of the risk-based probabilistic analysis for progressive shear failure revealed that the variations in ballast depth are the most influential factor for the vertical stress at the top of the subgrade surface. In the case of excessive plastic deformation in the subgrade layer, the variations in its own depth and Young's modulus proved to be most important, while the ballast properties remained almost indifferent. For both of these failure modes, it is also observed that the reliability index for subgrade failure increases with the increase in the COV of the ballast depth and the subgrade Young's modulus.
The findings of this work are of particular significance in studying the combined effect of construction imperfections and variations in ground conditions on the structural performance of railroad trackbeds and in evaluating the associated risk. In addition, the work provides an additional tool to supplement deterministic analysis procedures and decision-making for railroad maintenance.
Keywords: finite element analysis, numerical modeling, probabilistic methods, risk and reliability analysis, subgrade
Procedia PDF Downloads 139
2963 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry
Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood
Abstract:
The flow over a backward-facing step is characterized by the presence of flow separation, recirculation, and reattachment for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for it being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry, or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is rarely reported in the literature, and most evaluations of the Reynolds number effect in separated flows have been carried out by numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. An ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at various Reynolds numbers (Reh = 3738, 5452, 7908, and 17388), based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with those of other researchers, the step height, expansion ratio, and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density of the stream-wise horizontal velocity component.
The normalized mean stream-wise velocity profiles, skin-friction coefficients, and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses, and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving-block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. In addition, the errors obtained in the uncertainty analysis were generally low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and a good agreement was found. The ADV technique proved able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flow. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels, thus decreasing the uncertainty.
Keywords: ADV, experimental data, multiple Reynolds number, post-processing
Procedia PDF Downloads 148
2962 Breast Cancer Sensing and Imaging Utilized Printed Ultra Wide Band Spherical Sensor Array
Authors: Elyas Palantei, Dewiani, Farid Armin, Ardiansyah
Abstract:
A high-precision printed microwave sensor for sensing and monitoring potential breast cancer in women's breast tissue was optimally computed. The single element of the UWB printed sensor, successfully modeled through several numerical optimizations, was fabricated in multiple copies and incorporated into a woman's bra to form the spherical sensor array. One sample of the UWB microwave sensor obtained through the numerical computation and optimization was chosen to be fabricated. In total, the spherical sensor array consists of twelve stair-patch structures, and each element was individually measured to characterize its electrical properties, especially the return-loss parameter. The comparison of the S11 profiles of all UWB sensor elements is discussed. The constructed UWB sensor is well verified using HFSS simulation, CST simulation, and experimental measurement. Numerically, both HFSS and CST confirm that the potential operating bandwidth of the UWB sensor is more or less 4.5 GHz. However, the measured bandwidth provided is about 1.2 GHz, due to technical difficulties that arose during the manufacturing step. The UWB microwave sensing and monitoring system implemented consists of the 12-element UWB printed sensor array, a vector network analyzer (VNA) performing as the transceiver and signal-processing part, and a desktop PC or laptop acting as the image-processing and display unit. In practice, all the reflected power collected from the whole surface of the artificial breast model is grouped into a number of pixel colour classes positioned on the corresponding row and column (pixel number). The total number of power pixels applied in the 2D-imaging process was specified as 100 (a power distribution pixel dimension of 10x10). This was determined by considering the total area of a breast phantom of average Asian women's breast size and synchronizing it with the physical dimension of the single UWB sensor.
The microwave imaging results obtained are plotted, and, together with some technical problems that arose in developing the breast sensing and monitoring system, are examined in the paper.
Keywords: UWB sensor, UWB microwave imaging, spherical array, breast cancer monitoring, 2D-medical imaging
Procedia PDF Downloads 195
2961 A Peg Board with Photo-Reflectors to Detect Peg Insertion and Pull-Out Moments
Authors: Hiroshi Kinoshita, Yasuto Nakanishi, Ryuhei Okuno, Toshio Higashi
Abstract:
Various kinds of pegboards have been developed and are used widely in rehabilitation research and clinics for the evaluation and training of patients' hand function. A common measure in these pegboards is the total execution time, assessed by a tester's stopwatch. The introduction of electric and automatic measurement technology to the apparatus, on the other hand, has been delayed. The present work introduces the development of a pegboard with electric sensors to detect the moments of each peg's insertion and removal. The work also gives fundamental data obtained from a group of healthy young individuals who performed peg transfer tasks using the pegboard developed. Through trial and error in pilot tests, two 10-hole pegboard boxes, with a small photo-reflector and a DC amplifier installed at the bottom of each hole, were designed and built by the present authors. The amplified electric analogue signals from the 20 reflectors were automatically digitized at 500 Hz per channel and stored on a PC. The boxes were set on a test table at different distances (25, 50, 75, and 125 mm) in parallel, to examine the effect of hole-to-hole distance. Fifty healthy young volunteers (25 of each gender) performed 80 successive fast peg transfers at each distance using their dominant and non-dominant hands. The data gathered showed a clear-cut light interruption/continuation moment caused by the pegs, allowing the pull-out and insertion times of each peg to be determined accurately (no tester error involved) and precisely (to the order of milliseconds). This further permitted computation of the individual peg movement duration (PMD: from peg lift-off to insertion), apart from the hand reaching duration (HRD: from peg insertion to lift-off). An accidental drop of a peg led to an exceptionally long (> mean + 3 SD) PMD, which was readily detected from an examination of the data distribution.
The PMD data were commonly right-skewed, suggesting that the median can be a better estimate of individual PMD than the mean. Repeated-measures ANOVA using the median values revealed significant hole-to-hole distance and hand dominance effects, suggesting that these need to be fixed for the accurate evaluation of PMD. The gender effect was non-significant. Performance consistency was also evaluated by the use of quartile variation coefficient values, which revealed no gender, hole-to-hole distance, or hand dominance effects. The measurement reliability was further examined using the interclass correlation obtained from 14 subjects who performed the 25 and 125 mm hole distance tasks at two test sessions separated by 7-10 days. Interclass correlation values between the two tests showed fair reliability for PMD (0.65-0.75) and for HRD (0.77-0.94). We concluded that the sensor pegboard developed in the present study could provide accurate (excluding tester errors) and precise (at a millisecond rate) time information for peg movement, separated from that for hand movement. It could also easily detect erroneous executions and automatically exclude their data from a subject's standard data. These features would lead to a better evaluation of hand dexterity function compared to the widely used conventional pegboards.
Keywords: hand, dexterity test, peg movement time, performance consistency
Procedia PDF Downloads 133
2960 An Intelligent Thermal-Aware Task Scheduler in Multiprocessor System on a Chip
Authors: Sina Saadati
Abstract:
Multiprocessor Systems-on-Chip (MPSoCs) are widely used in modern computers to execute sophisticated software and applications. These systems include different processors for distinct purposes. Most of the proposed task schedulers attempt to improve energy consumption; in some schedulers, the processor's temperature is also considered, to increase the system's reliability and performance. In this research, we propose a new method for thermal-aware task scheduling based on an artificial neural network (ANN). This method enables us to consider a variety of factors in the scheduling process. Factors like ambient temperature, season (which is important for some embedded systems), processor speed, and the computing type of the tasks have a complex relationship with the final temperature of the system; this issue can be addressed using a machine learning algorithm. Another point is that our solution makes the system intelligent, so that it can be adaptive. We also show that the computational complexity of the proposed method is low; as a consequence, it is also suitable for battery-powered systems.
Keywords: task scheduling, MPSoC, artificial neural network, machine learning, computer architecture, artificial intelligence
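As an illustrative sketch only (the abstract specifies no architecture or weights), a thermal-aware assignment rule could score each processor with a small neural network and place the task on the coolest predicted core. All feature names and weights below are invented for the example.

```python
def predict_temp(features, w_hidden, b_hidden, w_out, b_out):
    # One-hidden-layer network with ReLU units, standing in for the ANN
    # that maps scheduling factors (ambient temperature, processor speed,
    # task load, ...) to a predicted core temperature.
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Invented weights for illustration; a real scheduler would learn them.
W_H, B_H = [[0.5, 0.2, 0.3], [0.1, 0.4, 0.2]], [0.0, 0.0]
W_O, B_O = [0.6, 0.4], 20.0

def pick_processor(task_load, processors):
    """processors: {name: (ambient_temp_C, speed_GHz)} -> coolest predicted core."""
    return min(processors, key=lambda p: predict_temp(
        [processors[p][0], processors[p][1], task_load], W_H, B_H, W_O, B_O))
```

With all-positive weights, a hotter ambient environment raises the prediction, so the rule steers work toward cooler cores.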
Procedia PDF Downloads 103
2959 Work-Home Interference and Emotional Exhaustion: The Role of Psychological Detachment, Relaxation and Technology-Assisted Supplemental Work
Authors: Nidhi S. Bisht
Abstract:
The study examines the role of work-home interference in enhancing emotional exhaustion among branch officers of private MFIs in India. Additionally, the moderating roles of recovery experiences and technology-assisted supplemental work (TASW) were studied. With increasing expectations to perform job-related tasks at home, TASW was hypothesized to positively moderate the relationship between work-home interference and emotional exhaustion. Further, it was expected that the recovery experiences of psychological detachment and relaxation help employees recover and unwind from work and negatively moderate the relationship between work-home interference and emotional exhaustion. Results of SEM analyses largely supported the hypotheses. These findings increase our insight into the processes leading to increased emotional exhaustion and suggest that employees can protect themselves from it by keeping a tab on technology-assisted supplemental work and facilitating recovery experiences.
Keywords: emotional exhaustion, India, microfinance institutions (MFIs), work-home interference
Procedia PDF Downloads 228
2958 Mobile Wireless Investigation Platform
Authors: Dimitar Karastoyanov, Todor Penchev
Abstract:
The paper presents research on a kind of autonomous mobile robot intended for work and adaptive perception in unknown and unstructured environments. The objective is robots dedicated to multi-sensory environment perception and exploration, such as taking measurements and samples, discovering and marking objects, and interacting with the environment: transportation, and carrying equipment and objects in and out. On that basis, a classification of the different types of mobile robots is made according to the mode of locomotion (wheel- or chain-driven, walking, etc.), the drive mechanisms used, the kinds of sensors, the end effectors, the area of application, etc. A modular system for the mechanical construction of the mobile robots is proposed. A special PLC based on the ATmega128 processor is developed for robot control. Electronic modules for wireless communication based on a Jennic processor, as well as the specific software, are developed. The methods, means, and algorithms for adaptive environment behaviour and task realization are examined. The methods for group control of mobile robots and for detecting and handling suspicious objects are discussed as well.
Keywords: mobile robots, wireless communications, environment investigations, group control, suspicious objects
Procedia PDF Downloads 356
2957 Lateral Capacity of Helical-Pile Groups Subjected to Bearing Combined Loads
Authors: Hesham Hamdy Abdelmohsen, Ahmed Shawky Abdul Azizb, Mona Fawzy Aldaghma
Abstract:
Helical piles have earned considerable attention as an effective deep-foundation alternative due to their rapid installation process and their dual purpose in compression and tension. These piles are commonly used as foundations for structures like solar panels, wind turbines, offshore platforms, and some kinds of retaining walls. Such structures usually transfer different combinations of loads to their helical-pile foundations in the form of axial and lateral loads. Extensive research has been conducted to investigate and understand the behavior of these piles under either axial or lateral loads; however, the effects of loading patterns that act on helical piles as combinations of axial compression and lateral loads still require further research. This paper presents the results of an experimental (laboratory tests) and numerical (PLAXIS-3D) study performed on vertical helical-pile groups under combined loads: axial compression (bearing loads) acting successively with lateral (horizontal) loads. The study aims to clarify the effects of key factors, such as helix location and the direction of the lateral load, on the lateral capacity of helical-pile groups and, consequently, on group efficiency. Besides the variation of helix location and lateral-load direction, three patterns of successive bearing combined loads were considered, in which the axial vertical compression load was either zero, V1, or V2, while the lateral horizontal loads were varied under each vertical compression load. The study concluded that the lateral capacity of a helical-pile group is significantly affected by the helix location along the pile shaft. The optimal lateral performance is achieved with helices at a depth ratio of H/L = 0.4. Furthermore, groups with a rectangular plan distribution exhibit greater lateral capacity when subjected to a lateral horizontal load in the direction of their long axis.
Additionally, the research emphasizes that the presence of vertical compression loading can enhance the lateral capacity of the group. This enhancement depends on the value of the vertical compression load, the lateral-load direction, and the helix location, which highlights the complex interaction of these factors in the efficiency of helical-pile groups.
Keywords: helical piles, experimental, numerical, lateral loading, group efficiency
Procedia PDF Downloads 32
2956 Financial Literacy as an Important Skill for Household Financial Decision Making
Authors: Rimac Smiljanic Ana, Pepur Sandra, Bulog Ivana
Abstract:
Financial decision-making in the household is not simple; it demands that the decision-maker have proper knowledge and skills. High uncertainty, risk, and stress usually surround household financial decision-making, since it is extremely important and critical for household wealth accumulation and for the well-being of all household members. Generally, skilful people tend to have higher confidence in the tasks they perform, and they achieve better results. Therefore, in the household context, the possession of certain skills by those who make financial decisions for the household is of particular importance. This paper addresses financial literacy as an important skill for household decision-making. Apart from financial literacy, the paper also considers other factors, such as employment, education, and age, as significant for household financial decision-making. The analysis is based on quantitative individual-level survey data. The data collection was conducted during January and February 2021 in Croatia through an online survey; to reach a wide variety of participants, the snowball sampling method was used. The analysis revealed interesting and somewhat puzzling results, which point to the importance of financial literacy skills for household decision-making.
Keywords: skill, financial literacy, decision-making, household financial decision-making
Procedia PDF Downloads 97
2955 School Autonomy in the United Kingdom: A Correlational Study Applied to English Principals
Authors: Pablo Javier Ortega-Rodriguez, Francisco Jose Pozuelos-Estrada
Abstract:
Recently, there has been renewed interest in school autonomy in the United Kingdom and its impact on students' outcomes. English principals have a pivotal role in decision-making. The aim of this paper is to explore the correlation between the type of school (public or private) and the responsibilities of the English principals who participated in PISA 2015. The final sample consisted of 419 principals. Descriptive data (percentages and means) were generated for the variables related to professional autonomy. Pearson's chi-square test was used to determine whether there is an association between the type of school and principals' responsibility for relevant tasks. Statistical analysis was performed using SPSS software, version 22. Findings suggest a significant correlation between the type of school and principals' responsibility for firing teachers and formulating the school budget. The study also confirms that the type of school is not associated with principals' responsibility for choosing which textbooks are used at school. The present study establishes a quantitative framework for defining four models of professional autonomy, along with some proposals to improve school autonomy in the United Kingdom.
Keywords: decision making, principals, professional autonomy, school autonomy
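The association test named above can be reproduced on a 2x2 contingency table. The counts below are invented, not PISA data; at 1 degree of freedom, the 5% critical value of the chi-square statistic is 3.841.

```python
def chi_square_2x2(table):
    # table = [[a, b], [c, d]]: rows = school type (public/private),
    # columns = principal has the responsibility (yes/no).
    (a, b), (c, d) = table
    n = a + b + c + d
    row, col = (a + b, c + d), (a + c, b + d)
    # Expected counts under independence: row total * column total / n.
    expected = [[row[i] * col[j] / n for j in range(2)] for i in range(2)]
    return sum((table[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

# Invented counts: responsibility for the budget looks type-dependent ...
print(chi_square_2x2([[80, 20], [30, 70]]))   # ~50.5, well above 3.841
# ... while textbook choice looks independent of school type.
print(chi_square_2x2([[50, 50], [50, 50]]))   # 0.0
```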
Procedia PDF Downloads 795
2954 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods
Authors: Dario Milani, Guido Morgenthal
Abstract:
Fluid-dynamic computation of wind-induced forces on bluff bodies, e.g., light flexible civil structures or airplane wings approaching the ground at high incidence, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the use of small-scale devices, such as guide vanes in bridge design, to control these effects. The focus of this paper is on the numerical simulation of the bluff-body problem involving multiscale phenomena induced by small-scale devices. One solution method for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected with the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion; compact discretization, as the vorticity is strongly localized; implicit treatment of the free-space boundary conditions typical for this class of FSI problems; and a natural representation of the vortex creation process inherent in bluff-body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails, or fairings.
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization may become prohibitively expensive even for moderate numbers of particles. This cost can be reduced either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution without substantially increasing the global computational cost by computing a correction of the particle-particle interaction in some regions of interest. In this paper, different strategies are presented to extend the conventional VPM so as to reduce the computational cost while resolving the required details of the flow. The methods include temporal sub-stepping, to increase the accuracy of particle convection in certain regions, as well as dynamically re-discretizing the particle map to control both the global and the local number of particles. Finally, these methods are applied to a test case, and the improvements in efficiency and accuracy of the proposed extensions are presented, along with their relevant applications, highlighting the benefits in terms of accuracy and computational cost of combining these methods.
Keywords: adaptation, fluid dynamics, remeshing, substepping, vortex particle method
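The O(Np²) cost discussed above comes from the direct particle-particle summation. A minimal 2D point-vortex version of that kernel, illustrative only and omitting the diffusion and boundary treatment of the full method, looks like:

```python
import math

def induced_velocities(particles):
    """Direct O(N^2) summation of the 2D point-vortex (Biot-Savart) kernel.
    particles: list of (x, y, gamma) with gamma the circulation strength."""
    vels = []
    for i, (xi, yi, _) in enumerate(particles):
        u = v = 0.0
        for j, (xj, yj, gj) in enumerate(particles):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            r2 = dx * dx + dy * dy
            # Induced velocity is perpendicular to the separation vector.
            u -= gj * dy / (2.0 * math.pi * r2)
            v += gj * dx / (2.0 * math.pi * r2)
        vels.append((u, v))
    return vels

# Two equal-strength vortices one unit apart co-rotate with speed 1/(2*pi).
print(induced_velocities([(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]))
```

The double loop makes the quadratic cost explicit; the paper's sub-stepping and re-discretization strategies aim at cutting exactly this work.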
Procedia PDF Downloads 262
2953 Teacher in Character Strengthening for Early Childhood
Authors: Siti Aisyah
Abstract:
This article discusses character education, a very basic education for early childhood aimed at instilling moral values to prevent unacceptable behaviours. Children can absorb good character when they are in a supportive environment; for that, schools should understand and implement character education in the learning process. In the school environment, good character education and habituation can be developed, and all parties in the school should be involved, especially the teachers. This research discusses how teachers instil the values of responsibility, honesty, discipline, love and compassion, caring, courage, independence, hard work, mutual cooperation, courtesy, justice, self-control, and tolerance. The respondents of this study were teachers, involving 200 children from all over Indonesia. The methodology used was a survey method, with the result that more than 80% of teachers were able to exhibit the expected behaviours. The survey was conducted based on observations, types of tasks, and assessed performance. The character values can be optimally taught in the school environment based on the teacher's ability to implement them. Through character education in schools, children can also develop a positive outlook on life.
Keywords: teachers, character strengthening, early childhood, behavior
Procedia PDF Downloads 91
2952 Computer-Aided Exudate Diagnosis for the Screening of Diabetic Retinopathy
Authors: Shu-Min Tsao, Chung-Ming Lo, Shao-Chun Chen
Abstract:
Most diabetes patients tend to suffer from the retinal complications of the disease; therefore, early detection and early treatment are important. In clinical examinations, the color fundus image is the most convenient and available examination method. According to the exudates that appear in the retinal image, the status of the retina can be confirmed. However, routine screening for diabetic retinopathy by color fundus images is a time-consuming task for physicians. This study thus proposed a computer-aided exudate diagnosis for the screening of diabetic retinopathy. After removing vessels and the optic disc in the retinal image, six quantitative features, including region number, region area, and gray-scale values, were extracted from the remaining regions for classification. As a result, all six features were evaluated to be statistically significant (p-value < 0.001). The accuracy of classifying the retinal images into normal and diabetic retinopathy reached 82%. Based on this system, the clinical workload could be reduced, and the examination procedure could be made more efficient.
Keywords: computer-aided diagnosis, diabetic retinopathy, exudate, image processing
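As a sketch of the feature-extraction and screening steps (the abstract does not give its exact feature definitions or classifier, so the names and thresholds below are invented), candidate-region features can feed a simple rule:

```python
def exudate_features(gray, mask):
    # gray: 2D list of pixel intensities; mask: 2D 0/1 map of candidate
    # exudate pixels left after vessel and optic-disc removal.
    vals = [gray[r][c]
            for r in range(len(gray))
            for c in range(len(gray[0])) if mask[r][c]]
    return {"region_area": len(vals),
            "mean_gray": sum(vals) / len(vals) if vals else 0.0}

def screen(features, min_area=4, min_gray=180):
    # Invented thresholds: flag an image for diabetic-retinopathy review
    # when candidate regions are both large and bright enough.
    return features["region_area"] >= min_area and features["mean_gray"] >= min_gray
```

In the study itself, a statistical classifier over six such features achieved 82% accuracy; this rule merely illustrates the pipeline shape.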
Procedia PDF Downloads 270
2951 Evaluation of a Surrogate Based Method for Global Optimization
Authors: David Lindström
Abstract:
We evaluate the performance of a numerical method for the global optimization of expensive functions. The method uses a response surface to guide the search for the global optimum. This metamodel can be based on radial basis functions, kriging, or a combination of different models. We discuss how to set the cycling parameters of the optimization method to balance local and global search. We also discuss the potential problem of Runge oscillations in the response surface.
Keywords: expensive function, infill sampling criterion, kriging, global optimization, response surface, Runge phenomenon
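A minimal sketch of such a response surface, here a one-dimensional Gaussian radial-basis interpolant built from pure Python. The sample points and shape parameter are invented; a real surrogate optimizer would add kriging options, cycling parameters, and infill sampling on top of this.

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting, for small dense systems.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_surrogate(xs, ys, eps=1.0):
    # Fit weights so the surrogate reproduces every expensive sample exactly.
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

# Four (invented) evaluations of an expensive function; the cheap
# surrogate interpolates them and can then be searched densely.
xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 2.0, 5.0]
f = rbf_surrogate(xs, ys)
```

With too-small shape parameters or near-equispaced points, such interpolants can oscillate between samples, which is the Runge-type behaviour the abstract warns about.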
Procedia PDF Downloads 578
2950 A Tool for Assessing Performance and Structural Quality of Business Process
Authors: Mariem Kchaou, Wiem Khlif, Faiez Gargouri
Abstract:
Modeling business processes is an essential task when evaluating, improving, or documenting existing business processes. To be effective in such tasks, a business process model (BPM) must have high structural quality and high performance. Evaluating the performance of a business process model is a necessary step to reduce time and cost, while assessing the structural quality aims to improve the understandability and modifiability of the BPMN model. To achieve these objectives, a set of structural and performance measures has been proposed. Given the diversity of measures, we propose a framework that integrates both structural and performance aspects to classify them. Our measure classification is based on business-process-model perspectives (e.g., informational, functional, organizational, behavioral, and temporal) and the elements (activity, event, actor, etc.) involved in computing the measures. We then implement this framework in a tool for assessing the structural quality and performance of a business process. The tool helps designers select an appropriate subset of measures associated with the corresponding perspective, and calculate and interpret their values in order to improve the structural quality and performance of the model.
Keywords: performance, structural quality, perspectives, tool, classification framework, measures
Procedia PDF Downloads 157