Search results for: pragmatic factors; Wason selection task
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4277

1877 An Exploratory Study in Nursing Education: Factors Influencing Nursing Students’ Acceptance of Mobile Learning

Authors: R. Abdulrahman, A. Eardley, A. Soliman

Abstract:

The proliferation in the development of mobile learning (m-learning) has played a vital role in the rapidly growing electronic learning market. This relatively new technology can help to encourage the development of learning and to aid knowledge transfer in a number of areas by familiarizing students with innovative information and communications technologies (ICT). M-learning plays a substantial role in the deployment of learning methods for nursing students by using the Internet and portable devices to access learning resources ‘anytime and anywhere’. However, acceptance of m-learning by students is critical to the successful use of m-learning systems. Thus, there is a need to study the factors that influence students’ intention to use m-learning. This paper addresses this issue. It outlines the outcomes of a study that evaluates the unified theory of acceptance and use of technology (UTAUT) model as applied to the subject of user acceptance in relation to m-learning activity in nurse education. The model integrates the significant components of eight prominent user acceptance models and therefore offers a standard measure with core determinants of user behavioural intention. The research model extends the UTAUT in the context of m-learning acceptance by adding individual innovativeness (II) and quality of service (QoS) to the original structure of UTAUT. The paper goes on to add the factors of previous experience (of using mobile devices in similar applications) and the nursing students’ readiness (to use the technology) as influences on their behavioural intention to use m-learning. The study uses convenience sampling, with student volunteers as participants, to collect numerical data. A quantitative method of data collection was selected, involving an online survey with a questionnaire of 33 questions measuring the six constructs on a 5-point Likert scale. A total of 42 respondents participated, all from the Nursing Institute at the Armed Forces Hospital in Saudi Arabia. The gathered data were then tested against the research model using structural equation modelling (SEM), including confirmatory factor analysis (CFA). The results of the CFA show that the UTAUT model has the ability to predict student behavioural intention and to adapt m-learning activity to specific learning activities. It also demonstrates satisfactory, dependable and valid scales for the model constructs. This suggests further analysis to confirm the model as a valuable instrument for evaluating the user acceptance of m-learning activity.
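
For illustration, a CFA of the kind described can be run with the semopy package (the abstract does not state which software the authors used). The sketch below is a minimal stand-in, not the authors' code: the construct and item names and the CSV of Likert responses are assumptions, and the measurement model is truncated to two constructs.

```python
# Minimal CFA/SEM sketch with semopy (assumed measurement model, not the authors' code).
import pandas as pd
from semopy import Model, calc_stats

# Likert-scale survey responses, one column per questionnaire item (hypothetical file).
data = pd.read_csv("utaut_survey.csv")

# Lavaan-style description: each latent construct (=~) loads on its items;
# only two of the six constructs are sketched here.
desc = """
PE =~ pe1 + pe2 + pe3
BI =~ bi1 + bi2 + bi3
BI ~ PE
"""

model = Model(desc)
model.fit(data)

print(model.inspect())    # factor loadings and structural path estimates
print(calc_stats(model))  # fit indices (CFI, RMSEA, ...) to judge scale validity
```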

Keywords: Mobile learning, nursing institute, unified theory of acceptance and use of technology model.

1876 An Approach on Integrating Cooperative Education Experience into the Engineering Curriculum

Authors: Robin Lok-Wang

Abstract:

The center/unit for industry engagement and collaboration, as well as internship, plays a significant role at a university. In general, such a center serves as the official interface between industry and the school or department to cultivate students’ early exposure to professional experience. The missions of the center are not limited to providing a communication channel and collaborative platform for industry and the university; it also assists students in building their career paths early, while still at university. In recent years, cooperative education experience (commonly known as co-op) has been strongly advocated to help students make the school-to-work transition. The nature of a co-op program is consistent with internships/final-year design projects, but it is more industry-oriented, with academic support from faculty at the university. The purpose of this paper is to describe an approach by which cooperative education experience can be integrated into the engineering curriculum, providing a mutual understanding and exchange of ideas between the university and industry. A suggested format is described in terms of timeline, duration, selection of candidates, and the expectations of students and companies for the co-op program. Feedback from employers/industries also shows that a longer-term co-op program suits students better than a short-term internship. To this end, the paper provides insight into collaboration and/or partnership between the university and industry in preparing professional, work-ready graduates.

Keywords: Cooperative education, internship, industry collaboration, engineering curriculum.

1875 Night-Time Traffic Light Detection Based On SVM with Geometric Moment Features

Authors: Hyun-Koo Kim, Young-Nam Shin, Sa-gong Kuk, Ju H. Park, Ho-Youl Jung

Abstract:

This paper presents an effective method for detecting traffic lights at night. First, candidate blobs of traffic lights are extracted from the RGB colour image. The input image is represented in the dominant-colour domain using the colour transform proposed by Ruta, and red and green dominant regions are selected as candidates. After candidate blob selection, a shape filter is applied for noise reduction using blob information such as length, area, area of the bounding box, etc. A multi-class classifier based on the Support Vector Machine (SVM) is then applied to the candidates. Three kinds of features are used: basic features such as blob width, height, centre coordinates and area; brightness-based stochastic features; and, in particular, geometric moment values between the candidate region and adjacent regions, which are proposed and used to improve detection performance. The proposed system is implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested on urban and rural road videos. Through these tests, we show that the proposed method using PF, BMF and GMF reaches a detection rate of up to 93% with an average computation time of 15 ms/frame.
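
For illustration, the candidate-extraction and classification pipeline can be sketched as below. This is a simplified stand-in, not the paper's implementation: HSV thresholding replaces Ruta's colour transform, Hu moments stand in for the paper's geometric moment features, and the thresholds are illustrative.

```python
# Simplified night-time traffic-light candidate pipeline (illustrative, not the paper's code).
import cv2
import numpy as np
from sklearn.svm import SVC

def candidate_blobs(bgr):
    """Extract red/green dominant blobs; HSV thresholding stands in for Ruta's transform."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    red = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    green = cv2.inRange(hsv, (45, 120, 120), (90, 255, 255))
    mask = cv2.bitwise_or(red, green)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    blobs = []
    for i in range(1, n):  # label 0 is background
        x, y, w, h, area = stats[i]
        if 20 < area < 2000 and 0.5 < w / h < 2.0:  # shape filter (illustrative bounds)
            blobs.append((x, y, w, h))
    return blobs

def blob_features(gray, box):
    """Basic geometric features plus Hu moments for one candidate region."""
    x, y, w, h = box
    patch = gray[y:y + h, x:x + w]
    hu = cv2.HuMoments(cv2.moments(patch)).flatten()  # geometric moment descriptors
    return np.concatenate(([w, h, w * h, patch.mean(), patch.std()], hu))

# clf = SVC(kernel="rbf")  # multi-class SVM trained on labelled candidate features
# clf.fit(train_features, train_labels)
```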

Keywords: Night-time traffic light detection, multi-class classification, driving assistance system.

1874 Locating Cultural Centers in Shiraz (Iran) Applying Geographic Information System (GIS)

Authors: R. Mokhtari Malekabadi, S. Ghaed Rahmati, S. Aram

Abstract:

Optimal cultural site selection is one of the ways to promote citizenship culture while ensuring the health and leisure of city residents. This study examines the social and cultural needs of the community and optimal cultural site allocation and, after identifying the problems and shortcomings, provides a suitable model for finding the best locations for these centers, where they have the greatest impact on the promotion of citizenship culture. Non-scientific methods cause irreversible impacts on the urban environment and citizens, whereas modern, efficient methods can reduce these impacts; one such method is the use of geographical information systems (GIS). In this study, the Analytical Hierarchy Process (AHP) was used to locate optimal cultural sites. AHP rests on three principles: decomposition, comparative analysis, and the combining of preferences. The objectives of this research include providing optimal settings for leisure time and cultural activities for Shiraz residents, and proposing the construction of cultural sites in different areas of the city. The results of this study show the correct positioning of cultural sites based on the social needs of citizens. Thus, considering population parameters and access radii, a GIS and AHP model for locating cultural centers can meet the social needs of citizens.
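
For illustration, the comparative-analysis step of AHP reduces to computing the principal eigenvector of a pairwise comparison matrix and checking its consistency. The sketch below is a generic AHP weight calculation with a hypothetical 3×3 criteria matrix (e.g. population density, access radius, land availability); it is not the study's actual matrix.

```python
# Generic AHP weight computation with consistency check (hypothetical comparison matrix).
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale (illustrative criteria only).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue lambda_max
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalised criterion weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)     # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
cr = ci / ri                             # consistency ratio; accept if < 0.1

print(weights, cr)
```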

Keywords: Analytical Hierarchy Process (AHP), geographical information systems (GIS), Cultural site, locating, Shiraz.

1873 Analysis of Driver Point of Regard Determinations with Eye-Gesture Templates Using Receiver Operating Characteristic

Authors: Siti Nor Hafizah binti Mohd Zaid, Mohamed Abdel-Maguid, Abdel-Hamid Soliman

Abstract:

An Advanced Driver Assistance System (ADAS) is a computer system on board a vehicle which is used to reduce the risk of vehicular accidents by monitoring factors relating to the driver, vehicle and environment and taking some action when a risk is identified. Much work has been done on assessing vehicle and environmental state, but there is still comparatively little published work that tackles the problem of driver state. Visual attention is one such driver state. In fact, some researchers claim that lack of attention is the main cause of accidents, as factors such as fatigue, alcohol or drug use, distraction and speeding all impair the driver's capacity to pay attention to the vehicle and road conditions [1]. This seems to imply that the main cause of accidents is inappropriate driver behaviour in cases where the driver is not giving full attention while driving. The work presented in this paper proposes an ADAS which uses an image-based template matching algorithm to detect whether a driver is failing to observe particular windscreen cells. This is achieved by dividing the windscreen into 24 uniform cells (4 rows of 6 columns) and matching video images of the driver's left eye with eye-gesture templates drawn from images of the driver looking at the centre of each windscreen cell. The main contribution of this paper is to assess the accuracy of this approach using Receiver Operating Characteristic analysis. The results of our evaluation give a sensitivity value of 84.3% and a specificity value of 85.0% for the eye-gesture template approach, indicating that it may be useful for driver point of regard determinations.
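
For illustration, sensitivity and specificity figures of the kind reported above come directly from counting template-match decisions against ground truth. The sketch below shows a generic ROC computation with scikit-learn; the match scores and labels are placeholders, not the study's data.

```python
# Generic ROC / sensitivity-specificity computation for a binary detector.
import numpy as np
from sklearn.metrics import roc_curve, auc

# y_true: 1 if the driver was actually looking at the cell, 0 otherwise.
# scores: template-matching similarity per observation (placeholder values).
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.35, 0.7, 0.4, 0.2, 0.6, 0.55])

fpr, tpr, thresholds = roc_curve(y_true, scores)
print("AUC:", auc(fpr, tpr))

# Operating point at a chosen decision threshold:
t = 0.5
pred = scores >= t
tp = np.sum((pred == 1) & (y_true == 1))
tn = np.sum((pred == 0) & (y_true == 0))
fp = np.sum((pred == 1) & (y_true == 0))
fn = np.sum((pred == 0) & (y_true == 1))
print("sensitivity:", tp / (tp + fn))  # cf. the paper's 84.3%
print("specificity:", tn / (tn + fp))  # cf. the paper's 85.0%
```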

Keywords: Advanced Driver Assistance Systems, Eye-Tracking, Hazard Detection.

1872 Embedded Semi-Fragile Signature Based Scheme for Ownership Identification and Color Image Authentication with Recovery

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a novel scheme is proposed for ownership identification and colour image authentication that deploys cryptography and digital watermarking. The colour image is first transformed from RGB to the YST colour space, designed exclusively for watermarking. Following the colour space transformation, each channel is divided into 4×4 non-overlapping blocks, and the central 2×2 sub-block of each is selected. Depending upon the channel, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication and recovery information. The size and position of the sub-block are important for correct localization, enhanced security and fast computation. As YS ⊥ T (the T channel is independent of Y and S), it is suitable to embed the recovery information apart from the ownership and authentication information; therefore, each 4×4 block of the T channel, along with the ownership information, is passed to SHA-160 to compute a content-based hash that is unique and invulnerable to the birthday attack or hash collision, instead of using MD5, which may produce the condition H(m) = H(m'). For recovery, the intensity mean of each 4×4 block of each channel is computed and encoded in up to eight bits. For watermark embedding, key-based mapping of blocks is performed using the 2D Torus Automorphism. The scheme is oblivious, generates highly imperceptible images with correct localization of tampering within reasonable time, and has the ability to recover the original work with probability near one.
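
For illustration, the per-block hash and LSB-embedding steps can be sketched as below. The sketch uses SHA-1 (a 160-bit digest, matching the "SHA-160" above) and zeroes two LSBs of the central sub-block; the block sizes follow the abstract, but channel handling, the torus mapping and the exact bit packing are omitted or simplified assumptions.

```python
# Sketch of the per-block hash and LSB embedding (simplified; single channel only).
import hashlib
import numpy as np

def block_hash(block: np.ndarray, owner_id: bytes) -> bytes:
    """160-bit content hash of a 4x4 block plus ownership information (SHA-1)."""
    return hashlib.sha1(block.tobytes() + owner_id).digest()

def embed_block(block: np.ndarray, payload_bits: list) -> np.ndarray:
    """Zero 2 LSBs of the central 2x2 sub-block, then write 8 payload bits there."""
    out = block.copy()
    sub = out[1:3, 1:3]                     # central 2x2 of the 4x4 block (a view)
    sub &= 0b11111100                       # clear two LSBs -> 8 writable bit slots
    flat = sub.flatten()
    for i, pair in enumerate(zip(payload_bits[0::2], payload_bits[1::2])):
        flat[i] |= (pair[0] << 1) | pair[1]
    out[1:3, 1:3] = flat.reshape(2, 2)
    return out

def recovery_code(block: np.ndarray) -> int:
    """Intensity mean of the block encoded in eight bits, as used for recovery."""
    return int(round(block.mean())) & 0xFF

block = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
digest = block_hash(block, b"owner-001")
watermarked = embed_block(block, [1, 0, 1, 1, 0, 0, 1, 0])
```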

Keywords: Hash Collision, LSB, MD5, PSNR, SHA160

1871 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping

Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting

Abstract:

Since vision system applications for autonomous purposes are intensely required in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in an image system to recognize industrial objects, integrated with a 7A6 Series Manipulator for automatic object gripping. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system, and a depth camera (Intel RealSense SR300) is employed to extract images for object recognition and coordinate derivation. The YOLOv2 scheme is adopted as the convolutional neural network (CNN) structure for object classification and centre point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. The specified object location and orientation information are then sent to the robotic controller. Finally, a six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 successfully detects the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
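
For illustration, the contour-based orientation step can be sketched with OpenCV. The detection itself (YOLOv2) is assumed to have produced a bounding box already; the code below only shows one plausible way to derive a grasp angle from the object contour inside that box, not the authors' implementation.

```python
# Deriving an object's orientation angle from its contour (illustrative OpenCV sketch).
import cv2
import numpy as np

def grasp_pose(gray, box):
    """Return (cx, cy, angle_deg) of the largest contour inside a detector box."""
    x, y, w, h = box                          # bounding box from YOLOv2 (assumed given)
    roi = gray[y:y + h, x:x + w]
    _, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)    # largest blob = the object
    (cx, cy), (rw, rh), angle = cv2.minAreaRect(c)
    if rw < rh:                               # normalise the angle to the long axis
        angle += 90.0
    return x + cx, y + cy, angle              # centre in full-image coordinates
```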

Keywords: Deep learning, image processing, convolution neural network, YOLOv2, 7A6 series manipulator.

1870 The Effect of Sport Specific Exercises on the Visual Skills of Rugby Players

Authors: P.J. Du Toit, P. Janse Van Vuuren, S. Le Roux, E. Henning, M. Kleynhans, H.C. Terblanche, D. Crafford, C. Grobbelaar, P.S. Wood, C.C. Grant, L. Fletcher

Abstract:

Introduction: Visual performance is an important factor in sporting excellence. Visual involvement in a sport varies according to the environmental demands associated with that sport, and these demands are matched by a task-specific motor response. The purpose of this study was to determine whether sport-specific exercises improve the visual performance of male rugby players, in order to achieve maximal results on the sports field. Materials & Methods: Twenty-six adult male rugby players, aged 16-22, were chosen as subjects. In order to evaluate the effect of sport-specific exercises on visual skills, a pre-test/post-test experimental group design was adopted. Results: Significant differences (p≤0.05) were seen in the focusing, tracking, vergence, sequencing, eye-hand coordination and visualisation components. Discussion & Conclusions: Sport-specific exercises improved visual skills in rugby players, which may provide them with an advantage over their opponents. This study suggests that these training programmes, and participation in regular online EyeDrills sports vision exercises (www.eyedrills.co.za) aimed at improving the athlete's visual coordination, concentration, focus, hand-eye coordination, anticipation and motor response, should be incorporated into the rugby players' exercise regime.

Keywords: Rugby players, sport specific exercises, visual skills.

1869 Automatic Sleep Stage Scoring with Wavelet Packets Based on Single EEG Recording

Authors: Luay A. Fraiwan, Natheer Y. Khaswaneh, Khaldon Y. Lweesy

Abstract:

Sleep stage scoring is the process of classifying the stage of sleep in which the subject is. Sleep is classified into two states based on a constellation of physiological parameters: non-rapid eye movement (NREM) and rapid eye movement (REM). NREM sleep is further classified into four stages (1-4), and these states, plus wakefulness, are distinguished from each other based on brain activity. In this work, a classification method for automated sleep stage scoring from a single EEG recording, using wavelet packet decomposition, was implemented. Thirty-two polysomnographic recordings from the MIT-BIH database were used for training and validation of the proposed method. A single EEG recording was extracted and smoothed using a Savitzky-Golay filter, and wavelet packet decomposition up to the fourth level, based on the 20th-order Daubechies filter, was used to extract features from the EEG signal. A feature vector of 54 features was formed and reduced to 25 features using the gain ratio method, then fed into a classifier of regression trees. The regression trees were trained using 67% of the available records, selected by cross-validation, and the remaining records were used for testing the classifier. The overall correct rate of the proposed method was found to be around 75%, which is acceptable compared to techniques in the literature.
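
For illustration, the smoothing and wavelet-packet front end can be sketched with SciPy and PyWavelets. The smoothing window, the use of sub-band energies as features, and the classifier choice below are illustrative assumptions; the paper's full 54-feature vector and its gain-ratio selection are not reproduced.

```python
# Sketch of the EEG feature-extraction front end (illustrative parameters).
import numpy as np
import pywt
from scipy.signal import savgol_filter
from sklearn.tree import DecisionTreeClassifier

def epoch_features(eeg_epoch):
    """Savitzky-Golay smoothing, then level-4 db20 wavelet-packet sub-band energies."""
    smoothed = savgol_filter(eeg_epoch, window_length=11, polyorder=3)
    wp = pywt.WaveletPacket(data=smoothed, wavelet="db20", maxlevel=4)
    nodes = wp.get_level(4, order="natural")      # 16 terminal sub-bands
    return np.array([np.sum(np.square(n.data)) for n in nodes])

# X = np.stack([epoch_features(e) for e in epochs])    # epochs: EEG segments
# clf = DecisionTreeClassifier().fit(X, stage_labels)  # stands in for regression trees
```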

Keywords: Feature selection, regression trees, sleep stage scoring, wavelet packets.

1868 Neural Network Implementation Using FPGA: Issues and Application

Authors: A. Muthuramalingam, S. Himavathi, E. Srinivasan

Abstract:

Hardware realization of a Neural Network (NN) depends to a large extent on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are suitable for hardware implementation of neural networks, but FPGA realization of ANNs with a large number of neurons is still a challenging task. This paper discusses the issues involved in the implementation of a multi-input neuron with linear/nonlinear excitation functions using an FPGA. An implementation method with a resource/speed tradeoff is proposed to handle signed decimal numbers, and the VHDL coding developed is tested using a Xilinx XCV50hq240 chip. To improve the speed of operation, a lookup table (LUT) method is used, and the problems involved in using an LUT for a nonlinear function are discussed. The percentage saving in resources and the improvement in speed with an LUT for a neuron are reported. An attempt is also made to derive a generalized formula for a multi-input neuron that facilitates approximate estimation of the total resource requirement and achievable speed for a given multilayer neural network. This allows the designer to choose the FPGA capacity for a given application. Using the proposed implementation method, a neural network based application, namely a space vector modulator for a vector-controlled drive, is presented.
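
For illustration, the LUT approach replaces costly on-chip evaluation of a nonlinear activation with a precomputed table addressed by the quantized neuron input. The Python sketch below generates such a fixed-point sigmoid table; the 8-bit address width, 7-fractional-bit output format and input range are illustrative assumptions, not the paper's parameters.

```python
# Generating a fixed-point sigmoid lookup table for an FPGA (illustrative sizing).
import numpy as np

ADDR_BITS = 8             # LUT depth 2^8 = 256 entries (assumed input width)
FRAC_BITS = 7             # output quantized to 7 fractional bits
X_MIN, X_MAX = -8.0, 8.0  # input range covered by the table

x = np.linspace(X_MIN, X_MAX, 2 ** ADDR_BITS)
sigmoid = 1.0 / (1.0 + np.exp(-x))
lut = np.round(sigmoid * (2 ** FRAC_BITS)).astype(np.uint8)  # table as ROM contents

def activate(acc: float) -> int:
    """Quantize the MAC output to a table address and return the stored value."""
    idx = int((acc - X_MIN) / (X_MAX - X_MIN) * (2 ** ADDR_BITS - 1))
    return int(lut[np.clip(idx, 0, 2 ** ADDR_BITS - 1)])
```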

Keywords: FPGA implementation, multi-input neuron, neural network, NN-based space vector modulator.

1867 Combined Feature Based Hyperspectral Image Classification Technique Using Support Vector Machines

Authors: Mrs. K. Kavitha, S. Arivazhagan

Abstract:

A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. Classification accuracy can be improved only if both the feature extraction and the classifier selection are proper. As the classes in hyperspectral images are assumed to have different textures, textural classification is pursued: run length feature extraction is employed along with principal components and independent components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for the first forty bands, and from the GLRLMs the run length features for individual pixels are derived. Principal components are calculated for the next forty bands, and independent components for the remaining forty. As principal and independent components have the ability to represent the textural content of pixels, they are treated as features. The combination of run length features, principal components and independent components forms the combined features used for classification. An SVM with a binary hierarchical tree is used to classify the hyperspectral image, and the results are validated against ground truth and accuracies calculated.
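
For illustration, the spectral part of the feature pipeline maps directly onto scikit-learn. The sketch below combines PCA and ICA features and trains a multi-class SVM; the GLRLM run-length features are omitted (they have no standard scikit-learn implementation), and the band split and component counts are illustrative assumptions.

```python
# Spectral feature extraction + SVM for hyperspectral pixels (GLRLM step omitted).
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC

# X: (n_pixels, 120) reflectance values for the selected bands; y: class labels.
def combined_features(X):
    pca_part = PCA(n_components=10).fit_transform(X[:, 40:80])    # middle 40 bands
    ica_part = FastICA(n_components=10).fit_transform(X[:, 80:])  # last 40 bands
    return np.hstack([pca_part, ica_part])  # run-length features would be appended here

# clf = SVC(kernel="rbf", decision_function_shape="ovr")  # one-vs-rest multi-class SVM
# clf.fit(combined_features(X_train), y_train)
```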

Keywords: Multi-class, Run Length features, PCA, ICA, classification and Support Vector Machines.

1866 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion

Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan

Abstract:

In this paper, a review of different mathematical models that can be used as prediction tools to assess the time to crack reinforced concrete (RC) due to corrosion is presented, leading to an experimental study to validate a selected prediction model. Most of these mathematical models depend upon the mechanical behaviours, chemical behaviours, electrochemical behaviours or geometric aspects of the RC members during the corrosion process. The experimental programme is designed to verify the accuracy of a model selected through a rigorous literature study. The programme covers both one-dimensional chloride diffusion, using square RC slab elements of 500 mm × 500 mm, and two-dimensional chloride diffusion, using RC column elements of 225 mm × 225 mm × 500 mm. Each set consists of three water-to-cement ratios (w/c) of 0.4, 0.5 and 0.6, and two cover depths of 25 mm and 50 mm; 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All the samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, it is clear that the selected mathematical model includes mechanical properties, chemical and electrochemical properties, the nature of the corrosion (accelerated or natural), and the amount of porous area that rust products can occupy before exerting expansive pressure on the surrounding concrete. The experimental results show that the selected model predicted the time to crack within ±20% for one-dimensional and ±10% for two-dimensional chloride diffusion, compared to the experimental output. Half-cell potential readings are also used to assess the corrosion probability, and the experimental results show that mass loss is proportional to the negative half-cell potential readings obtained. Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time to corrode the reinforcement in the concrete due to chloride diffusion, considering w/c, bar diameter and cover depth. The analysis, performed with the Minitab statistical software, showed that cover depth has a more significant effect on the time to crack the concrete under chloride-induced corrosion than the other factors considered. Thus, time predictions can be made through the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to assess the durability of RC structures that are vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability in terms of chloride diffusion.
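
For illustration, the factor-significance analysis (done in Minitab above) can be reproduced with a standard main-effects ANOVA. The sketch below uses statsmodels on a hypothetical table of measured times to cracking; the file name and column names are assumptions, not the study's data.

```python
# Main-effects ANOVA on time-to-crack vs. w/c, bar diameter and cover depth
# (statsmodels stand-in for the Minitab analysis; data layout is hypothetical).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# One row per specimen: factor levels and the observed time to cracking (days).
df = pd.read_csv("cracking_times.csv")  # columns: wc, bar_mm, cover_mm, t_crack

model = smf.ols("t_crack ~ C(wc) + C(bar_mm) + C(cover_mm)", data=df).fit()
print(anova_lm(model, typ=2))  # p-values show which factor dominates (cover depth)
```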

Keywords: Accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion.

1865 Anti-Aging Effects of Retinol and Alpha Hydroxy Acid on Elastin Fibers of Artificially Photo-Aged Human Dermal Fibroblast Cell Lines

Authors: M. Jarrar, S. Behl, N. Shaheen, A. Fatima, R. Nasab

Abstract:

Skin aging is a slow multifactorial process influenced by both internal and external factors. Ultraviolet (UV) radiation, diet, smoking and personal habits are the most common environmental factors that affect skin aging, while fat content and fibrous proteins such as collagen and elastin are core internal structural components. The direct influence of UV on elastin integrity and health is central to skin aging, especially over time, and the deposition of abnormal elastic material is a major marker of photo-aged skin. The search for compounds that may protect against cutaneous photodamage is therefore highly valued. Retinoids and alpha hydroxy acids have been endorsed by some researchers as possible candidates for protecting against, or repairing, UV-damaged skin. To better establish the protective and anti-aging effects of such agents, we evaluated the combined effects of various dosages of lactic acid and retinol on the elastin levels of dermal fibroblasts exposed to UV. The UV-exposed cells showed a significant reduction in elastin levels. A combination of drugs with a higher concentration of lactic acid (30-35 mM) and a lower concentration of retinol (10-15 mg/mL) was found to work better in maintaining elastin concentration in UV-exposed cells. We assume this preservation could be the result of increased tropoelastin gene expression stimulated by retinol, whereas lactic acid probably repaired the UV-irradiated damage by enhancing the amount and integrity of the elastin fibers.

Keywords: Alpha Hydroxy Acid, Elastin, Retinol, Ultraviolet radiations.

1864 Hamiltonian Factors in Hamiltonian Graphs

Authors: Sizhong Zhou, Bingyuan Pu

Abstract:

Let G be a Hamiltonian graph. A factor F of G is called a Hamiltonian factor if F contains a Hamiltonian cycle. In this paper, two sufficient conditions are given, which are two neighborhood conditions for a Hamiltonian graph G to have a Hamiltonian factor.
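
For readers outside factor theory, the object studied above can be restated in standard notation (a factor being a spanning subgraph); this is only the definition, not the paper's two neighborhood conditions:

```latex
% Definition restated in standard notation (a factor is a spanning subgraph).
\begin{definition}
Let $G$ be a Hamiltonian graph. A subgraph $F \subseteq G$ with $V(F) = V(G)$
is a \emph{factor} of $G$. The factor $F$ is a \emph{Hamiltonian factor} if
$F$ contains a cycle $C$ with $V(C) = V(G)$, i.e., a Hamiltonian cycle.
\end{definition}
```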

Keywords: graph, neighborhood, factor, Hamiltonian factor.

1863 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical/statistical applications are developed with more complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to execute faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, as they allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communications, data locality, memory sizes (cache and RAM), synchronizations, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool of the European Commission, based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are done in an automatic and transparent manner with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, in which a considerable improvement in execution time has been achieved: in the best case tested, the time was reduced by around 96% between the original serial version and the automatic parallel version.

Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.

1862 Implementation of Congestion Management Strategies on Arterial Roads: Case Study of Geelong

Authors: A. Das, L. Hitihamillage, S. Moridpour

Abstract:

Natural disasters are inevitable, and disasters such as floods, tsunamis and tornadoes can be brutal, harsh and devastating. In Australia, flooding is a major issue experienced in different parts of the country, and in such a crisis, delays in evacuation can decide the life or death of the people living in those regions. Congestion management can become a mammoth task if no steps are taken before such situations arise. In the past, strategies to manage congestion in such circumstances included converting road shoulders to extra lanes or changing the road geometry by adding more lanes. However, road expansion is no longer considered a viable option for resolving congestion problems; authorities avoid it for many reasons, such as a lack of financial support and land space, and instead focus their attention on optimising current resources and using traffic signals to overcome congestion. Traffic signal management was therefore considered a viable strategy to alleviate congestion problems in the City of Geelong, Victoria. An arterial road with signalised intersections is considered in this paper, and the traffic data required for modelling were collected from VicRoads. The traffic signalling software SIDRA was used to model the road based on the information gathered from VicRoads. In this paper, various signal parameters are utilised to assess and improve corridor performance, to achieve the best possible Level of Service (LOS) for the arterial road.
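
SIDRA's internal models are proprietary, but the kind of signal parameters being tuned above can be illustrated with the classical Webster method for an isolated signalised intersection. The sketch below computes an optimal cycle time and green splits from critical flow ratios; the input values are hypothetical and this is not SIDRA's algorithm.

```python
# Webster's optimal cycle time and green splits (classical method, hypothetical inputs;
# illustrates the signal parameters tuned in SIDRA, not SIDRA's own algorithm).
def webster_signal_plan(flow_ratios, lost_time_s):
    """flow_ratios: critical flow ratio y_i per phase; lost_time_s: total lost time L."""
    Y = sum(flow_ratios)
    if Y >= 1.0:
        raise ValueError("Intersection oversaturated: Y must be < 1")
    cycle = (1.5 * lost_time_s + 5.0) / (1.0 - Y)   # Webster's optimal cycle C0
    effective_green = cycle - lost_time_s
    greens = [effective_green * y / Y for y in flow_ratios]
    return cycle, greens

cycle, greens = webster_signal_plan(flow_ratios=[0.35, 0.25], lost_time_s=10.0)
print(f"cycle = {cycle:.1f} s, greens = {[f'{g:.1f}' for g in greens]}")
```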

Keywords: Congestion, constraints, management, LOS.

1861 The Analysis of Secondary Case Studies as a Starting Point for Grounded Theory Studies: An Example from the Enterprise Software Industry

Authors: Abilio Avila, Orestis Terzidis

Abstract:

A fundamental principle of Grounded Theory (GT) is to prevent the formation of preconceived theories. This implies the need to start a research study with an open mind and to avoid being absorbed by the existing literature. However, starting a new study without an understanding of the research domain and its context can be extremely challenging. This paper presents a research approach that helps a researcher to identify and focus on critical areas of a research project while preventing the formation of concepts prejudiced by the current body of literature. The approach comprises four stages: selection of secondary case studies, analysis of secondary case studies, development of an initial conceptual framework, and development of an initial interview guide. Analysing secondary case studies as the starting point of a research project allows a researcher to form a first understanding of a research area based on real-world cases without being influenced by the existing body of theory. It enables a researcher to develop, through a structured course of action, a firm interview guide that establishes a solid starting point for further investigations. Thus, the described approach may have significant implications for GT researchers who aim to start a study within a given research area.

Keywords: Grounded theory, qualitative research, secondary case studies, secondary data analysis, interview guide.

1860 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology

Authors: Sanjeev Kumar Appicharla

Abstract:

This paper presents the results of the modelling and analysis of a safety-critical European Railway Traffic Management System (ERTMS) incident on the Cambrian Railway in the UK, using report RAIB 17/2019 as the primary input, in order to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors and underlying factors, together with recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyse the incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identifies latent failure conditions (potentially less-than-adequate conditions) by means of the Management Oversight and Risk Tree (MORT) technique. The benefits of the SIRI methodology are threefold. First, it incorporates the "heuristics and biases" approach in the MORT technique to identify systematic errors; civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns and of bow-tie (fault and event tree) safety risk modelling, but the role of systematic errors due to heuristics and biases is not yet appreciated, and addressing it overcomes the omission of human and organisational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory and railway safety bodies, duty holders, signalling firms, transport planners and front-line staff, so that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with evidence drawn from practitioners' and academic researchers' publications, to discuss the role of systems thinking in improving decision-making and risk management processes and practices in the IEC 15288 systems engineering standard and in industrial contexts such as GB railways and Artificial Intelligence (AI).

Keywords: Accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach.

1859 The Applications of Toyota Production System to Reduce Wastes in Agricultural Products Packing Process: A Study of Onion Packing Plant

Authors: Paisarn Larpsomboonchai

Abstract:

Agro-industry is one of the major industries with a strong impact on national economic income, growth, stability and sustainable development, and it also has strong influences on social, cultural and political issues. Furthermore, this industry, producing primary and secondary products, faces challenges from such diverse factors as demand inconsistency, intense international competition, technological advancements and new competitors. In order to maintain and improve the industry's competitiveness in both domestic and international markets, science and technology are key factors. Besides the hard sciences and technologies, modern industrial engineering concepts such as Just in Time (JIT), Total Quality Management (TQM), Quick Response (QR), Supply Chain Management (SCM) and Lean can be very effective in increasing the efficiency and effectiveness of these agricultural products on the world stage. Onion is one of Thailand's major export products, bringing in national income, but it too faces challenges in many ways. This paper focuses on the onion packing process and its related activities, such as storage and shipment, at one of the major packing plants and storages in Mae Wang District, Chiang Mai, Thailand, applying Toyota Production System (TPS) or Lean concepts to improve process capability throughout the entire packing and distribution process. This will be profitable for the whole onion supply chain and beneficial to other related agricultural products in Thailand and other ASEAN countries.

Keywords: Lean Concepts, Lean in Agro-industries Activities, Packing Process, Toyota Production System (TPS), Waste Reduction.

1858 An Algorithm Proposed for FIR Filter Coefficients Representation

Authors: Mohamed Al Mahdi Eshtawie, Masuri Bin Othman

Abstract:

Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite precision errors, and efficient implementation. In contrast, they have the major disadvantage of needing a higher order (more coefficients) than their IIR counterparts for comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage and power consumption when designing and fabricating the filter; therefore, minimizing or reducing these parameters is a major goal in the digital filter design task. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, reducing the arithmetic complexity needed to compute the filter output and, consequently, the system characteristics, i.e., power consumption, area usage and processing time. The proposed algorithm is most powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in designing high order digital FIR filters. Here, the use of DA eliminates the need for multipliers when implementing the multiply-and-accumulate (MAC) unit, and the proposed algorithm reduces the number of adders and addition operations needed to obtain the filter output by minimizing the number of non-zero coefficients.
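
The paper's specific coefficient-modification algorithm is not reproduced here, but the underlying idea - representing an FIR response with fewer non-zero taps and checking the cost in frequency response - can be illustrated with SciPy. The filter specification and threshold below are arbitrary illustrations.

```python
# Thinning small FIR coefficients and checking the frequency-response cost
# (generic illustration of the idea, not the paper's algorithm).
import numpy as np
from scipy.signal import firwin, freqz

taps = firwin(numtaps=101, cutoff=0.25)             # reference low-pass FIR design
thinned = np.where(np.abs(taps) > 1e-3, taps, 0.0)  # zero small taps (arbitrary threshold)

print("non-zero taps:", np.count_nonzero(taps), "->", np.count_nonzero(thinned))

w, h_ref = freqz(taps)
_, h_thin = freqz(thinned)
err_db = 20 * np.log10(np.max(np.abs(h_ref - h_thin)) + 1e-12)
print(f"worst-case response deviation: {err_db:.1f} dB")
```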

Keywords: Pulse shaping Filter, Distributed Arithmetic, Optimization algorithm.

1857 Automatic Detection of Breast Tumors in Sonoelastographic Images Using DWT

Authors: A. Sindhuja, V. Sadasivam

Abstract:

Breast cancer is the most common malignancy in women and the second leading cause of death for women all over the world; the earlier the detection of the cancer, the better the treatment. Diagnosis and treatment rely on the segmentation of sonoelastographic images, for which texture features have not previously been considered. Sonoelastographic images of 15 patients, containing both benign and malignant tumors, are considered for experimentation. The images are enhanced to remove noise, improve contrast and emphasize the tumor boundary, and are then decomposed into sub-bands using single-level Daubechies wavelets varying from one coefficient to six coefficients (db1-db6). Grey Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP) features are extracted from each sub-band and then selected by ranking them using the Sequential Floating Forward Selection (SFFS) technique. The resultant images undergo K-Means clustering, followed by a few post-processing steps to remove false spots, and the tumor boundary is detected from the segmented image. It is found that the LBP features from the vertical coefficients of the Daubechies wavelet with two coefficients (db2) are best suited for segmentation of sonoelastographic breast images among the wavelet members using one to six coefficients for decomposition. The results are also validated with the help of an expert radiologist. The proposed work can be used in the further diagnostic process of deciding whether the segmented tumor is benign or malignant.
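
For illustration, the core of the pipeline - wavelet decomposition, LBP features on the vertical sub-band, and K-Means clustering - can be sketched with PyWavelets and scikit-image. The LBP parameters and cluster count below are illustrative assumptions, and the GLCM/SFFS stages are omitted.

```python
# Wavelet + LBP + K-Means segmentation sketch (illustrative parameters; GLCM/SFFS omitted).
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans

def segment(image):
    """Cluster pixels by the LBP texture of the db2 vertical detail sub-band."""
    cA, (cH, cV, cD) = pywt.dwt2(image, "db2")        # single-level decomposition
    lbp = local_binary_pattern(cV, P=8, R=1, method="uniform")
    features = lbp.reshape(-1, 1)                     # one texture value per pixel
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
    return labels.reshape(cV.shape)                   # tumor vs. background mask
```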

Keywords: Breast Cancer, Segmentation, Sonoelastography, Tumor Detection.

1856 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes

Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani

Abstract:

The development of methods to estimate gene function is an important task in bioinformatics. One approach to annotation is the identification of the metabolic pathway in which a gene is involved. Since gene expression data reflect various intracellular phenomena, they are considered to be related to gene function; however, it has been difficult to estimate gene function with high accuracy. The low accuracy of estimation is thought to be caused by the difficulty of accurately measuring gene expression: even when measured under the same conditions, gene expressions usually vary. In this study, we proposed a feature extraction method focusing on the variability of gene expressions to estimate genes' metabolic pathways accurately. First, we estimated the distribution of each gene's expression from replicate data. Next, we calculated the similarity between all gene pairs by Kullback-Leibler (KL) divergence, a method for calculating the similarity between distributions. Finally, we used the similarity vectors as feature vectors and trained a multiclass SVM to identify the genes' metabolic pathways. To evaluate the developed method, we applied it to budding yeast and trained the multiclass SVM to identify seven metabolic pathways. As a result, the accuracy achieved by the developed method was higher than that obtained from the raw gene expression data. Thus, the developed method, combined with KL divergence, is useful for identifying genes' metabolic pathways.
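
For illustration, if each gene's replicate expression values are modelled as a Gaussian, the pairwise KL divergence has a closed form, and the resulting similarity vectors feed a multiclass SVM directly. The sketch below makes the Gaussian assumption explicit; the abstract does not state which distribution family the authors fitted.

```python
# KL-divergence feature vectors for pathway classification
# (assumes Gaussian fits per gene; the paper's distribution family is not stated).
import numpy as np
from sklearn.svm import SVC

def gaussian_kl(mu1, var1, mu2, var2):
    """Closed-form KL(N1 || N2) between two univariate Gaussians."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def kl_feature_vectors(replicates):
    """replicates: (n_genes, n_replicates) expression matrix -> (n_genes, n_genes)."""
    mu = replicates.mean(axis=1)
    var = replicates.var(axis=1) + 1e-9   # guard against zero variance
    n = len(mu)
    feats = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            feats[i, j] = gaussian_kl(mu[i], var[i], mu[j], var[j])
    return feats                           # row i = gene i's similarity vector

# clf = SVC(kernel="rbf").fit(kl_feature_vectors(train_expr), pathway_labels)
```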

Keywords: Metabolic pathways, gene expression data, microarray, Kullback-Leibler (KL) divergence, support vector machines (SVM), machine learning.

1855 Information Retrieval: A Comparative Study of Textual Indexing Using an Object-Oriented Database (db4o) and the Inverted File

Authors: Mohammed Erritali

Abstract:

The growth in the volume of text data such as books and articles in libraries over the centuries has made it necessary to establish effective mechanisms to locate them. Early techniques such as abstracting, indexing and the use of classification categories marked the birth of a new field of research called Information Retrieval. Information Retrieval (IR) can be defined as the task of defining models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus), allowing a user to find those relevant to him, that is to say, the content that matches the user's information needs. Most information retrieval models use a specific data structure to index a corpus, called the "inverted file" or "inverted index". This inverted file collects information on all terms in the corpus documents, specifying the identifiers of the documents that contain each term, the frequency of the term in those documents, the positions of its occurrences, and so on. In this paper we use an object-oriented database (db4o) instead of the inverted file; that is to say, instead of searching for a term in the inverted file, we search for it in the db4o database. The purpose of this work is a comparative study to see whether object-oriented databases can compete with the inverted index in terms of access speed and resource consumption on a large volume of data.
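
For reference, the baseline structure being compared against - the inverted file - looks like the following minimal sketch (the db4o side is a Java library and is omitted here; this is a generic illustration, not the paper's implementation).

```python
# Minimal inverted file: term -> {doc_id: [positions]} (generic illustration).
from collections import defaultdict

index = defaultdict(dict)

def add_document(doc_id, text):
    """Record each term's document id and occurrence positions."""
    for pos, term in enumerate(text.lower().split()):
        index[term].setdefault(doc_id, []).append(pos)

def search(term):
    """Return {doc_id: (frequency, positions)} for a single term."""
    postings = index.get(term.lower(), {})
    return {d: (len(p), p) for d, p in postings.items()}

add_document(1, "information retrieval with an inverted file")
add_document(2, "an object oriented database stores objects")
print(search("inverted"))  # {1: (1, [4])}: doc 1, one occurrence at position 4
```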

Keywords: Information retrieval, indexing, object-oriented database (db4o), inverted file.

1854 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures

Authors: Silvina Caíno-Lores, Jesús Carretero

Abstract:

Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. Several techniques attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. The survey intends to assist the scientific computing community in understanding the various technical aspects and strategies reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, grouped in four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, the authors include a discussion on future research lines and synergies among the former techniques.

Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.

1853 Ex-Offenders’ Labelling, Stigmatisation and Unsuccessful Re-Integration as Factors Leading into Recidivism: A South African Context

Authors: Tshimangadzo Oscar Magadze

Abstract:

For successful re-integration, the individual offender must adapt and transform, which requires that the offender adopt and internalise socially approved norms, attitudes, values and beliefs. However, the offender's labelling and community stigmatisation decide the offender's destination. Community involvement in ex-offenders' re-integration is an important issue in efforts to reduce recidivism and to control overcrowding in correctional facilities, since crime is a social problem that requires society to come together to fight against it. This study was conducted in the Limpopo Province in Vhembe District Municipality within four local municipalities, namely Musina, Makhado, Mutale and Thulamela. A total of 30 participants were interviewed, all members of Community Corrections Forums. This was necessitated by the fact that Musina is a very small area, which compelled the Department of Correctional Services to combine the two (Musina and Makhado) into one social re-integration entity. This is a qualitative research study in which participants were selected through purposive sampling, based on the value they would add to achieving the study's objectives. The data collection method was the focus group, comprising three groups of 10 participants each: Thulamela and Mutale local municipalities each formed a group of 10 participants, whereas Musina (2) and Makhado (8) together formed another. The results indicate that the current situation is not conducive to successful re-integration. Participants raised many factors that need serious redress, namely offender discrimination and a lack of forgiveness by members of the community, fuelled by a lack of community awareness due to the failure of the Department of Correctional Services to educate communities on ex-offenders' re-integration.

Keywords: Ex-offender, labelling, re-integration, stigmatisation.

1852 Why Are Entrepreneurs Resistant to E-tools?

Authors: D. Ščeulovs, E. Gaile-Sarkane

Abstract:

Latvia is fourth in the world in terms of broadband internet speed. The total number of internet users in Latvia exceeds 70% of its population. The number of active mailboxes of the local internet e-mail service Inbox.lv accounts for 68% of the population and 97.6% of the total number of internet users. The Latvian portal Draugiem.lv is a social media phenomenon: 58.4% of the population and 83.5% of internet users use it. A majority of Latvian company profiles are available on social networks, the most popular being Twitter.com. These and other figures prove that consumers and companies are actively using the Internet.

However, after analysing in a number of studies how enterprises employ the e-environment, namely e-environment tools, the authors arrived at conclusions that are not as flattering as the aforementioned statistics. There is an obvious contradiction between the statistical data and the actual studies. As a result, the authors have posed a question: why are entrepreneurs resistant to e-tools? To answer this question, the authors turned to the Technology Acceptance Model (TAM), analysed each of its phases and determined several factors affecting the use of the e-environment, reaching the main conclusion that entrepreneurs do not have a sufficient level of e-literacy (digital literacy).

The authors employ well-established quantitative and qualitative research methods: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc.

The theoretical and methodological background of the research is formed by scientific research and publications, including those from the mass media and professional literature, statistical information from legal institutions, and information collected by the authors during the survey.

Keywords: E-environment, e-environment tools, technology acceptance model, factors.

1851 Health Information Technology in Developing Countries: A Structured Literature Review with Reference to the Case of Libya

Authors: Haythem A. Nakkas, Philip J. Scott, Jim S. Briggs

Abstract:

This paper reports a structured literature review of the application of health information technology (HIT) in developing countries, defined as the World Bank categories of low-income, lower-middle-income and upper-middle-income countries. The aim was to identify and classify the various applications of health information technology, to assess its current state in developing countries and to explore potential areas of research. We offer specific analysis of the application of HIT in Libya as one such developing country. A structured literature review was conducted using the following online databases: IEEE, Science Direct, PubMed and Google Scholar, with publication dates set to 2000-2013; for the PubMed search, publications in English, French and Arabic were specified. Using a content analysis approach, 159 papers were analyzed, and a total of 26 factors affecting the adoption of health information technology were identified. Of the 2681 retrieved articles, the 159 that met the inclusion criteria were carefully analyzed and classified. The implementation of health information technology across developing countries is varied. Whilst it was initially expected that financial constraints would have severely limited health information technology implementation, some developing countries, like India, have nevertheless dominated the literature and taken the lead in conducting scientific research. Comparing the number of studies to the number of countries in each category, we found that more studies were carried out in low-income and lower-middle-income countries than in upper-middle-income countries. However, whilst IT has been used in various sectors of the economy, the healthcare sector in developing countries is still failing to benefit fully from the potential advantages that IT can offer.

Keywords: Developing Countries, Developed Countries, Factors, Failure, Implementation, Libya, Success.

1850 Pilot Study on the Impact of VLE on Mathematical Concepts Acquisition within Secondary Education in England

Authors: Aaron A. R. Nwabude

Abstract:

This research investigates the impact of a VLE on the mathematical concept acquisition of special education needs (SEN) students at KS4 in the secondary education sector in England. The overall aim of the study is to establish possible areas of difficulty in approaching above- or below-standard knowledge requirements for KS4 students in the acquisition and validation of basic mathematical concepts. A teaching period in which a virtual learning environment (Fronter) was used to emphasise different mathematical perceptions and symbolic representations was carried out, and a task-based survey was administered to 20 special education needs students, of whom 14 actually took part. The results show that students were able to process information and consider images, objects and numbers within the VLE at early stages of the acquisition process. They were also able to carry out perceptual tasks, though with varying degrees of processing ability, and needed the teacher's guidance to connect them to symbolic representations and sometimes to coach them through. The pilot study further indicates that VLE curriculum approaches were only minutely aligned with mathematics teaching, which does not emphasise the integration of the VLE into the existing curriculum and current teaching practice. There was also poor alignment of vision on the part of management regarding the use of the VLE in realising the objectives of teaching mathematics. As for teacher training, not much was done to develop teachers' skills in the technical and pedagogical aspects of the VLE in use at the school. The classroom observation confirmed that teaching practice can rely on the VLE as an enhancer of mathematical skills, providing interaction and personalisation of learning to SEN students.

Keywords: VLE, mathematical concepts acquisition, pilot study, SENs, KS4, education, teacher.

1849 3D Numerical Studies on Jets Acoustic Characteristics of Chevron Nozzles for Aerospace Applications

Authors: R. Kanmaniraja, R. Freshipali, J. Abdullah, K. Niranjan, K. Balasubramani, V. R. Sanal Kumar

Abstract:

Present environmental concerns have made aircraft jet noise reduction a crucial problem in aero-acoustics research. Acoustic studies reveal that the addition of chevrons to a nozzle reduces the sound pressure level reasonably, with an acceptable reduction in performance. In this paper, comprehensive numerical studies on the acoustic characteristics of different types of chevron nozzles have been carried out with non-reacting flows for the shape optimization of chevrons in supersonic nozzles for aerospace applications. The numerical studies were carried out using a validated steady 3D density-based k-ε turbulence model. Chevrons with sharp edges, flat edges, round edges and U-type edges are selected for the jet acoustic characterization of supersonic nozzles. We observed that, compared to the base model, a round-edged chevron nozzle could reduce the acoustic level by 4.13% with a 0.6% thrust loss. We conclude that prudent selection of the chevron shape will enable an appreciable reduction of aircraft jet noise without compromising overall performance. It is evident from the present numerical simulations that the k-ε model can predict reasonably well the acoustic level of chevron supersonic nozzles for shape optimization.

Keywords: Supersonic nozzle, Chevron, Acoustic level, Shape Optimization of Chevron Nozzles, Jet noise suppression.

1848 Copper Price Prediction Model for Various Economic Situations

Authors: Haidy S. Ghali, Engy Serag, A. Samer Ezeldin

Abstract:

Copper is an essential raw material in the construction industry. During 2021 and the first half of 2022, the global market suffered significant fluctuation in copper raw material prices due to the aftermath of both the COVID-19 pandemic and the Russia-Ukraine war, which exposed consumers to unexpected financial risk. This paper therefore develops two hybrid price prediction models using an artificial neural network and long short-term memory (ANN-LSTM), in Python, that forecast the average monthly copper prices traded on the London Metal Exchange: the first is a multivariate model that forecasts the copper price one month ahead, and the second is a univariate model that predicts the copper prices of the upcoming three months. Historical average monthly London Metal Exchange copper prices were collected from January 2009 to July 2022, and potential external factors were identified and employed in the multivariate model. These factors fall under three main categories: energy prices and economic indicators of the three major copper-exporting countries, depending on data availability. Before developing the LSTM models, the collected external parameters were analyzed against the copper prices using correlation and multicollinearity tests in the R software, and further screened to select the parameters that influence copper prices. The two LSTM models were then developed, with the dataset divided into training, validation and testing sets. The results show that the performance of the 3-month prediction model is better than that of the 1-month prediction model, but both models can act as predicting tools for diverse economic situations.
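
For illustration, a univariate LSTM of the kind described - a window of past monthly prices in, the next three months out - can be sketched with Keras. The window length, layer size, training settings and file name below are illustrative assumptions, not the paper's tuned hyperparameters.

```python
# Univariate LSTM forecasting the next 3 monthly copper prices from the last 12
# (Keras sketch; hyperparameters are illustrative, not the paper's tuned values).
import numpy as np
import tensorflow as tf

LOOKBACK, HORIZON = 12, 3

def make_windows(prices):
    """Slice a price series into (12-month input, 3-month target) pairs."""
    X, y = [], []
    for i in range(len(prices) - LOOKBACK - HORIZON + 1):
        X.append(prices[i:i + LOOKBACK])
        y.append(prices[i + LOOKBACK:i + LOOKBACK + HORIZON])
    return np.array(X)[..., None], np.array(y)   # shapes (n, 12, 1) and (n, 3)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(LOOKBACK, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(HORIZON),              # 3-month-ahead forecast
])
model.compile(optimizer="adam", loss="mse")

# prices = np.loadtxt("lme_copper_monthly.csv")  # hypothetical normalised series
# X, y = make_windows(prices)
# model.fit(X, y, epochs=100, validation_split=0.2)
```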

Keywords: Copper prices, prediction model, neural network, time series forecasting.
