Search results for: measurement problem
8319 A Study of Generation Y's Career Attitude at Workplace
Authors: Supriadi Hardianto, Aditya Daniswara
Abstract:
Today's workplace is flooded by the millennial generation, also known as Generation Y. Common problems companies face with Gen Y are a high turnover rate, attitude problems, and communication and work styles that differ from those of older generations. This is especially common in the private sector. The objective of this study is to gain a better understanding of Gen Y career attitudes at the workplace. The study focuses on 430 Gen Y respondents aged between 20 and 35 years who work for private companies. The questionnaire used as the primary data source captured 9 aspects of career attitude based on the Career Attitudes Strategy Inventory (CASI). The survey was distributed randomly among Gen Y employees in the IT industry (125 respondents) and a manufacturing company (305 respondents). Random in-depth interviews were conducted to better understand the etiology of their primary obstacles. The study showed that most Indonesian Gen Y employees have a moderate score on job satisfaction, but the lowest scores on the other aspects: skill development, career worries, risk-taking style, dominant style, work involvement, geographical barrier, interpersonal abuse, and family commitment. The top 5 obstacles outside those 9 aspects faced by Gen Y are: 1. low communication and networking support; 2. self-confidence issues; 3. financial problems; 4. emotional issues; 5. age. We also found that parents' perspectives on the way they nurture their children are not aligned with their children's real lives. This research fundamentally helps organizations and other Gen Y stakeholders better understand Gen Y career attitudes at the workplace.
Keywords: career attitudes, CASI, Gen Y, career attitude at workplace
Procedia PDF Downloads 160
8318 Microstructural Mechanical Properties of Human Trabecular Bone Based on Nanoindentation Test
Authors: K. Jankowski, M. Pawlikowski, A. Makuch, K. Skalski
Abstract:
Depth-sensing indentation (DSI), or nanoindentation, is becoming an increasingly popular method of measuring the mechanical properties of various materials and tissues at the micro-scale. This technique allows measurements without complicated sample preparation procedures, which makes the method very useful. The measurement yields the force and displacement of the indenter. It is also possible to determine three measures of hardness, i.e., Martens hardness (HM), nanohardness (HIT), and Vickers hardness (HV), as well as the indentation modulus EIT. In this work, the mechanical properties of trabecular bone were investigated. The bone samples were harvested from human femoral heads during hip replacement surgery. Patients were of different ages, sexes, and stages of tissue degeneration caused by osteoarthritis. The specimens were divided into three groups, each containing samples harvested from patients of a different age range. All samples were investigated under the same measurement conditions: the maximum load was Pmax = 500 mN and the loading rate was 500 mN/min. The tests were conducted without a hold period at peak force, using a Vickers tip and a spherical tip of diameter 0.2 mm. Each trabecular bone sample was tested 7 times within a small area of the same trabecula. The measured load P as a function of indentation depth yielded the hysteresis loop and HM, HIT, HV, and EIT. Results for an arbitrarily chosen sample are HM = 289.95 ± 42.31 MPa, HIT = 430.75 ± 45.37 MPa, HV = 40.66 ± 4.28 Vickers, EIT = 7.37 ± 1.84 GPa for the Vickers tip and HM = 115.19 ± 15.03 MPa, HIT = 165.80 ± 19.30 MPa, HV = 16.90 ± 1.97 Vickers, EIT = 5.30 ± 1.31 GPa for the spherical tip. The results show that nanoindentation is very useful and well suited for obtaining the mechanical properties of trabecular bone. The estimated values of the elastic modulus are similar for both tips. The differences in hardness are significant, but they result from using two different tip types.
However, it has to be emphasised that the differences in the values of elastic modulus and hardness also result from different testing protocols, the anisotropy and asymmetry of the micro-samples, and the hydration of the bone.
Keywords: human bone, mechanical properties, nanohardness, nanoindentation, trabecular bone
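The indentation modulus reported above is conventionally derived from the unloading curve via an Oliver-Pharr style analysis. The sketch below shows the calculation; the stiffness, contact area, and Poisson ratios are illustrative assumptions chosen to land near the reported EIT range, not the study's measured data.

```python
import math

# Oliver-Pharr style estimate of the indentation modulus E_IT from an
# unloading curve (all input numbers are illustrative assumptions).
S = 2.0e4             # contact stiffness dP/dh at P_max [N/m] (assumed)
A_p = 5.0e-12         # projected contact area at peak load [m^2] (assumed)
nu_s, nu_i = 0.3, 0.07          # Poisson ratios: bone (assumed), diamond tip
E_i = 1141e9                    # elastic modulus of a diamond indenter [Pa]

# Reduced modulus from the Sneddon/Oliver-Pharr relation
E_r = S * math.sqrt(math.pi) / (2 * math.sqrt(A_p))
# Remove the indenter compliance to recover the sample modulus
E_it = (1 - nu_s**2) / (1 / E_r - (1 - nu_i**2) / E_i)
```

With these assumed inputs the estimate falls in the few-GPa range reported for trabecular bone.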
Procedia PDF Downloads 280
8317 Chemical Analysis of Particulate Matter (PM₂.₅) and Volatile Organic Compound Contaminants
Authors: S. Ebadzadsahraei, H. Kazemian
Abstract:
The main objective of this research was to measure particulate matter (PM₂.₅) and volatile organic compounds (VOCs), two classes of air pollutants, in a Prince George (PG) neighborhood during the warm and cold seasons. To fulfill this objective, analytical protocols were developed for accurate sampling and measurement of the targeted air pollutants. PM₂.₅ samples were analyzed for their chemical composition (i.e., toxic trace elements) in order to assess their potential emission sources. The City of Prince George, widely known as the capital of northern British Columbia (BC), Canada, has been dealing with air pollution challenges for a long time. The city has several local industries, including pulp mills, a refinery, and a couple of asphalt plants, that are the primary contributors of industrial VOCs. This research project, the first study of its kind in this region, measures the physical and chemical properties of particulate air pollutants (PM₂.₅) in the city neighborhood. Furthermore, this study quantifies the percentage of VOCs in the city's air samples. One of the outcomes of this project is updated data on the PM₂.₅ and VOC inventory in the selected neighborhoods. For examining PM₂.₅ chemical composition, an elemental analysis methodology was developed to measure major trace elements including, but not limited to, mercury and lead. The toxicity of inhaled particulates depends on both their physical and chemical properties; thus, an understanding of aerosol properties is essential for the evaluation of such hazards and the treatment of respiratory and other related diseases. Mixed cellulose ester (MCE) filters were selected as suitable filters for PM₂.₅ air sampling. Chemical analyses were conducted using inductively coupled plasma mass spectrometry (ICP-MS) for elemental analysis.
VOC measurement of the air samples was performed using a gas chromatography-flame ionization detector (GC-FID) and gas chromatography-mass spectrometry (GC-MS), allowing for quantitative measurement of VOC molecules at sub-ppb levels. In this study, a sorbent tube (Anasorb CSC, coconut charcoal; 6 x 70 mm, 2 sections, 50/100 mg sorbent, 20/40 mesh) was used for VOC air sampling, followed by solvent extraction and solid-phase microextraction (SPME) techniques to prepare the samples for measurement by a GC-MS/FID instrument. Air sampling for both PM₂.₅ and VOCs was conducted in the summer and winter seasons for comparison. Average PM₂.₅ concentrations differed markedly between wildfire and normal days: 83.0 μg/m³ during the wildfire period versus 23.7 μg/m³ in daily samples. Higher concentrations of iron, nickel, and manganese were found in all samples, and mercury was found in some samples; at high doses, these elements can have negative health effects.
Keywords: air pollutants, chemical analysis, particulate matter (PM₂.₅), volatile organic compounds, VOCs
Procedia PDF Downloads 146
8316 Outsourcing the Front End of Innovation
Abstract:
The paper presents a new method for efficient innovation process management. Even though innovation management methods, tools, and knowledge are well established and documented in the literature, most companies still do not manage innovation efficiently. Especially in SMEs, the front end of innovation - problem identification, idea creation and selection - is often not optimally performed. Our eMIPS methodology represents a sort of "umbrella methodology" - a well-defined set of procedures which can be dynamically adapted to the concrete case in a company. In daily practice, various methods (e.g., for problem identification and idea creation) can be applied, depending on the company's needs. It is based on the proactive involvement of the company's employees, supported by the appropriate methodology and external experts. The phases are performed via a mixture of face-to-face activities (workshops) and online (eLearning) activities taking place in an eLearning Moodle environment and using other e-communication channels. One part of the outcome is an identified set of opportunities and concrete solutions ready for implementation. The other, equally important, result concerns the innovation competences gained by the participating employees, related to concrete tools and methods for idea management. In addition, the employees gain solid experience in dynamic, efficient, and solution-oriented management of the invention process. The eMIPS also represents a way of establishing or improving the innovation culture in an organization. The first trials in a pilot company showed excellent results regarding both the motivation of the participants and the outcomes achieved.
Keywords: creativity, distance learning, front end, innovation, problem
Procedia PDF Downloads 333
8315 [Keynote Talk]: Analysis of One Dimensional Advection Diffusion Model Using Finite Difference Method
Authors: Vijay Kumar Kukreja, Ravneet Kaur
Abstract:
In this paper, a one-dimensional advection diffusion model is analyzed using a finite difference method based on the Crank-Nicolson scheme. A practical chemical engineering problem, the washing of a filter cake, is analyzed. The model is converted into dimensionless form. For the grid Ω × ω = [0, 1] × [0, T], the Crank-Nicolson scheme is used for the spatial derivative and the forward difference scheme is used in the time domain. The scheme is found to be unconditionally convergent, stable, first-order accurate in time and second-order accurate in the space domain. For a test problem, numerical results are compared with the analytical ones for different values of the parameters.
Keywords: Crank-Nicolson scheme, Lax-Richtmyer theorem, stability, consistency, Peclet number, Gershgorin circle
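The Crank-Nicolson discretization described above can be sketched as follows. The Peclet number, grid sizes, and boundary conditions here are illustrative assumptions, not the paper's actual test problem.

```python
import numpy as np

# Crank-Nicolson sketch for the dimensionless advection-diffusion equation
# u_t + u_x = (1/Pe) u_xx on x in [0, 1], with assumed Dirichlet boundaries
# u(0, t) = 1 (inlet wash liquid) and u(1, t) = 0.
nx, nt = 51, 200
dx, dt = 1.0 / (nx - 1), 0.005
Pe = 5.0                                   # assumed Peclet number

r = dt / (2 * Pe * dx**2)                  # diffusion weight
a = dt / (4 * dx)                          # advection weight (central difference)

# Tridiagonal systems: A u^{n+1} = B u^n
A = np.eye(nx) * (1 + 2 * r)
B = np.eye(nx) * (1 - 2 * r)
for i in range(1, nx - 1):
    A[i, i - 1], A[i, i + 1] = -r - a, -r + a
    B[i, i - 1], B[i, i + 1] = r + a, r - a
for j in (0, -1):                          # enforce Dirichlet rows
    A[j, :], B[j, :] = 0.0, 0.0
    A[j, j] = B[j, j] = 1.0

u = np.zeros(nx)
u[0] = 1.0                                 # initial condition: clean bed, inlet at 1
for _ in range(nt):
    u = np.linalg.solve(A, B @ u)          # unconditionally stable time step
```

Because the scheme is unconditionally stable, the time step here is limited only by accuracy, not stability.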
Procedia PDF Downloads 227
8314 Green Closed-Loop Supply Chain Network Design Considering Different Production Technologies Levels and Transportation Modes
Authors: Mahsa Oroojeni Mohammad Javad
Abstract:
Globalization of economic activity and the rapid growth of information technology have resulted in shorter product lifecycles, reduced transport capacity, dynamic and changing customer behaviors, and an increased focus on supply chain design in recent years. The design of the supply chain network is one of the most important supply chain management decisions. These decisions have a long-term impact on the efficacy and efficiency of the supply chain. In this paper, a two-objective mixed-integer linear programming (MILP) model is developed for designing and optimizing a closed-loop green supply chain network that, to the greatest extent possible, includes all real-world assumptions, such as a multi-level supply chain, multiple production technologies, and multiple modes of transportation, with the goals of minimizing the total cost of the chain (first objective) and minimizing total emissions (second objective). The ε-constraint method and the CPLEX solver were used to reduce the problem to a single-objective one and to validate the model. Finally, a sensitivity analysis is applied to study the effect of changes in the real-world parameters on the objective functions. Optimal management suggestions and policies are presented.
Keywords: closed-loop supply chain, multi-level green supply chain, mixed-integer programming, transportation modes
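The ε-constraint idea can be illustrated on a toy version of the problem. The technology/transport options and their cost and emission figures below are invented for illustration, and a plain enumeration stands in for the CPLEX MILP solve: the emission objective is bounded by ε while cost is minimized, and sweeping ε traces an approximation of the Pareto front.

```python
from itertools import product

# Toy ε-constraint sweep for a bi-objective design choice (illustrative data).
# Each option pair: (production technology, transport mode) -> (cost, emissions)
techs = {"low": (100, 80), "mid": (140, 50), "high": (200, 20)}   # cost, CO2
modes = {"road": (30, 40), "rail": (45, 15)}

solutions = []
for (t, (ct, et)), (m, (cm, em)) in product(techs.items(), modes.items()):
    solutions.append(((t, m), ct + cm, et + em))    # (design, cost, emissions)

# Sweep the emission bound ε and minimize cost subject to emissions <= ε
pareto = {}
for eps in range(35, 121, 5):
    feasible = [s for s in solutions if s[2] <= eps]
    if feasible:
        pareto[eps] = min(feasible, key=lambda s: s[1])
```

Tightening ε forces cleaner (and costlier) technology/mode combinations, which is exactly the trade-off the two-objective MILP explores.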
Procedia PDF Downloads 83
8313 Regularization of Gene Regulatory Networks Perturbed by White Noise
Authors: Ramazan I. Kadiev, Arcady Ponosov
Abstract:
Mathematical models of gene regulatory networks can in many cases be described by ordinary differential equations with switching nonlinearities, where the initial value problem is ill-posed. Several regularization methods are known in the case of deterministic networks, but the presence of stochastic noise leads to several technical difficulties. In the presentation, it is proposed to apply the methods of the stochastic singular perturbation theory going back to Yu. Kabanov and Yu. Pergamentshchikov. This approach is used to regularize the above ill-posed problem, which, e.g., makes it possible to design stable numerical schemes. Several examples are provided in the presentation, which support the efficiency of the suggested analysis. The method can also be of interest in other fields of biomathematics, where differential equations contain switchings, e.g., in neural field models.
Keywords: ill-posed problems, singular perturbation analysis, stochastic differential equations, switching nonlinearities
Procedia PDF Downloads 200
8312 An Improvement of Multi-Label Image Classification Method Based on Histogram of Oriented Gradient
Authors: Ziad Abdallah, Mohamad Oueidat, Ali El-Zaart
Abstract:
Image multi-label classification (IMC) assigns a label or a set of labels to an image. The high demand for image annotation and archiving on the web has driven researchers to develop many algorithms for this application domain. The existing techniques for IMC have two drawbacks: the description of the elementary characteristics of the image and the correlation between labels are not taken into account. In this paper, we present an algorithm (MIML-HOGLPP) which handles both limitations simultaneously. The algorithm uses the histogram of oriented gradients as the feature descriptor. It applies the Label Priority Power-set as the multi-label transformation to solve the problem of label correlation. The experiments show that the results of MIML-HOGLPP are better in terms of several evaluation metrics compared with the two existing techniques.
Keywords: data mining, information retrieval system, multi-label, problem transformation, histogram of gradients
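A minimal sketch of the histogram-of-oriented-gradients idea used as the feature descriptor is shown below. The single global histogram and bin count are simplifying assumptions; a full HOG implementation computes per-cell histograms with overlapping block normalisation.

```python
import numpy as np

# Minimal HOG-style descriptor: one global orientation histogram,
# weighted by gradient magnitude (bin count is an assumed simplification).
def hog_descriptor(img, n_bins=9):
    gy, gx = np.gradient(img.astype(float))          # gradients along rows, cols
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180       # unsigned orientation [0,180)
    bin_idx = (ang / (180 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = mag[bin_idx == b].sum()            # magnitude-weighted voting
    return hist / (np.linalg.norm(hist) + 1e-9)      # L2-normalised descriptor

# A vertical step edge produces purely horizontal gradients (orientation 0°)
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = hog_descriptor(img)
```

The descriptor for the step image concentrates in the 0° bin, which is why HOG captures edge and shape structure that raw pixels miss.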
Procedia PDF Downloads 379
8311 Characterization of Group Dynamics for Fostering Mathematical Modeling Competencies
Authors: Ayse Ozturk
Abstract:
The study extends prior research on modeling competencies by positioning students' cognitive and language resources as the fundamentals for pursuing their own lines of inquiry and expression through mathematical modeling. This strategy aims to answer the question that guides this study: "How do students' group approaches to modeling tasks affect their modeling competencies over a unit of instruction?" Six bilingual tenth-grade students worked on open-ended modeling problems, along with content focused on quantities, over six weeks. Each group was found to have a unique cognitive approach to solving these problems. Three different problem-solving strategies affected how the groups' modeling competencies changed. The results provide evidence that the discussion around the groups' solutions, coupled with their reflections, advances the groups' interpreting and validating competencies in the mathematical modeling process.
Keywords: cognition, collective learning, mathematical modeling competencies, problem-solving
Procedia PDF Downloads 166
8310 Testifying in Court as a Victim of Crime for Persons with Little or No Functional Speech: Vocabulary Implications
Authors: Robyn White, Juan Bornman, Ensa Johnson
Abstract:
People with disabilities are at high risk of becoming victims of crime. Individuals with little or no functional speech (LNFS) face an even higher risk. One way of reducing the risk of remaining a victim of crime is to face the alleged perpetrator in court as a witness - therefore, it is important for a person with LNFS who has been a victim of crime to have the vocabulary required to testify in court. The aim of this study was to identify and describe the core and fringe legal vocabulary required by illiterate victims of crime who have little or no functional speech to testify in court as witnesses. A mixed-method exploratory sequential design consisting of two distinct phases was used to address the aim of the research. The first phase was qualitative and included two data sources, namely in-depth semi-structured interviews and focus group discussions. The overall aim of this phase was to identify and describe the core and fringe legal vocabulary and to develop a measurement instrument based on these results. The results from Phase 1 were used in Phase 2, the quantitative phase, during which the measurement instrument (a custom-designed questionnaire) was socially validated. The results produced six distinct vocabulary categories that represent the legal core vocabulary and 99 words that represent the legal fringe vocabulary. The findings suggest that communication boards should be individualised to the person and the specific crime. It is believed that the vocabulary lists developed in this study act as a valid and reliable springboard from which communication boards can be developed. Recommendations were therefore made to develop an augmentative and alternative communication resource tool kit to assist the legal justice system.
Keywords: augmentative and alternative communication, person with little or no functional speech, sexual crimes, testifying in court, victim of crime, witness competency
Procedia PDF Downloads 484
8309 Understanding Cognitive Fatigue from fMRI Scans with Self-Supervised Learning
Authors: Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Fillia Makedon, Glenn Wylie
Abstract:
Functional magnetic resonance imaging (fMRI) is a neuroimaging technique that records neural activations in the brain by capturing the blood oxygen level in different regions based on the task performed by a subject. Given fMRI data, the problem of predicting the state of cognitive fatigue in a person has not been investigated to its full extent. This paper proposes tackling this issue as a multi-class classification problem by dividing the state of cognitive fatigue into six levels, ranging from no fatigue to extreme fatigue. We built a spatio-temporal model that uses convolutional neural networks (CNN) for spatial feature extraction and a long short-term memory (LSTM) network for temporal modeling of 4D fMRI scans. We also applied a self-supervised method called MoCo (Momentum Contrast) to pre-train our model on the public dataset BOLD5000 and fine-tuned it on our labeled dataset to predict cognitive fatigue. Our novel dataset contains fMRI scans from traumatic brain injury (TBI) patients and healthy controls (HCs) performing a series of N-back cognitive tasks. This method establishes a state-of-the-art technique for analyzing cognitive fatigue from fMRI data and outperforms previous approaches to this problem.
Keywords: fMRI, brain imaging, deep learning, self-supervised learning, contrastive learning, cognitive fatigue
Procedia PDF Downloads 194
8308 Performance Measurement by Analytic Hierarchy Process in Performance Based Logistics
Authors: M. Hilmi Ozdemir, Gokhan Ozkan
Abstract:
Performance Based Logistics (PBL) is a strategic approach that enables creating long-term, win-win relations among stakeholders in acquisition. Contrary to traditional single transactions, in this approach the expected value is created by the performance of the service pertaining to the strategic relationships. PBL motivates all relevant stakeholders to focus on their core competencies to produce the desired outcome in a collective way. The desired outcome can only be assured in a cost-effective way as long as it is periodically measured with the right performance parameters. Thus, defining these parameters is a crucial step for PBL contracts. For performance parameter determination, the Analytic Hierarchy Process (AHP), a multi-criteria decision-making methodology for complex cases, was used in this study for a complex system. AHP has been extensively applied in various areas, including supply chain, inventory management, outsourcing, and logistics. This methodology made it possible to convert the end-user's main operation and maintenance requirements into sub-criteria contained in a single performance parameter. Those requirements were categorized and assigned weights by the relevant stakeholders. A single performance parameter capable of measuring the overall performance of a complex system is the major outcome of this study. The parameter deals with the integrated assessment of different functions, spanning training, operation, maintenance, reporting, and documentation, that are implemented within a complex system. The aim of this study is to show the methodology and processes implemented to identify a single performance parameter for measuring the whole performance of a complex system within a PBL contract. The AHP methodology is recommended as an option for researchers and practitioners who seek a lean and integrated approach to performance assessment within PBL contracts.
The implementation of the AHP methodology in this study may help PBL practitioners from a methodological perspective and contribute to the wider adoption of AHP.
Keywords: analytic hierarchy process, performance based logistics, performance measurement, performance parameters
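The weight-derivation step at the heart of AHP can be sketched as follows. The four criteria and the pairwise judgments are invented for illustration and are not the requirements or weights from this study; Saaty's principal-eigenvector method and consistency check are standard AHP machinery.

```python
import numpy as np

# AHP priority weights from a pairwise comparison matrix (judgments assumed).
criteria = ["operation", "maintenance", "training", "reporting"]
A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]])

# Principal eigenvector gives the priority weights
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalise so weights sum to 1

# Consistency check: CI = (λmax - n)/(n - 1), CR = CI / RI (RI = 0.90 for n = 4)
n = len(A)
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90                           # CR < 0.1 means acceptably consistent
```

With the assumed judgments, "operation" dominates the composite parameter; in the study, the actual weights came from the stakeholders' categorized requirements.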
Procedia PDF Downloads 285
8307 A Deep Learning Approach to Subsection Identification in Electronic Health Records
Authors: Nitin Shravan, Sudarsun Santhiappan, B. Sivaselvan
Abstract:
Subsection identification, in the context of Electronic Health Records (EHRs), is the identification of the sections that are important for downstream tasks such as auto-coding. In this work, we classify the text present in EHRs according to its information content, using machine learning and deep learning techniques. We first briefly describe the problem and formulate it as a text classification problem. We then discuss methods from the literature. We try two approaches - traditional feature-extraction-based machine learning methods and deep learning methods. Through experiments on a private dataset, we establish that the deep learning methods perform better than the feature-extraction-based machine learning models.
Keywords: deep learning, machine learning, semantic clinical classification, subsection identification, text classification
Procedia PDF Downloads 223
8306 A New OvS Approach in Assembly Line Balancing Problem
Authors: P. Azimi, B. Behtoiy, A. A. Najafi, H. R. Charmchi
Abstract:
According to previous studies, one of the best-known techniques affecting the efficiency of a production line is assembly line balancing (ALB). This paper examines the balancing of a whole production line of a real auto glass manufacturer in three steps. In the first step, the processing time of each activity in the workstations is generated according to a practical approach. In the second step, the whole production process is simulated and the bottleneck stations are identified. Finally, in the third step, several improvement scenarios are generated to optimize the system throughput, and the best one is proposed. The main contribution of the current research is the proposed framework, which combines two well-known approaches: assembly line balancing and optimization via simulation (OvS). The results show that the proposed framework can be applied easily in practical environments.
Keywords: assembly line balancing problem, optimization via simulation, production planning
Procedia PDF Downloads 529
8305 Evaluation of E-Government Service Quality
Authors: Nguyen Manh Hien
Abstract:
Service quality is the highest requirement of users, especially for services in electronic government. During the past decades, it has become a major area of academic investigation. Considering this issue, many studies have evaluated the dimensions and contexts of e-services. This study also identifies the dimensions of service quality, but focuses on a new conceptual model and provides a new methodological approach to developing measurement scales of e-service quality, such as information quality, service quality, and organization quality. Finally, the study suggests key factors for better evaluating e-government service quality.
Keywords: dimensionality, e-government, e-service, e-service quality
Procedia PDF Downloads 550
8304 Secondary Charged Fragments Tracking for On-Line Beam Range Monitoring in Particle Therapy
Authors: G. Traini, G. Battistoni, F. Collamati, E. De Lucia, R. Faccini, C. Mancini-Terracciano, M. Marafini, I. Mattei, S. Muraro, A. Sarti, A. Sciubba, E. Solfaroli Camillocci, M. Toppi, S. M. Valle, C. Voena, V. Patera
Abstract:
In Particle Therapy (PT) treatments, a large number of secondary particles, whose emission points are correlated with the dose released in the crossed tissues, is produced. The measurement of the secondary charged fragment component could represent a valid technique to monitor the beam range during PT treatments, which is still missing in clinical practice. Sub-millimetre precision on the beam range measurement is required to significantly optimise the technique and improve the treatment quality. In this contribution, a detector named the Dose Profiler (DP) is presented. It is specifically designed to monitor the beam range on-line by exploiting the secondary charged particles produced in PT carbon-ion treatments. In particular, the DP is designed to track the secondary fragments emitted at large angles with respect to the beam direction (mainly protons), with the aim of reconstructing the spatial coordinates of the fragment emission point by extrapolating the measured track toward the beam axis. The DP is currently under development within the INSIDE collaboration (Innovative Solutions for In-beam Dosimetry in hadrontherapy). The tracker is made of six layers (20 × 20 cm²) of BCF-12 square scintillating fibres (500 μm) coupled to silicon photomultipliers, followed by two plastic scintillator layers of 6 mm thickness. A system of front-end boards based on FPGAs arranged around the detector provides the data acquisition. The detector characterization with cosmic rays is currently under way, and a data-taking campaign with protons will take place in May 2017. The DP design and the performance measured with MIPs and proton beams will be reviewed.
Keywords: fragmentation, monitoring, particle therapy, tracking
Procedia PDF Downloads 238
8303 Optimization of Topology-Aware Job Allocation on a High-Performance Computing Cluster by Neural Simulated Annealing
Authors: Zekang Lan, Yan Xu, Yingkun Huang, Dian Huang, Shengzhong Feng
Abstract:
Jobs on high-performance computing (HPC) clusters can suffer significant performance degradation due to inter-job network interference. The topology-aware job allocation problem (TJAP) is the problem of deciding how to dedicate nodes to specific applications to mitigate inter-job network interference. In this paper, we study the window-based TJAP on a fat-tree network, aiming at minimizing the communication hop cost, a defined inter-job interference metric. The window-based approach for scheduling repeats periodically, taking the jobs in the queue and solving an assignment problem that maps jobs to the available nodes. Two special allocation strategies are considered, i.e., the static continuity assignment strategy (SCAS) and the dynamic continuity assignment strategy (DCAS). For the SCAS, a 0-1 integer program is developed. For the DCAS, an approach called neural simulated annealing (NSA) is proposed, which is an extension of simulated annealing (SA) that learns a repair operator and employs it in a guided heuristic search. The efficacy of NSA is demonstrated with a computational study against SA and SCIP. The results of the numerical experiments indicate that both the model and the algorithm proposed in this paper are effective.
Keywords: high-performance computing, job allocation, neural simulated annealing, topology-aware
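A plain simulated annealing backbone on a toy job-to-node assignment is sketched below; the learned repair operator that distinguishes NSA is omitted, and the hop-cost matrix is an invented stand-in for the fat-tree interference metric.

```python
import math, random

# Simulated annealing for a toy job-to-node assignment (illustrative only).
random.seed(0)
n_jobs, n_nodes = 4, 8
# hop[i][j]: assumed communication-hop cost of placing job i on node j
hop = [[abs(2 * i - j) for j in range(n_nodes)] for i in range(n_jobs)]

def cost(assign):
    return sum(hop[i][assign[i]] for i in range(n_jobs))

cur = random.sample(range(n_nodes), n_jobs)    # distinct node per job
cur_cost = cost(cur)
best, best_cost = cur[:], cur_cost
T = 10.0
while T > 1e-3:
    cand = cur[:]
    i = random.randrange(n_jobs)
    free = [j for j in range(n_nodes) if j not in cand]
    cand[i] = random.choice(free)              # move one job to a free node
    c = cost(cand)
    # Metropolis rule: always accept improvements, sometimes accept worse moves
    if c < cur_cost or random.random() < math.exp((cur_cost - c) / T):
        cur, cur_cost = cand, c
        if c < best_cost:
            best, best_cost = cand[:], c
    T *= 0.99                                  # geometric cooling schedule
```

In NSA, the random single-job move above would be replaced by a learned repair operator that proposes better-targeted reassignments.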
Procedia PDF Downloads 123
8302 Groundwater Investigation Using Resistivity Method and Drilling for Irrigation during the Dry Season in Lwantonde District, Uganda
Authors: Tamale Vincent
Abstract:
Groundwater investigation is the investigation of underground formations to understand the hydrologic cycle and known groundwater occurrences, and to identify the nature and types of aquifers. Among the different groundwater investigation methods, the surface geophysical method - in particular the geoelectrical resistivity method in the Schlumberger configuration - provides valuable information regarding the lateral and vertical successions of subsurface geomaterials in terms of their individual thicknesses and corresponding resistivity values. Besides the surface geophysical method, hydrogeological and geological investigation methods were also incorporated to aid the preliminary groundwater investigation. An investigation for groundwater in Lwantonde district has been implemented. The project area is located in the cattle corridor, and the dry season troubles the communities in Lwantonde district, where 99% of the people are farmers, making agriculture difficult and hindering the local government's provision of social services. The investigation was carried out using the geoelectrical resistivity method in the Schlumberger configuration. The measurement points are located in three sub-counties, with a total of 17 measurement points. The study location is at 0025S, 3110E, and covers an area of 160 square kilometers. Based on the geoelectrical data, two types of aquifers were found: open aquifers at depths ranging from six to twenty-two meters, and a confined aquifer at depths ranging from forty-five to eighty meters. In addition to the geoelectrical data, drilling was carried out at a point accessible to heavy equipment in Lwakagura village, Kabura sub-county. At the drilling point, an artesian well was obtained at a depth of eighty meters, with water able to rise two meters above the soil surface.
The discovered artesian well is now used by residents to meet clean water needs and for irrigation, considering that most wells in this area have a high iron content.
Keywords: artesian well, geoelectrical, Lwantonde, Schlumberger
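For reference, the apparent resistivity measured with a Schlumberger array follows from the geometric factor of the electrode spread. The spacings and instrument readings below are illustrative, not values from this survey.

```python
import math

# Apparent resistivity for a Schlumberger electrode array.
# rho_a = K * (dV / I), with geometric factor K = pi * (L^2 - l^2) / (2l),
# where L = AB/2 (current electrode half-spacing) and l = MN/2 (potential
# electrode half-spacing). All numbers below are illustrative assumptions.
def schlumberger_rho_a(ab_half, mn_half, delta_v, current):
    """ab_half = AB/2 and mn_half = MN/2 in metres; delta_v in V; current in A."""
    k = math.pi * (ab_half**2 - mn_half**2) / (2 * mn_half)   # geometric factor
    return k * delta_v / current                              # ohm-metres

rho = schlumberger_rho_a(ab_half=40.0, mn_half=5.0, delta_v=0.12, current=0.5)
```

Repeating this calculation while expanding AB/2 at a fixed station produces the sounding curve from which layer thicknesses and resistivities are inverted.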
Procedia PDF Downloads 133
8301 The Correlation between Hypomania, Creative Potential and Type of Major in Undergraduate Students
Authors: Dhea Kothari
Abstract:
An extensive amount of research has examined the positive relationship between creativity and hypomania in terms of creative accomplishments, eminence, behaviors, and occupations. Previous research recruited participants based on creative occupations or stages of hypomania or bipolar disorder. This thesis focused on the relationship between hypomania and creative cognitive potential, such as divergent thinking and insight problem-solving. This was examined at the undergraduate level by recruiting students majoring in art, students majoring in the natural sciences (NSCI), and students double majoring in art and NSCI. Participants were given a modified Alternate Uses Task (AUT) to measure divergent thinking and a set of rebus puzzles to measure insight problem-solving. Both tasks involved a degree of overcoming functional fixedness. A negative association was observed between hypomania and the originality of responses on the AUT when an object with low functional fixedness was given to all participants. On the other hand, a positive association was found between hypomania and the originality of responses on the AUT when an object with high functional fixedness was given to the participants majoring in NSCI. The research therefore suggests that an increased ability to overcome functional fixedness may be central to individuals with hypomania and individuals with higher creative cognitive potential.
Keywords: creative cognition, convergent thinking, creativity, divergent thinking, insight, major type, problem-solving
Procedia PDF Downloads 98
8300 On the Interactive Search with Web Documents
Authors: Mario Kubek, Herwig Unger
Abstract:
Due to the large amount of information in the World Wide Web (WWW, web) and the lengthy, usually linearly ordered result lists of web search engines, which do not indicate semantic relationships between their entries, the search for topically similar and related documents can become a tedious task. In particular, the process of formulating queries with proper terms representing specific information needs requires much effort from the user. This problem becomes even greater when the user's knowledge of a subject and its technical terms is not sufficient to do so. This article presents the new interactive search application DocAnalyser, which addresses this problem by enabling users to find similar and related web documents based on automatic query formulation and state-of-the-art search word extraction. Additionally, this tool can be used to track topics across semantically connected web documents. Keywords: DocAnalyser, interactive web search, search word extraction, query formulation, source topic detection, topic tracking
Procedia PDF Downloads 397
8299 Detection of Resistive Faults in Medium Voltage Overhead Feeders
Authors: Mubarak Suliman, Mohamed Hassan
Abstract:
Detection of downed conductors occurring with high fault resistance (reaching kilo-ohms) has always been a challenge, especially in countries like Saudi Arabia, where earth resistivity is generally very high (exceeding 1000 Ω·m). The newer approaches for the detection of resistive and high-impedance faults are based on the analysis of the fault current waveform. These methods are still under research and development, and they currently lack security and dependability. The other approach is communication-based solutions, which depend on voltage measurements at the ends of overhead line branches and communicate the measured signals to the substation feeder relay or a central control center. However, such a detection method is costly and depends on the availability of a communication medium and infrastructure. The main objective of this research is to utilize the available standard protection schemes to increase the probability of detecting downed conductors occurring with low-magnitude fault currents while avoiding unwanted tripping of healthy feeders. By specifying the operating region of the faulty feeder, using the tripping curve for discrimination between faulty and healthy feeders, and properly selecting the core balance current transformer (CBCT) and voltage transformers with small measurement errors, it is possible to set the pick-up of the sensitive earth fault element to a minimum of a few amperes (e.g., pick-up settings of 3 A or 4 A, …) for the detection of earth faults with fault resistance above 1-2 kΩ in a 13.8 kV overhead network and above 3-4 kΩ in a 33 kV overhead network. 
By implementing the outcomes of this study, the probability of detecting downed conductors is increased through the utilization of existing schemes (i.e., directional sensitive earth fault protection). Keywords: sensitive earth fault, zero sequence current, grounded system, resistive fault detection, healthy feeder
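The few-ampere pickup figures quoted above can be sanity-checked with the crude approximation I ≈ V_phase / R_fault, which neglects source, line, and grounding impedances (a rough bound, not the paper's relay model):

```python
import math

def earth_fault_current(line_voltage_kv, fault_resistance_ohm):
    """Approximate earth-fault current (A) through a resistive fault,
    neglecting all impedances except the fault resistance itself:
    I = V_phase / R_fault, with V_phase = V_line / sqrt(3)."""
    v_phase = line_voltage_kv * 1000 / math.sqrt(3)
    return v_phase / fault_resistance_ohm

i_138 = earth_fault_current(13.8, 2000)  # 13.8 kV feeder, 2 kOhm fault
i_33 = earth_fault_current(33.0, 4000)   # 33 kV feeder, 4 kOhm fault
```

Both cases land near 4-5 A, which is why a sensitive earth fault pickup of a few amperes is needed to see such faults at all.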
Procedia PDF Downloads 119
8298 An Enhanced Harmony Search (ENHS) Algorithm for Solving Optimization Problems
Authors: Talha A. Taj, Talha A. Khan, M. Imran Khalid
Abstract:
Optimization techniques attract researchers to formulate a problem and determine its optimum solution. This paper presents an Enhanced Harmony Search (ENHS) algorithm for solving optimization problems. The proposed algorithm converges faster and is more efficient than the standard Harmony Search (HS) algorithm. The paper discusses the novel techniques in detail and also provides a strategy for tuning the decisive parameters that affect the efficiency of the ENHS algorithm. The algorithm is tested on various benchmark functions, a real-world optimization problem, and a constrained objective function. The results of ENHS are also compared to those of standard HS and various other optimization algorithms. The ENHS algorithm proves to be significantly better and more efficient than the other algorithms. The simulation and testing of the algorithms are performed in MATLAB. Keywords: optimization, harmony search algorithm, MATLAB, electronic
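The paper's ENHS enhancements are not reproduced here, but the standard HS loop it builds on can be sketched as follows (parameter values are common illustrative defaults, not the tuned settings from the study):

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Standard Harmony Search minimizing f over a box.
    hms: harmony memory size, hmcr: harmony memory considering rate,
    par: pitch adjusting rate, bw: pitch-adjustment bandwidth."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # reuse a value from memory
                x = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                              # random consideration
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):          # replace the worst harmony
            memory[worst] = new
    return min(memory, key=f)

sphere = lambda v: sum(x * x for x in v)
best = harmony_search(sphere, dim=3, bounds=(-5.0, 5.0))
```

The ENHS modifications described in the paper would slot into the pitch-adjustment and parameter-tuning steps of this loop.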
Procedia PDF Downloads 466
8297 Model of Multi-Criteria Evaluation for Railway Lines
Authors: Juraj Camaj, Martin Kendra, Jaroslav Masek
Abstract:
The paper focuses on the evaluation of railway tracks in Slovakia using a multi-criteria method. The evaluation of railway tracks has important implications for assessing investment in technical equipment, as well as for the allocation of marshalling yards. In the transport model, marshalling yards act as centers for the operation of their assigned catchment areas. This model is one of the effective ways to meet the development strategy of the European Community's railways. By applying this model in practice, a transport company can guarantee a higher quality of service and then expect an increase in performance. The model is also applicable to other rail networks. The model supplements the theoretical train formation problem with new ways of looking at the evaluation of factors affecting the organization of wagon flows. Keywords: railway track, multi-criteria methods, evaluation, transportation model
Procedia PDF Downloads 475
8296 An Implicit Methodology for the Numerical Modeling of Locally Inextensible Membranes
Authors: Aymen Laadhari
Abstract:
We present in this paper a fully implicit finite element method tailored for the numerical modeling of inextensible fluidic membranes in a surrounding Newtonian fluid. We consider a highly simplified version of the Canham-Helfrich model for phospholipid membranes, in which the bending force and spontaneous curvature are disregarded. The coupled problem is formulated in a fully Eulerian framework and the membrane motion is tracked using the level set method. The resulting nonlinear problem is solved by a Newton-Raphson strategy, featuring a quadratic convergence behavior. A monolithic solver is implemented, and we report several numerical experiments aimed at model validation and illustrating the accuracy of the proposed method. We show that stability is maintained for significantly larger time steps with respect to an explicit decoupling method. Keywords: finite element method, level set, Newton, membrane
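The Newton-Raphson strategy with quadratic convergence mentioned above can be illustrated on a scalar root-finding toy problem (the membrane solver itself applies it to the discretized nonlinear system, not to a scalar equation):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n);
    converges quadratically near a simple root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of x^2 - 2 starting from x0 = 1 (converges to sqrt(2))
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Quadratic convergence means the number of correct digits roughly doubles per iteration once the iterate is close to the root.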
Procedia PDF Downloads 334
8295 DCDNet: Lightweight Document Corner Detection Network Based on Attention Mechanism
Authors: Kun Xu, Yuan Xu, Jia Qiao
Abstract:
Document detection plays an important role in optical character recognition and text analysis. Because traditional detection methods have weak generalization ability, while deep neural networks have complex structures and large numbers of parameters that cannot be easily deployed on mobile devices, this paper proposes a lightweight Document Corner Detection Network (DCDNet). DCDNet is a two-stage architecture. The first stage, with an Encoder-Decoder structure, adopts depthwise separable convolutions to greatly reduce the number of network parameters. After introducing the Feature Attention Union (FAU) module, the second stage enhances the feature information along the spatial and channel dimensions and adaptively adjusts the size of the receptive field to enhance the feature expression ability of the model. To address the large imbalance in pixel counts between corner and non-corner regions, a Weighted Binary Cross Entropy Loss (WBCE Loss) is proposed, defining corner detection as a classification problem to make the training process more efficient. To make up for the lack of datasets for document corner detection, a dataset containing 6620 images, named the Document Corner Detection Dataset (DCDD), was created. Experimental results show that the proposed method obtains fast, stable, and accurate detection results on DCDD. Keywords: document detection, corner detection, attention mechanism, lightweight
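A weighted binary cross-entropy of the kind WBCE Loss describes, up-weighting the rare corner pixels so they are not swamped by the background, might be sketched as follows (the weighting scheme and values are assumptions for illustration, not the paper's exact definition):

```python
import math

def weighted_bce(y_true, y_pred, pos_weight, eps=1e-7):
    """Weighted binary cross-entropy over per-pixel predictions.
    pos_weight > 1 up-weights the rare positive (corner) class."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp for numerical safety
        total += -(pos_weight * t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# Mostly-background labels with one corner pixel, weight 10 on positives
loss = weighted_bce([0, 0, 0, 1], [0.1, 0.2, 0.1, 0.6], pos_weight=10.0)
```

Without the weight, the single corner pixel would contribute only a quarter of this tiny batch's loss; the weight makes its misclassification dominate.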
Procedia PDF Downloads 356
8294 Traffic Signal Control Using Citizens’ Knowledge through the Wisdom of the Crowd
Authors: Aleksandar Jovanovic, Katarina Kukic, Ana Uzelac, Dusan Teodorovic
Abstract:
Wisdom of the Crowd (WoC) is a decentralized method that uses the collective intelligence of humans. Individual guesses may be far from the target, but when aggregated as a group, they converge on good solutions for a given problem. We utilize WoC to address the challenge of controlling traffic lights at intersections on the streets of Kragujevac, Serbia. The problem at hand falls within the category of NP-hard problems. We employ an algorithm that leverages the swarm intelligence of bees: Bee Colony Optimization (BCO). Data regarding traffic signal timing at a single intersection are gathered from citizens through a survey. The results obtained in that manner are compared to the BCO results for different traffic scenarios. We use the Vissim traffic simulation software as a tool to compare the performance of the bees' and the humans' collective intelligence. Keywords: wisdom of the crowd, traffic signal control, combinatorial optimization, bee colony optimization
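The aggregation step at the heart of WoC, combining many individual guesses into one collective estimate, can be sketched with a robust median (the green-time values below are illustrative, not survey data from Kragujevac):

```python
def crowd_estimate(guesses):
    """Aggregate individual guesses into a collective estimate.
    The median is robust to a few wildly-off individuals."""
    s = sorted(guesses)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Illustrative green-time guesses (seconds) for one signal phase;
# the 90 s outlier does not drag the estimate away from the consensus
green_times = [25, 30, 28, 90, 32, 27, 31]
estimate = crowd_estimate(green_times)
```

A mean would also work when guesses are roughly symmetric, but the median keeps a single extreme respondent from dominating the phase timing.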
Procedia PDF Downloads 114
8293 A Multidimensional Genetic Algorithm Applicable for Our VRP Variant Dealing with the Problems of Infrastructure Defaults SVRDP-CMTW: “Safety Vehicle Routing Diagnosis Problem with Control and Modified Time Windows”
Authors: Ben Mansour Mouin, Elloumi Abdelkarim
Abstract:
We discuss the problem of routing a fleet of different vehicles from a central depot to different types of infrastructure defaults with dynamic maintenance requests, modified time windows, and control of the maintained defaults. For this purpose, we propose a modified metaheuristic to solve our mathematical model. SVRDP-CMTW is a VRP variant producing an optimal vehicle plan that facilitates the maintenance of different types of infrastructure defaults. This task is monitored after maintenance based on its priorities, the degree of danger associated with each default, and the neighborhood of the black spots. We present in this paper a multidimensional genetic algorithm (MGA), detailing its characteristics, proposed mechanisms, and role in our work. The coding of this algorithm represents the parameters that characterize each infrastructure default, with the objective of minimizing a combination of cost, distance, and maintenance time while satisfying the priority levels of the most urgent defaults. The developed algorithm allows the dynamic integration of newly detected defaults at execution time. The result is displayed in our programmed interactive system at routing time. This multidimensional genetic algorithm replaces N separate genetic algorithms for P different types of infrastructure-default problems: instead of one algorithm per problem, a single multidimensional algorithm solves all these problems at once. Keywords: mathematical model, VRP, multidimensional genetic algorithm, metaheuristics
Procedia PDF Downloads 202
8292 Slope Effect in Emission Evaluation to Assess Real Pollutant Factors
Authors: G. Meccariello, L. Della Ragione
Abstract:
Exposure to outdoor air pollution causes lung cancer and increases the risk of bladder cancer. Because air pollution in urban areas is mainly caused by transportation, it is necessary to evaluate pollutant exhaust emissions from vehicles during their real-world use. Nevertheless, their evaluation and reduction remain a key problem, especially in cities, which account for more than 50% of the world's population. Particular attention was given to the slope variability along the streets during each journey performed by the instrumented vehicle. In this paper, we deal with the problem of describing a quantitative approach for the reconstruction of GPS coordinates and altitude, in the context of a correlation study between driving cycles, emissions, and geographical location, during an experimental campaign realized with instrumented cars. Finally, the slope analysis can be correlated with the emission and consumption values at a specific road position, and its influence on their behaviour can be evaluated. Keywords: air pollution, driving cycles, GPS signal, slope, emission factor, fuel consumption
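Slope along a journey can be recovered from consecutive GPS fixes as rise over horizontal run; a minimal sketch using the haversine distance (the coordinates below are illustrative, not campaign data):

```python
import math

def road_slope_percent(lat1, lon1, alt1, lat2, lon2, alt2):
    """Grade (%) between two GPS fixes: altitude rise over horizontal run.
    Horizontal distance via the haversine formula (Earth radius 6371 km)."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    run = 2 * r * math.asin(math.sqrt(a))
    return 100.0 * (alt2 - alt1) / run

# Two illustrative fixes ~111 m apart with a 5 m altitude gain
grade = road_slope_percent(40.8500, 14.2500, 30.0, 40.8510, 14.2500, 35.0)
```

In practice, raw GPS altitude is noisy, so consecutive-fix slopes are usually smoothed over a window before being correlated with emissions.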
Procedia PDF Downloads 397
8291 Solving Flowshop Scheduling Problems with Ant Colony Optimization Heuristic
Authors: Arshad Mehmood Ch, Riaz Ahmad, Imran Ali Ch, Waqas Durrani
Abstract:
This study deals with the application of the Ant Colony Optimization (ACO) approach to solve the no-wait flowshop scheduling problem (NW-FSSP). The ACO algorithm so developed has been coded in MATLAB. The paper covers the detailed steps to apply ACO and focuses on judging the strength of ACO in relation to other solution techniques previously applied to the no-wait flowshop problem. The general-purpose approach was able to find reasonably accurate solutions for almost all the problems under consideration and was able to handle a fairly large spectrum of problems with far less CPU effort. Careful scrutiny of the results reveals that the algorithm presented performs better than other approaches, such as genetic algorithm and tabu search heuristics, earlier applied to solve the NW-FSSP data sets. Keywords: no-wait, flowshop, scheduling, ant colony optimization (ACO), makespan
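The makespan of a no-wait flowshop sequence, the objective ACO searches over here, can be computed directly from pairwise job delays; a minimal sketch with illustrative processing times:

```python
def no_wait_makespan(p):
    """Makespan of a no-wait flowshop for a given job sequence.
    p[j][k] = processing time of the j-th job in the sequence on machine k.
    Each job must flow through all machines with no waiting in between,
    so only each job's start time is free to shift."""
    m = len(p[0])
    start = 0.0
    for prev, job in zip(p, p[1:]):
        # Minimum start offset keeping `job` from overtaking `prev`
        # on any machine
        delay = 0.0
        acc_prev, acc_job = 0.0, 0.0
        for k in range(m):
            acc_prev += prev[k]
            delay = max(delay, acc_prev - acc_job)
            acc_job += job[k]
        start += delay
    return start + sum(p[-1])

# Two machines, three jobs in a fixed (illustrative) sequence
times = [[3, 2], [1, 4], [2, 1]]
cmax = no_wait_makespan(times)
```

ACO's role is then to search the permutation space of job sequences, using this makespan as the fitness of each tour the ants construct.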
Procedia PDF Downloads 440
8290 Coupling Random Demand and Route Selection in the Transportation Network Design Problem
Authors: Shabnam Najafi, Metin Turkay
Abstract:
The network design problem (NDP) is used to determine optimal values for certain pre-specified decision variables, such as capacity expansion of nodes and links, by optimizing various system performance measures, including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while taking into account the route choice behavior of network users. NDP studies have mostly investigated the random demand and route selection constraints separately due to computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem to address the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes cost and air pollution simultaneously, under random demand and a stochastic route selection constraint, aiming to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time on each link. We consider a city with origin and destination nodes, which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine the amounts of road capacity and origin zones simultaneously. This analysis simultaneously minimizes the travel and expansion costs of routes and origin zones on one side and CO emissions on the other. In this work, demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically the logit route choice model. Considering both random demand and random route choice makes the model more applicable to the design of urban networks. 
The epsilon-constraint method is one way to solve both linear and nonlinear multi-objective problems, and it is used in this work. The problem was solved by keeping the first objective (the cost function) as the objective function and turning the second objective into a constraint bounded above by an epsilon, where epsilon is an upper bound on the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links was solved in GAMS, and the set of Pareto points was obtained: there are 15 efficient solutions. Across these solutions, as the cost function value increases, the emission function value decreases, and vice versa. Keywords: epsilon-constraint, multi-objective, network design, stochastic
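The BPR link impedance function referred to above has the standard form t = t0·(1 + α(v/c)^β), with the conventional coefficients α = 0.15 and β = 4; a minimal sketch:

```python
def bpr_travel_time(free_flow_time, volume, capacity, alpha=0.15, beta=4):
    """Bureau of Public Roads link impedance function:
    t = t0 * (1 + alpha * (v/c)^beta), with the standard
    coefficients alpha = 0.15 and beta = 4."""
    return free_flow_time * (1 + alpha * (volume / capacity) ** beta)

# A link with 10 min free-flow time loaded exactly at capacity (v/c = 1)
t = bpr_travel_time(10.0, 1500, 1500)
```

At v/c = 1 the standard coefficients give a 15% travel-time increase over free flow; because of the fourth power, congestion cost grows steeply once volume exceeds capacity.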
Procedia PDF Downloads 651