Search results for: time series data mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 38095

35395 Exceptional Cost and Time Optimization with Successful Leak Repair and Restoration of Oil Production: West Kuwait Case Study

Authors: Nasser Al-Azmi, Al-Sabea Salem, Abu-Eida Abdullah, Milan Patra, Mohamed Elyas, Daniel Freile, Larisa Tagarieva

Abstract:

Well intervention was performed along with Production Logging Tools (PLT) to detect sources of water and to check well integrity for two West Kuwait oil wells that had started to produce 100% water. For the first well, PLT was run to check the perforations: no production was observed from the bottom two perforation intervals, while an intake of water was observed from the topmost perforation. A decision was then taken to extend the PLT survey from tag depth to the Y-tool. For the second well, the aim was to detect the source of water and whether there was a leak in the 7'' liner in front of the upper zones. Data could not be recorded in flowing conditions due to casing deformation at almost 8300 ft. For the first well, the interpretation of PLT and well integrity data showed a hole in the 9 5/8'' casing from 8468 ft to 8494 ft producing the majority of the water, about 2478 bbl/d. The upper perforation from 10812 ft to 10854 ft was taking 534 stb/d. For the second well, there was a hole in the 7'' liner from 8303 ft MD to 8324 ft MD producing 8334.0 stb/d of water, with an intake zone from 10322.9 to 10380.8 ft MD taking all of the fluid. To restore oil production, a workover (W/O) rig was mobilized to prevent dump flooding, and during the W/O the leaking interval was confirmed for both wells. The leakage was cement squeezed and tested at 900-psi positive pressure and 500-psi drawdown pressure. The cement squeeze job was successful. After the W/O, the wells kept producing for cleaning, and eventually the water cut (WC) dropped to 0%. Regular PLT and well integrity logs are required to study well performance and well integrity issues; proper cement behind casing is essential to well longevity and integrity; and the presence of the Y-tool is essential for monitoring well parameters and the ESP and for facilitating well intervention tasks. Cost and time optimization in oil and gas, and especially during rig operations, is crucial. PLT data quality and the accuracy of the interpretations contributed greatly to identifying the leakage interval accurately and, in turn, saved a lot of time and reduced the repair cost by almost 35 to 45%. The added value here was related more to cost reduction and to quick, effective decision making based on the economic environment.

Keywords: leak, water shut-off, cement, water leak

Procedia PDF Downloads 104
35394 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of utilization. In all these fields, the amount of collected data is increasing quickly, but with the increase of the data, the computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing the redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are software-only implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time both for fetching and for processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems don't have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, a hardware implementation of the two-stage greedy algorithm for finding one reduct is presented. The decision table is used as input. The output of the algorithm is the superreduct, which is the reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. The algorithm described above has two disadvantages: i) it generates the superreduct instead of the reduct; ii) the additional first stage may be unnecessary if the core is empty. But for systems focused on fast computation of the reduct, the first disadvantage is not a key problem. The core calculation can be achieved with a combinational logic block, and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in C and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
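
Before committing such an algorithm to hardware, it is convenient to prototype it in software. Below is a minimal Python sketch of the two-stage procedure, assuming a decision table of (condition-attribute tuple, decision) rows; the frequency-based greedy score and the toy table are illustrative assumptions, not the paper's implementation.

```python
from itertools import combinations

def discernibility_core(rows):
    """Stage 1: the core = attributes appearing as singleton entries in the
    discernibility matrix (removing one would merge two objects that have
    different decisions)."""
    n_attrs = len(rows[0][0])
    core = set()
    for (x, dx), (y, dy) in combinations(rows, 2):
        if dx == dy:
            continue
        differing = [i for i in range(n_attrs) if x[i] != y[i]]
        if len(differing) == 1:          # singleton entry -> core attribute
            core.add(differing[0])
    return core

def discerns(attrs, rows):
    """True if the attribute subset distinguishes every pair of objects
    with different decisions."""
    for (x, dx), (y, dy) in combinations(rows, 2):
        if dx != dy and all(x[i] == y[i] for i in attrs):
            return False
    return True

def greedy_superreduct(rows):
    """Stage 2: enrich the core with the most frequent attribute values
    until the subset discerns all objects (this yields a superreduct)."""
    n_attrs = len(rows[0][0])
    chosen = discernibility_core(rows)

    def score(i):  # greedy score: count of the most common value of attr i
        values = [x[i] for x, _ in rows]
        return max(values.count(v) for v in set(values))

    remaining = sorted(set(range(n_attrs)) - chosen, key=score, reverse=True)
    for attr in remaining:
        if discerns(chosen, rows):
            break
        chosen.add(attr)
    return chosen

# toy decision table: ((a0, a1, a2), decision)
table = [((1, 0, 1), 0), ((1, 1, 1), 1), ((0, 1, 0), 1), ((0, 0, 0), 0)]
print(greedy_superreduct(table))   # here the core {1} already discerns
```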

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 205
35393 Machine Learning Analysis of Student Success in Introductory Calculus Based Physics I Course

Authors: Chandra Prayaga, Aaron Wade, Lakshmi Prayaga, Gopi Shankar Mallu

Abstract:

This paper presents the use of machine learning algorithms to predict the success of students in an introductory physics course. Data having 140 rows, pertaining to the performance of two batches of students, was used. The lack of sufficient data to train robust machine learning models was compensated for by generating synthetic data similar to the real data. CTGAN and CTGAN with Gaussian Copula (Gaussian) were used to generate synthetic data, with the real data as input. To check the similarity between the real data and each synthetic dataset, pair plots were made. The synthetic data was used to train machine learning models using the PyCaret package. For the CTGAN data, the Ada Boost Classifier (ADA) was found to be the ML model with the best fit, whereas the CTGAN with Gaussian Copula yielded Logistic Regression (LR) as the best model. Both models were then tested for accuracy with the real data. ROC-AUC analysis was performed for all ten classes of the target variable (Grades A, A-, B+, B, B-, C+, C, C-, D, F). The ADA model with CTGAN data showed a mean AUC score of 0.4377, whereas the LR model with the Gaussian data showed a mean AUC score of 0.6149. ROC-AUC plots were obtained for each Grade value separately. The LR model with Gaussian data showed consistently better AUC scores compared to the ADA model with CTGAN data, except for two Grade values, C- and A-.
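
A pipeline of this shape can be assembled from the open-source ctgan and pycaret packages. The sketch below is a hedged reconstruction rather than the authors' code; the CSV name, column names, and epoch count are illustrative assumptions (the Gaussian Copula variant would come from the related sdv package).

```python
import pandas as pd
from ctgan import CTGAN
from pycaret.classification import setup, compare_models, predict_model

real = pd.read_csv("physics_students.csv")   # ~140 rows in the study
discrete_cols = ["grade"]                     # 10-class target: A .. F

# fit CTGAN on the small real dataset, then oversample synthetically
synth_model = CTGAN(epochs=300)
synth_model.fit(real, discrete_cols)
synthetic = synth_model.sample(2000)

# train candidate classifiers on the synthetic data with PyCaret and pick
# the best by cross-validated score (ADA and LR won in the paper)
exp = setup(data=synthetic, target="grade", session_id=42)
best = compare_models()

# final check: score the model trained on synthetic data against real data
holdout = predict_model(best, data=real)
```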

Keywords: machine learning, student success, physics course, grades, synthetic data, CTGAN, gaussian copula CTGAN

Procedia PDF Downloads 29
35392 The Negative Effects of Controlled Motivation on Mathematics Achievement

Authors: John E. Boberg, Steven J. Bourgeois

Abstract:

The decline in student engagement and motivation through the middle years is well documented and clearly associated with a decline in mathematics achievement that persists through high school. To combat this trend and, very often, to meet high-stakes accountability standards, a growing number of parents, teachers, and schools have implemented various methods to incentivize learning. However, according to Self-Determination Theory, forms of incentivized learning such as public praise, tangible rewards, or threats of punishment tend to undermine intrinsic motivation and learning. By focusing on external forms of motivation that thwart autonomy in children, adults also potentially threaten relatedness measures such as trust and emotional engagement. Furthermore, these controlling motivational techniques tend to promote shallow forms of cognitive engagement at the expense of more effective deep processing strategies. Therefore, any short-term gains in apparent engagement or test scores are overshadowed by long-term diminished motivation, resulting in inauthentic approaches to learning and lower achievement. The current study focuses on the relationships between student trust, engagement, and motivation during these crucial years as students transition from elementary to middle school. In order to test the effects of controlled motivational techniques on achievement in mathematics, this quantitative study was conducted on a convenience sample of 22 elementary and middle schools from a single public charter school district in the south-central United States. The study employed multi-source data from students (N = 1,054), parents (N = 7,166), and teachers (N = 356), along with student achievement data and contextual campus variables. Cross-sectional questionnaires were used to measure the students’ self-regulated learning, emotional and cognitive engagement, and trust in teachers. Parents responded to a single item on incentivizing the academic performance of their child, and teachers responded to a series of questions about their acceptance of various incentive strategies. Structural equation modeling (SEM) was used to evaluate model fit and analyze the direct and indirect effects of the predictor variables on achievement. Although a student’s trust in the teacher positively predicted both emotional and cognitive engagement, none of these three predictors accounted for any variance in achievement in mathematics. The parents’ use of incentives, on the other hand, predicted a student’s perception of his or her controlled motivation, and these two variables had significant negative effects on achievement. While controlled motivation had the greatest effect on achievement, parental incentives demonstrated both direct and indirect effects on achievement through the students’ self-reported controlled motivation. Comparing upper elementary student data with middle-school student data revealed that controlling forms of motivation may be taking their toll on student trust and engagement over time. While parental incentives positively predicted both cognitive and emotional engagement in the younger sub-group, such forms of controlling motivation negatively predicted both trust in teachers and emotional engagement in the middle-school sub-group. These findings support the claims, posited by Self-Determination Theory, about the dangers of incentivizing learning. Short-term gains belie the underlying damage to motivational processes that lead to decreased intrinsic motivation and achievement. Such practices also appear to thwart basic human needs such as relatedness.

Keywords: controlled motivation, student engagement, incentivized learning, mathematics achievement, self-determination theory, student trust

Procedia PDF Downloads 203
35391 Controlling the Process of a Chicken Dressing Plant through Statistical Process Control

Authors: Jasper Kevin C. Dionisio, Denise Mae M. Unsay

Abstract:

In a manufacturing firm, controlling the process ensures that optimum efficiency, productivity, and quality are achieved in the organization. An operation with no standardized procedure yields poor productivity, inefficiency, and an out-of-control process. This study focuses on controlling the small intestine processing of a chicken dressing plant through the use of Statistical Process Control (SPC). Since the operation does not employ a standard procedure and does not have an established standard time, the overall small intestine processing operation, assessed from observed times using an X-bar R control chart, was found to be out of control. To solve this problem, the researchers conducted a motion and time study aimed at establishing a standard procedure for the operation. The normal operator was picked through the use of the Westinghouse Rating System. Instead of utilizing a traditional motion and time study, the researchers used the X-bar R control chart to determine the process average, which was then used to establish the standard time. The observed times of the normal operator were noted and plotted on the X-bar R control chart. Out-of-control points due to assignable causes were removed, and the process average (the average time in which the normal operator performed the process, now in control and free from any outliers) was obtained. The process average was then used to determine the standard time of small intestine processing. As a recommendation, the researchers suggest implementing the established standard time, which is consistent with the standard procedure adopted from the normal operator. With that recommendation, the whole operation is expected to yield a 45.54% increase in productivity.
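
For reference, the X-bar R limits behind such an assessment follow a standard computation, sketched below in Python with illustrative subgroup timings (not the plant's data) and the tabulated constants for subgroups of five.

```python
import numpy as np

# each row = one subgroup of 5 timed cycles of the operation (minutes)
samples = np.array([
    [4.1, 3.9, 4.3, 4.0, 4.2],
    [4.5, 4.4, 4.1, 4.6, 4.3],
    [3.8, 4.0, 4.2, 3.9, 4.1],
])

A2, D3, D4 = 0.577, 0.0, 2.114        # control-chart constants for n = 5

xbar = samples.mean(axis=1)                      # subgroup means
r = samples.max(axis=1) - samples.min(axis=1)    # subgroup ranges
xbarbar, rbar = xbar.mean(), r.mean()

# control limits; subgroups outside them are out-of-control signals, and
# if an assignable cause is found they are removed and limits recomputed
xbar_ucl, xbar_lcl = xbarbar + A2 * rbar, xbarbar - A2 * rbar
r_ucl, r_lcl = D4 * rbar, D3 * rbar

out = (xbar > xbar_ucl) | (xbar < xbar_lcl) | (r > r_ucl) | (r < r_lcl)
print(f"process average = {xbarbar:.2f} min, "
      f"out-of-control subgroups: {np.where(out)[0]}")
```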

Keywords: motion and time study, process controlling, statistical process control, X-Bar R Control chart

Procedia PDF Downloads 200
35390 Mapping Contested Sites - Permanence of the Temporary: Mouttalos Case Study

Authors: M. Hadjisoteriou, A. Kyriacou Petrou

Abstract:

This paper discusses ideas of social sustainability in urban design and human behavior in multicultural contested sites. It focuses on the potential of re-reading the “site” through mapping that acts as a research methodology, and discusses the chosen site of Mouttalos, Cyprus as a place of multiple identities. Through a methodology of mapping using a bottom-up approach, a process of disassembling derives that acts as a mechanism to re-examine space and place by searching for the invisible and the non-measurable, understanding the site through its detailed inhabitation patterns. The significance of this study lies in the use of mapping as an active form of thinking rather than a passive process of representation, allowing a new site to be discovered and giving multiple opportunities for adaptive urban strategies and socially engaged design approaches. We discuss the above themes based on the chosen contested site of Mouttalos, a small Turkish Cypriot neighbourhood in the old centre of Paphos (Ktima), SW Cyprus. During the political unrest between the Greek and Turkish Cypriot communities in 1963, the area became an enclave for the Turkish Cypriots, excluding any contact with the rest of the area. Following the Turkish invasion of 1974, the residents left their homes, plots and workplaces, resettling in the North of Cyprus. Greek Cypriot refugees moved into the area. The presence of the Greek Cypriot refugees is still considered to be a temporary resettlement, and the buildings and the residents themselves exist in a state of uncertainty. The site is documented through a series of parallel investigations into its physical conditions and history. The research methodology uses the process of mapping to expose the complex and often invisible layers of information that coexist. By registering the site through the subjective experiences and everyday stories of inhabitants, a series of cartographic recordings reveals the space between happening and narrative, and especially the space between different cultures and religions. The research put specific emphasis on engaging the public, promoting social interaction, and identifying spatial patterns of occupation by previous inhabitants through social media. The findings exposed three main areas of interest. Firstly, we identified inter-dependent relationships between permanence and temporality, characterised by elements such as signage through layers of time, past events and periodical street festivals, unfolding memory and belonging. Secondly, issues of co-ownership and occupation were found through particular narratives of exchange between the two communities and through appropriation of space. Finally, formal and informal inhabitation of space was revealed through the presence of informal shared backyards, alternative paths, porous street edges, and formal and informal landmarks. The importance of the above findings was in achieving a shift of focus from the built infrastructure to the soft network of multiple and complex relations of dependence and autonomy. Proposed interventions for this contested site were informed and led by a new multicultural identity, where invisible qualities were revealed through the process of mapping, taking on issues of layers of time, formal and informal inhabitation, and the “permanence of the temporary”.

Keywords: contested sites, mapping, social sustainability, temporary urban strategies

Procedia PDF Downloads 407
35389 The Visualizer for Real-Time Analysis of Internet Trends

Authors: Radek Malinský, Ivan Jelínek

Abstract:

The current web has become a modern encyclopedia, where people share their thoughts and ideas on various topics around them. Such an encyclopedia is very useful for other people who are looking for answers to their questions. However, with the growing popularity of social networking and blogging and ever-expanding network services, there has also been a growing diversity of technologies, along with differing structures of individual websites. It is, therefore, difficult for a common Internet user to directly find a relevant answer. This paper presents a web application for the real-time end-to-end analysis of selected Internet trends, where a trend can be whatever people post online. The application integrates fully configurable tools for data collection and analysis using selected webometric algorithms, and for its chronological visualization to the user. It can be assumed that the application helps users evaluate the quality of various products that are mentioned online.

Keywords: trend, visualizer, web analysis, web 2.0

Procedia PDF Downloads 245
35388 Multi-Channel Charge-Coupled Device Sensors Real-Time Cell Growth Monitor System

Authors: Han-Wei Shih, Yao-Nan Wang, Ko-Tung Chang, Lung-Ming Fu

Abstract:

A multi-channel real-time cell growth monitoring and evaluation system using charge-coupled device (CCD) sensors with 40X lenses, integrated with an NI LabVIEW image processing program, is proposed and demonstrated. The LED light source of the monitoring system is controlled by an 8051 microprocessor integrated with the NI LabVIEW software. In this study, the growth rate and morphology of RAW264.7 cells at the same concentration were demonstrated in four different culture conditions (DMEM, LPS, G1, G2). Real-time cell growth images were captured and analyzed by NI Vision Assistant every 10 minutes in the incubator. An image binarization technique was applied to calculate the cell doubling time and cell division index. The cell doubling times and cell division indices of the four groups (DMEM, LPS, LPS+G1, LPS+G2) are 12.3 hr, 10.8 hr, 14.0 hr, 15.2 hr and 74.20%, 78.63%, 69.53%, 66.49%, respectively. The image magnification of the multi-channel CCD real-time cell monitoring system is about 100X~200X, which is comparable to a traditional microscope.
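
The step from binarized images to doubling time follows the usual exponential-growth relation N(t) = N0 * 2^(t/Td). Below is a hedged sketch with OpenCV standing in for NI Vision Assistant; the file names, threshold, capture interval, and the use of covered-area fraction as a proxy for cell number are illustrative assumptions.

```python
import cv2
import numpy as np

def cell_area_fraction(path, thresh=128):
    """Binarize one frame and return the fraction of pixels covered by cells."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return np.count_nonzero(binary) / binary.size

# two frames of the same channel, captured dt hours apart
n0 = cell_area_fraction("frame_t0.png")
n1 = cell_area_fraction("frame_t1.png")
dt = 24.0   # hours between the frames

# exponential growth: N(t) = N0 * 2**(t / Td)  =>  Td = dt * ln2 / ln(N1/N0)
doubling_time = dt * np.log(2) / np.log(n1 / n0)
print(f"estimated doubling time: {doubling_time:.1f} hr")
```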

Keywords: charge-coupled device (CCD), RAW264.7, doubling time, division index

Procedia PDF Downloads 342
35387 Collaboration of UNFPA and USAID to Mobilize Domestic Government Resources for Contraceptive Procurement in Madagascar

Authors: Josiane Yaguibou, Ngoy Kishimba, Issiaka V. Coulibaly, Sabrina Pestilli, Falinirina Razanalison, Hantanirina Andremanisa

Abstract:

Background: In recent years, Madagascar has faced a significant reduction in donors’ financial resources for the purchase of contraceptive products to meet the family planning needs of the population. In order to ensure the sustainability of the family planning program in the current context, UNFPA Madagascar engaged in a series of initiatives with the ultimate aim of identifying sustainable financing mechanisms for the program. Program intervention: UNFPA Madagascar established a close collaboration with USAID to engage in a series of joint advocacy and resource mobilization activities with the government. The following initiatives were conducted: (i) organization of a high-level round table to engage the government; (ii) support to the government in renewing the FP2030 commitments; (iii) signature of the Country Compact 2022-2024; (iv) allocation of government funds in 2022 and 2023 of over 829,222 USD; (v) obtaining a matching fund of 1.5 million USD from UNFPA to encourage the government to allocate resources for the purchase of contraceptive products. Program implications: The collaboration and the joint advocacy made it possible to (i) obtain budgetary allocations from the government to purchase products in 2022 and 2023, with a significant reduction in financing gaps; (ii) convince the government to seek additional financing from partners such as the World Bank, which granted more than 8 million USD for the purchase of products; and (iii) reduce stock shortages from more than 30% to 15%.

Keywords: UNFPA, USAID, collaboration, contraceptives

Procedia PDF Downloads 51
35386 A 0-1 Goal Programming Approach to Optimize the Layout of Hospital Units: A Case Study in an Emergency Department in Seoul

Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee

Abstract:

This paper proposes a method to optimize the layout of an emergency department (ED) based on real executions of care processes, considering several planning objectives simultaneously. Recently, demand for healthcare services has increased dramatically. As the demand for healthcare services increases, so does the need for new healthcare buildings, as well as for redesigning and renovating existing ones. The importance of implementing a standard set of engineering facilities planning and design techniques has already been proven in both the manufacturing and service industries, with many significant functional efficiencies. However, the high complexity of care processes remains a major challenge to applying these methods in healthcare environments. Process mining techniques were applied in this study to tackle the problem of complexity and to enhance care process analysis. Process-related information, such as clinical pathways, was extracted from the information system of an ED. A 0-1 goal programming approach is then proposed to find a single layout that simultaneously satisfies several goals. The proposed model was solved with the optimization software CPLEX 12. The solution reached using the proposed method shows a 42.2% improvement in the walking distance of normal patients and a 47.6% improvement in the walking distance of critical patients, at a minimum cost of relocation. It has been observed that many patients must unnecessarily walk long distances during their visit to the emergency department because of an inefficient design. A carefully designed layout can significantly decrease patient walking distance and related complications.
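
A miniature version of such a 0-1 goal programming model can be written with the open-source PuLP library (the paper's model, solved with CPLEX 12, is far richer). All units, locations, distances, visit counts, targets, and weights below are illustrative assumptions.

```python
import pulp

units = ["triage", "imaging", "lab"]
locs = ["A", "B", "C"]
# walking distance (m) from the ED entrance to each candidate location
walk = {"A": 20, "B": 45, "C": 70}
# daily visits to each unit by patient class (mined from care processes)
visits_normal = {"triage": 100, "imaging": 40, "lab": 60}
visits_critical = {"triage": 25, "imaging": 10, "lab": 5}
# each unit's current location; moving away from it costs money
current = {"triage": "A", "imaging": "B", "lab": "C"}
move_cost = 1000

x = pulp.LpVariable.dicts("x", (units, locs), cat="Binary")
# deviation variables: how far each walking-distance goal is overshot
over_n = pulp.LpVariable("over_normal", lowBound=0)
over_c = pulp.LpVariable("over_critical", lowBound=0)

prob = pulp.LpProblem("ed_layout", pulp.LpMinimize)
for u in units:                     # every unit gets exactly one location
    prob += pulp.lpSum(x[u][l] for l in locs) == 1
for l in locs:                      # every location holds exactly one unit
    prob += pulp.lpSum(x[u][l] for u in units) == 1

dist_n = pulp.lpSum(visits_normal[u] * walk[l] * x[u][l]
                    for u in units for l in locs)
dist_c = pulp.lpSum(visits_critical[u] * walk[l] * x[u][l]
                    for u in units for l in locs)
relocations = pulp.lpSum(1 - x[u][current[u]] for u in units)

# goal constraints: unmet amounts load the deviation variables
prob += dist_n <= 5000 + over_n
prob += dist_c <= 1200 + over_c
# objective: weighted deviations plus relocation cost; critical weigh more
prob += 1 * over_n + 5 * over_c + move_cost * relocations

prob.solve()
for u in units:
    print(u, "->", next(l for l in locs if x[u][l].value() == 1))
```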

Keywords: healthcare operation management, goal programming, facility layout problem, process mining, clinical processes

Procedia PDF Downloads 273
35385 Predictive Analytics in Oil and Gas Industry

Authors: Suchitra Chnadrashekhar

Abstract:

Earlier looked at as a support function in an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data, which was unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. The presence of IT has given the Oil & Gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value in its day-to-day operations. Proper synchronization between operational data systems and information technology systems is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity, and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic, or systems, approach towards asset optimization and thus have the functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of this sector. The paper is supported by real-time data and an evaluation of the data for a given oil production asset on an application tool, SAS. The reason for using SAS as the application for our analysis is that SAS provides an analytics-based framework to improve uptimes, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, maintenance problems can be predicted before they happen and root causes determined in order to update processes for future prevention.

Keywords: hydrocarbon, information technology, SAS, predictive analytics

Procedia PDF Downloads 338
35384 A CMOS Capacitor Array for ESPAR with Fast Switching Time

Authors: Jin-Sup Kim, Se-Hwan Choi, Jae-Young Lee

Abstract:

An 8-bit CMOS capacitor array is designed for use in an electrically steerable passive array radiator (ESPAR). The proposed capacitor array shows fast response times in its rising and falling characteristics. Compared to other works in silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technologies, it shows a comparable tuning range and switching time with low power consumption. Implemented in a 0.18 um CMOS process, the capacitor array features a tuning range of 1.5 to 12.9 pF at 2.4 GHz. Including the 2x4 decoder for the control interface, the chip size is 350 um x 145 um. Current consumption is about 80 nA at 1.8 V operation.

Keywords: CMOS capacitor array, ESPAR, SOI, SOS, switching time

Procedia PDF Downloads 577
35383 Geovisualization of Human Mobility Patterns in Los Angeles Using Twitter Data

Authors: Linna Li

Abstract:

The capability to move around places is doubtless very important for individuals to maintain good health and social functions. People’s activities in space and time have long been a research topic in behavioral and socio-economic studies, particularly focusing on the highly dynamic urban environment. By analyzing groups of people who share similar activity patterns, many socio-economic and socio-demographic problems and their relationships with individual behavior preferences can be revealed. Los Angeles, known for its large population, ethnic diversity, cultural mixing, and entertainment industry, faces great transportation challenges such as traffic congestion, parking difficulties, and long commuting. Understanding people’s travel behavior and movement patterns in this metropolis sheds light on potential solutions to complex problems regarding urban mobility. This project visualizes people’s trajectories in Greater Los Angeles (L.A.) Area over a period of two months using Twitter data. A Python script was used to collect georeferenced tweets within the Greater L.A. Area including Ventura, San Bernardino, Riverside, Los Angeles, and Orange counties. Information associated with tweets includes text, time, location, and user ID. Information associated with users includes name, the number of followers, etc. Both aggregated and individual activity patterns are demonstrated using various geovisualization techniques. Locations of individual Twitter users were aggregated to create a surface of activity hot spots at different time instants using kernel density estimation, which shows the dynamic flow of people’s movement throughout the metropolis in a twenty-four-hour cycle. In the 3D geovisualization interface, the z-axis indicates time that covers 24 hours, and the x-y plane shows the geographic space of the city. Any two points on the z axis can be selected for displaying activity density surface within a particular time period. In addition, daily trajectories of Twitter users were created using space-time paths that show the continuous movement of individuals throughout the day. When a personal trajectory is overlaid on top of ancillary layers including land use and road networks in 3D visualization, the vivid representation of a realistic view of the urban environment boosts situational awareness of the map reader. A comparison of the same individual’s paths on different days shows some regular patterns on weekdays for some Twitter users, but for some other users, their daily trajectories are more irregular and sporadic. This research makes contributions in two major areas: geovisualization of spatial footprints to understand travel behavior using the big data approach and dynamic representation of activity space in the Greater Los Angeles Area. Unlike traditional travel surveys, social media (e.g., Twitter) provides an inexpensive way of data collection on spatio-temporal footprints. The visualization techniques used in this project are also valuable for analyzing other spatio-temporal data in the exploratory stage, thus leading to informed decisions about generating and testing hypotheses for further investigation. The next step of this research is to separate users into different groups based on gender/ethnic origin and compare their daily trajectory patterns.
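
The hot-spot surfaces described here can be produced with a standard kernel density estimator. Below is a hedged sketch using scikit-learn; the CSV name, column names, bandwidth, and the approximate Greater L.A. bounds are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.neighbors import KernelDensity

tweets = pd.read_csv("la_tweets.csv")    # columns: lat, lon, hour (0-23)

# density surface for one time instant (8 a.m. here); repeating this per
# hour produces the 24-hour dynamic flow shown in the 3D space-time view
pts = tweets.loc[tweets["hour"] == 8, ["lon", "lat"]].to_numpy()
kde = KernelDensity(kernel="gaussian", bandwidth=0.01).fit(pts)

# evaluate the density on a grid covering the Greater L.A. area
lon = np.linspace(-119.0, -116.9, 200)
lat = np.linspace(33.4, 34.9, 200)
grid = np.array([(x, y) for y in lat for x in lon])
density = np.exp(kde.score_samples(grid)).reshape(len(lat), len(lon))
```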

Keywords: geovisualization, human mobility pattern, Los Angeles, social media

Procedia PDF Downloads 103
35382 Kinetics, Equilibrium and Thermodynamics of the Adsorption of Triphenyltin onto NanoSiO₂/Fly Ash/Activated Carbon Composite

Authors: Olushola S. Ayanda, Olalekan S. Fatoki, Folahan A. Adekola, Bhekumusa J. Ximba, Cecilia O. Akintayo

Abstract:

In the present study, the kinetics, equilibrium and thermodynamics of the adsorption of triphenyltin (TPT) from TPT-contaminated water onto a nanoSiO2/fly ash/activated carbon composite were investigated in a batch adsorption system. Equilibrium adsorption data were analyzed using the Langmuir, Freundlich, Temkin and Dubinin-Radushkevich (D-R) isotherm models. Pseudo first- and second-order, Elovich and fractional power models were applied to test the kinetic data, and in order to understand the mechanism of adsorption, thermodynamic parameters such as ΔG°, ΔS° and ΔH° were also calculated. The results showed very good compliance with the pseudo second-order equation, while the Freundlich and D-R models fit the experimental data. Approximately 99.999% of TPT was removed from an initial concentration of 100 mg/L TPT at 80 °C, a contact time of 60 min, pH 8 and a stirring speed of 200 rpm. Thus, the nanoSiO2/fly ash/activated carbon composite could be used as an effective adsorbent for the removal of TPT from contaminated water and wastewater.
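
The pseudo second-order model referred to above is commonly fitted in its nonlinear form q_t = k2*qe^2*t / (1 + k2*qe*t). Below is a sketch with SciPy on illustrative uptake data (not the study's measurements).

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 10, 20, 30, 45, 60])       # contact time (min)
qt = np.array([22, 35, 48, 55, 60, 62])     # adsorption uptake (mg/g)

def pseudo_second_order(t, qe, k2):
    return (k2 * qe**2 * t) / (1 + k2 * qe * t)

(qe, k2), _ = curve_fit(pseudo_second_order, t, qt, p0=(qt.max(), 0.001))

# R^2 to judge compliance with the model, as done in the study
resid = qt - pseudo_second_order(t, qe, k2)
r2 = 1 - np.sum(resid**2) / np.sum((qt - qt.mean())**2)
print(f"qe = {qe:.1f} mg/g, k2 = {k2:.4f} g/(mg*min), R^2 = {r2:.3f}")
```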

Keywords: isotherm, kinetics, nanoSiO₂/fly ash/activated carbon composite, triphenyltin

Procedia PDF Downloads 283
35381 Real-Time Detection of Space Manipulator Self-Collision

Authors: Zhang Xiaodong, Tang Zixin, Liu Xin

Abstract:

In order to avoid self-collision of space manipulators during operation, a real-time detection method is proposed in this paper. The manipulator links are fitted into cylindrical enveloping surfaces, and the algorithm for detecting collisions between cylinders is then analyzed. Using this algorithm, collisions between the manipulator's own links can be detected in real time during operation. To ensure the safety of the operation, a safety threshold is designed. Simulation and experimental results verify the effectiveness of the proposed algorithm for a 7-DOF space manipulator.
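
A common way to implement such a cylinder-cylinder test is to treat each link as a capsule and compare the distance between the axis segments with the sum of the radii plus the safety threshold. Below is a hedged sketch along those lines, with illustrative link geometry; the paper's exact algorithm may differ.

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between segments p1-q1 and p2-q2 (closest-point
    method from Ericson, Real-Time Collision Detection)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    b, c = d1 @ d2, d1 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0, 1) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0:
        t, s = 0.0, np.clip(-c / a, 0, 1)
    elif t > 1:
        t, s = 1.0, np.clip((b - c) / a, 0, 1)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def links_collide(a0, a1, ra, b0, b1, rb, threshold=0.05):
    """Capsule-capsule check with a safety threshold (meters)."""
    return segment_distance(a0, a1, b0, b1) < ra + rb + threshold

# two non-adjacent links of a 7-DOF arm (axis endpoints in m, radii 8 cm)
link2 = (np.array([0.0, 0.0, 0.5]), np.array([0.0, 0.0, 1.5]), 0.08)
link5 = (np.array([0.3, 0.1, 0.6]), np.array([0.9, 0.1, 0.6]), 0.08)
print(links_collide(*link2, *link5))
```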

Keywords: space manipulator, collision detection, self-collision, the real-time collision detection

Procedia PDF Downloads 448
35380 Data Access, AI Intensity, and Scale Advantages

Authors: Chuping Lo

Abstract:

This paper presents a simple model demonstrating that, ceteris paribus, countries with lower barriers to accessing global data tend to earn higher incomes than other countries. Therefore, large countries, which inherently have greater data resources, tend to have higher incomes than smaller countries, such that the former may be more hesitant than the latter to liberalize cross-border data flows in order to maintain this advantage. Furthermore, countries with higher artificial intelligence (AI) intensity in production technologies tend to benefit more from economies of scale in data aggregation, leading to higher income and more trade as they are better able to utilize global data.

Keywords: digital intensity, digital divide, international trade, economies of scale

Procedia PDF Downloads 51
35379 Visual Thinking Routines: A Mixed Methods Approach Applied to Student Teachers at the American University in Dubai

Authors: Alain Gholam

Abstract:

Visual thinking routines are principles based on several theories, approaches, and strategies. Such routines promote thinking skills, call for collaboration and sharing of ideas, and, above all, make thinking and learning visible. Visual thinking routines were implemented in the teaching methodology graduate course at the American University in Dubai. The study used mixed methods and was guided by the following two research questions: 1) To what extent do visual thinking routines inspire learning in the classroom and make time for students’ questions, contributions, and thinking? 2) How do visual thinking routines inspire learning in the classroom and make time for students’ questions, contributions, and thinking? Eight student teachers enrolled in the teaching methodology course at the American University in Dubai (Spring 2017) participated in the study. First, they completed a survey that measured to what degree they believed visual thinking routines inspired learning in the classroom and made time for students’ questions, contributions, and thinking. In order to build on the results from the quantitative phase, the student teachers were next involved in a qualitative data collection phase, where they had to answer the question: How do visual thinking routines inspire learning in the classroom and make time for students’ questions, contributions, and thinking? Results revealed that the implementation of visual thinking routines in the classroom strongly inspires learning and makes time for students’ questions, contributions, and thinking. In addition, the student teachers explained how visual thinking routines allow for organization, variety, thinking, and documentation. As with all original, new, and unique resources, visual thinking routines are not free of challenges. To make the most of this useful and valued resource, educators need to comprehend, model, and spread an awareness of the effective ways of using such routines in the classroom. It is crucial that such routines become part of the curriculum to allow for and document students’ questions, contributions, and thinking.

Keywords: classroom display, student engagement, thinking classroom, visual thinking routines

Procedia PDF Downloads 212
35378 Identity Verification Using k-NN Classifiers and Autistic Genetic Data

Authors: Fuad M. Alkoot

Abstract:

DNA data have been used in forensics for decades. However, current research looks at using DNA as a biometric identity verification modality, with the goal of improving the speed of identification. We aim at using gene data initially collected for autism detection to find whether, and how accurately, this data can be used for identification applications. Our main goal is to find whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with using the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1. The classification rate remains close to optimal at higher noise standard deviations, up to 3. This shows that the data can be used for identity verification with high accuracy using a simple classifier such as the k-nearest neighbor (k-NN).
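
The structure of the experiment is straightforward to reproduce with scikit-learn. The sketch below uses a random synthetic stand-in for the gene data, which is an assumption; it only illustrates the corrupt-with-noise-then-classify protocol.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_subjects, n_features = 50, 100

# one clean enrollment vector per subject (placeholder for gene data)
enrolled = rng.normal(size=(n_subjects, n_features))
labels = np.arange(n_subjects)

knn = KNeighborsClassifier(n_neighbors=1).fit(enrolled, labels)

# test copies corrupted by zero-mean noise of increasing standard deviation
for sigma in (0.5, 1.0, 3.0, 5.0):
    test = enrolled + rng.normal(scale=sigma, size=enrolled.shape)
    rate = knn.score(test, labels)
    print(f"sigma={sigma}: identification rate {rate:.2%}")
```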

Keywords: biometrics, genetic data, identity verification, k nearest neighbor

Procedia PDF Downloads 237
35377 Feasibility Study and Experiment of On-Site Nuclear Material Identification in Fukushima Daiichi Fuel Debris by Compact Neutron Source

Authors: Yudhitya Kusumawati, Yuki Mitsuya, Tomooki Shiba, Mitsuru Uesaka

Abstract:

After the Fukushima Daiichi nuclear power reactor incident, a large amount of unaccounted-for nuclear fuel debris remains in the reactor core area, which is subject to safeguards and criticality safety. Before the actual precise analysis is performed, preliminary on-site screening and mapping of nuclear debris activity need to be carried out to provide reliable data for nuclear debris mass-extraction planning. Through a collaboration project with the Japan Atomic Energy Agency, an on-site nuclear debris screening system using dual-energy X-ray inspection and neutron energy resonance analysis has been established. Using a compact and mobile pulsed neutron source constructed from a 3.95 MeV X-band electron linac, coupled with tungsten as an electron-to-photon converter and beryllium as a photon-to-neutron converter, short-distance neutron time-of-flight (TOF) measurements can be performed. Experimental results show that this system can measure the neutron energy spectrum up to the 100 eV range with only a 2.5-meter time-of-flight path, owing to the X-band accelerator's short pulse. With this, on-site neutron TOF measurement can be used to identify the nuclear debris isotope contents through Neutron Resonance Transmission Analysis (NRTA). Some preliminary NRTA experiments have been done with a tungsten sample as a dummy nuclear debris material, whose isotope tungsten-186 has an energy absorption value close to that of uranium-238 (15 eV). The results obtained show that this system can detect energy absorption in the resonance neutron region within 1-100 eV. It can also detect multiple elements in a material at once; an experiment using a combined sample of indium, tantalum, and silver shows that it is feasible to identify debris containing mixed materials. This compact neutron TOF measurement system is a good complement to the dual-energy X-ray computed tomography (CT) method, which can identify atomic number quantitatively but with 1-mm spatial resolution and a high error bar. The combination of these two measurement methods will make it possible to perform on-site nuclear debris screening at the Fukushima Daiichi reactor core area, providing the data for nuclear debris activity mapping.
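
For orientation, the quoted figures are consistent with the non-relativistic time-of-flight relation E = (1/2) m (L/t)^2. A small check in Python, with flight times chosen to land at roughly 100, 10, and 1 eV over the 2.5 m path:

```python
m_n = 1.674927e-27      # neutron mass (kg)
eV = 1.602177e-19       # joules per electronvolt
L = 2.5                 # flight path length (m)

def energy_eV(t):
    """Neutron kinetic energy in eV for flight time t (seconds)."""
    return 0.5 * m_n * (L / t) ** 2 / eV

for t_us in (18.0, 57.0, 181.0):    # microseconds
    print(f"t = {t_us:6.1f} us  ->  E = {energy_eV(t_us * 1e-6):7.1f} eV")
```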

Keywords: neutron source, neutron resonance, nuclear debris, time of flight

Procedia PDF Downloads 223
35376 The Revised Completion of Student Internship Report by Goal Mapping

Authors: Faizah Herman

Abstract:

This study aims to explore the attitudes and behavior involved in the goal mapping performed by students who completed their internship report revisions on time. The approach is phenomenological research with qualitative methods. Data sources include observation, interviews, questionnaires, and focus group discussions. The research subjects were 5 students who had completed their internship report revisions in a timely manner. The analysis technique is the interactive model of Miles & Huberman. The results showed that the students had a goal map that includes the ultimate goal and formulates sub-goals by identifying what things need to be done, what actions to take, and what kind of support is needed from the environment.

Keywords: goal mapping, revision internship report, students, Brawijaya

Procedia PDF Downloads 379
35375 Effect of Inflorescence Removal and Earthing-Up Times on Growth and Yield of Potato (Solanum tuberosum L.) at Jimma, Southwestern Ethiopia

Authors: Dessie Fisseha, Derbew Belew, Ambecha Olika

Abstract:

Potato is a high-potential food security crop in Ethiopia. However, the yield and productivity of the crop have been far below the world average, owing to several factors, including inadequate agronomic practices such as the timing of earthing-up and inflorescence management. A field experiment was conducted at Jimma, Southwest Ethiopia, during 2016/17 under irrigation to determine the effect of the time of earthing-up and of inflorescence removal on the growth, yield, and quality of potato. The treatments consisted of the time of earthing-up (no earthing-up, or earthing-up at 15, 30, and 45 days after complete plant emergence) and inflorescence removal (inflorescence removed or not removed). The potato variety Belete was used for this experiment. A 2x4 factorial experiment was laid out with three replications. Data collected on the growth, yield, and quality components of potato were analyzed using SAS Version 9.3 statistical software. Inflorescence removal affected the majority of the growth and yield parameters, while the time of earthing-up affected all growth, yield, and quality (green tuber number) parameters. Earthing-up at 15 days in combination with inflorescence removal (at 60 days after complete plant emergence) gave better plant growth and the maximum tuber yield of the Belete potato variety under irrigated conditions. Since the current research was conducted at one location, in one season, and with one potato cultivar (Belete), it would be advisable to repeat the experiment so as to arrive at a final conclusion and subsequent recommendation.

Keywords: Belete, earthing-up, inflorescence, yield

Procedia PDF Downloads 52
35374 Comparative Study of Vertical and Horizontal Triplex Tube Latent Heat Storage Units

Authors: Hamid El Qarnia

Abstract:

This study investigates the impact of the eccentricity of the central tube on the thermal and fluid characteristics of a triplex tube used in latent heat energy storage technologies. Two triplex tube orientations are considered in the proposed study: vertical and horizontal. The energy storage material, a phase change material (PCM), is placed in the space between the inside and outside tubes. During the thermal energy storage period, a heat transfer fluid (HTF) flows inside the two tubes, transmitting heat to the PCM through two heat exchange surfaces instead of one, as is the case for double-tube heat storage systems. A CFD model is developed and validated against experimental data available in the literature. A mesh independence study is carried out to select the appropriate mesh. In addition, different time steps are examined to determine a time step ensuring accuracy of the numerical results while reducing computational time. The numerical model is then used to conduct numerical investigations of the thermal behavior and thermal performance of the storage unit. The effects of the eccentricity of the central tube and of the HTF mass flow rate on thermal characteristics and performance indicators are examined for two flow arrangements: co-current and counter-current flows. The results are given in terms of isotherm plots, streamlines, melting time and thermal energy storage efficiency.

Keywords: energy storage, heat transfer, melting, solidification

Procedia PDF Downloads 44
35373 A Review on Intelligent Systems for Geoscience

Authors: R. Palson Kennedy, P. Kiran Sai

Abstract:

This article introduces machine learning (ML) researchers to the hurdles that geoscience problems present, as well as the opportunities for improvement in both ML and the geosciences, and presents a review from the data life cycle perspective to meet that need. Numerous facets of the geosciences present unique difficulties for the study of intelligent systems. Geoscience data are notoriously difficult to analyze, since they are frequently unpredictable, intermittent, sparse, multi-resolution, and multi-scale. The first half addresses data science's essential concepts and theoretical underpinnings, while the second section covers key themes and shared experiences from current publications focused on each stage of the data life cycle. Finally, themes such as open science, smart data, and team science are considered.

Keywords: data science, intelligent system, machine learning, big data, data life cycle, recent development, geoscience

Procedia PDF Downloads 123
35372 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance

Authors: Emad Alenany, M. Adel El-Baz

Abstract:

In this paper, the flow of different classes of patients into a hospital is modelled and analyzed by using the queueing network analyzer (QNA) algorithm and discrete event simulation. The input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. The modelled patient flows closely match the real flows of a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the rest of the patients with a lower performance target, requires the same capacity while improving performance for the selected group. In addition, it is shown that adopting the shortest processing time (SPT) and shortest remaining processing time (SRPT) service policies, among the tested policies, would result in 11.47% and 13.75% reductions, respectively, in average waiting time relative to the first-come-first-served (FCFS) policy.
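
The effect of the queue discipline can be illustrated with a small discrete-event simulation of a single service station (the paper models a whole network with QNA; the arrival and service parameters below are illustrative).

```python
import heapq
import random

def simulate(policy, n=200000, lam=0.9, mu=1.0, seed=1):
    """Mean waiting time at one single-server unit under a queue discipline.
    policy: 'fcfs' or 'spt' (non-preemptive shortest processing time)."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(lam)                      # Poisson arrivals
        arrivals.append((t, rng.expovariate(mu)))      # exp. service times

    queue, free, total_wait, i, served = [], 0.0, 0.0, 0, 0
    while served < n:
        # move every patient who has arrived by `free` into the queue;
        # if the queue is empty, admit the next arrival (server idles)
        while i < n and (arrivals[i][0] <= free or not queue):
            at, st = arrivals[i]
            key = st if policy == "spt" else at
            heapq.heappush(queue, (key, at, st))
            i += 1
        _, at, st = heapq.heappop(queue)
        start = max(free, at)
        total_wait += start - at
        free = start + st
        served += 1
    return total_wait / n

fcfs, spt = simulate("fcfs"), simulate("spt")
print(f"FCFS wait {fcfs:.2f}, SPT wait {spt:.2f}, "
      f"reduction {(1 - spt / fcfs):.1%}")
```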

Keywords: queueing network, discrete-event simulation, health applications, SPT

Procedia PDF Downloads 172
35371 Assessment of the Impact of Traffic Safety Policy in Barcelona, 2010-2019

Authors: Lluís Bermúdez, Isabel Morillo

Abstract:

Road safety involves carrying out a determined and explicit policy to reduce accidents. In the city of Barcelona, through the Local Road Safety Plan 2013-2018, in line with the framework established at the European and state levels, a series of preventive, corrective and technical measures are specified, with the priority objective of reducing the number of serious injuries and fatalities. In this work, based on the data from the accidents managed by the local police during the period 2010-2019, an analysis is carried out to verify whether, and to what extent, the measures established in the Plan to reduce the accident rate have had an effect. The analysis focuses on the type of accident and the type of vehicles involved. Different count regression models have been fitted, from which it can be deduced that the number of serious and fatal victims of accidents occurring in the city of Barcelona has been reduced following the measures approved by the authorities.
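
Count regressions of this kind are typically fitted as Poisson or negative binomial GLMs with a plan-period indicator. Below is a hedged sketch with statsmodels on illustrative yearly counts, not the Barcelona figures.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "year": np.arange(2010, 2020),
    "victims": [310, 295, 300, 280, 260, 255, 240, 230, 215, 205],
})
df["plan"] = (df["year"] >= 2013).astype(int)   # plan in force
df["trend"] = df["year"] - 2010

X = sm.add_constant(df[["trend", "plan"]])
poisson = sm.GLM(df["victims"], X, family=sm.families.Poisson()).fit()
negbin = sm.GLM(df["victims"], X,
                family=sm.families.NegativeBinomial()).fit()

# exp(coef) gives the multiplicative effect on expected victim counts
print(poisson.summary())
print(np.exp(poisson.params))
```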

Keywords: accident reduction, count regression models, road safety, urban traffic

Procedia PDF Downloads 112
35370 Determinants of Consultation Time at a Family Medicine Center

Authors: Ali Alshahrani, Adel Almaai, Saad Garni

Abstract:

Aim of the study: To explore the duration and determinants of consultation time at a family medicine center. Methodology: This study was conducted at the Family Medicine Center in Ahad Rafidah City, in the southwestern part of Saudi Arabia, on the working days of March 2013. Trained nurses helped in filling in a checklist designed for this study, which included the patient's age, sex, diagnosis, type of visit, referral and its type, psychological problems and additional work-up, as well as the number of daily bookings, the physician's experience, and the consultation time. A total of 459 patients were included. Results: More than half of the patients (58.39%) had less than 10 minutes' consultation (mean±SD: 12.73±9.22 minutes). Patients treated by physicians with the shortest experience (i.e., ≤5 years) had the longest consultation time, while those treated by physicians with the longest experience (i.e., >10 years) had the shortest consultation time (13.94±10.99 versus 10.79±7.28 minutes, p=0.011). Regarding patients' diagnoses, those with chronic diseases had the longest consultation time (p<0.001). Patients who did not need referral had a significantly shorter consultation time compared with those who had routine or urgent referral (11.91±8.42, 14.60±9.03 and 22.42±14.81 minutes, respectively, p<0.001). Patients with associated psychological problems needed a significantly longer consultation time than those without (20.06±13.32 versus 12.45±8.93 minutes, p<0.001). Conclusions: The average length of consultation time at the Ahad Rafidah Family Medicine Center is approximately 13 minutes. Less-experienced physicians tend to spend longer consultation times with patients. Referred patients, those with psychological problems, and those with chronic diseases tend to have longer consultation times. Recommendations: Family physicians should be encouraged to keep to their optimal consultation time. Booking an adequate number of patients per shift would allow the family physician to provide enough consultation time for each patient.

Keywords: consultation, quality, medicine, clinics

Procedia PDF Downloads 272
35369 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan

Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid

Abstract:

In geophysical exploration surveys, the quality of acquired data holds significant importance before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data of the Fort Abbas area, Cholistan Desert, Pakistan was taken as a test case in order to assess its quality on a statistical basis by using the normalized root mean square error (NRMSE), Cronbach's alpha test (α) and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is known to be plain, tectonically little affected, and rich in oil and gas reserves. However, subsurface 3D modeling and contouring using the acquired database revealed a high degree of structural complexity and intense folding. The NRMSE showed a high percentage of residuals between the estimated and predicted cases. The outcomes of hypothesis testing also demonstrated the bias and erraticness of the acquired database, and a low estimated value of alpha (α) in Cronbach's alpha test confirmed its poor reliability. A very low-quality database needs excessive static correction or, in some cases, reacquisition of data, which is most of the time not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further serve as a guideline for establishing database quality assessment models to make much more informed decisions in the hydrocarbon exploration field.
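
The two quality statistics named above are easy to compute directly. Below is a sketch with NumPy on illustrative arrays, not the Fort Abbas data.

```python
import numpy as np

def nrmse(observed, predicted):
    """Root mean square error normalized by the observed range."""
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    """items: 2D array, rows = observations, columns = repeated measures."""
    items = np.asarray(items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(7)
obs = rng.normal(1500, 120, size=200)       # e.g. reflector depths (m)
pred = obs + rng.normal(0, 60, size=200)    # noisy model estimates
print(f"NRMSE = {nrmse(obs, pred):.3f}")

# four repeated measurements of the same quantity, e.g. crossing lines
lines = np.column_stack([obs + rng.normal(0, 80, 200) for _ in range(4)])
print(f"Cronbach's alpha = {cronbach_alpha(lines):.3f}")
```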

Keywords: data quality, null hypothesis, seismic lines, seismic reflection survey

Procedia PDF Downloads 141
35368 Framework to Quantify Customer Experience

Authors: Anant Sharma, Ashwin Rajan

Abstract:

Customer experience is measured today by defining a set of metrics and KPIs, setting up thresholds, and defining triggers across those thresholds. While this is an effective way of measuring against a Key Performance Indicator (referred to as KPI in the rest of the paper), this approach cannot capture the various nuances that make up the overall customer experience. Customers consume a product or service at various levels, which is reflected not only in metrics like Customer Satisfaction or Net Promoter Score but also in other measurements like recurring revenue, frequency of service usage, e-learning, and depth of usage. Here we explore an alternative method of measuring customer experience by flipping the traditional views. Rather than rolling customers up to a metric, we roll up metrics to hierarchies and then measure customer experience. This method allows any team to quantify customer experience across multiple touchpoints in a customer's journey. We make use of various data sources which contain information for metrics like CXSAT, NPS, renewals, and depth of service usage collected across a customer lifecycle. This data can be mined systematically to get linkages between different data points like geographies, business groups, products and time. Additional views can be generated by blending synthetic contexts into the data to show trends and top/bottom types of reports. We have created a framework that allows us to measure customer experience using the above logic.
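
The roll-up idea can be demonstrated with a small pandas exercise: normalize several metrics, blend them into one composite score, and aggregate that score up each level of a hierarchy. The records, hierarchy levels, and weights below are illustrative assumptions.

```python
import pandas as pd

records = pd.DataFrame({
    "geo":       ["EMEA", "EMEA", "APAC", "APAC", "AMER", "AMER"],
    "product":   ["core", "addon", "core", "addon", "core", "addon"],
    "csat":      [4.2, 3.8, 4.5, 4.0, 3.9, 4.1],        # 1-5 survey score
    "nps":       [30, 12, 45, 25, 18, 33],              # -100 .. 100
    "renewed":   [0.91, 0.78, 0.95, 0.83, 0.80, 0.88],  # renewal rate
    "usage_hrs": [120, 40, 150, 60, 95, 55],            # depth of usage
})

# normalize each metric to 0-1 so they can be blended into one score
norm = records.copy()
for col in ["csat", "nps", "renewed", "usage_hrs"]:
    lo, hi = records[col].min(), records[col].max()
    norm[col] = (records[col] - lo) / (hi - lo)

weights = {"csat": 0.3, "nps": 0.3, "renewed": 0.25, "usage_hrs": 0.15}
norm["cx_score"] = sum(w * norm[c] for c, w in weights.items())

# roll the composite score up each level of the hierarchy
print(norm.groupby("geo")["cx_score"].mean())
print(norm.groupby(["geo", "product"])["cx_score"].mean())
```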

Keywords: analytics, customers experience, BI, business operations, KPIs, metrics

Procedia PDF Downloads 58
35367 Universe at Zero Second and the Creation Process of the First Particle from the Absolute Void

Authors: Shivan Sirdy

Abstract:

In this study, we discuss the properties of absolute void space, or the universe at zero seconds, and how these properties play a vital role in creating a mechanism by which the very first particle gets created simultaneously everywhere. We find the limit which, when the absolute void volume reaches it, leads to the collapse that creates the first particle. This discussion follows the elementary dimensions theory study that was peer-reviewed at the end of 2020: everything in the universe is made from four elementary dimensions, namely the three spatial dimensions (X, Y, and Z) and the Void resistance as the factor of change among the four. Time itself was not considered the fourth dimension. Rather, time corresponds to a factor of change, and during the research it was found that the Void resistance is the factor of change in absolute Void space, where time is a hypothetical concept that represents changes during certain events compared to a constant-change-rate event. Therefore, time does exist, but as a factor of change, namely the Void resistance: Time = factor of change = Void resistance.

Keywords: elementary dimensions, absolute void, time alternative, early universe, universe at zero second, Void resistant, Hydrogen atom, Hadron field, Lepton field

Procedia PDF Downloads 186
35366 Capacity Optimization in Cooperative Cognitive Radio Networks

Authors: Mahdi Pirmoradian, Olayinka Adigun, Christos Politis

Abstract:

Cooperative spectrum sensing is a crucial challenge in cognitive radio networks. Cooperative sensing can increase the reliability of spectrum hole detection, optimize sensing time and reduce delay in cooperative networks. In this paper, an efficient central capacity optimization algorithm is proposed to minimize cooperative sensing time in a homogeneous sensor network using the OR decision rule, subject to constraints on the detection and false alarm probabilities. The evaluation results reveal a significant improvement in the sensing time and normalized capacity of the cognitive sensors.
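
Under the OR decision rule, the fusion center declares the channel occupied if any sensor does, so the cooperative probabilities follow in closed form from the per-sensor ones: Qd = 1 - (1 - Pd)^N and Qf = 1 - (1 - Pf)^N. A small illustration follows; the per-sensor values and the constraints are assumptions.

```python
def or_rule(p_single, n):
    """Probability that at least one of n independent sensors fires."""
    return 1 - (1 - p_single) ** n

pd_single, pf_single = 0.7, 0.05   # one sensor's detection / false alarm
for n in (1, 3, 5, 10):
    qd, qf = or_rule(pd_single, n), or_rule(pf_single, n)
    print(f"N={n:2d}: Qd={qd:.3f}, Qf={qf:.3f}")

# smallest cooperating set meeting Qd >= 0.99 while keeping Qf <= 0.2
n_min = next(n for n in range(1, 50)
             if or_rule(pd_single, n) >= 0.99
             and or_rule(pf_single, n) <= 0.2)
print("minimum number of sensors:", n_min)   # 4 for these values
```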

Keywords: cooperative networks, normalized capacity, sensing time

Procedia PDF Downloads 614