Search results for: Content based image retrieval (CBIR)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13170

1050 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the six sigma methodology, i.e. the define, measure, analyze, improve, and control (DMAIC) approach, was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be efficiently used to determine important factors that will improve the process capability index of the injection molding process.
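
As an illustration of the kind of Taguchi analysis described above (not the authors' data or code), the minimal sketch below builds the standard L9 orthogonal array for four three-level factors and computes a nominal-the-best signal-to-noise ratio for each run from hypothetical shrinkage measurements taken under the two noise conditions (material brands); all numbers are invented placeholders.

    import numpy as np

    # Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (coded 0-2).
    L9 = np.array([
        [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
        [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
        [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
    ])
    factors = ["cooling_time", "melt_temp", "holding_time", "metering_stroke"]

    # Hypothetical shrinkage (%) for each run under the two noise conditions
    # (material brand 1 and brand 2); replace with real measurements.
    y = np.array([[1.9, 2.1], [1.7, 1.8], [2.2, 2.4],
                  [1.6, 1.7], [2.0, 2.2], [1.8, 1.9],
                  [2.3, 2.5], [1.5, 1.6], [1.9, 2.0]])

    # Nominal-the-best S/N ratio (dB): higher means less sensitivity to noise.
    sn = 10 * np.log10(y.mean(axis=1) ** 2 / y.var(axis=1, ddof=1))

    # Average S/N per factor level to pick the least noise-sensitive setting.
    for f, name in enumerate(factors):
        means = [sn[L9[:, f] == level].mean() for level in range(3)]
        print(name, ["%.2f" % m for m in means], "-> best level:", int(np.argmax(means)))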

Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.

1049 A Corpus-Based Analysis on Code-Mixing Features in Mandarin-English Bilingual Children in Singapore

Authors: Xunan Huang, Caicai Zhang

Abstract:

This paper investigated the code-mixing features in Mandarin-English bilingual children in Singapore. First, it examined whether the code-mixing rate was different in Mandarin Chinese and English contexts. Second, it explored the syntactic categories of code-mixing in Singapore bilingual children. Moreover, this study investigated whether morphological information was preserved when inserting syntactic components into the matrix language. Data are derived from the Singapore Bilingual Corpus, in which the recordings and transcriptions of sixty English-Mandarin 5-to-6-year-old children were preserved for analysis. Results indicated that the rate of code-mixing was asymmetrical in the two language contexts, with the rate being significantly higher in the Mandarin context than that in the English context. The asymmetry is related to language dominance in that children are more likely to code-mix when using their nondominant language. Concerning the syntactic categories of code-mixing words in the Singaporean bilingual children, we found that noun-mixing, verb-mixing, and adjective-mixing are the three most frequently used categories in code-mixing in the Mandarin context. This pattern mirrors the syntactic categories of code-mixing in the Cantonese context in Cantonese-English bilingual children, and the general trend observed in lexical borrowing. Third, our results also indicated that English vocabularies that carry morphological information are embedded in bare forms in the Mandarin context. These findings shed light upon how bilingual children take advantage of the two languages in mixed utterances in a bilingual environment.
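
As a toy illustration of how a per-context code-mixing rate might be computed from corpus transcripts (not the Singapore Bilingual Corpus or the authors' scripts; the utterance labels below are invented):

    import pandas as pd

    # Each row: the language context of the session and whether the utterance
    # contains at least one embedded word from the other language.
    utterances = pd.DataFrame({
        "context": ["Mandarin", "Mandarin", "Mandarin", "English", "English", "English"],
        "mixed":   [True,       False,      True,       False,     True,      False],
    })

    # Code-mixing rate = mixed utterances / all utterances, per language context.
    rate = utterances.groupby("context")["mixed"].mean()
    print(rate)  # expect a higher rate in the nondominant-language context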

Keywords: Code-mixing, Mandarin Chinese, English, bilingual children.

1048 Experimental Study on Strength and Durability Properties of Bio-Self-Cured Fly Ash Based Concrete under Aggressive Environments

Authors: R. Malathy

Abstract:

High performance concrete is characterized not only by its high strength, workability, and durability but also by its ability to perform without human care from the first day. If the concrete can cure on its own, without external curing and without compromising its strength and durability, then it is said to be high performance self-curing concrete. In this paper, an attempt is made to study the performance of internally cured concrete using biomaterials, namely Spinacea oleracea and Calatropis gigantea, as self-curing agents, and it is compared with the performance of concrete with an existing self-curing chemical, namely polyethylene glycol. The present paper focuses on workability, strength, and durability studies on M20, M30, and M40 grade concretes in which 30% of the cement is replaced by fly ash. The optimum dosages of Spinacea oleracea, Calatropis gigantea, and polyethylene glycol were taken as 0.6%, 0.24%, and 0.3% by weight of cement from earlier research studies. From the slump tests performed, it was found that there is minimal variation between conventional concrete and self-cured concrete. The strength activity index was determined by taking the 28-day compressive strength of conventionally cured concrete as unity; for self-cured concrete it is more than 1 after 28 days and more than 1.15 after 56 days because of the secondary reaction of fly ash. A performance study of the concretes in aggressive environments such as acid attack, sea water attack, and chloride attack was carried out, and the results are positive and encouraging for the bio-self-cured concretes, which are ecofriendly, cost effective, and high performance materials.

Keywords: Biomaterials, Calatropis gigantea, polyethylene glycol, Spinacea oleracea, self-curing concrete.

1047 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space

Authors: Nanjiang Chen

Abstract:

In the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper presents a continuous visibility algorithm, providing a potentially valuable approach to measuring urban spaces from a human-centric perspective. Applying this algorithm to urban visibility analysis constitutes a methodological advance: unlike conventional approaches, the technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.

Keywords: Visual openness, spatial continuity, ray-tracing algorithms, urban computation.

1046 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques

Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian

Abstract:

Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses and material damage. Traditional studies of road traffic accidents in urban zones require a very high investment of time and money; additionally, the results are not current. However, nowadays in many countries, crowdsourced GPS-based traffic and navigation apps have emerged as an important low-cost source of information for studies of road traffic accidents and the urban congestion they cause. In this article we identify the zones, roads and specific times in Mexico City (CDMX) in which the largest number of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, applying data mining techniques with the help of Weka. The selected algorithms were Expectation Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as the grouping method. Finally, the results were visualized with the Geographic Information System QGIS.
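
The study used Weka; a roughly equivalent sketch in Python (scikit-learn) of the same two-step idea, selecting the number of clusters with an EM-fitted Gaussian mixture and then grouping the reports with k-means, might look like the following (the coordinates are placeholders, not Waze data):

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.cluster import KMeans

    # Placeholder accident coordinates (longitude, latitude); real input would be
    # the Waze report locations compiled for CDMX during 2016.
    X = np.random.RandomState(0).normal(loc=[-99.13, 19.43], scale=0.05, size=(500, 2))

    # EM (Gaussian mixture) with an information criterion to choose the number of
    # clusters, mirroring the EM-based selection of the ideal k.
    bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in range(2, 11)}
    best_k = min(bics, key=bics.get)

    # Group the reports with k-means using the selected k; the labeled points can
    # then be exported and visualized in QGIS.
    labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)
    print(best_k, np.bincount(labels))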

Keywords: Data mining, K-means, road traffic accidents, Waze, Weka.

1045 Diversity for Safety and Security of Autonomous Vehicles against Accidental and Deliberate Faults

Authors: Anil Ranjitbhai Patel, Clement John Shaji, Peter Liggesmeyer

Abstract:

Safety and security of Autonomous Vehicles (AVs) is a growing concern, first, due to the increased number of safety-critical functions taken over by automotive embedded systems; second, due to the increased exposure of the software-intensive systems to potential attackers; third, due to dynamic interaction in an uncertain and unknown environment at runtime, which results in changed functional and non-functional properties of the system. Frequently occurring environmental uncertainties, random component failures, and compromised security of the AVs might result in hazardous events, sometimes even in an accident, if left undetected. Beyond these technical issues, we argue that the safety and security of AVs against accidental and deliberate faults are poorly understood and rarely implemented. One possible way to overcome this is the well-known diversity approach. As an effective approach to increase safety and security, diversity has been widely used in the aviation, railway, and aerospace industries. Thus, this paper proposes a fault-tolerance-by-diversity model that takes into consideration the mitigation of accidental and deliberate faults by the application of structural and variant redundancy. The model can be used to design AVs with various types of diversity in hardware- and software-based multi-version systems. The paper evaluates the presented approach by employing an example from adaptive cruise control, followed by a discussion of the case study with initial findings.
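
A minimal sketch of the variant-redundancy idea for the adaptive cruise control example: two independently implemented (diverse) versions compute the same safety-relevant check and a comparator flags divergence as a suspected accidental or deliberate fault. The formulas, thresholds and names below are illustrative assumptions, not the authors' model.

    # Variant A: kinematic safe-gap check (closing speed over a fixed time gap).
    def safe_gap_a(gap_m, ego_v, lead_v, time_gap_s=1.8):
        return gap_m >= max(ego_v - lead_v, 0.0) * time_gap_s

    # Variant B: independently written check based on braking distances.
    def safe_gap_b(gap_m, ego_v, lead_v, decel=4.0, margin_m=5.0):
        brake_ego = ego_v ** 2 / (2 * decel)
        brake_lead = lead_v ** 2 / (2 * decel)
        return gap_m + brake_lead >= brake_ego + margin_m

    def acc_monitor(gap_m, ego_v, lead_v):
        a, b = safe_gap_a(gap_m, ego_v, lead_v), safe_gap_b(gap_m, ego_v, lead_v)
        if a != b:
            return "DEGRADED"   # diverse variants disagree: suspected fault or attack
        return "SAFE" if a else "BRAKE"

    print(acc_monitor(gap_m=30.0, ego_v=25.0, lead_v=20.0))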

Keywords: Autonomous vehicles, diversity, fault-tolerance, adaptive cruise control, safety, security.

1044 Finite Volume Method for Flow Prediction Using Unstructured Meshes

Authors: Juhee Lee, Yongjun Lee

Abstract:

In designing low-energy buildings, the heat transfer through large glass areas or walls becomes critical. Multiple layers of window glass and wall are employed for high insulation. The gravity-driven air flow between window panes or wall layers is a natural convection phenomenon that is a key part of the heat transfer. As a first step of the natural heat transfer analysis, this study presents the development and application of a finite volume method for the numerical computation of viscous incompressible flows. It will become part of a natural convection analysis with a high-order scheme, multi-grid method, and dual time stepping in the future. A fully implicit, second-order finite volume method is used to discretize and solve the fluid flow on unstructured grids composed of arbitrary-shaped cells. The governing equations are integrated and discretized in the finite volume manner using a collocated arrangement of variables. The convergence of the SIMPLE segregated algorithm for the solution of the coupled nonlinear algebraic equations is accelerated by using a sparse matrix solver such as BiCGSTAB. The method used in the present study is verified by applying it to flows for which either the numerical solution is known or the solution can be obtained using another numerical technique available in other studies. The accuracy of the method is assessed through grid refinement.
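
As an isolated illustration of the kind of sparse linear solve that accelerates the SIMPLE iterations (a toy system, not the authors' unstructured-grid code), SciPy's BiCGSTAB can be applied to a finite-volume-style sparse matrix:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import bicgstab

    # Toy 1-D diffusion system standing in for one linearized equation assembled
    # by the finite volume discretization on a collocated grid.
    n = 100
    A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # BiCGSTAB accelerates the solution of the nonsymmetric sparse systems that
    # arise inside each SIMPLE outer iteration.
    x, info = bicgstab(A, b)
    print("converged" if info == 0 else f"info={info}", np.linalg.norm(A @ x - b))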

Keywords: Finite volume method, fluid flow, laminar flow, unstructured grid.

1043 The Development of a Teachers' Self-Efficacy Instrument for High School Physical Education Teachers

Authors: Yi-Hsiang Pan

Abstract:

The purpose of this study was to develop a “teachers’ self-efficacy scale for high school physical education teachers (TSES-HSPET)” in Taiwan. This scale is based on the self-efficacy theory of Bandura [1], [2]. This study used exploratory and confirmatory factor analyses to test the reliability and validity. The participants were high school physical education teachers in Taiwan. Both stratified random sampling and cluster sampling were used to sample participants for the study. In the first stage, 350 teachers were sampled and 234 valid scales (133 male, 101 female) were returned. During the second stage, 350 teachers were sampled and 257 valid scales (143 male, 110 female, 4 did not indicate gender) were returned. Exploratory factor analysis was used in the first stage, and the extracted factors accounted for 60.77% of the total variance, supporting construct validity. The Cronbach’s alpha coefficient of internal consistency was 0.91 for the sum scale, and 0.84 and 0.90 for the subscales. In the second stage, confirmatory factor analysis was used to test construct validity. The results showed that the fit indices were acceptable (χ2 (75) = 167.94, p < .05, RMSEA = 0.07, SRMR = 0.05, GFI = 0.92, NNFI = 0.97, CFI = 0.98, PNFI = 0.79). The average variance extracted of the latent variables was 0.43 and 0.53, and the composite reliabilities were 0.78 and 0.90. It is concluded that the TSES-HSPET is a well-considered measurement instrument with acceptable validity and reliability. It may be used to estimate teachers’ self-efficacy for high school physical education teachers.
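
For readers unfamiliar with the reported indices, a small sketch of how average variance extracted (AVE) and composite reliability (CR) follow from standardized factor loadings; the loadings below are invented for illustration and are not the TSES-HSPET estimates.

    import numpy as np

    # Hypothetical standardized loadings for one latent factor of the scale.
    loadings = np.array([0.62, 0.70, 0.74, 0.66, 0.58])
    errors = 1.0 - loadings ** 2                      # item error variances

    ave = np.mean(loadings ** 2)                      # average variance extracted
    cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())  # composite reliability

    print(f"AVE = {ave:.2f}, CR = {cr:.2f}")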

Keywords: teaching in physical education, teacher's self-efficacy, teacher's belief

1042 Determination of Lithology, Porosity and Water Saturation for Mishrif Carbonate Formation

Authors: F. S. Kadhim, A. Samsuri, H. Alwan

Abstract:

Well logging records can help to answer many questions, ranging from specific information of interest and basic petrophysical properties to the formation evaluation of oil and gas reservoirs. The accurate calculation of porosity in carbonate reservoirs is one of the most challenging aspects of well logging analysis. Many equations have been developed over the years, based on known physical principles or on empirically derived relationships, which are used to calculate porosity, estimate lithology, and estimate water saturation; in the current study, these parameters are calculated from well logs using modern techniques. The Nasiriya oil field is one of the giant oilfields in the Middle East, and the formation under study is the Mishrif carbonate formation, which is the shallowest hydrocarbon-bearing zone in this oilfield. Neurolog software was used to digitize the scanned copies of the available logs. Environmental corrections were made as per the 2005 Schlumberger charts, which are supplied in the Interactive Petrophysics software. Three saturation models have been used to calculate the water saturation of carbonate formations: the simple Archie equation, the dual-water model, and the Indonesia model. Results indicate that the Mishrif formation consists mainly of limestone, some dolomite, and shale. The porosity interpretation shows that the logging tools have good quality after making the environmental corrections. The average formation water saturation for the Mishrif formation is around 0.4-0.6. This study provides the accurate behavior of petrophysical properties with depth for this formation by using modern software.
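
For reference, a minimal sketch of the simple Archie water-saturation calculation mentioned above; the tortuosity factor, cementation and saturation exponents and the log readings are generic placeholders, not the Mishrif parameters.

    def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
        """Water saturation from the simple Archie equation:
        Sw = ((a * Rw) / (phi**m * Rt)) ** (1 / n)."""
        return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

    # Example readings at one depth: formation water resistivity Rw (ohm.m),
    # deep resistivity Rt (ohm.m) and porosity phi (fraction).
    print(round(archie_sw(rw=0.03, rt=2.0, phi=0.18), 2))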

Keywords: Lithology, Porosity, Water Saturation, Carbonate Formation, Mishrif Formation.

1041 Banking Risk Management between the Prudential and the Operational Approaches

Authors: Mustapha Achibane, Imane Allam

Abstract:

Since the nineties, all Moroccan banking institutions have had to respect an arsenal of prudential ratios. The respect of these prudential measures aims to ensure the stability of the financial system. In order to do so, regulatory authorities have tried to reduce the financial and operational risks incurred by banking entities. Meanwhile, regulatory authorities demanded balance sheet management work from banks. They also asked them to establish a management control system to manage operational risk, as well as an effort in terms of incurred risk-based commitments. Therefore, the prudential approach has a macroeconomic nature and is presented as a determinant of the operational, microeconomic approach. This operational approach takes the form of a strategy that each banking entity must develop to manage the different banking risks. This study seeks to analyze the problem of risk management between the prudential and the operational approaches. It was processed through a literature review followed by an analysis of the Moroccan banking sector's performance. We first follow an inductive logic and then an analytical one. The first approach consists of analyzing the phenomenon from a normative and conceptual perspective, while the second consists of considering the Moroccan banking system and analyzing the behavior of Moroccan banking entities in terms of risk management and performance. The results identified favorable growth in terms of performance, despite the huge provisioning effort made to meet international standards and the harmonization of the regulations.

Keywords: Banking performance, financial intermediation, operational approach, prudential standards, risk management.

1040 Identifying Neighborhoods at Potential Risk of Food Insecurity in Rural British Columbia

Authors: Amirmohsen Behjat, Aleck Ostry, Christina Miewald, Bernie Pauly

Abstract:

Substantial research has indicated that the socioeconomic and demographic characteristics of neighborhoods are strong determinants of food security. The aim of this study was to develop a Food Insecurity Neighborhood Index (FINI) based on the associated socioeconomic and demographic variables to identify the areas at potential risk of food insecurity in rural British Columbia (BC). The Principal Component Analysis (PCA) technique was used to calculate the FINI for each rural Dissemination Area (DA) using the food security determinant variables from Canadian Census data. Using ArcGIS, the neighborhoods with the top quartile of FINI values were classified as food insecure. The results of this study indicated that the most food insecure neighborhood, with the highest FINI value of 99.1, was in the Bulkley-Nechako (central BC) area, whereas the lowest FINI, with a value of 2.97, was for a rural neighborhood in the Cowichan Valley area. In total, 98,049 (19%) of the rural population of British Columbians reside in highly food insecure areas. Moreover, the distribution of food insecure neighborhoods was found to be strongly dependent on the degree of rurality in BC. In conclusion, the cluster of food insecure neighbourhoods was more pronounced in the Central Coast, Mount Waddington, Peace River, Kootenay Boundary, and Alberni-Clayoquot Regional Districts.
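
A compact sketch of the index construction described above: PCA over census-derived determinants, a score per dissemination area, and a top-quartile flag. The variable names and numbers are illustrative assumptions, not the study's census extract.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Placeholder socioeconomic/demographic determinants for rural dissemination areas.
    rng = np.random.RandomState(1)
    df = pd.DataFrame(rng.rand(200, 4),
                      columns=["low_income_rate", "unemployment", "lone_parent_rate",
                               "no_high_school_rate"])

    # First principal component of the standardized determinants as the FINI score,
    # rescaled to 0-100 so that higher means more potential food insecurity.
    score = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(df)).ravel()
    df["FINI"] = 100 * (score - score.min()) / (score.max() - score.min())

    # Flag the top quartile of FINI values as potentially food-insecure neighbourhoods.
    df["food_insecure"] = df["FINI"] >= df["FINI"].quantile(0.75)
    print(df["food_insecure"].mean())  # about 25% of areas flagged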

Keywords: Neighbourhood food insecurity index, socioeconomic and demographic determinants, principal component analysis, Canada Census, ArcGIS.

1039 Portfolio Management for Construction Company during Covid-19 Using AHP Technique

Authors: Sareh Rajabi, Salwa Bheiry

Abstract:

In general, COVID-19 has created many financial and non-financial damages to the economy and community. The level and severity of COVID-19 as a pandemic vary across regions and across different types of projects. The COVID-19 virus has recently emerged as one of the most important risk management factors worldwide. Therefore, as part of portfolio management assessment, it is essential to evaluate the severity of such a risk on projects and programs at the portfolio management level to avoid any risky portfolio. COVID-19 hit South America, parts of Europe and the Middle East particularly hard. The pandemic affected the whole world through lockdowns, interruptions in supply chain management, health and safety requirements, and transportation and commercial impacts. Therefore, this research proposes the Analytical Hierarchy Process (AHP) to analyze and assess a pandemic case like COVID-19 and its impacts on construction projects. The AHP technique uses four sub-criteria: health and safety, commercial risk, completion risk and contractual risk to evaluate the project and program. The result will provide the decision makers with information on which projects have higher or lower risk in case of COVID-19 and pandemic scenarios. Therefore, the decision makers can select the most feasible solution based on effectively weighted criteria for project selection within their portfolio to match the organization's strategies.
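
A minimal sketch of the AHP step described above: a pairwise comparison matrix over the four sub-criteria, priority weights from the principal eigenvector, and a consistency check. The comparison values are invented for illustration, not the study's judgments.

    import numpy as np

    criteria = ["health_safety", "commercial", "completion", "contractual"]

    # Hypothetical pairwise comparison matrix (Saaty 1-9 scale), reciprocal by construction.
    A = np.array([
        [1,   3,   4,   5],
        [1/3, 1,   2,   3],
        [1/4, 1/2, 1,   2],
        [1/5, 1/3, 1/2, 1],
    ], dtype=float)

    # Priority weights = normalized principal eigenvector.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()

    # Consistency ratio (CR < 0.10 is conventionally acceptable); RI for n=4 is 0.90.
    lam_max = eigvals.real[k]
    ci = (lam_max - len(A)) / (len(A) - 1)
    cr = ci / 0.90
    print(dict(zip(criteria, w.round(3))), "CR =", round(cr, 3))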

Keywords: Portfolio management, risk management, COVID-19, analytical hierarchy process technique.

1038 AI-Based Approaches for Task Offloading, Resource Allocation and Service Placement of IoT Applications: State of the Art

Authors: Fatima Z. Cherhabil, Mammar Sedrati, Sonia-Sabrina Bendib‎

Abstract:

In order to support the continued growth and critical latency requirements of IoT applications and to overcome various obstacles of traditional data centers, Mobile Edge Computing (MEC) has emerged as a promising solution that extends cloud data processing and decision making to edge devices. By adopting a MEC structure, IoT applications can be executed locally, on an edge server, on different fog nodes, or in distant cloud data centers. However, we are often faced with wanting to optimize conflicting criteria, such as minimizing the energy consumption of the limited local capabilities (in terms of CPU, RAM, storage, bandwidth) of mobile edge devices while trying to keep high performance (reducing response time, increasing throughput and service availability) at the same time. Achieving one goal may affect the other, making Task Offloading (TO), Resource Allocation (RA) and Service Placement (SP) complex processes. It is a nontrivial multi-objective optimization problem to study the trade-off between conflicting criteria. This paper provides a survey of recent Multi-Objective Optimization (MOO) approaches to TO, SP and RA used in edge computing environments, particularly Artificial Intelligence (AI) based ones, to satisfy the various objectives, constraints and dynamic conditions related to IoT applications.

Keywords: Mobile Edge Computing, Multi-Objective Optimization, Artificial Intelligence Approaches, Task Offloading, Resource Allocation, Service Placement.

1037 Geophysical Investigation for Pre-Engineering Construction Works in Part of Ilorin, Northcentral Nigeria

Authors: O. Ologe, A. I. Augie

Abstract:

A geophysical investigation involving geoelectric depth sounding was conducted as a pre-foundation study in part of Ilorin, Nigeria. The area is underlain by the Precambrian basement complex rocks. 15 sounding stations were established along five traverses. The three to five Vertical Electrical Soundings (VES) conducted along each traverse were subjected to computer iteration using the IP2Win software. Three to five subsurface geologic layers were delineated in the study area. These include the topsoil, with resistivity and thickness values ranging from 103 Ωm-210 Ωm and 0 m-1 m; a lateritic layer (117 Ωm-590 Ωm and 1 m-4.7 m); sandy clay (137-859 Ωm and 2.9 m-4.3 m); weathered basement (60.5 Ωm-2539 Ωm and 3.2 m-10 m); and fresh basement (2253 Ωm-∞ and 7.1 m-∞), respectively. The resistivity pseudosection shows a continuous high resistivity zone on the surface. The resistivity of this layer from a depth of 0-5 m varies from 300-800 Ωm along traverses 1 and 2. Hence, this layer is rated competent, as it has the ability to support engineering structures. However, along traverse 1, a very low resistivity layer occurs between VES 5 and 15, with resistivity values ranging from 30 Ωm-70 Ωm. This layer was rated incompetent based on the competence rating. This study revealed the importance of a geophysical survey as a pre-construction engineering survey at any civil engineering site, since it can reliably evaluate the competence of the subsurface geomaterials.

Keywords: Competence rating, geoelectric, pseudosection, soil, vertical electrical sounding.

1036 A Novel Method to Manufacture Superhydrophobic and Insulating Polyester Nanofibers via a Meso-Porous Aerogel Powder

Authors: Z. Mazrouei-Sebdani, A. Khoddami, H. Hadadzadeh, M. Zarrebini

Abstract:

In this research, a waterglass-based aerogel powder was prepared by a sol-gel process and ambient pressure drying. Owing to its limited dust release, the aerogel powder was introduced into the PET electrospinning solution in an attempt to create the bulk and surface structure required for the nanofibers to improve their hydrophobic and insulation properties. The samples were evaluated by measuring density, porosity, contact angle, and heat transfer, and by FTIR, BET, and SEM. According to the results, a porous silica aerogel powder was fabricated with a mean pore diameter of 24 nm and a contact angle of 145.9º. The results indicated the usefulness of the aerogel powder confined into the nanofibers in controlling surface roughness to produce superhydrophobic nanowebs with a water contact angle of 147º. This can be attributed to a multi-scale surface roughness created by the nanoweb structure itself and by nanofiber surface irregularity in the presence of the aerogel, while a layer of fluorocarbon created a low surface energy. The wettability of a solid substrate is an important property that is controlled by both the chemical composition and the geometry of the surface. Also, a decreasing trend in the heat transfer was observed, from 22% for the nanofibers without any aerogel powder to 8% for the nanofibers with 4% aerogel powder. The development of thermal insulating materials has become increasingly important in view of fossil energy depletion and global warming, which call for more demanding energy-saving practices.

Keywords: Superhydrophobicity, Insulation, Sol-gel, Surface energy, Roughness.

1035 Fluorescence Quenching as an Efficient Tool for Sensing Application: Study on the Fluorescence Quenching of Naphthalimide Dye by Graphene Oxide

Authors: Sanaz Seraj, Shohre Rouhani

Abstract:

Recently, graphene has gained much attention because of its unique optical, mechanical, electrical, and thermal properties. Graphene has been used as a key material in technological applications in various areas such as sensors, drug delivery, supercapacitors, transparent conductors, and solar cells. It has a superior quenching efficiency for various fluorophores. Based on these unique properties, optical sensors with graphene materials as the energy acceptors have demonstrated great success in recent years. During quenching, the emission of a fluorophore is perturbed by a quencher, which can be a substrate or biomolecule, and owing to this phenomenon, fluorophore-quencher pairs have been used for the selective detection of target molecules. Among fluorescent dyes, 1,8-naphthalimide is well known as a typical intramolecular charge transfer (ICT) and photo-induced electron transfer (PET) fluorophore, with strong absorption and emission in the visible region, high photostability, and a large Stokes shift. Derivatives of 1,8-naphthalimide have found applications in several areas, especially fluorescence sensors. Herein, fluorescence quenching by graphene oxide has been studied using a naphthalimide dye as a fluorescent probe model. The quenching ability of graphene oxide toward the naphthalimide dye was studied by UV-Vis and fluorescence spectroscopy. This study showed that graphene is an efficient quencher for fluorescent dyes. Therefore, it can be used as a suitable candidate sensing platform. To the best of our knowledge, studies on the quenching and absorption of naphthalimide dyes by graphene oxide are rare.

Keywords: Fluorescence, graphene oxide, naphthalimide dye, quenching.

1034 Received Signal Strength Indicator Based Localization of Bluetooth Devices Using Trilateration: An Improved Method for the Visually Impaired People

Authors: Muhammad Irfan Aziz, Thomas Owens, Uzair Khaleeq uz Zaman

Abstract:

The instantaneous and spatial localization of visually impaired people in dynamically changing environments, with unexpected hazards and obstacles, is the most demanding and challenging issue faced by navigation systems today. Since Bluetooth cannot utilize techniques like Time Difference of Arrival (TDOA) and Time of Arrival (TOA), it uses the Received Signal Strength Indicator (RSSI) to measure the Received Signal Strength (RSS). Measurements using RSSI can be improved significantly by improving the existing RSSI-related methodologies. Therefore, the current paper focuses on proposing an improved method using trilateration for the localization of Bluetooth devices for visually impaired people. To validate the method, class 2 Bluetooth devices were used along with the development of a software tool. Experiments were then conducted to obtain surface plots that showed the signal interference and other environmental effects. Finally, the results obtained show the surface plots for all Bluetooth modules used, with the strong and weak points depicted by color codes in red, yellow and blue. It was concluded that the suggested improved method of measuring RSS using trilateration helped not only to measure signal strength effectively but also highlighted how the signal strength can be influenced by atmospheric conditions such as noise, reflections, etc.
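
A simplified sketch of the two steps behind an RSSI-trilateration method of this kind: converting RSSI to a distance estimate with a log-distance path-loss model, then trilaterating from three beacons by least squares. The path-loss constants and beacon layout are assumptions for illustration, not measured class 2 Bluetooth parameters.

    import numpy as np
    from scipy.optimize import least_squares

    def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
        # Log-distance path-loss model: RSSI = TxPower - 10*n*log10(d).
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

    beacons = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])   # known positions (m)
    rssi = np.array([-63.0, -70.0, -68.0])                      # measured RSSI (dBm)
    d = rssi_to_distance(rssi)

    # Trilateration: find (x, y) minimizing the mismatch between the estimated
    # ranges and the distances to each beacon.
    def residuals(p):
        return np.linalg.norm(beacons - p, axis=1) - d

    pos = least_squares(residuals, x0=[2.5, 2.5]).x
    print(pos.round(2))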

Keywords: Bluetooth, indoor/outdoor localization, received signal strength indicator, visually impaired.

1033 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adapted Stacking method for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels, where the final decision-maker (level 2) performs its training by combining the outputs of the pair of meta-classifiers (level 1) from tree-based and Bayesian families. These are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level 2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
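
A rough scikit-learn sketch of the multi-level idea (ensembles combined within an ensemble); the specific classifiers, parameters and dataset are placeholders, not the configuration evaluated in the article.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import StackingClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)   # small multi-class placeholder dataset

    # Level 0 -> level 1: each level-1 meta-classifier is itself a stacking
    # ensemble built from a pair of base classifiers of the same family.
    level1_a = StackingClassifier(
        estimators=[("nb1", GaussianNB()), ("nb2", GaussianNB(var_smoothing=1e-8))],
        final_estimator=LogisticRegression(max_iter=1000))
    level1_b = StackingClassifier(
        estimators=[("dt1", DecisionTreeClassifier(max_depth=3, random_state=0)),
                    ("dt2", DecisionTreeClassifier(max_depth=5, random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000))

    # Level 2: the final decision-maker combines the level-1 meta-classifiers.
    level2 = StackingClassifier(estimators=[("meta_a", level1_a), ("meta_b", level1_b)],
                                final_estimator=LogisticRegression(max_iter=1000))
    print(cross_val_score(level2, X, y, cv=5).mean())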

Keywords: Stacking, multi-layers, ensemble, multi-class.

1032 Fiber Bragg Grating Sensor Based Instrumentation to Evaluate Postural Balance and Stability on an Unstable Platform

Authors: Chethana K., Guru Prasad A. S., Vikranth H. N., Varun H., Omkar S. N., Asokan S.

Abstract:

This paper describes a novel application of Fiber Bragg Grating (FBG) sensors in the assessment of human postural stability and balance on an unstable platform. In this work, an FBG sensor Stability Analyzing Device (FBGSAD) is developed for the measurement of plantar strain to assess the postural stability of subjects on unstable platforms during different stances, in eyes-open and eyes-closed conditions, on a rocker board. The studies are validated by comparing the Centre of Gravity (CG) variations measured on the lumbar vertebra of subjects using a commercial accelerometer. The results obtained from the developed FBGSAD show qualitative similarities with the data recorded by the commercial accelerometer. The advantage of the FBGSAD is that it simultaneously measures the plantar strain distribution and postural stability of the subject, along with its inherent benefits such as not requiring an energizing voltage at the sensor, electromagnetic immunity, and a simple design, which suit biomechanical applications. The developed FBGSAD can serve as a tool/yardstick to mitigate space motion sickness, identify individuals who are susceptible to falls, and qualify subjects for balance and stability, which are important factors in the selection of certain unique professionals such as aircraft pilots, astronauts, cosmonauts, etc.

Keywords: Biomechanics, Fiber Bragg Gratings, Plantar Strain Measurement, Postural Stability Analysis.

1031 A Software Framework for Predicting Oil-Palm Yield from Climate Data

Authors: Mohd. Noor Md. Sap, A. Majid Awan

Abstract:

Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data for finding spatio-temporal patterns in climate data using kernel methods, which offer strength in dealing with complex data. This work takes inspiration from the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space. Therefore, it simplifies the exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, supporting effective and efficient data analysis by exploring patterns and structures in the data, and thus can be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
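
A bare-bones sketch of weighted kernel k-means (without the spatial-constraint term of the proposed algorithm): points are assigned by their distance to cluster centroids computed implicitly in the kernel-induced feature space. The kernel choice, weights and data below are placeholders, not the authors' implementation.

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    def weighted_kernel_kmeans(X, k, w=None, gamma=1.0, iters=20, seed=0):
        n = len(X)
        w = np.ones(n) if w is None else np.asarray(w, dtype=float)
        K = rbf_kernel(X, gamma=gamma)          # implicit feature-space inner products
        labels = np.random.RandomState(seed).randint(k, size=n)
        for _ in range(iters):
            dist = np.full((n, k), np.inf)
            for c in range(k):
                idx = labels == c
                if not idx.any():
                    continue                     # skip empty clusters
                wc, sw = w[idx], w[idx].sum()
                # ||phi(x_i) - m_c||^2 up to the common K_ii term:
                dist[:, c] = (-2.0 * (K[:, idx] @ wc) / sw
                              + (wc @ K[np.ix_(idx, idx)] @ wc) / sw ** 2)
            new_labels = dist.argmin(axis=1)
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
        return labels

    X = np.random.RandomState(0).rand(150, 3)   # stand-in for climate features
    print(np.bincount(weighted_kernel_kmeans(X, k=3)))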

Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield

1030 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical/statistical applications are developed with greater complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to be executed faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications. These environments allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are done in an automatic and transparent manner, with the aim of improving the performance metrics of our tool. Finally, experimental evaluations show the effectiveness of our new optimized version, in which we have achieved a considerable improvement in the execution time. The time was reduced by around 96% for the best case tested, between the original serial version and the automatic parallel version.

Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.

1029 A Review of the Characteristics and Optimization of Optical Properties of Zirconia Ceramics for Aesthetic Dental Restorations

Authors: R. A. Shahmiri, O. C. Standard, J. N. Hart, C. C. Sorrell

Abstract:

The ceramic yttria-stabilized tetragonal zirconia polycrystal (Y-TZP) has been used as a dental biomaterial for several decades. The strength and toughness of this material can be accounted for by its toughening mechanisms, which include transformation toughening, crack deflection, zone shielding, contact shielding, and crack bridging. Prevention of crack propagation is of critical importance in high-fatigue situations, such as those encountered in mastication and para-function. However, the poor translucence of Y-TZP in polycrystalline form is such that it may not meet the aesthetic requirements due to its white/grey appearance. To improve the optical properties of Y-TZP, more detailed study of the optical properties is required; in particular, precise evaluation of the refractive index, absorption coefficient, and scattering coefficient is necessary. The measurement of the optical parameters has been based on the assumption that light scattered from biological media is isotropically distributed over all angles. In fact, the optical behavior of real biological materials depends on the angular scattering of light due to the anisotropic nature of the materials. The purpose of the present work is to evaluate the optical properties (including color, opacity/translucence, scattering, and fluorescence) of zirconia dental ceramics and their control through modification of the chemical composition, phase composition, and surface microstructure.

Keywords: Optical properties, opacity/translucence, scattering, fluorescence, chemical composition, phase composition, surface microstructure.

1028 Design, Fabrication and Performance Evaluation of Mobile Engine-Driven Pneumatic Paddy Collector

Authors: Sony P. Aquino, Helen F. Gavino, Victorino T. Taylan, Teresito G. Aguinaldo

Abstract:

A simple mobile engine-driven pneumatic paddy collector, made of locally available materials using local manufacturing technology, was designed, fabricated, and tested for collecting and bagging paddy dried on concrete pavement. The pneumatic paddy collector had the following major components: a radial flat-bladed centrifugal fan, a power transmission system, a bagging area, the frame and the conveyance system. Results showed significant differences in the collecting capacity, noise level, and fuel consumption when the rotational speed of the air mover shaft was varied. Other parameters such as collecting efficiency, air velocity, augmented cracked grain percentage, and germination rate were not significantly affected by varying the rotational speed of the air mover shaft. The pneumatic paddy collector had a collecting efficiency of 99.33% with a collecting capacity of 2685.00 kg/h at a maximum centrifugal fan shaft speed of about 4200 rpm. The machine entailed an investment cost of P 62,829.25. The break-even weight of paddy was 510,606.75 kg/yr at a collecting cost of 0.11 P/kg of paddy. Utilizing the machine for 400 hours per year generated an income of P 23,887.73. The projected time needed to recover the cost of the machine, based on the 2685 kg/h collecting capacity, was 2.63 years.

Keywords: Mobile engine-driven pneumatic paddy collector, collecting capacity and efficiency, simple cost analysis.

1027 A Framework for Improving Trade Contractors’ Productivity Tracking Methods

Authors: Sophia Hayes, Kenny L. Liang, Sahil Sharma, Austin Shema, Mahmoud Bader, Mohamed Elbarkouky

Abstract:

Despite being one of the most significant economic contributors to the country, Canada's construction industry is lagging behind other sectors when it comes to labor productivity improvements. The construction industry is very collaborative, as a general contractor will hire trade contractors to perform most of a project's work; this means low productivity from one contractor can have a domino effect on the shared success of a project. To address this issue and encourage trade contractors to improve their productivity tracking methods, an investigative study was done on the productivity views and tracking methods of various trade contractors. Additionally, an in-depth review was done of four standard tracking methods used in the construction industry: cost codes, benchmarking, the job productivity measurement (JPM) standard, and WorkFace Planning (WFP). The four tracking methods were used as a baseline in comparing the trade contractors' responses, determining gaps within their current tracking methods, and making improvement recommendations. 15 interviews were conducted with different trades to analyze how contractors value productivity. The results of these analyses indicated that there seem to be gaps within the construction industry when it comes to understanding the purpose and value of productivity tracking. The trade contractors also shared their current productivity tracking systems, which were then compared to the four standard tracking methods used in the construction industry. Gaps were identified in their various tracking methods and, using a framework, recommendations were made based on the type of trade on how to improve how they track productivity.

Keywords: Trade contractors’ productivity, productivity tracking, cost codes, benchmarking, job productivity measurement, JPM, workface planning WFP.

1026 Safe and Efficient Deep Reinforcement Learning Control Model: A Hydroponics Case Study

Authors: Almutasim Billa A. Alanazi, Hal S. Tharp

Abstract:

Safe performance and efficient energy consumption are essential factors for designing a control system. This paper presents a reinforcement learning (RL) model that can be applied to control applications to improve safety and reduce energy consumption. As hardware constraints and environmental disturbances are imprecise and unpredictable, conventional control methods may not always be effective in optimizing control designs. However, RL has demonstrated its value in several artificial intelligence (AI) applications, especially in the field of control systems. The proposed model intelligently monitors a system's success by observing the rewards from the environment, with positive rewards counting as a success when the controlled reference is within the desired operating zone. Thus, the model can determine whether the system is safe to continue operating based on the designer/user specifications, which can be adjusted as needed. Additionally, the controller keeps track of energy consumption to improve energy efficiency by enabling the idle mode when the controlled reference is within the desired operating zone, thus reducing the system energy consumption during the control operation. Water temperature control for a hydroponic system is taken as a case study for the RL model, adjusting the variance of disturbances to show the model's robustness and efficiency. On average, the model showed safety improvements of up to 15% and energy efficiency improvements of 35%-40% compared to a traditional RL model.
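
A simplified sketch of the reward/idle logic described above for the water-temperature case; the setpoints, bands and reward values are invented assumptions, not the authors' implementation.

    def step_logic(temp_c, setpoint_c=24.0, safe_band=1.0, safety_limit=3.0):
        """Return (reward, action_mode) for one control step of the hydroponic
        water-temperature loop."""
        error = abs(temp_c - setpoint_c)
        if error <= safe_band:
            # Inside the desired operating zone: positive reward counts as a
            # success and the controller can switch to idle mode to save energy.
            return +1.0, "idle"
        if error > safety_limit:
            # Outside the user-specified safety envelope: flag as unsafe.
            return -10.0, "unsafe_stop"
        # Otherwise keep actively controlling; a small penalty encourages efficiency.
        return -1.0, "control"

    print(step_logic(24.5), step_logic(27.8))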

Keywords: Control system, hydroponics, machine learning, reinforcement learning.

1025 Mathematical Description of Functional Motion and Application as a Feeding Mode for General Purpose Assistive Robots

Authors: Martin Leroux, Sylvain Brisebois

Abstract:

Eating a meal is among the Activities of Daily Living, but it takes a lot of time and effort for people with physical or functional limitations. Dedicated technologies are cumbersome and not portable, while general-purpose assistive robots such as wheelchair-based manipulators are too hard to control for elaborate continuous motion like eating. Eating with such devices has not previously been automated, since there existed no description of a feeding motion for uncontrolled environments. In this paper, we introduce a feeding mode for assistive manipulators, including a mathematical description of trajectories for motions that are difficult to perform manually, such as gathering and scooping food at a defined/desired pace. We implement these trajectories in a sequence of movements for a semi-automated feeding mode which can be controlled with a very simple 3-button interface, allowing the user to have control over the feeding pace. Finally, we demonstrate the feeding mode with a JACO robotic arm and compare the eating speed, measured in bites per minute, of three eating methods: a healthy person eating unaided, a person with upper limb limitations or disability using JACO with manual control, and a person with limitations using JACO with the feeding mode. We found that the feeding mode allows eating about 5 bites per minute, which should be sufficient to eat a meal in under 30 minutes.

Keywords: Assistive robotics, Automated feeding, Elderly care, Trajectory design, Human-Robot Interaction.

1024 Investigating the Demand for Short-shelf Life Food Products for SME Wholesalers

Authors: Yamini Raju, Parminder S. Kang, Adam Moroz, Ross Clement, Ashley Hopwell, Alistair Duffy

Abstract:

Accurate forecasting of fresh produce demand is one of the challenges faced by Small and Medium Enterprise (SME) wholesalers. This paper is an attempt to understand the causes of the high level of variability, such as weather, holidays, etc., in the demand faced by SME wholesalers. Understanding the significance of unidentified factors may therefore improve forecasting accuracy. This paper presents the current literature on the factors used to predict demand and the existing forecasting techniques for short-shelf-life products. It then investigates a variety of possible internal and external factors, some of which are not used by other researchers in the demand prediction process. The results presented in this paper are further analysed using a number of techniques to minimize noise in the data. For the analysis, past sales data (January 2009 to May 2014) from a UK-based SME wholesaler are used, and the results presented are limited to the product 'milk', focused on cafés in Derby. Correlation analysis is performed to check the dependence of the actual demand on each variability factor. PCA is then performed to understand the significance of the factors identified using correlation. The PCA results suggest that cloud cover, weather summary and temperature are the most significant factors that can be used in forecasting the demand. The correlation of the above three factors increases for monthly demand and becomes more stable compared with the weekly and daily demand.
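
A small sketch of the two analysis steps described above, correlation of candidate factors with demand followed by PCA to gauge factor significance; the synthetic data merely illustrates the workflow and is not the wholesaler's sales record.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.RandomState(42)
    n = 200
    df = pd.DataFrame({
        "temperature": rng.normal(12, 5, n),
        "cloud_cover": rng.uniform(0, 1, n),
        "holiday": rng.binomial(1, 0.1, n),
    })
    # Synthetic demand loosely driven by weather, plus noise.
    df["demand"] = 100 - 2 * df["temperature"] + 20 * df["cloud_cover"] + rng.normal(0, 5, n)

    # Step 1: correlation of each candidate factor with the actual demand.
    print(df.corr()["demand"].drop("demand"))

    # Step 2: PCA on the standardized factors; loadings on the leading components
    # indicate which factors carry most of the variability.
    factors = df.drop(columns="demand")
    pca = PCA().fit(StandardScaler().fit_transform(factors))
    print(pd.DataFrame(pca.components_, columns=factors.columns).round(2))
    print(pca.explained_variance_ratio_.round(2))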

Keywords: Demand Forecasting, Deteriorating Products, Food Wholesalers, Principal Component Analysis and Variability Factors.

1023 Online Signature Verification Using Angular Transformation for e-Commerce Services

Authors: Peerapong Uthansakul, Monthippa Uthansakul

Abstract:

The rapid growth of e-Commerce services has been observed over the past decade. However, the methods used to verify authenticated users still depend largely on numeric approaches. The search for other verification methods suitable for online e-Commerce is therefore an interesting issue. In this paper, a new online signature-verification method using an angular transformation is presented. Delay shifts existing in online signatures are estimated by an estimation method relying on the angle representation. In the proposed signature-verification algorithm, all components of the input signature are extracted by considering the discontinuous break points in the stream of angular values. The estimated delay shift is then captured by comparison with the selected reference signature, and the matching error can be computed as the main feature used in the verification process. The threshold offsets are calculated from the two types of error characteristics of the signature verification problem, the False Rejection Rate (FRR) and the False Acceptance Rate (FAR). The level of these two error rates depends on the decision threshold chosen, whose value is set so as to realize the Equal Error Rate (EER; FAR = FRR). The experimental results show that, through simple programming employed on the Internet for demonstrating e-Commerce services, the proposed method can provide 95.39% correct verifications, 7% better than a DP-matching-based signature-verification method. In addition, signature verification with component extraction provides more reliable results than decision making over the whole signature.
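
To make the threshold selection concrete, a small sketch of computing FAR, FRR and the equal-error-rate threshold from matching-error scores; the genuine/forgery scores below are invented for illustration.

    import numpy as np

    # Matching errors for genuine signatures and forgeries (lower = better match).
    genuine = np.array([0.10, 0.15, 0.12, 0.20, 0.18, 0.11, 0.16])
    forgery = np.array([0.35, 0.42, 0.30, 0.50, 0.28, 0.45, 0.38])

    thresholds = np.linspace(0.0, 0.6, 601)
    frr = np.array([(genuine > t).mean() for t in thresholds])   # genuine rejected
    far = np.array([(forgery <= t).mean() for t in thresholds])  # forgeries accepted

    # Equal Error Rate: the threshold where FAR and FRR are closest.
    i = np.argmin(np.abs(far - frr))
    print(f"EER ~ {((far[i] + frr[i]) / 2):.3f} at threshold {thresholds[i]:.3f}")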

Keywords: Online signature verification, e-Commerce services, Angular transformation.

1022 X-Ray Intensity Measurement Using Frequency Output Sensor for Computed Tomography

Authors: R. M. Siddiqui, D. Z. Moghaddam, T. R. Turlapati, S. H. Khan, I. Ul Ahad

Abstract:

The quality of the 2D and 3D cross-sectional images produced by Computed Tomography primarily depends upon the precision of primary and secondary X-Ray intensity detection. The traditional method of primary intensity detection is prone to errors. Recently, an X-Ray intensity measurement system along with smart X-Ray sensors was developed by our group, which is able to detect the primary X-Ray intensity unerringly. In this study, a new smart X-Ray sensor is developed using the TSL230 Light-to-Frequency converter from Texas Instruments, which has numerous advantages in terms of noiseless data acquisition and transmission. The TSL230 is based on a silicon photodiode which converts incoming X-Ray radiation into a proportional current signal. A current-to-frequency converter is attached to this photodiode on a single monolithic CMOS integrated circuit, which provides a frequency count proportional to the incoming current signal in the form of a pulse train. The frequency count is delivered to a PICDEM FS USB board with a PIC18F4550 microcontroller mounted on it. With its highly compact electronic hardware, this demo board efficiently reads the smart sensor output data. The frequency output approach overcomes the nonlinear behavior of sensors with analog output; thus, unattenuated X-Ray intensities can be measured precisely and better normalization can be acquired in order to attain high resolution.
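
A minimal sketch of the host-side conversion implied above: the pulse count from a light-to-frequency sensor over a known gate time gives a frequency that is proportional to the incident intensity. The scale factor and counts are placeholders, not TSL230 calibration data.

    def pulses_to_relative_intensity(pulse_count, gate_time_s, full_scale_hz=1_000_000):
        """Convert a pulse count captured over gate_time_s into a frequency (Hz)
        and a relative intensity on a 0-1 scale, assuming the output frequency is
        proportional to irradiance (the premise of a light-to-frequency converter)."""
        freq_hz = pulse_count / gate_time_s
        return freq_hz, min(freq_hz / full_scale_hz, 1.0)

    # e.g. 250,000 pulses counted by the microcontroller over a 0.5 s gate.
    print(pulses_to_relative_intensity(250_000, 0.5))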

Keywords: Computed tomography, detector technology, X-Ray intensity measurement

1021 Resilient Manufacturing: Use of Augmented Reality to Advance Training and Operating Practices in Manual Assembly

Authors: L. C. Moreira, M. Kauffman

Abstract:

This paper outlines the results of experimental research on deploying an emerging augmented reality (AR) system for real-time task assistance (work instructions) in highly customised and high-risk manual operations. The focus is on human operators' training effectiveness and performance, and the aim is to test whether such technologies can help enhance knowledge retention levels and the accuracy of task execution to improve health and safety (H&S). An AR-enhanced assembly method is proposed and experimentally tested using a real industrial process as a case study: electric vehicle (EV) battery module assembly. The experimental results revealed that the proposed method improved training practices and performance, increasing knowledge retention levels from 40% to 84% and the accuracy of task execution from 20% to 71%, when compared to the traditional paper-based method. The results of this research validate and demonstrate how emerging technologies are advancing the choice between manual, hybrid and fully automated processes by promoting XR-assisted processes and the connected worker (a vision for Industry 4.0 and 5.0), and by supporting manufacturing in becoming more resilient in times of constant market change.

Keywords: Augmented reality, extended reality, connected worker, XR-assisted operator, manual assembly 4.0, industry 5.0, smart training, battery assembly.
