Search results for: Jaccard’s similarity measure.
167 Design, Modeling and Fabrication of a Tactile Sensor and Display System for Application in Laparoscopic Surgery
Authors: M. Ramezanifard, J. Dargahi, S. Najarian, N. Narayanan
Abstract:
One of the major disadvantages of minimally invasive surgery (MIS) is the lack of tactile feedback to the surgeon. In order to identify and avoid damage to the complex tissue grasped by endoscopic graspers, it is important to measure the local softness of tissue during MIS. One way to display the measured softness to the surgeon is a graphical method. In this paper, a new tactile sensor is reported. The tactile sensor consists of an array of four softness sensors, which are integrated into the jaws of a modified commercial endoscopic grasper. Each individual softness sensor consists of two piezoelectric polymer polyvinylidene fluoride (PVDF) films, which are positioned below a rigid and a compliant cylinder. The compliant cylinder is fabricated using a micro-molding technique. The combination of output voltages from the PVDF films is used to determine the softness of the grasped object. A theoretical analysis of the sensor is also presented. A method has been developed with the aim of reproducing the tactile softness for the surgeon by graphical means. In this approach, the proposed system, including the interfacing and the data acquisition card, receives signals from the array of softness sensors. After the signals are processed, the tactile information is displayed by means of a color-coding method. It is shown that the degrees of softness of the grasped objects/tissues can be visually differentiated and displayed on a monitor.
Keywords: Minimally invasive surgery, robotic surgery, sensor, softness, tactile.
166 Basic Research for Electroretinogram Moving the Center of the Multifocal Hexagonal Stimulus Array
Authors: Naoto Suzuki
Abstract:
Many ophthalmologists can examine declines in visual sensitivity at arbitrary points on the retina using a precise perimetry device with a fundus camera function. However, the retinal layer causing the decline in visual sensitivity cannot be identified by this method. We studied an electroretinogram (ERG) function that can move the center of the multifocal hexagonal stimulus array in order to investigate cryptogenic diseases, such as macular dystrophy, acute zonal occult outer retinopathy, and multiple evanescent white dot syndrome. An electroretinographic optical system, specifically a perimetric optical system, was added to an experimental device carrying the same optical system as a fundus camera. We also added an infrared camera, a cold mirror, a halogen lamp, and a monitor. The software was generated to show the multifocal hexagonal stimulus array on the monitor using C++Builder XE8 and to move the center of the array up and down as well as back and forth. We used a multifunction I/O device and its design platform LabVIEW for data retrieval. The plate electrodes were used to measure electrodermal activities around the eyes. We used a multifocal hexagonal stimulus array with 37 elements in the software. The center of the multifocal hexagonal stimulus array could be adjusted to the same position as the examination target of the precise perimetry. We successfully added the moving ERG function to the experimental ophthalmologic device.
Keywords: Moving ERG, precise perimetry, retinal layers, visual sensitivity.
165 Ontology of Collaborative Supply Chain for Quality Management
Authors: Jiaqi Yan, Sherry Sun, Huaiqing Wang, Zhongsheng Hua
Abstract:
In the highly competitive and rapidly changing global marketplace, independent organizations and enterprises often come together and form a temporary virtual-enterprise alignment in a supply chain to better provide products or services. As firms adopt the systems approach implicit in supply chain management, they must manage quality through both internal process control and external control of supplier quality and customer requirements. How to incorporate the quality management of upstream and downstream supply chain partners into one's own quality management system has recently received a great deal of attention from both academia and practice. This paper investigates the collaborative features and the entity relationships in a supply chain, and presents an ontology of the collaborative supply chain built by aligning a service-oriented framework with service-dominant logic. This perspective facilitates the segregation of material flow management from manufacturing capability management, which provides a foundation for coordinating and integrating the business processes to measure, analyze, and continually improve the quality of products, services, and processes. Further, this approach characterizes the different interests of supply chain partners, providing an innovative way to analyze the collaborative features of a supply chain. Furthermore, this ontology is the foundation for developing a quality management system that internalizes quality management in upstream and downstream supply chain partners and manages quality in the supply chain systematically.
Keywords: Ontology, supply chain quality management, service-oriented architecture, service-dominant logic.
164 Non-Methane Hydrocarbons Emission during the Photocopying Process
Authors: Kiurski S. Jelena, Aksentijević M. Snežana, Kecić S. Vesna, Oros B. Ivana
Abstract:
The proliferation of electronic equipment in photocopying environments has not only improved work efficiency but also changed indoor air quality. Considering the amount of photocopying performed, indoor air quality might be worse than in general office environments. Determining the contribution of any type of equipment to indoor air pollution is a complex matter. Non-methane hydrocarbons are known to have an important influence on air quality due to their high reactivity. The presence of hazardous pollutants in indoor air was detected in a photocopying shop in Novi Sad, Serbia. Air samples were collected and analyzed for five days during the 8-hour working time in three time intervals, at three different sampling points. Using a multiple linear regression model and the software package STATISTICA 10, the concentrations of occupational hazards and microclimate parameters were mutually correlated. Based on the obtained multiple coefficients of determination (0.3751, 0.2389 and 0.1975), a weak positive correlation between the observed variables was determined. Small values of the F statistic indicated that there was no statistically significant difference between the concentration levels of non-methane hydrocarbons and the microclimate parameters. The results showed that the variable could be represented by the general regression model y = b0 + b1xi1 + b2xi2. The obtained regression equations make it possible to quantify the agreement between the variables and thus gain more accurate knowledge of their mutual relations.
Keywords: Indoor air quality, multiple regression analysis, non-methane hydrocarbons, photocopying process.
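As an illustration of the general regression model quoted above, a two-predictor ordinary-least-squares fit can be sketched as follows. The data are invented for demonstration; the study itself fitted its model with STATISTICA 10, not this code.

```python
def fit_two_predictor_ols(x1, x2, y):
    """Return (b0, b1, b2) for y = b0 + b1*x1 + b2*x2 via the normal equations."""
    n = len(y)
    cols = [[1.0] * n, list(x1), list(x2)]  # design-matrix columns [1, x1, x2]
    xtx = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    xty = [sum(c * v for c, v in zip(ci, y)) for ci in cols]
    # Gauss-Jordan elimination; X'X is positive definite, so no pivoting is needed.
    for i in range(3):
        p = xtx[i][i]
        xtx[i] = [v / p for v in xtx[i]]
        xty[i] /= p
        for j in range(3):
            if j != i:
                f = xtx[j][i]
                xtx[j] = [a - f * b for a, b in zip(xtx[j], xtx[i])]
                xty[j] -= f * xty[i]
    return tuple(xty)
```

The same coefficients can of course be obtained with any statistics package; the point is only to make the model form y = b0 + b1xi1 + b2xi2 concrete.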
163 Urban Greenery in the Greatest Polish Cities: Analysis of Spatial Concentration
Authors: Elżbieta Antczak
Abstract:
Cities offer important opportunities for economic development and for expanding access to basic services, including health care and education, for large numbers of people. Moreover, green areas (as an integral part of sustainable urban development) present a major opportunity for improving urban environments and the quality of life and livelihoods. This paper examines, using spatial concentration and spatial taxonomic measures, the regional diversification of greenery in the cities of Poland. The analysis includes location quotients, the Lorenz curve, the locational Gini index, the synthetic index of greenery, and spatial statistics tools: (1) to verify the occurrence of strong concentration or dispersion of the phenomenon in time and space depending on the variable category, and (2) to study whether the level of greenery depends on spatial autocorrelation. The data cover the largest Polish cities, categories of urban greenery (parks, lawns, street greenery, green areas on housing estates, cemeteries, and forests), and the time span 2004-2015. According to the obtained estimates, most cities in Poland are already taking measures to become greener. However, there are still many barriers to well-balanced urban greenery development in the country (e.g. uncontrolled urban sprawl, poor management, and the lack of spatial urban planning systems).
Keywords: Greenery, urban areas, regional spatial diversification and concentration, spatial taxonomic measure.
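Two of the concentration measures named above, the location quotient and a Gini-type concentration index, can be sketched minimally as follows. The function names and toy figures are invented for illustration, not taken from the study.

```python
def location_quotient(city_green, city_area, total_green, total_area):
    """LQ > 1 means the city holds more than its proportional share of greenery."""
    return (city_green / city_area) / (total_green / total_area)

def locational_gini(values):
    """Gini index via mean absolute difference; 0 means a perfectly even spread."""
    n, mu = len(values), sum(values) / len(values)
    return sum(abs(a - b) for a in values for b in values) / (2 * n * n * mu)
```

A Lorenz curve is the cumulative-share plot underlying the same Gini value; the index alone is often enough for ranking regions.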
162 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the six sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by the customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be efficiently used to determine the important factors that improve the process capability index of the injection molding process.
Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.
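The capability indices mentioned above have standard definitions, and a Taguchi analysis typically scores runs with a signal-to-noise ratio. A minimal sketch of both, with the smaller-the-better S/N form as an assumed choice for shrinkage deviation, is:

```python
import math

def process_capability(mean, sigma, lsl, usl):
    """Standard Cp and Cpk for spec limits [lsl, usl]."""
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3.0 * sigma)
    return cp, cpk

def sn_smaller_the_better(ys):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean of y^2)."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))
```

Cpk falls below Cp as the process mean drifts off-center, which is why both are tracked in a DMAIC study.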
161 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems
Authors: Rodolfo Lorbieski, Silvia Modesto Nassar
Abstract:
Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition, and medical diagnostics. The objective of this article is to analyze an adapted method of Stacking in multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training procedure similar to Stacking was used, but with three levels, where the final decision-maker (level 2) performs its training by combining the outputs of a pair of meta-classifiers (level 1) from tree-based and Bayesian families; these are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments, and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
Keywords: Stacking, multi-layers, ensemble, multi-class.
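The three-level data flow described above can be sketched schematically: level-0 outputs become the inputs of level 1, whose outputs feed the final level-2 decision-maker. The toy threshold "classifiers" below are invented stand-ins for the paper's trained tree-based and Bayesian models.

```python
def stacked_predict(level0, level1, level2, x):
    """Three-level stacking: each level's outputs are the next level's inputs."""
    base_out = [clf(x) for clf in level0]      # level 0: base classifiers
    meta_out = [m(base_out) for m in level1]   # level 1: meta-classifiers
    return level2(meta_out)                    # level 2: final decision-maker

# Toy instantiation: three threshold rules, two vote combiners, unanimity on top.
level0 = [lambda x: int(x > 0), lambda x: int(x > 1), lambda x: int(x > 2)]
level1 = [lambda f: int(sum(f) >= 2), lambda f: max(f)]
level2 = lambda m: int(sum(m) == len(m))
```

In the real method each level is trained on held-out predictions of the level below, not hand-written rules; this sketch only fixes the wiring.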
160 Cultural Practices as a Coping Measure for Women who Terminated a Pregnancy in Adolescence: A Qualitative Study
Authors: Botshelo R. Sebola
Abstract:
Unintended pregnancy often results in pregnancy termination. Most countries have legalised the termination of a pregnancy and pregnant adolescents can visit designated clinics without their parents’ consent. In most African and Asian countries, certain cultural practices are performed following any form of childbirth, including abortion, and such practices are ingrained in societies. The aim of this paper was to understand how women who terminated a pregnancy during adolescence coped by embracing cultural practices. A descriptive multiple case study design was adopted for the study. In-depth, semi-structured interviews and reflective diaries were used for data collection. Participants were 13 women aged 20 to 35 years who had terminated a pregnancy in adolescence. Three women kept their soiled sanitary pads, burned them to ash and waited for the rainy season to scatter the ash in a flowing stream. This ritual was performed to appease the ancestors, ask them for forgiveness and as a send-off for the aborted foetus. Five women secretly consulted Sangoma (traditional healers) to perform certain rituals. Three women isolated themselves to perform herbal cleansings, and the last two chose not to engage in any sexual activity for one year, which led to the loss of their partners. This study offers a unique contribution to understanding the solitary journey of women who terminate a pregnancy. The study challenges healthcare professionals who work in clinics that offer pregnancy termination services to look beyond releasing the foetus to advocating and providing women with the necessary care and support in performing cultural practices.
Keywords: Adolescence, case study, cultural rituals, pregnancy.
159 Indoor and Outdoor Concentration of Particulate Matter at Domestic Homes
Authors: B. Karakas, S. Lakestani, C. Guler, B. Guciz Dogan, S. Acar Vaizoglu, A. Taner, B. Sekerel, R. Tıpırdamaz, G. Gullu
Abstract:
Particulate matter (PM) in ambient air is responsible for adverse health effects in adults and children. Relatively little is known about the concentrations, sources, and health effects of PM in indoor air. A monitoring study was conducted in Ankara in three campaigns in order to measure PM levels in indoor and outdoor environments and to identify and quantify associations between sources and concentrations. Approximately 82 homes (42 in the 1st campaign, 12 in the 2nd, and 28 in the 3rd), three rooms (living room, baby's room, and living room used as a baby's room) and the outdoor ambient air at each home were sampled with a Grimm Environmental Dust Monitoring (EDM) 107 instrument during different seasonal periods of 2011 and 2012. In this study, the relationship between indoor and outdoor PM levels was investigated for particulate matter smaller than 10 µm (PM10), smaller than 2.5 µm (PM2.5), and smaller than 1.0 µm (PM1.0). The mean concentrations of PM10, PM2.5, and PM1.0 in living rooms used as baby's rooms were higher than in living rooms and baby's rooms (or bedrooms) in all three sampling campaigns. It is concluded that household activities and environmental conditions were very important for PM concentrations in the indoor environments during the sampling periods. The number of smokers, and proximity to a main street and/or construction activities, increased the PM concentration. This study assesses the relationship between indoor and outdoor PM levels, household activities, and environmental conditions.
Keywords: Indoor air quality, particulate matter (PM), PM10, PM2.5, PM1.0.
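A common summary for paired indoor/outdoor measurements of this kind is the indoor/outdoor (I/O) concentration ratio. The abstract does not state the study's exact statistics, so the following is only a generic sketch with invented numbers:

```python
def mean(xs):
    """Arithmetic mean of a list of concentrations."""
    return sum(xs) / len(xs)

def io_ratio(indoor, outdoor):
    """Indoor/outdoor concentration ratio; values > 1 suggest indoor sources."""
    return mean(indoor) / mean(outdoor)
```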
158 Surface Modification of Titanium Alloy with Laser Treatment
Authors: Nassier A. Nassir, Robert Birch, D. Rico Sierra, S. P. Edwardson, G. Dearden, Zhongwei Guan
Abstract:
The effect of laser surface treatment parameters on the residual strength of titanium alloy has been investigated. The influence of the laser surface treatment on the bonding strength between the titanium and poly-ether-ketone-ketone (PEKK) surfaces was also evaluated and compared to that offered by titanium foils without surface treatment, in order to optimize the laser parameters. Material characterization using an optical microscope was carried out to study the microstructure and to measure the mean roughness value of the titanium surface. The results showed that the surface roughness depends significantly on the laser power, with roughness increasing as laser power increases. Moreover, the tensile tests showed no significant drop in tensile strength for the treated samples compared to the virgin ones. In order to optimize the laser parameters as well as the corresponding surface roughness, single-lap shear tests were conducted on pairs of the laser-treated titanium strips. The results showed that the bonding shear strength between titanium alloy and PEKK film increased with surface roughness up to a specific limit. Beyond this point, interestingly, the laser parameters had no significant effect on the bonding strength. This evidence suggests that it is not necessary to use very high laser power to treat the titanium surface to achieve good bonding strength between titanium alloy and PEKK film.
Keywords: Bonding strength, laser surface treatment, PEKK, titanium alloy.
157 Modelling Phytoremediation Rates of Aquatic Macrophytes in Aquaculture Effluent
Authors: E. A. Kiridi, A. O. Ogunlela
Abstract:
Pollutants from aquacultural practices constitute environmental problems, and phytoremediation could offer a cheaper, environmentally sustainable alternative, since equipment using advanced treatment for fish tank effluent is expensive to import, install, operate, and maintain, especially in developing countries. The main objective of this research was, therefore, to develop a mathematical model for phytoremediation by aquatic plants in aquaculture wastewater. Other objectives were to evaluate the effect of retention times on phytoremediation rates using the model and to measure the nutrient level of the aquaculture effluent and the phytoremediation rates of three aquatic macrophytes, namely water hyacinth (Eichhornia crassipes), water lettuce (Pistia stratiotes), and morning glory (Ipomoea asarifolia). A completely randomized experimental design was used in the study. Approximately 100 g of each macrophyte were introduced into the hydroponic units and the phytoremediation indices were monitored at 8 different intervals from the first to the 28th day. The water quality parameters measured were pH and electrical conductivity (EC). Others were the concentrations of ammonium-nitrogen (NH4+-N), nitrite-nitrogen (NO2--N), nitrate-nitrogen (NO3--N), and phosphate-phosphorus (PO43--P), and the biomass value. The biomass produced by water hyacinth was 438.2 g, 600.7 g, 688.2 g and 725.7 g at four 7-day intervals. The corresponding values for water lettuce were 361.2 g, 498.7 g, 561.2 g and 623.7 g, and for morning glory 417.0 g, 567.0 g, 642.0 g and 679.5 g. The coefficient of determination was greater than 80% for EC, TDS, NO2--N and NO3--N, and 70% for NH4+-N, using any of the macrophytes, and the predicted values were within the 95% confidence interval of the measured values. Therefore, the model is valuable in the design and operation of phytoremediation systems for aquaculture effluent.
Keywords: Phytoremediation, macrophytes, hydroponic unit, aquaculture effluent, mathematical model.
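The abstract does not give the model's functional form, so the sketch below assumes first-order removal kinetics, C(t) = C0·e^(-kt), a common starting point for nutrient uptake in such systems; the rate constant can then be estimated from two sampling intervals.

```python
import math

def first_order_rate(c0, ct, t):
    """Estimate k in C(t) = C0*exp(-k*t) from concentrations at times 0 and t."""
    return math.log(c0 / ct) / t

def concentration(c0, k, t):
    """Predicted concentration after time t under first-order removal."""
    return c0 * math.exp(-k * t)
```

With such a model, the effect of different retention times is read off directly by evaluating concentration() at each candidate t.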
156 Ice Load Measurements on Known Structures Using Image Processing Methods
Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
This study employs a method based on image analysis and structure information to detect accumulated ice on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions. Image processing methods are used to measure ice loads automatically. Most image processing methods are developed based on analyses of captured images. In this method, ice loads on structures are calculated by defining the structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars was designed as the known structure of the experimental setup. Unsymmetrical ice accumulated on the structure in a cold room represents the actual case of the experiments. Camera intrinsic and extrinsic parameters are used to define the structure coordinates in the image coordinate system according to the camera location and angle. A thresholding method is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining information from the binary image and the structure coordinates. Averaging ice diameters from different camera views yields the ice thicknesses of the structure elements. Comparison between ice load measurements using this method and the actual ice loads shows positive correlation with an acceptable range of error. The method can be applied to complex structures by defining the structure and camera coordinates.
Keywords: Camera calibration, Ice detection, ice load measurements, image processing.
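The thresholding step and the pixel-count-to-thickness conversion can be sketched as follows. The scale factor and bare-element diameter are invented parameters standing in for the calibration the paper derives from camera intrinsics and extrinsics.

```python
def to_binary(image, threshold):
    """Binarize a grayscale image (nested lists): 1 = candidate ice pixel."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def ice_thickness(binary_row, mm_per_pixel, bare_diameter_mm):
    """Iced diameter from the pixel count, minus the known bare-element diameter."""
    return binary_row.count(1) * mm_per_pixel - bare_diameter_mm
```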
155 Rapid Determination of Biochemical Oxygen Demand
Authors: Mayur Milan Kale, Indu Mehrotra
Abstract:
Biochemical oxygen demand (BOD) is a measure of the oxygen used in bacteria-mediated oxidation of organic substances in water and wastewater. Theoretically, an infinite time is required for complete biochemical oxidation of organic matter, but the measurement is made over a 5-day test period at 20 °C or a 3-day period at 27 °C, with or without dilution. Researchers have worked to further reduce the measurement time. The objective of this paper is to review advancements made in BOD measurement, primarily to minimize the time and overcome measurement difficulties. The literature on four such techniques is surveyed, namely BOD-BART™, biosensors, the ferricyanide-mediated approach, and the luminous-bacteria immobilized-chip method. The basic principle, method of determination, data validation, and advantages and disadvantages of each method are covered. In the BOD-BART™ method, the time lag for the system to change from an oxidative to a reductive state is calculated. Biosensors pair a biological sensing element with a transducer that produces a signal proportional to the analyte concentration. Each microbial species has its own metabolic deficiencies; co-immobilization of bacteria using a sol-gel biosensor increases the range of substrates. In the ferricyanide-mediated approach, ferricyanide is used as the electron acceptor instead of oxygen. In the luminous-bacteria immobilized-chip method, bacterial bioluminescence, which is governed by the lux genes, is observed; the physiological response is measured and correlated to BOD. There is scope to probe further into the rapid estimation of BOD.
Keywords: BOD, four methods, rapid estimation.
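The time behaviour behind the 5-day and 3-day tests is the classical first-order BOD model, BOD_t = L0·(1 - e^(-k·t)), where L0 is the ultimate BOD and k the deoxygenation rate constant. A minimal worked sketch (the rate constant 0.23/day below is a typical textbook value, not one from this paper):

```python
import math

def bod_t(ultimate_bod, k, t):
    """BOD exerted after t days under first-order kinetics."""
    return ultimate_bod * (1.0 - math.exp(-k * t))

def ultimate_bod(bod_t_value, k, t):
    """Invert the model: ultimate BOD L0 from a t-day measurement."""
    return bod_t_value / (1.0 - math.exp(-k * t))
```

With k = 0.23/day, a 5-day test captures only about two-thirds of the ultimate demand, which is why rapid surrogate methods are attractive.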
154 Improving Quality of Business Networks for Information Systems
Authors: Hazem M. El-Bakry, Ahmed Atwan
Abstract:
Computer networks are an essential part of computer-based information systems. The performance of these networks has a great influence on the whole information system. Measuring the usability criteria and customer satisfaction for a small computer network is very important. In this article, an effective approach for measuring the usability of a business network in an information system is introduced. The usability process for networking provides a flexible and cost-effective way to assess the usability of a network and its products. In addition, the proposed approach can be used to certify network product usability late in the development cycle. Furthermore, it can be used to help develop usable interfaces very early in the cycle and to give a way to measure, track, and improve usability. Moreover, a new approach for fast information processing over computer networks is presented. The entire data are collected together in a long vector and then tested as one input pattern. The proposed fast time delay neural networks (FTDNNs) use cross-correlation in the frequency domain between the tested data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented time delay neural networks is less than that needed by conventional time delay neural networks (CTDNNs). Simulation results using MATLAB confirm the theoretical computations.
Keywords: Usability criteria, computer networks, fast information processing, cross-correlation, frequency domain.
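The speed-up claimed above rests on the correlation theorem: circular cross-correlation can be computed as an inverse DFT of a conjugate spectral product. A naive-DFT illustration of that identity (the paper's MATLAB networks are not reproduced here; a real implementation would use an FFT):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), for illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Naive inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def circular_cross_correlation(a, b):
    """Correlation theorem: corr(a, b) = IDFT(conj(DFT(a)) * DFT(b))."""
    A, B = dft(a), dft(b)
    return [c.real for c in idft([ai.conjugate() * bi for ai, bi in zip(A, B)])]
```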
153 A Case Study on the Value of Corporate Social Responsibility Systems
Authors: José M. Brotons, Manuel E. Sansalvador
Abstract:
The relationship between corporate social responsibility (CSR) and financial performance (FP) is a subject of great interest that has not yet been resolved. In this work, we have developed a new and original tool to measure this relationship. The tool quantifies the value contributed to companies that are committed to CSR. The theoretical model used is the fuzzy discounted cash flow method. Two assumptions have been considered: first, that the company has implemented the IQNet SR10 certification, and second, that it has not. For the first, the growth rate used over the time horizon is the rate maintained by the company after obtaining the IQNet SR10 certificate. For the second, both the company's growth rates prior to the implementation of the certification and the evolution of the sector are taken into account. By using triangular fuzzy numbers, it is possible to deal adequately with each company's forecasts as well as the information corresponding to the sector. Once the annual growth rate of sales is obtained, the profit and loss accounts are generated from the estimated annual sales. For the remaining elements of this account, their regression against net sales has been considered. The difference between these two valuations, made in a fuzzy environment, gives the value of the IQNet SR10 certification. Although this study presents an innovative methodology to quantify the relationship between CSR and FP, the authors are aware that only one company has been analyzed. This is precisely the main limitation of this study, which in turn opens up an interesting line for future research: to broaden the sample of companies.
Keywords: Corporate social responsibility, case study, financial performance, company valuation.
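The core of a fuzzy discounted cash flow valuation with triangular fuzzy numbers can be sketched as follows: each annual cash flow is a (low, mode, high) triple, and since discount factors are positive, discounting can be applied component-wise. The figures and crisp discount rate are illustrative assumptions, not the study's data.

```python
def tfn_npv(fuzzy_cash_flows, rate):
    """Discount a list of triangular fuzzy cash flows (low, mode, high),
    one per year starting at year 1, at a crisp positive rate."""
    npv = [0.0, 0.0, 0.0]
    for year, tfn in enumerate(fuzzy_cash_flows, start=1):
        factor = (1.0 + rate) ** year
        for i in range(3):
            npv[i] += tfn[i] / factor  # positive scaling preserves the TFN shape
    return tuple(npv)
```

The certification's value then follows as the (fuzzy) difference between the with-certificate and without-certificate valuations.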
152 A Distributed Cognition Framework to Compare E-Commerce Websites Using Data Envelopment Analysis
Authors: C. lo Storto
Abstract:
This paper presents an approach based on a distributed cognition framework and a non-parametric multicriteria evaluation methodology, data envelopment analysis (DEA), designed specifically to compare e-commerce websites from the consumer/user viewpoint. In particular, the framework takes a website's relative efficiency as a measure of its quality and usability. A website is modelled as a black box capable of providing the consumer/user with a set of functionalities. When the consumer/user interacts with the website to perform a task, he/she is involved in a cognitive activity, sustaining a cognitive cost to search, interpret, and process information, and experiencing a sense of satisfaction. The degree of ambiguity and uncertainty he/she perceives and the needed search time determine the effort, and hence the cognitive cost, he/she has to sustain to perform the task. On the other hand, performing the task and achieving the result induce a sense of gratification, satisfaction, and usefulness. In total, 9 variables are measured, classified into a set of 3 website macro-dimensions (user experience, site navigability, and structure). The framework is implemented to compare 40 websites of businesses performing electronic commerce in the information technology market. A questionnaire to collect subjective judgements for the websites in the sample was purposely designed and administered to 85 university students enrolled in computer science and information systems engineering undergraduate courses.
Keywords: Website, e-commerce, DEA, distributed cognition, evaluation, comparison.
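Full DEA solves one linear program per unit; in the single-input, single-output special case, however, the CCR efficiency score reduces to each unit's output/input ratio normalized by the best ratio. That degenerate case is sketched below (the paper's 9-variable model would need an LP solver, which is not reproduced here):

```python
def dea_ratio_efficiency(inputs, outputs):
    """CCR efficiency for one input and one output per decision-making unit:
    each unit's output/input ratio divided by the best observed ratio."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

Here an "input" could be a cognitive-cost variable and an "output" a satisfaction variable; efficient websites score 1.0 and the rest are measured against that frontier.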
151 A Study of Dose Distribution and Image Quality under an Automatic Tube Current Modulation (ATCM) System for a Toshiba Aquilion 64 CT Scanner Using a New Design of Phantom
Authors: S. Sookpeng, C. J. Martin, D. J. Gentle
Abstract:
Automatic tube current modulation (ATCM) systems are available from all CT manufacturers and are used for the majority of patients. Understanding how the systems work and their influence on patient dose and image quality is important for CT users in order to make the most effective use of the systems. In the present study, a new phantom was used to evaluate dose distribution and image quality under ATCM operation for the Toshiba Aquilion 64 CT scanner, using different ATCM options and a fixed mAs technique. A routine chest, abdomen and pelvis (CAP) protocol was selected for study, and Gafchromic film was used to measure the entrance surface dose (ESD), peripheral dose, and central axis dose in the phantom. The results show the dose reductions achievable with various ATCM options in relation to the target noise. The dose and image noise distributions were more uniform when the ATCM system was implemented than with the fixed mAs technique. The lower limit set for the tube current affects the modulation, especially for the lower-dose options. This limit prevented the tube current from being reduced further, and therefore the lower-dose ATCM setting resembled a fixed mAs technique. Selecting a lower tube current limit is likely to reduce doses for smaller patients in scans of the chest and neck regions.
Keywords: Computed Tomography (CT), Automatic Tube Current Modulation (ATCM), Automatic Exposure Control (AEC).
150 Speech Enhancement Using Wavelet Coefficients Masking with Local Binary Patterns
Authors: Christian Arcos, Marley Vellasco, Abraham Alcaim
Abstract:
In this paper, we present a wavelet-coefficient masking approach based on local binary patterns (WLBP) to enhance the temporal spectra of the wavelet coefficients for speech enhancement. This technique exploits the wavelet denoising scheme, which splits the degraded speech into pyramidal subband components and extracts frequency information without losing temporal information. Speech enhancement in each high-frequency subband is performed through binary labels from the local binary pattern masking, which encodes the ratio between the original value of each coefficient and the values of the neighbouring coefficients. This approach enhances the high-frequency spectra of the wavelet transform instead of eliminating them through a threshold. A comparative analysis is carried out with conventional speech enhancement algorithms, demonstrating that the proposed technique achieves significant improvements in terms of PESQ, an international recommendation for objectively estimating subjective speech quality. Informal listening tests also show that the proposed method improves the quality of speech in an acoustic context, avoiding the annoying musical noise present in other speech enhancement techniques. Experimental results obtained with a DNN-based speech recognizer in noisy environments corroborate the superiority of the proposed scheme in the robust speech recognition scenario.
Keywords: Binary labels, local binary patterns, mask, wavelet coefficients, speech enhancement, speech recognition.
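A 1-D local binary pattern over wavelet coefficients can be sketched as follows: each interior coefficient is coded by comparing its two neighbours against it, and the code then drives a keep-or-attenuate mask. The two-neighbour code and the attenuation factor are simplifying assumptions for illustration, not the paper's exact masking rule.

```python
def lbp_codes(coeffs):
    """Two-neighbour LBP code per interior coefficient:
    high bit = left neighbour >= centre, low bit = right neighbour >= centre."""
    codes = []
    for i in range(1, len(coeffs) - 1):
        left = 1 if coeffs[i - 1] >= coeffs[i] else 0
        right = 1 if coeffs[i + 1] >= coeffs[i] else 0
        codes.append((left << 1) | right)
    return codes

def mask_coefficients(coeffs, attenuation=0.25):
    """Attenuate interior coefficients that are not local maxima (code != 0),
    instead of zeroing them with a hard threshold."""
    codes = lbp_codes(coeffs)
    out = list(coeffs)
    for i, code in enumerate(codes, start=1):
        if code != 0:
            out[i] *= attenuation
    return out
```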
PDF Downloads: 1017
149 Hypertension and Its Association with Oral Health Status in Adults: A Pilot Study in Padusunan Adults Community
Authors: Murniwati, Nurul Khairiyah, Putri Ovieza Maizar
Abstract:
The association between general and oral health is clearly important, particularly in adults with medical conditions. Many systemic medical conditions are either caused or aggravated by poor oral hygiene, and vice versa. Hypertension is a common systemic medical problem that has become a public health concern worldwide due to its known consequences. Those consequences may be related to oral health status as well, as hypertension may cause or worsen oral health conditions. The objective of this study was to find out the association between hypertension and oral health status in adults. This was an analytical observational study using a cross-sectional method. A total of 42 adults, both male and female, in Padusunan Village, Pariaman, West Sumatra, Indonesia were selected as subjects by purposive sampling. A manual sphygmomanometer was used to measure blood pressure, and dental examination was performed to calculate decayed, missing, and filled teeth (DMFT) scores in order to represent oral health status. The data obtained were analyzed statistically using one-way ANOVA to determine the association between hypertensive adults and their oral health status. The results showed that the majority of subjects were aged 51-70 years (40.5%). Based on blood pressure examination, 57.1% of subjects were classified as prehypertensive. Overall, the differences in mean DMFT scores among the normal, prehypertension and hypertension groups were not statistically significant. There was no significant association (p>0.05) between hypertension and oral health status in adults.
Keywords: Blood pressure, hypertension, DMFT, oral health status.
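For readers who want to reproduce this kind of comparison, the one-way ANOVA F statistic can be computed directly from the group scores. The DMFT values below are invented placeholders, not the study's data:

```python
import numpy as np

def one_way_anova(*groups):
    # Classic one-way ANOVA: F = between-group variance / within-group variance.
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(g.size for g in groups)
    k = len(groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between, df_within = k - 1, n_total - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

# Hypothetical DMFT scores for the three blood-pressure groups.
normal = [4, 5, 6, 5]
prehyp = [5, 6, 7, 6]
hyper  = [6, 5, 7, 6]
F, df1, df2 = one_way_anova(normal, prehyp, hyper)
```

The F value would then be compared against the F(df1, df2) critical value at the chosen significance level to decide whether group means differ.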
PDF Downloads: 1773
148 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles
Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi
Abstract:
Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. This study uses information on around 40,000 vehicles' specifications and operational environmental conditions, such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data are used to investigate the accuracy of the machine learning algorithms linear regression (LR), K-nearest neighbor (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements. The performance of the algorithms is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data, and 4.2% on operational data.
Keywords: Artificial neural networks, fuel consumption, machine learning, regression, statistical tests.
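As a rough sketch of this kind of comparison (with synthetic data in place of the proprietary vehicle records, and hand-rolled LR and KNN rather than any particular library), two of the three evaluated predictors can be outlined as follows; the feature names, coefficients and noise level are invented:

```python
import numpy as np

def linreg_fit(X, y):
    # Ordinary least squares with an intercept column.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def linreg_predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w

def knn_predict(X_train, y_train, X_test, k=3):
    # KNN regression: average the targets of the k nearest training points.
    preds = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(dist)[:k]
        preds.append(y_train[idx].mean())
    return np.array(preds)

# Hypothetical features: [gross weight (t), engine power (kW), mean road slope (%)]
rng = np.random.default_rng(1)
X = rng.uniform([10, 200, 0], [40, 500, 5], size=(200, 3))
# Synthetic fuel consumption (L/100 km): grows with weight and slope.
y = 15 + 0.5 * X[:, 0] + 2.0 * X[:, 2] + rng.normal(0, 0.5, 200)
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]
w = linreg_fit(X_tr, y_tr)
# Mean relative prediction error, the metric reported in the abstract.
mre_lr = np.mean(np.abs(linreg_predict(w, X_te) - y_te) / y_te)
```

The paper's evaluation additionally wraps model selection in nested cross-validation and compares errors with statistical hypothesis tests, which the sketch omits.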
PDF Downloads: 830
147 An Investigation of Performance versus Security in Cognitive Radio Networks with Supporting Cloud Platforms
Authors: Kurniawan D. Irianto, Demetres D. Kouvatsos
Abstract:
The growth of wireless devices affects the availability of limited frequencies, or spectrum bands, since spectrum is a finite natural resource that cannot be expanded. Meanwhile, the licensed frequencies are idle most of the time. Cognitive radio is one solution to this problem. Cognitive radio is a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without interfering with licensed users, or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with cloud platforms. One of the important issues in CRNs is security. It is a problem because CRNs use radio frequencies as a transmission medium and therefore share the same vulnerabilities as other wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between the two. The goal of this paper is to investigate the performance-security trade-off in CRNs with supporting cloud platforms. Furthermore, queueing network models with preemptive resume and preemptive repeat identical priority are applied in this project to measure the impact of security on performance in CRNs with or without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled.
Keywords: Cloud platforms, cognitive radio networks, GE-type distribution, performance vs. security.
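The GE-type distribution mentioned in the abstract is a two-moment model of bursty traffic: an interval is zero (a batch event) with probability 1 − τ and exponential with parameter τλ otherwise, where τ = 2/(C² + 1) is chosen so that the mean is 1/λ and the squared coefficient of variation is C². A small sampler illustrating this (the rate and burstiness parameters are invented):

```python
import numpy as np

def ge_sample(rate, scv, size, rng):
    # Generalized exponential (GE) inter-event times with mean 1/rate and
    # squared coefficient of variation scv >= 1: with probability 1 - tau
    # the interval is zero (a batch arrival), otherwise it is exponential
    # with parameter tau * rate.
    tau = 2.0 / (scv + 1.0)
    t = rng.exponential(1.0 / (tau * rate), size)
    t[rng.random(size) >= tau] = 0.0
    return t

rng = np.random.default_rng(42)
samples = ge_sample(rate=2.0, scv=4.0, size=200_000, rng=rng)
mean = samples.mean()                  # close to 1 / rate = 0.5
scv_hat = samples.var() / mean ** 2    # close to the requested SCV of 4
```

Feeding such streams into a simulated priority queue is one way to reproduce the kind of performance measurements the paper reports analytically.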
PDF Downloads: 2521
146 Juvenile Delinquency of Senior High School Students in Surabaya, Indonesia
Authors: Herdina Indrijati
Abstract:
This research aims to describe teenage delinquency behavior (juvenile delinquency) of senior high school students in Surabaya, Indonesia. Juvenile delinquency covers a broad range of behaviors, from socially unacceptable behavior (acting out in school) and violations (running away from home) to crimes (such as stealing). This research uses a quantitative descriptive method with 498 students from 8 different schools in Surabaya as subjects. A juvenile delinquency behavior questionnaire was completed by the subjects and used to measure and describe the behavior. The results of this research are presented in descriptive statistics. Results show that 169 subjects skip school, 55 subjects leave home without their parents' permission, 110 subjects engage in smoking behavior, 74 subjects damage other people's property, 32 subjects steal, 16 subjects exploit others, and 7 subjects engage in drug abuse. The frequency of the top five behaviors mentioned is 1-10 times. It is also found that subjects' peers are most likely to be the victims of juvenile delinquency. The reasons teenagers engage in juvenile delinquency include: (1) feeling tired, bored or lazy, which contributes to skipping school; (2) having many problems with parents, which drives them to run away from home; (3) accidentally damaging other people's property; (4) financial problems, which push them to steal and exploit others; (5) feeling burdened by life problems, which leads them to use drugs; and (6) trying smoking for the experience.
Keywords: Juvenile delinquency, senior high school, student.
PDF Downloads: 1598
145 Technological Advancement in Fashion Online Retailing: A Comparative Study of Pakistan and UK Fashion E-Commerce
Authors: Sadia Idrees, Gianpaolo Vignali, Simeon Gill
Abstract:
The study aims to establish the virtual size and fit technology features that enhance fashion online retailing platforms, utilising digital human measurements to provide customised style and function to consumers. A few firms in the UK have launched advanced interactive fashion shopping domains for personalised shopping globally, aided by the latest internet technology. Virtual size and fit interfaces have great potential to provide personalised, better-fitted garments and to promote mass customisation globally. Made-to-measure clothing, produced from unstitched fabric, is a common practice offered by fashion brands in Pakistan, where it is regarded as an economical and sustainable option for consumers. Although manual sizing systems are used to sell garments online, virtual size and fit visualisation and recommendation technologies are uncommon in Pakistani fashion interfaces. A comparative assessment of Pakistani fashion brand websites and UK technology-driven fashion interfaces was conducted to highlight the vast potential of virtual size and fit technology. The results indicated that the web 2.0 technology adopted by Pakistani apparel brands has limited features, whereas companies practicing web 3.0 technology provide an interactive online real-store shopping experience, leading to enhanced customer satisfaction and globalisation of brands.
Keywords: E-commerce, mass customization, virtual size and fit, web 3.0 technology.
PDF Downloads: 1151
144 Environmental Management of the Tanning Industry's Supply Chain: An Integration Model from Lean Supply Chain, Green Supply Chain, Cleaner Production and ISO 14001:2004
Authors: N. Clavijo Buriticá, L. M. Correa López, J. R. Sánchez Rodríguez
Abstract:
The environmental impact caused by industries is an issue that, over the last 20 years, has become very important in social, economic and political terms in Colombia. In particular, the tannery process is extremely polluting because of ineffective treatments of and regulations on dumping and atmospheric emissions. Considering this, this investigation proposes a management model based on the integration of Lean Supply Chain, Green Supply Chain, Cleaner Production and ISO 14001:2004 that prioritizes the strategic components of the organizations. As a result, a management model will be obtained that provides a strategic perspective through a systemic approach to the tanning process. This will be achieved through the use of multicriteria decision tools, along with Quality Function Deployment and fuzzy logic. The strategic approach that the management model embraces, through the alignment of Lean Supply Chain, Green Supply Chain, Cleaner Production and ISO 14001:2004, is an integrated perspective that allows a gradual framing of the tactical and operative elements through the correct setting of the information flow, improving the decision-making process. In that way, small and medium enterprises (SMEs) could improve their productivity and competitiveness and, as an added value, minimize their environmental impact. This improvement is expected to be monitored through a dashboard that helps the organization measure its performance along the implementation of the model in its productive process.
Keywords: Integration, environmental impact, management, systemic organization.
PDF Downloads: 2040
143 Information Filtering using Index Word Selection based on the Topics
Authors: Takeru Yokoi, Hidekazu Yanagimoto, Sigeru Omatu
Abstract:
We propose an information filtering system using index word selection from a document set based on the topics included in the set. This method narrows down the particularly characteristic words in a document set, where the topics are obtained by sparse non-negative matrix factorization (NMF). In information filtering, a document is often represented by a vector whose elements correspond to the weights of the index words, and the dimension of the vector grows as the number of documents increases. It is therefore possible that words useless as index words for information filtering are included. In order to address this problem, the dimension needs to be reduced. Our proposal reduces the dimension by selecting index words based on the topics included in the document set, obtained by applying sparse NMF to the document set. The filtering is carried out based on a centroid of the learning document set, which is regarded as the user's interest. The centroid is represented by a document vector whose elements consist of the weights of the selected index words. We confirm the effectiveness of our proposal using the English test collection MEDLINE. Our proposed selection improves recommendation accuracy over previous methods when an appropriate number of index words is selected. In addition, we examined the selected index words and found that our proposal was able to select index words covering some minor topics included in the document set.
Keywords: Information filtering, sparse NMF, index word selection, user profile, chi-squared measure.
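The topic-based selection can be illustrated with a minimal multiplicative-update NMF on a toy term-frequency matrix. This is plain Frobenius-norm NMF without the sparsity constraint the paper uses, and the matrix, topic count and top-word cutoff are all invented:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    # Basic multiplicative-update NMF: V (docs x words) ~= W @ H, all non-negative.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def select_index_words(H, top=2):
    # Keep, for each topic (row of H), the words with the largest loadings.
    chosen = set()
    for row in H:
        chosen |= set(np.argsort(row)[::-1][:top])
    return sorted(chosen)

# Toy term-frequency matrix: 4 documents, 6 words, two obvious topics.
V = np.array([[3, 2, 0, 0, 1, 0],
              [2, 3, 1, 0, 0, 0],
              [0, 0, 3, 2, 0, 1],
              [0, 1, 2, 3, 0, 0]], dtype=float)
W, H = nmf(V, k=2)
idx = select_index_words(H, top=2)
```

In the full system, documents and the user-interest centroid would then be re-expressed over only the selected index-word dimensions before similarity matching.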
PDF Downloads: 1456
142 Dental Students’ Attitude towards Problem-Based Learning before and after Implementing 3D Electronic Dental Models
Authors: Hai Ming Wong, Kuen Wai Ma, Lavender Yu Xin Yang, Yanqi Yang
Abstract:
Objectives: In recent years, the Faculty of Dentistry of the University of Hong Kong has extended the implementation of 3D electronic models (e-models) into problem-based learning (PBL) in the Bachelor of Dental Surgery (BDS) curriculum, aiming at mutual enhancement of PBL teaching quality and the students’ skills in using e-models. This study focuses on the effectiveness of e-models as a tool to enhance the students’ skills and competences in PBL. Methods: Questionnaire surveys were conducted to measure the attitude change of 50 fourth-year BDS students between the beginning and end of the blended PBL tutorials. The response rate of the survey was 100%. Results: The results of this study show the students’ agreement that e-model implementation enhanced their learning experience, and their expectation of more blended PBL courses in the future. The potential of e-models in cultivating students’ self-learning skills reduces their dependence on others, while improving their communication skills when arguing the pros and cons of different treatment options. The students’ independent thinking ability and problem-solving skills are promoted by e-model implementation, resulting in better decision making in treatment planning. Conclusion: It is important for future dental education curriculum planning to cope with the students’ needs and to offer support in the form of software, hardware and facilitators’ assistance for better e-model implementation.
Keywords: Problem-Based learning, curriculum, dental education, 3-D electronic models.
PDF Downloads: 6532
141 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform
Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu
Abstract:
Given a fixed fund, a trade-off must be made in practice between purchasing fewer hosts of higher capability or more hosts of lower capability when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing consists of SQL queries with aggregates, joins, and space-time condition selections executed over massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A metric was proposed to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance empirical formula, typical SQL query tasks.
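The regression-shaped empirical formula and the fixed-fund search can be sketched as follows. The price model, feature grid and ground-truth exponents are all invented for illustration and are not the paper's formula; the point is the workflow: fit a log-linear model of query time on cluster factors, then enumerate purchasable configurations and pick the one with the lowest predicted time.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

def synth_time(cpu, mem, hosts):
    # Hypothetical ground truth used only to generate training measurements.
    return 500.0 * cpu ** -0.5 * mem ** -0.2 * hosts ** -0.8

# "Measured" batch-query times on a small grid of benchmark clusters.
grid = [(c, m, h) for c in (50, 100, 200) for m in (32, 64, 128) for h in (3, 6, 12)]
X = np.log(np.array(grid, dtype=float))
y = np.log([synth_time(*g) * rng.lognormal(0, 0.02) for g in grid])

# Fit the empirical formula in log space:
# log t = b0 + b1*log(cpu) + b2*log(mem) + b3*log(hosts)
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(cpu, mem, hosts):
    return float(np.exp(coef[0] + coef[1] * np.log(cpu)
                        + coef[2] * np.log(mem) + coef[3] * np.log(hosts)))

def host_price(cpu, mem):
    # Invented per-host price model.
    return 2000 + 10 * cpu + 15 * mem

# Pick the best cluster purchasable within a fixed fund.
fund = 60_000
best = min(((c, m, fund // host_price(c, m))
            for c, m in product((50, 100, 200), (32, 64, 128))),
           key=lambda cfg: predict(cfg[0], cfg[1], max(cfg[2], 1)))
```

The fitted exponents recover the generating formula closely, which is what makes the predicted times trustworthy enough to rank candidate configurations.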
PDF Downloads: 837
140 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform
Authors: S. Hutasavi, D. Chen
Abstract:
The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide the accessibility and computational power for those countries to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using daytime and nighttime satellite imagery based on GEE facilities. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), Built-up Index (BUI), and Modified Built-up Index (MBUI). These indices were applied to identify built-up areas in the EEC. The result shows that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, the overall classification accuracy improved from 79% to 90%, and the error in total built-up area decreased from 29% to 0.7%, after incorporating nighttime light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). The results suggest that MBUI with nighttime light imagery is appropriate for built-up area extraction and can be utilized for further studies of the socioeconomic impacts of regional development policy over the EEC region.
Keywords: Built-up area extraction, Google Earth Engine, adaptive thresholding method, rapid mapping.
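The daytime indices named above are simple band ratios; NDBI, for example, contrasts SWIR against NIR reflectance, and BUI is commonly defined as NDBI minus NDVI. A minimal sketch with invented reflectance values follows (MBUI definitions vary across the literature, so only NDBI and BUI are shown, computed with numpy rather than the GEE API):

```python
import numpy as np

def ndbi(swir, nir):
    # Normalized Difference Built-up Index: built-up surfaces reflect more
    # strongly in SWIR than in NIR, so NDBI > 0 suggests built-up cover.
    return (swir - nir) / (swir + nir + 1e-12)

def ndvi(nir, red):
    # Normalized Difference Vegetation Index, used here to suppress vegetation.
    return (nir - red) / (nir + red + 1e-12)

def bui(swir, nir, red):
    # Built-up Index as the difference between NDBI and NDVI.
    return ndbi(swir, nir) - ndvi(nir, red)

# Hypothetical 2x2 surface-reflectance chip: top row urban-like pixels,
# bottom row vegetation-like pixels.
swir = np.array([[0.30, 0.28], [0.12, 0.10]])
nir  = np.array([[0.20, 0.22], [0.40, 0.42]])
red  = np.array([[0.18, 0.17], [0.05, 0.04]])
built_up = ndbi(swir, nir) > 0   # simple fixed threshold for the toy example
```

The study applies an adaptive threshold rather than the fixed zero cutoff used in this toy, and layers VIIRS nighttime radiance on top to reduce commission errors.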
PDF Downloads: 610
139 Accuracy of Small Field of View CBCT in Determining Endodontic Working Length
Authors: N. L. S. Ahmad, Y. L. Thong, P. Nambiar
Abstract:
An in vitro study was carried out to evaluate the feasibility of small field of view (FOV) cone beam computed tomography (CBCT) in determining endodontic working length. The objectives were to determine the accuracy of CBCT in measuring the estimated preoperative working lengths (EPWL), endodontic working lengths (EWL) and file lengths. Access cavities were prepared in 27 molars. For each root canal, the baseline electronic working length was determined using an electronic apex locator (EAL, Raypex 5). The teeth were then divided into overextended, non-modified and underextended groups and the lengths were adjusted accordingly. Imaging and measurements were made using the respective software of the RVG (Kodak RVG 6100) and CBCT units (Kodak 9000 3D). Root apices were then shaved and the apical constrictions viewed under magnification to measure the control working lengths. The paired t-test showed a statistically significant difference between the CBCT EPWL and the control length, but the difference was too small to be clinically significant. From the Bland-Altman analysis, the CBCT method had the widest range of 95% limits of agreement, reflecting its greater potential for error. In measuring file lengths, RVG had a wider window of 95% limits of agreement than CBCT. Conclusions: (1) The clinically insignificant underestimation of the preoperative working length using small FOV CBCT shows that it is acceptable for estimating the preoperative working length. (2) Small FOV CBCT may be used in working length determination, but it is not as accurate as the currently practiced method of using the EAL. (3) It is also more accurate than RVG in measuring file lengths.
Keywords: Accuracy, CBCT, endodontic, measurement.
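The Bland-Altman limits of agreement used in this comparison are straightforward to compute: the bias is the mean of the paired differences, and the 95% limits are bias ± 1.96 standard deviations of those differences. A sketch with invented working-length measurements (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    # Bland-Altman agreement between two measurement methods:
    # bias = mean difference; limits = bias +/- 1.96 * SD of differences.
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical working lengths (mm): CBCT vs. control measurements.
cbct = [20.5, 21.0, 19.8, 22.1, 20.0, 21.4]
ctrl = [20.7, 21.1, 20.0, 22.0, 20.3, 21.6]
bias, lo, hi = bland_altman(cbct, ctrl)
```

A wider (lo, hi) interval for one method, as reported for CBCT, indicates less agreement with the control and hence a greater potential for error.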
PDF Downloads: 1618
138 An Adaptive Memetic Algorithm With Dynamic Population Management for Designing HIV Multidrug Therapies
Authors: Hassan Zarei, Ali Vahidian Kamyad, Sohrab Effati
Abstract:
In this paper, a mathematical model of human immunodeficiency virus (HIV) is utilized and an optimization problem is proposed, with the final goal of implementing an optimal 900-day structured treatment interruption (STI) protocol. Two types of drugs commonly used in highly active antiretroviral therapy (HAART), reverse transcriptase inhibitors (RTI) and protease inhibitors (PI), are considered. In order to solve the proposed optimization problem, an adaptive memetic algorithm with population management (AMAPM) is proposed. The AMAPM uses a distance measure to control the diversity of the population in genotype space, thus preventing stagnation and premature convergence. Moreover, the AMAPM uses a diversity parameter in phenotype space to dynamically set the population size and the number of crossovers during the search process. Three crossover operators diversify the population simultaneously, and the progress of each crossover operator is used to set the number of times it is applied per generation. In order to escape local optima and introduce new search directions toward the global optimum, two local searchers assist the evolutionary process. In contrast to traditional memetic algorithms, the activation of these local searchers is not random but depends on the diversity parameters in both genotype space and phenotype space. The capability of the AMAPM in finding optimal solutions is demonstrated through comparison with three popular metaheuristics.
Keywords: HIV therapy design, memetic algorithms, adaptive algorithms, nonlinear integer programming.
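A stripped-down memetic loop conveys the flavor of the approach: truncation selection with crossover and mutation, plus a hill-climbing local searcher that fires only when genotype-space diversity collapses, rather than at random. The objective, thresholds and parameters are invented and are far simpler than the AMAPM (one crossover, one local searcher, fixed population size):

```python
import numpy as np

def fitness(x):
    # Toy objective standing in for the (negative) HIV treatment cost.
    return -np.sum((x - 0.3) ** 2)

def local_search(x, rng, step=0.05, tries=20):
    # Simple hill climber used as the "meme": keep only improving moves.
    best = x.copy()
    for _ in range(tries):
        cand = np.clip(best + rng.normal(0, step, x.size), 0, 1)
        if fitness(cand) > fitness(best):
            best = cand
    return best

def diversity(pop):
    # Mean distance to the population centroid (genotype-space diversity).
    return np.mean(np.linalg.norm(pop - pop.mean(axis=0), axis=1))

def memetic(dim=5, pop_size=30, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, dim))
    for _ in range(gens):
        fit = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(fit)[::-1][:pop_size // 2]]  # truncation selection
        children = []
        for _ in range(pop_size - parents.shape[0]):
            a, b = parents[rng.integers(parents.shape[0], size=2)]
            mask = rng.random(dim) < 0.5                      # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, 0.02, dim)
            children.append(np.clip(child, 0, 1))
        pop = np.vstack([parents, children])
        # Diversity-triggered activation: refine the best individual only
        # when the population has started to converge.
        if diversity(pop) < 0.15:
            pop[0] = local_search(pop[0], rng)
    fit = np.array([fitness(p) for p in pop])
    return pop[np.argmax(fit)]

best = memetic()
```

The AMAPM additionally adapts the population size and per-operator crossover counts from a phenotype-space diversity parameter, which this sketch deliberately omits.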
PDF Downloads: 1627