Search results for: Lee metric
126 Energy Efficient Clustering with Reliable and Load-Balanced Multipath Routing for Wireless Sensor Networks
Authors: Alamgir Naushad, Ghulam Abbas, Shehzad Ali Shah, Ziaul Haq Abbas
Abstract:
Unlike conventional networks, it is particularly challenging to manage resources efficiently in Wireless Sensor Networks (WSNs) due to their inherent characteristics, such as dynamic network topology and limited bandwidth and battery power. To ensure energy efficiency, this paper presents a routing protocol for WSNs, namely, Enhanced Hybrid Multipath Routing (EHMR), which employs hierarchical clustering and proposes a next-hop selection mechanism between nodes according to a maximum residual energy metric together with a minimum hop count. Load-balancing of data traffic over multiple paths is achieved for a better packet delivery ratio and low latency. Reliability is ensured in terms of a higher data rate and lower end-to-end delay. EHMR also enhances the fast-failure recovery mechanism to recover a failed path. Simulation results demonstrate that EHMR achieves a higher packet delivery ratio, reduced energy consumption per packet delivered, lower end-to-end latency, and a reduced effect of data rate on packet delivery ratio when compared with prominent WSN routing protocols.
Keywords: energy efficiency, load-balancing, hierarchical clustering, multipath routing, wireless sensor networks
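The next-hop rule the abstract describes (minimum hop count, with maximum residual energy as the discriminator) can be sketched as follows. This is a minimal illustration, not the EHMR implementation; the tuple layout and names are assumptions.

```python
# Hypothetical sketch of an EHMR-style next-hop choice: prefer the
# neighbor with the fewest hops to the sink, breaking ties by the
# largest residual energy.
def select_next_hop(neighbors):
    """neighbors: list of (node_id, hop_count, residual_energy)."""
    # sort key: hop count ascending, then residual energy descending
    return min(neighbors, key=lambda n: (n[1], -n[2]))[0]

candidates = [("a", 3, 0.9), ("b", 2, 0.4), ("c", 2, 0.7)]
best = select_next_hop(candidates)   # "c": same hops as "b", more energy
```

Load-balancing over multiple paths would then distribute traffic across several such candidates rather than always picking the single best one.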
Procedia PDF Downloads 84
125 Simplified Linear Regression Model to Quantify the Thermal Resilience of Office Buildings in Three Different Power Outage Day Times
Authors: Nagham Ismail, Djamel Ouahrani
Abstract:
Thermal resilience in the built environment reflects a building's capacity to adapt to extreme climate change. In hot climates, power outages in office buildings pose risks to the health and productivity of workers. It is therefore of interest to quantify the thermal resilience of office buildings by developing a user-friendly simplified model. This simplified model begins with an assessment metric of thermal resilience that measures the duration between the power outage and the point at which the thermal habitability condition is compromised, considering different power interruption times (morning, noon, and afternoon). In this context, energy simulations of an office building are conducted for Qatar's summer weather by varying parameters related to (i) wall characteristics, (ii) glazing characteristics, (iii) load, (iv) orientation, and (v) air leakage. The simulation results are processed using SPSS to derive linear regression equations, aiding stakeholders in evaluating the performance of commercial buildings during different power interruption times. The findings reveal the significant influence of glazing characteristics on thermal resilience, with the morning power outage scenario proving the most detrimental, yielding the shortest duration before thermal habitability is compromised.
Keywords: thermal resilience, thermal envelope, energy modeling, building simulation, thermal comfort, power disruption, extreme weather
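The core of such a simplified model is an ordinary least-squares fit of resilience duration against an envelope parameter. The sketch below shows the mechanics only; the predictor, the data, and the resulting coefficients are illustrative assumptions, not the study's equations.

```python
# One-variable least-squares fit of the kind the simplified model uses:
# hours of habitability after an outage vs. a single envelope parameter.
def fit_line(x, y):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return my - b * mx, b

x = [0.5, 1.0, 1.5, 2.0]   # assumed glazing parameter values
y = [5.0, 4.0, 3.0, 2.0]   # hours until habitability is compromised
a, b = fit_line(x, y)       # here the data lie exactly on y = 6 - 2x
```

A stakeholder would plug a building's parameter into the fitted equation to estimate how long occupants remain in habitable conditions after an outage.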
Procedia PDF Downloads 75
124 Visualization and Performance Measure to Determine Number of Topics in Twitter Data Clustering Using Hybrid Topic Modeling
Authors: Moulana Mohammed
Abstract:
Topic models have been widely used to build clusters of documents for more than a decade, yet choosing the optimal number of topics remains a problem. The main problem is the lack of a stable metric of the quality of the topics obtained during the construction of topic models. Our analysis of previous work shows that most models used for determining the number of topics are non-parametric, with topic quality assessed using perplexity and coherence measures, and we conclude that these are not applicable to this problem. In this paper, we use a parametric method, an extension of the traditional topic model with visual access tendency, to visualize the number of topics (clusters), to complement clustering, and to choose the optimal number of topics based on the results of cluster validity indices. The developed hybrid topic models are demonstrated on different Twitter datasets covering various topics, both for obtaining the optimal number of topics and for measuring the quality of clusters. The experimental results show that the Visual Non-negative Matrix Factorization (VNMF) topic model performs well in determining the optimal number of topics with interactive visualization and in measuring cluster quality with validity indices.
Keywords: interactive visualization, visual non-negative matrix factorization model, optimal number of topics, cluster validity indices, Twitter data clustering
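The decomposition underlying NMF-based topic models such as VNMF can be illustrated in a few lines. This is a toy sketch only, not the paper's model: the multiplicative-update NMF, the matrix, and the choice k=2 are all assumptions for illustration.

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Tiny NMF via Lee-Seung multiplicative updates (illustration only)."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 1e-3   # document-topic weights
    H = rng.random((k, V.shape[1])) + 1e-3   # topic-term weights
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy term-document matrix with two clearly separated "topics"
V = np.array([[2., 1., 0., 0.],
              [4., 2., 0., 0.],
              [0., 0., 1., 3.],
              [0., 0., 2., 6.]])
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H)   # low reconstruction error: k=2 fits
```

Choosing the number of topics then amounts to comparing such fits (or cluster validity indices computed from W) across candidate values of k.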
Procedia PDF Downloads 134
123 Proposed Framework Based on Classification of Vertical Handover Decision Strategies in Heterogeneous Wireless Networks
Authors: Shidrokh Goudarzi, Wan Haslina Hassan
Abstract:
Heterogeneous wireless networks are converging towards an all-IP network as part of the so-called next-generation network. In this paradigm, different access technologies need to be interconnected; thus, vertical handovers (vertical handoffs) are necessary for seamless mobility. In this paper, we review existing vertical handover decision-making mechanisms that aim to provide ubiquitous connectivity to mobile users. To offer a systematic comparison, we categorize these vertical handover measurement and decision structures based on their respective methodology and parameters. We then analyze several vertical handover approaches in the literature and compare their advantages and weaknesses. The paper compares the algorithms based on their network selection methods, the complexity of the technologies used, and their efficiency, in order to introduce our vertical handover decision framework. We find that vertical handovers in heterogeneous wireless networks suffer from the lack of a standard, efficient method for satisfying both user and network quality-of-service requirements at different levels, including architecture, decision-making, and protocols. Also, the combination of network terminal consolidation, cross-layer information, multi-packet casting, and an intelligent network selection algorithm appears to be an optimal solution for achieving seamless service continuity.
Keywords: heterogeneous wireless networks, vertical handovers, vertical handover metric, decision-making algorithms
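The simplest family of decision strategies such surveys compare is the weighted-sum network-selection score. The sketch below is a hedged illustration: the attributes, weights, and normalized values are assumptions, not taken from any surveyed algorithm.

```python
# Weighted-sum vertical handover decision sketch: each candidate network
# gets a score from normalized attributes; the handover target is the
# highest-scoring network. Weights are illustrative assumptions.
def score(net, weights):
    """Linear utility of one network's normalized attributes."""
    return sum(weights[k] * net[k] for k in weights)

weights = {"bandwidth": 0.5, "rssi": 0.3, "cost": -0.2}  # cost penalized
nets = {
    "wlan": {"bandwidth": 0.9, "rssi": 0.6, "cost": 0.2},
    "lte":  {"bandwidth": 0.6, "rssi": 0.8, "cost": 0.7},
}
best = max(nets, key=lambda n: score(nets[n], weights))
```

More elaborate strategies in the literature replace this linear utility with fuzzy logic, game theory, or machine-learned decision functions, but the comparison structure is the same.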
Procedia PDF Downloads 393
122 Non-Interferometric Quantitative Phase Imaging of Yeast Cells
Authors: P. Praveen Kumar, P. Vimal Prabhu, Renu John
Abstract:
In biology, most microscopy specimens, in particular living cells, are transparent. In cell imaging, it is hard to image a transparent cell whose refractive index differs only slightly from that of the surrounding medium. Various techniques, such as staining, contrast agents, and markers, have been applied in the past to create contrast. Many staining agents and markers are not applicable to live-cell imaging because they are toxic. In this paper, we report theoretical and experimental results from quantitative phase imaging of yeast cells with a commercial bright-field microscope. We reconstruct the phase of cells non-interferometrically based on the transport of intensity equation (TIE). This technique estimates the axial derivative from positive through-focus intensity measurements and allows phase imaging using a regular microscope with white-light illumination. We demonstrate nanometric depth sensitivity in imaging live yeast cells using this technique. Experimental results are shown demonstrating the capability of the technique for 3-D volume estimation of living cells. This real-time imaging technique is highly promising for real-time digital pathology applications, screening of pathogens, and staging of diseases such as malaria, as it does not need any pre-processing of samples.
Keywords: axial derivative, non-interferometric imaging, quantitative phase imaging, transport of intensity equation
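The phase retrieval described above rests on the transport of intensity equation. A standard paraxial statement, with the axial derivative estimated by central finite differences from the through-focus stack, is:

```latex
% Transport of intensity equation (TIE), paraxial regime:
% k = 2\pi/\lambda is the wavenumber, I the intensity, \varphi the phase,
% \nabla_\perp the gradient in the transverse (x, y) plane.
-k \,\frac{\partial I(x, y, z)}{\partial z}
  = \nabla_{\perp} \cdot \bigl( I(x, y, z)\, \nabla_{\perp}\varphi(x, y, z) \bigr)

% Axial derivative estimated from two defocused images:
\frac{\partial I}{\partial z}
  \approx \frac{I(x, y, z + \Delta z) - I(x, y, z - \Delta z)}{2\,\Delta z}
```

Solving this Poisson-type equation for the phase from two or three defocused intensity images is what lets a regular bright-field microscope act as a quantitative phase imager.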
Procedia PDF Downloads 384
121 Identifying Knowledge Gaps in Incorporating Toxicity of Particulate Matter Constituents for Developing Regulatory Limits on Particulate Matter
Authors: Ananya Das, Arun Kumar, Gazala Habib, Vivekanandan Perumal
Abstract:
Regulatory bodies have proposed limits on Particulate Matter (PM) concentration in air; however, these limits do not explicitly incorporate the toxic effects of the constituents of PM. This study aimed to provide a structured approach to incorporating the toxic effects of components when developing regulatory limits on PM. The four-step human health risk assessment framework consists of: (1) hazard identification (parameters: PM and its constituents and their associated toxic effects on health), (2) exposure assessment (parameters: concentrations of PM and constituents; information on the size and shape of PM; fate and transport of PM and constituents in the respiratory system), (3) dose-response assessment (parameters: reference dose or target toxicity dose of PM and its constituents), and (4) risk estimation (metric: hazard quotient and/or lifetime incremental risk of cancer, as applicable). The parameters required at every step were obtained from the literature. Using this information, an attempt was made to determine limits on PM using component-specific information. An example calculation was conducted for exposures to PM2.5 and its metal constituents in the Indian ambient environment to determine limiting PM values. The identified data gaps were: (1) concentrations of PM and its constituents and their relationship with sampling regions, and (2) the relationship of the toxicity of PM with its components.
Keywords: air, component-specific toxicity, human health risks, particulate matter
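The risk-estimation step (step 4) can be made concrete with a hazard-quotient calculation. The sketch below is a hedged illustration: all numbers are invented exposure parameters, not regulatory values and not the study's data.

```python
# Hazard-quotient screening for one PM constituent (illustrative only).
def hazard_quotient(daily_intake, reference_dose):
    """HQ = chronic daily intake / reference dose (both in mg/kg/day);
    HQ > 1 flags a potential non-cancer health concern."""
    return daily_intake / reference_dose

conc = 0.00005       # constituent concentration in air, mg/m3 (assumed)
inhalation = 20.0    # inhalation rate, m3/day (assumed)
body_weight = 70.0   # kg (assumed)

# Inhalation intake: concentration * inhalation rate / body weight
intake = conc * inhalation / body_weight
hq = hazard_quotient(intake, reference_dose=0.0001)  # assumed RfD
```

Working backwards from HQ = 1 over all constituents is, in outline, how a component-aware limit on total PM could be derived.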
Procedia PDF Downloads 311
120 Enhancement of Underwater Haze Image with Edge Reveal Using Pixel Normalization
Authors: M. Dhana Lakshmi, S. Sakthivel Murugan
Abstract:
As light passes from source to observer in the water medium, it is scattered by suspended particulate matter. This scattering plagues the captured images with non-uniform illumination, blurred details, halo artefacts, weak edges, etc. To overcome this, pixel normalization with an Amended Unsharp Mask (AUM) filter is proposed to enhance the degraded image. To validate the robustness of the proposed technique irrespective of atmospheric light, the considered datasets were collected at two locations. For those images, the maximum and minimum pixel intensity values are computed and normalized; then the AUM filter is applied to strengthen the blurred edges. Finally, the enhanced image is obtained with good illumination and contrast. Thus, the proposed technique removes the effect of scattering, called de-hazing, and restores the perceptual information with enhanced edge detail. Both qualitative and quantitative analyses are performed using the standard no-reference metrics underwater image sharpness measure (UISM) and underwater image quality measure (UIQM), which assess color, sharpness, and contrast for the images from both locations. The proposed technique shows overwhelming performance compared to deep-learning-based enhancement networks and traditional techniques in an adaptive manner.
Keywords: underwater drone imagery, pixel normalization, thresholding, masking, unsharp mask filter
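The two-stage pipeline (min/max pixel normalization, then an unsharp mask) can be sketched as below. This is a plain unsharp mask for illustration, not the paper's Amended Unsharp Mask; the 3x3 box blur and test image are assumptions.

```python
import numpy as np

def normalize(img):
    """Stretch pixel intensities to [0, 1] using the min/max values."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the difference from a 3x3 box blur."""
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

hazy = np.array([[0.4, 0.4, 0.6],
                 [0.4, 0.5, 0.6],
                 [0.5, 0.6, 0.6]])   # low-contrast "hazy" patch
out = unsharp_mask(normalize(hazy))
```

Normalization spreads the compressed haze histogram over the full range; the unsharp step then boosts exactly the weak edges that scattering suppressed.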
Procedia PDF Downloads 194
119 Optimization of Topology-Aware Job Allocation on a High-Performance Computing Cluster by Neural Simulated Annealing
Authors: Zekang Lan, Yan Xu, Yingkun Huang, Dian Huang, Shengzhong Feng
Abstract:
Jobs on high-performance computing (HPC) clusters can suffer significant performance degradation due to inter-job network interference. The topology-aware job allocation problem (TJAP) decides how to dedicate nodes to specific applications to mitigate this interference. In this paper, we study the window-based TJAP on a fat-tree network, aiming to minimize the communication-hop cost, a defined inter-job interference metric. The window-based approach to scheduling repeats periodically, taking the jobs in the queue and solving an assignment problem that maps jobs to the available nodes. Two allocation strategies are considered: the static continuity assignment strategy (SCAS) and the dynamic continuity assignment strategy (DCAS). For the SCAS, a 0-1 integer program is developed. For the DCAS, we propose an approach called neural simulated annealing (NSA), an extension of simulated annealing (SA) that learns a repair operator and employs it in a guided heuristic search. The efficacy of NSA is demonstrated in a computational study against SA and SCIP. The results of the numerical experiments indicate that both the model and the algorithm proposed in this paper are effective.
Keywords: high-performance computing, job allocation, neural simulated annealing, topology-aware
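Plain simulated annealing for a job-to-node assignment, with communication-hop cost as the objective, can be sketched as follows. This is the SA baseline only, not NSA: there is no learned repair operator here, and the hop matrix, cooling schedule, and problem size are toy assumptions.

```python
import math, random

random.seed(0)

def cost(assign, hops):
    """Total pairwise communication-hop cost of one assignment."""
    return sum(hops[assign[i]][assign[j]]
               for i in range(len(assign)) for j in range(i + 1, len(assign)))

hops = [[0, 1, 2, 3],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [3, 2, 1, 0]]           # hop distance between 4 nodes
assign = [0, 3]                  # two jobs start on distant nodes (cost 3)
best, t = list(assign), 2.0
for _ in range(500):
    cand = list(assign)
    cand[random.randrange(2)] = random.randrange(4)  # move one job
    if len(set(cand)) < 2:       # one node per job
        continue
    d = cost(cand, hops) - cost(assign, hops)
    # accept improvements always, worse moves with Boltzmann probability
    if d < 0 or random.random() < math.exp(-d / t):
        assign = cand
        if cost(assign, hops) < cost(best, hops):
            best = list(assign)
    t *= 0.99                    # geometric cooling
```

NSA, as described, would replace the blind random move with a learned repair operator that proposes promising reassignments.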
Procedia PDF Downloads 116
118 Phosphate Sludge Ceramics: Effects of Firing Cycle Parameters on Technological Properties and Ceramic Suitability
Authors: Mohamed Loutou, Mohamed Hajjaji, Mohamed Ait Babram, Mohammed Mansori, Rachid Hakkou, Claude Favotto
Abstract:
More than 26.4 million tons of phosphates were produced by the phosphate industries in Morocco in 2010, generating huge amounts of sludge by flocculation during ore beneficiation. The sludge is stored at the end of the process in open-air ponds. Its accumulation and storage may have impacts at several scales, such as on groundwater and human health. For this reason, an efficient way to use it in the field of ceramics is proposed. The as-received sludge and a clay-rich sediment were studied in chemical, mineralogical, and microstructural terms using various analytical methods. Several formulations were prepared by mixing the sludge with the binder and shaping the mixtures into granules. After being dried at 105 °C, the samples were heated in the range of 900-1200 °C. In addition to the ceramic properties (firing shrinkage, water absorption, total porosity, and compressive strength), the microstructure was investigated using X-ray diffraction, scanning electron microscopy, and Fourier transform infrared spectroscopy. The relations between the properties and the operating factors were formulated using the design of experiments (DOE). Gehlenite was the only phase neo-formed in the sintered samples. SEM micrographs revealed the presence of nanometric stains. Based on the RSM results, all factors had positive effects on firing shrinkage, compressive strength, and total porosity; however, they had opposite effects on density and water absorption.
Keywords: phosphate sludge, clay, ceramic properties, granule
Procedia PDF Downloads 505
117 Systematic Examination of Methods Supporting the Social Innovation Process
Authors: Mariann Veresne Somosi, Zoltan Nagy, Krisztina Varga
Abstract:
Innovation is the key element of economic development and a key factor in social processes. Technical innovations can be identified as prerequisites and causes of social change and cannot be created without the renewal of society. The study of social innovation can be characterised as one of the significant research areas of our day. The study's aim is to identify the process of social innovation, which can be defined by input, transformation, and output factors. This approach divides the social innovation process into three parts: situation analysis, implementation, and follow-up. The methods associated with each stage of the process are illustrated along the chronological line of social innovation. In this study, we have sought to present methodologies that support long- and short-term decision-making, are easy to apply, have complementary content, and are well visualised for different user groups. When applying the methods, the reference objects differ: county, district, settlement, or specific organisation. The solution proposed by the study supports the development of a methodological combination adapted to different situations. Having reviewed metric and conceptualisation issues, we develop a methodological combination, along with a change-management logic, suitable for structured support of the generation of social innovation in the case of a locality or a specific organisation. In addition to a theoretical summary, in the second part of the study we give a non-exhaustive picture of two counties located in the north-eastern part of Hungary through specific analyses and case descriptions.
Keywords: factors of social innovation, methodological combination, social innovation process, supporting decision-making
Procedia PDF Downloads 155
116 Synthesis of Amine Functionalized MOF-74 for Carbon Dioxide Capture
Authors: Ghulam Murshid, Samil Ullah
Abstract:
Scientific studies suggest that the increased greenhouse gas concentration in the atmosphere, particularly of carbon dioxide (CO2), is one of the major factors in global warming. The concentration of CO2 in our atmosphere has crossed the milestone level of 400 parts per million (ppm), breaking the record of human history. A report by 49 researchers from 10 countries said, 'Global CO2 emissions from burning fossil fuels will rise to a record 36 billion metric tons (39.683 billion tons) this year.' The main contributors of CO2 to the atmosphere are fossil fuel use, the transportation sector, and power generation plants. Available technologies, which include chemical absorption, membrane separation, cryogenic separation, and adsorption, are in practice around the globe. Adsorption of CO2 using metal-organic frameworks (MOFs) is attracting the interest of researchers worldwide. In the current work, MOF-74, as well as MOF-74 modified with a sterically hindered amine (AMP), was synthesized and characterized. The modification was carried out with a sterically hindered amine in order to study the effect on its adsorption capacity. The resulting samples were characterized using Fourier Transform Infrared Spectroscopy (FTIR), Field Emission Scanning Electron Microscopy (FESEM), Thermal Gravimetric Analysis (TGA), and Brunauer-Emmett-Teller (BET) measurements. The FTIR results clearly confirmed the formation of the MOF-74 structure and the presence of AMP. FESEM and TEM revealed the topography and morphology of both MOF-74 and the amine-modified MOF. The BET isotherm results show that, due to the addition of AMP to the structure, a significant enhancement of CO2 adsorption was observed.
Keywords: adsorbents, amine, CO2, global warming
Procedia PDF Downloads 422
115 Spino-Pelvic Alignment with SpineCor Brace Use in Adolescent Idiopathic Scoliosis
Authors: Reham H. Diab, Amira A. A. Abdallah, Eman A. Embaby
Abstract:
Background: The effectiveness of bracing in preventing spino-pelvic alignment deterioration in idiopathic scoliosis has been extensively studied, especially in the frontal plane. Yet there is a lack of knowledge regarding the effect of soft braces on spino-pelvic alignment in the sagittal plane. Achieving harmonious sagittal-plane spino-pelvic balance is critical for the preservation of physiologic posture and spinal health. Purpose: This study examined the kyphotic angle, lordotic angle, and pelvic inclination in the sagittal plane, and trunk imbalance in the frontal plane, before and after a six-month rehabilitation period. Methods: Nineteen patients with idiopathic scoliosis participated in the study. They were divided into two groups: experimental and control. The experimental group (group I) used the SpineCor brace in addition to a rehabilitation exercise program, while the control group (group II) had the exercise program only. The mean ±SD age, weight, and height were 16.89±2.15 vs. 15.3±2.5 years, 59.78±6.85 vs. 62.5±8.33 kg, and 162.78±5.76 vs. 159±5.72 cm for group I vs. group II. Data were collected using the Formetric II system. Results: Mixed-design MANOVA showed significant (p < 0.05) decreases in all tested variables after the six-month period compared with baseline in both groups. Moreover, there was a significant decrease in the kyphotic angle in group I compared with group II after the six-month period. Interpretation and conclusion: The SpineCor brace is beneficial in reducing spino-pelvic alignment deterioration in both the sagittal and frontal planes.
Keywords: adolescent idiopathic scoliosis, SpineCor, spino-pelvic alignment, biomechanics
Procedia PDF Downloads 340
114 Tomato-Weed Classification by RetinaNet One-Step Neural Network
Authors: Dionisio Andujar, Juan lópez-Correa, Hugo Moreno, Angela Ri
Abstract:
The increased number of weeds in tomato crops greatly lowers yields. Weed identification by means of machine learning is important for carrying out site-specific control. Recent advances in computer vision provide a powerful tool for this problem. The analysis of RGB (Red, Green, Blue) images through artificial neural networks has developed rapidly in the past few years, providing new methods for weed classification. The development of algorithms for crop and weed species classification aims at a real-time classification system using object detection algorithms based on Convolutional Neural Networks. The study site was located in commercial corn fields. The classification system has been tested; the procedure can detect and classify weed seedlings in tomato fields. The input to the neural network was a set of 10,000 RGB images with a natural infestation of Cyperus rotundus L., Echinochloa crus-galli L., Setaria italica L., Portulaca oleracea L., and Solanum nigrum L. The validation process was done with a random selection of RGB images containing the aforementioned species. The mean average precision (mAP) was established as the metric for object detection. The results showed agreement higher than 95%. The system will provide the input for an online spraying system. Thus, this work plays an important role in site-specific weed management by reducing herbicide use in a single step.
Keywords: deep learning, object detection, CNN, tomato, weeds
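The overlap measure underlying mAP scoring for detectors like RetinaNet is intersection over union (IoU) between a predicted box and a ground-truth box. The boxes below are illustrative assumptions.

```python
# Minimal IoU computation for axis-aligned boxes (x1, y1, x2, y2).
# A detection counts as a true positive when IoU >= a chosen threshold
# (commonly 0.5), which feeds the per-class AP and then the mAP.
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

pred = (0, 0, 10, 10)     # hypothetical predicted weed box
truth = (5, 0, 15, 10)    # hypothetical annotated box
score = iou(pred, truth)  # overlap fraction, here 1/3
```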
Procedia PDF Downloads 103
113 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform
Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu
Abstract:
Given a fixed fund, purchasing fewer hosts of higher capability or, conversely, more hosts of lower capability is an unavoidable trade-off in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing involves SQL queries with aggregates, joins, and space-time condition selections executed on massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem: HDFS+Hive+Spark. A suitable metric was defined to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
Keywords: Hadoop platform planning, optimal cluster scheme at fixed fund, performance predicting formula, typical SQL query tasks
Procedia PDF Downloads 232
112 Improving Similarity Search Using Clustered Data
Authors: Deokho Kim, Wonwoo Lee, Jaewoong Lee, Teresa Ng, Gun-Ill Lee, Jiwon Jeong
Abstract:
This paper presents a method for improving object search accuracy using a deep learning model. A major limitation in providing accurate similarity with deep learning is the huge amount of data required to train pairwise similarity scores (metrics), which is impractical to collect. Thus, similarity scores are usually trained with a relatively small dataset from a different domain, limiting the accuracy of the similarity measure. For this reason, this paper proposes a deep learning model that can be trained with a significantly smaller amount of data: clustered data, in which each cluster contains a set of visually similar images. To measure similarity distance with the proposed method, the visual features of two images are extracted from intermediate layers of a convolutional neural network with various pooling methods, and the network is trained with pairwise similarity scores defined as zero for images in the same cluster. The proposed method outperforms the state-of-the-art object similarity scoring techniques in evaluations for finding exact items. The proposed method achieves 86.5% accuracy, compared to 59.9% for the state-of-the-art technique. That is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrieved images are likely to be similar products. Therefore, the proposed method can reduce the amount of training data by an order of magnitude while providing a reliable similarity metric.
Keywords: visual search, deep learning, convolutional neural network, machine learning
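The key data trick, deriving pairwise training targets from cluster membership instead of hand-labeled similarity scores, can be sketched as follows. The cluster names and image IDs are illustrative assumptions.

```python
from itertools import combinations

# Derive pairwise training labels from clustered data: distance target 0
# for two images in the same cluster, 1 for images from different clusters.
def pair_labels(clusters):
    """clusters: {cluster_id: [image_id, ...]} -> [(a, b, target), ...]."""
    images = [(img, cid) for cid, imgs in clusters.items() for img in imgs]
    return [(a, b, 0 if ca == cb else 1)
            for (a, ca), (b, cb) in combinations(images, 2)]

pairs = pair_labels({"mugs": ["m1", "m2"], "shoes": ["s1"]})
# n clustered images yield O(n^2) labeled pairs: the leverage that lets
# a small clustered dataset replace a large pairwise-annotated one.
```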
Procedia PDF Downloads 215
111 Back to Basics: Redefining Quality Measurement for Hybrid Software Development Organizations
Authors: Satya Pradhan, Venky Nanniyur
Abstract:
As the software industry transitions from a license-based model to a subscription-based Software-as-a-Service (SaaS) model, many software development groups are using a hybrid development model that incorporates Agile and Waterfall methodologies in different parts of the organization. The traditional metrics used for measuring software quality in Waterfall or Agile paradigms do not apply to this new hybrid methodology. In addition, to respond to higher quality demands from customers and to gain a competitive advantage in the market, many companies are starting to prioritize quality as a strategic differentiator. As a result, quality metrics are included in decision-making activities all the way up to the executive level, including board of director reviews. This paper presents key challenges associated with measuring software quality in organizations using the hybrid development model. We introduce a framework called Prevention-Inspection-Evaluation-Removal (PIER) to provide a comprehensive metric definition for hybrid organizations. The framework includes quality measurements, quality enforcement, and quality decision points at different organizational levels and project milestones. The metrics framework defined in this paper is being used for all Cisco Systems products deployed on customer premises. We present several field metrics for one product portfolio (enterprise networking) to show the effectiveness of the proposed measurement system. As the results show, this metrics framework has significantly improved in-process defect management as well as field quality.
Keywords: quality management system, quality metrics framework, quality metrics, agile, waterfall, hybrid development system
Procedia PDF Downloads 174
110 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective
Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli
Abstract:
In this paper, an energy-aware method is presented that integrates energy-efficient relay-augmented techniques for correlated data routing with the goal of optimizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization and sensor network energy consumption. A unique routing metric has been developed to enable throughput maximization while minimizing energy consumption by utilizing data correlation patterns. The paper introduces a game-theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing correlation-aware routing under energy limitations. By creating an algorithm that blends energy-aware route selection strategies with best-response dynamics, this framework provides a local solution. The suggested technique considerably raises the bottleneck throughput for each source in the network while reducing energy consumption by choosing routes that strike a compromise between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method. The outcomes demonstrate the significant decrease in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique, in addition to confirming the anticipated throughput benefits.
Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks
Procedia PDF Downloads 82
109 Measuring Corporate Brand Loyalties in Business Markets: A Case for Caution
Authors: Niklas Bondesson
Abstract:
Purpose: This paper examines how different facets of attitudinal brand loyalty are determined by different brand image elements in business markets. Design/Methodology/Approach: Statistical analysis is applied to data from a web survey covering 226 professional packaging buyers in eight countries. Findings: The results reveal that different brand loyalty facets have different antecedents. Affective brand loyalties (loyalty 'feelings') are mainly driven by customer associations with service relationships, whereas customers' loyalty intentions (to purchase and recommend a brand) are triggered by associations with the general reputation of the company. The findings also indicate that willingness to pay a price premium is a distinct form of loyalty, with unique determinants. Research implications: Theoretically, the paper suggests that corporate B2B brand loyalty needs to be conceptualised with more refinement than in extant B2B branding work. Methodologically, the paper highlights that single-item approaches can be fruitful when measuring B2B brand loyalty, and that multi-item scales can conceal important nuances in understanding why customers are loyal. Practical implications: A loyalty 'silver metric' is an attractive idea, but this study indicates that firms that rely too heavily on one single type of brand loyalty risk missing important building blocks. Originality/Value/Contribution: The major contribution is a more multi-faceted conceptualisation, and measurement, of corporate B2B brand loyalty and its brand image determinants than extant work has provided.
Keywords: brand equity, business-to-business branding, industrial marketing, buying behaviour
Procedia PDF Downloads 413
108 Deep Learning Approach for Chronic Kidney Disease Complications
Authors: Mario Isaza-Ruget, Claudia C. Colmenares-Mejia, Nancy Yomayusa, Camilo A. González, Andres Cely, Jossie Murcia
Abstract:
Quantification of the risks associated with the development of complications from chronic kidney disease (CKD) through accurate survival models can help with patient management. A retrospective cohort study was carried out that included patients diagnosed with CKD in a primary care program and followed up between 2013 and 2018. Time-dependent and static covariates associated with demographic, clinical, and laboratory factors were included. Deep Learning (DL) survival analyses were developed for three CKD outcomes: CKD stage progression, a >25% decrease in estimated glomerular filtration rate (eGFR), and renal replacement therapy (RRT). The models were evaluated and compared with Random Survival Forest (RSF) based on the concordance index (C-index) metric. 2,143 patients were included, and two models were developed for each outcome. The Deep Neural Network (DNN) model reported a C-index of 0.9867 for CKD stage progression, 0.9905 for reduction in eGFR, and 0.9867 for RRT. The RSF model reached a C-index of 0.6650 for CKD stage progression, 0.6759 for decreased eGFR, and 0.8926 for RRT. DNN models applied in a survival analysis context, with longitudinal covariates considered at the start of follow-up, can predict renal stage progression, a significant decrease in eGFR, and RRT. The success of these survival models lies in the appropriate definition of survival times and the analysis of covariates, especially those that vary over time.
Keywords: artificial intelligence, chronic kidney disease, deep neural networks, survival analysis
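The C-index used to compare the DNN and RSF models is the fraction of comparable patient pairs whose predicted risks order the observed event times correctly. A minimal Harrell-style sketch for right-censored data is below; the times, event flags, and risk scores are illustrative assumptions.

```python
from itertools import combinations

# Harrell's concordance index: among comparable pairs (the earlier time
# is an observed event, not a censoring), count pairs where the higher
# predicted risk corresponds to the earlier event; ties score 0.5.
def c_index(times, events, risks):
    num = den = 0.0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue
        first = i if times[i] < times[j] else j
        if not events[first]:          # earlier time censored: not comparable
            continue
        den += 1
        other = j if first == i else i
        if risks[first] > risks[other]:
            num += 1
        elif risks[first] == risks[other]:
            num += 0.5
    return num / den

# Perfectly ordered toy data -> C-index of 1.0
c = c_index(times=[2, 4, 6, 8], events=[1, 1, 0, 1], risks=[0.9, 0.7, 0.5, 0.1])
```

A C-index of 0.5 corresponds to random predictions, which is why values near 0.99 versus 0.67 mark a large gap between the two model families.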
Procedia PDF Downloads 134107 Spatial Interpolation of Aerosol Optical Depth Pollution: Comparison of Methods for the Development of Aerosol Distribution
Authors: Sahabeh Safarpour, Khiruddin Abdullah, Hwee San Lim, Mohsen Dadras
Abstract:
Air pollution is a growing problem arising from domestic heating, the high density of vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air quality parameters are important due to their health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal AOD values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA’s Terra satellite, for the 10-year period 2000-2010, were used in the present study to test seven different spatial interpolation methods. The accuracy of the estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as the root mean square error (RMSE) and the correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, Radial Basis Functions (RBFs) yielded the best results for spring, summer, and winter, and ordinary kriging yielded the best results for fall.Keywords: aerosol optical depth, MODIS, spatial interpolation techniques, Radial Basis Functions
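RBF interpolation of the kind that performed best here fits a weighted sum of kernels centred on the observation sites, with the weights obtained by solving a linear system. The following is a minimal NumPy sketch with a Gaussian kernel and an RMSE check; the station coordinates and AOD values are hypothetical, and production work would more likely use a library routine such as SciPy's RBFInterpolator:

```python
import numpy as np

def rbf_interpolate(xy_obs, z_obs, xy_new, eps=1.0):
    """Gaussian radial basis function interpolation (a minimal sketch).

    Solves A w = z, where A[i, j] = exp(-(eps * r_ij)^2) over the
    observation sites, then evaluates the fitted surface at xy_new.
    """
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    weights = np.linalg.solve(np.exp(-(eps * d) ** 2), z_obs)
    d_new = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    return np.exp(-(eps * d_new) ** 2) @ weights

def rmse(pred, obs):
    """Root mean square error, the validation statistic used above."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

# hypothetical seasonal AOD values at five station coordinates (degrees)
sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
aod   = np.array([0.30, 0.35, 0.28, 0.40, 0.33])

# an exact interpolant reproduces the observations at the sites themselves
fitted = rbf_interpolate(sites, aod, sites)
print(rmse(fitted, aod))  # ~0 at the observation sites
```

In a validation setting, the RMSE would instead be computed at held-out stations, as in the cross-validation comparison of the seven methods.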
Procedia PDF Downloads 407106 Description of Anthracotheriidae Remains from the Middle and Upper Siwaliks of Punjab, Pakistan
Authors: Abdul M. Khan, Ayesha Iqbal
Abstract:
In this paper, new dental remains of Merycopotamus (Anthracotheriidae) are described. The specimens were collected by the authors during fieldwork at the well-dated fossiliferous locality 'Hasnot', belonging to the Dhok Pathan Formation, and at 'Tatrot' village, belonging to the Tatrot Formation of the Potwar Plateau, Pakistan. The stratigraphic age of the Neogene deposits around Hasnot is 7 - 5 Ma, whereas the age of the Tatrot Formation is 3.4 - 2.6 Ma. Comparison of the newly discovered material with previous records of the genus Merycopotamus from the Siwaliks led us to identify all three reported species of this genus from the Siwaliks of Pakistan. As the sample comprises only dental remains, the identification of the specimens is based solely on morpho-metric analysis. The occlusal pattern of the upper molar in Merycopotamus dissimilis differs from that of Merycopotamus medioximus and Merycopotamus nanus in having a fully divided mesostyle forming two prominent cusps, while the mesostyle in M. medioximus is partly divided and bears small lateral crests. A continuous loop-like mesostyle is present in Merycopotamus nanus. The entoconid fold is present on the lower molars of Merycopotamus dissimilis, whereas it is absent in Merycopotamus medioximus and Merycopotamus nanus. The hypoconulid in M. dissimilis is relatively simple, but a loop-like hypoconulid is present in M. medioximus and M. nanus. The present findings are in line with previous records of the genus Merycopotamus, with M. nanus, M. medioximus and M. dissimilis in the Late Miocene - Early Pliocene Dhok Pathan Formation, and M. dissimilis in the Late Pliocene Tatrot sediments of Pakistan.Keywords: Dhok Pathan, late miocene, merycopotamus, pliocene, Tatrot
Procedia PDF Downloads 242105 Quantifying Meaning in Biological Systems
Authors: Richard L. Summers
Abstract:
The advanced computational analysis of biological systems is becoming increasingly dependent upon an understanding of the information-theoretic structure of the materials, energy, and interactive processes that comprise those systems. The stability and survival of these living systems are fundamentally contingent upon their ability to acquire and process the meaning of information concerning the physical state of their biological continuum (biocontinuum). The drive for adaptive system reconciliation of a divergence from steady state within this biocontinuum can be described by an information metric-based formulation of the process for actionable knowledge acquisition, one that incorporates the axiomatic inference of Kullback-Leibler information minimization driven by survival replicator dynamics. If the mathematical expression of this process is the Lagrangian integrand for any change within the biocontinuum, then it can also be considered an action functional for the living system. In the direct method of Lyapunov, such a summarizing mathematical formulation of global system behavior, based on the driving forces of energy currents and constraints within the system, can serve as a platform for the analysis of stability. As the system evolves in time in response to biocontinuum perturbations, the summarizing function conveys information about its overall stability. This stability information portends survival and therefore has absolute existential meaning for the living system. The first derivative of the Lyapunov energy information function will have a negative trajectory toward the system's steady state if the driving force is dissipating; by contrast, system instability leading to system dissolution will have a positive trajectory. The direction and magnitude of the trajectory vector then serve as a quantifiable signature of the meaning associated with the living system’s stability information, homeostasis, and survival potential.Keywords: meaning, information, Lyapunov, living systems
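The Kullback-Leibler quantity at the core of this formulation measures how far a perturbed state has diverged from the steady-state distribution, vanishing only when the two coincide. A minimal sketch for discrete distributions follows; the example distributions are invented purely for illustration:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    probability distributions; terms with p_i = 0 contribute zero."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

steady    = [0.5, 0.3, 0.2]    # hypothetical steady-state distribution
perturbed = [0.4, 0.35, 0.25]  # state after a perturbation

# divergence is strictly positive away from steady state, zero at it
print(kl_divergence(perturbed, steady))
print(kl_divergence(steady, steady))  # 0.0
```

Minimizing this divergence over time, as the replicator dynamics drive the system back toward steady state, is the sense in which the formulation above describes adaptive reconciliation.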
Procedia PDF Downloads 131104 A Comparative Study Mechanical Properties of Polytetrafluoroethylene Materials Synthesized by Non-Conventional and Conventional Techniques
Authors: H. Lahlali F. El Haouzi, A.M.Al-Baradi, I. El Aboudi, M. El Azhari, A. Mdarhri
Abstract:
Polytetrafluoroethylene (PTFE) is a high-performance thermoplastic polymer with exceptional physical and chemical properties, such as a high melting temperature, high thermal stability, and very good chemical resistance. Nevertheless, manufacturing PTFE is problematic due to its high melt viscosity (10¹² Pa·s). In practice, it is by now well established that this property presents a serious problem when classical methods, particularly hot pressing and high-temperature extrusion, are used to synthesize dense PTFE materials. In this framework, we use here a new process, namely spark plasma sintering (SPS), to elaborate PTFE samples from micrometric particle powder. It consists in applying electric current and pressure simultaneously and directly to the sample powder. By controlling the processing parameters of this technique, a series of PTFE samples is easily obtained in remarkably short times, as reported in an earlier work. Our central goal in the present study is to understand how the non-conventional SPS process affects the mechanical properties at room temperature. To this end, a second, commercially produced series of PTFE samples synthesized by the extrusion method is investigated. The tensile mechanical properties are found to be superior for the first (SPS) set of samples. However, this trend is not observed in the results obtained from compression testing. The observed macro-behaviors are correlated with physical properties of the two series of samples, such as their crystallinity and density. Upon close examination of these properties, we believe the SPS technique can be seen as a promising way to elaborate polymers of high molecular mass without compromising their mechanical properties.Keywords: PTFE, extrusion, Spark Plasma Sintering, physical properties, mechanical behavior
Procedia PDF Downloads 307103 Implementing a Strategy of Reliability Centred Maintenance (RCM) in the Libyan Cement Industry
Authors: Khalid M. Albarkoly, Kenneth S. Park
Abstract:
The substantial development of the construction industry has forced the cement industry, its major supporting industry, to focus on achieving maximum productivity to meet the growing demand for this material. Statistics indicate that the demand for cement rose from 1.6 billion metric tons (bmt) in 2000 to 4 bmt in 2013. This means that the reliability of a production system needs to be at the highest level achievable through good maintenance. This paper studies the extent to which the implementation of RCM is needed as a strategy for increasing the reliability of production system components, thus ensuring continuous productivity. In a case study of four Libyan cement factories, 80 employees were surveyed and 12 top and middle managers were interviewed. It is evident that these factories usually break down more often than once per month, which has led to a decline in productivity; they cannot produce more than 50% of their designed capacity. This has resulted from the poor reliability of their production systems, owing to poor or insufficient maintenance. It has been found that most of the factories’ employees misunderstand maintenance and its importance. The main cause of this problem is the lack of qualified and trained staff; in addition, most employees lack motivation as a result of a lack of management support and interest. In response to these findings, it is suggested that the RCM strategy be implemented in the four factories. The paper shows the importance of developing maintenance strategies through the implementation of RCM in these factories, the purpose being to overcome the problems that reduce the reliability of the production systems. This study could be a useful source of information for academic researchers and for industrial organisations that are still experiencing problems in maintenance practices.Keywords: Libyan cement industry, reliability centred maintenance, maintenance, production, reliability
Procedia PDF Downloads 389102 Barriers to Public Innovation in Colombia: Case Study in Central Administrative Region
Authors: Yessenia Parrado, Ana Barbosa, Daniela Mahe, Sebastian Toro, Jhon Garcia
Abstract:
Public innovation has gained strength in recent years in response to the need to find new strategies or mechanisms for interaction between government entities and citizens. Accordingly, the Colombian government has been promoting policies aimed at strengthening innovation as a fundamental aspect of the work of public entities. However, in order to develop the capacities of public servants, and therefore of the institutions and organizations to which they belong, it is necessary to understand the context in which they operate in their daily work. This article compiles the work developed by the laboratory of innovation, creativity, and new technologies LAB101 of the National University of Colombia for the National Department of Planning. A case study was developed in the central region of Colombia, made up of five departments, through the construction of instruments based on quantitative techniques combined with qualitative analysis through semi-structured interviews, in order to understand the perception of possible barriers to innovation and the obstacles that have prevented the acceleration of transformation within public organizations. From the information collected, different analyses are carried out that allow a more robust explanation of the results obtained, and a set of categories is established to group the characteristics associated with the difficulties that officials perceive in innovating, later conceived as barriers. Finally, a proposal for an indicator was built to measure the degree of innovation within public entities, in order to provide a metric for future use. The main findings of this study show three key components to be strengthened in public entities and organizations: governance, knowledge management, and the promotion of collaborative workspaces.Keywords: barriers, enablers, management, public innovation
Procedia PDF Downloads 114101 Performance Comparison of Microcontroller-Based Optimum Controller for Fruit Drying System
Authors: Umar Salisu
Abstract:
This research presents the development of a hot-air tomato drying system. To provide more efficient and continuous temperature control, a microcontroller-based optimal controller was developed. The system is based on a power control principle, achieving smooth power variations depending on a feedback temperature signal from the process. An LM35 temperature sensor and an LM399 differential comparator were used to measure the temperature. The mathematical model of the system was developed, and the optimal controller was designed, simulated, and compared with the transient response of a PID controller. A controlled environment suitable for fruit drying is created within a closed chamber, in a three-step process. First, infrared light is used internally to preheat the fruit and speed the removal of its water content for fast drying. Second, hot air at a specified temperature is blown into the chamber to keep the humidity below a specified level and exhaust the humid air from the chamber. Third, the microcontroller disconnects the power to the chamber once the moisture content of the fruit has been reduced to a minimum. Experiments were conducted with 1 kg of fresh tomatoes at three different temperatures (40, 50 and 60 °C) at a constant relative humidity of 30% RH. The results indicate that the system significantly reduces the drying time without affecting the quality of the fruit. In terms of temperature control, the results show that the optimal controller response has zero overshoot, whereas the PID controller response overshoots by about 30% of the set-point. Another performance metric used is the rise time: the optimal controller rose without any delay, while the PID controller was delayed by more than 50 s. It can be argued that the performance of the optimal controller is preferable to that of the PID controller, since it does not overshoot and it starts in good time.Keywords: drying, microcontroller, optimum controller, PID controller
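The two figures of merit used to compare the controllers, percentage overshoot and rise time, can be read directly off a sampled step response. The sketch below shows one way to compute them; the sampled responses are invented to mirror the reported behavior and are not the measured data:

```python
def overshoot_pct(response, setpoint):
    """Percentage overshoot of a step response relative to the set-point."""
    return max(0.0, (max(response) - setpoint) / setpoint * 100.0)

def rise_time(response, t, setpoint, lo=0.1, hi=0.9):
    """Time to rise from 10% to 90% of the set-point (first crossings)."""
    t_lo = next(ti for ti, y in zip(t, response) if y >= lo * setpoint)
    t_hi = next(ti for ti, y in zip(t, response) if y >= hi * setpoint)
    return t_hi - t_lo

# hypothetical sampled responses to a 50 degC set-point step (10 s samples)
t = list(range(0, 200, 10))
pid     = [0, 5, 15, 30, 45, 58, 65, 63, 57, 52,
           49, 50, 51, 50, 50, 50, 50, 50, 50, 50]
optimal = [0, 2, 6, 12, 20, 28, 35, 41, 45, 48,
           49, 50, 50, 50, 50, 50, 50, 50, 50, 50]

print(overshoot_pct(pid, 50))      # 30.0 -> ~30% overshoot
print(overshoot_pct(optimal, 50))  # 0.0  -> no overshoot
print(rise_time(pid, t, 50), rise_time(optimal, t, 50))
```

The same two functions could be applied to logged sensor readings from the LM35 to verify the controllers' behavior on the real chamber.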
Procedia PDF Downloads 301100 X-Ray Diffraction, Microstructure, and Mössbauer Studies of Nanostructured Materials Obtained by High-Energy Ball Milling
Authors: N. Boudinar, A. Djekoun, A. Otmani, B. Bouzabata, J. M. Greneche
Abstract:
High-energy ball milling is a solid-state powder processing technique that allows the synthesis of a variety of equilibrium and non-equilibrium alloy phases starting from elemental powders. The advantage of this process technology is that the powder can be produced in large quantities and the processing parameters can be easily controlled; thus it is a suitable method for commercial applications. It can also be used to produce amorphous and nanocrystalline materials in commercially relevant amounts and is amenable to the production of a variety of alloy compositions. Mechanical alloying (high-energy ball milling) provides inter-dispersion of elements through repeated cold welding and fracture of free powder particles; the grain size decreases to the nanometric scale and the elements mix together. Progressively, the concentration gradients disappear and eventually the elements are mixed at the atomic scale. The end products depend on many parameters, such as the milling conditions and the thermodynamic properties of the milled system. Here, the mechanical alloying technique has been used to prepare nanocrystalline Fe-50 wt.% Ni and Fe-64 wt.% Ni alloys from powder mixtures. Scanning electron microscopy (SEM) with energy-dispersive X-ray analysis and Mössbauer spectroscopy were used to study the mixing at the nanometric scale. Mössbauer spectroscopy confirmed the ferromagnetic ordering and was used to calculate the distribution of hyperfine fields. The Mössbauer spectra of both alloys show the existence of a ferromagnetic phase attributed to a γ-Fe-Ni solid solution.Keywords: nanocrystalline, mechanical alloying, X-ray diffraction, Mössbauer spectroscopy, phase transformations
Procedia PDF Downloads 43799 Environmental Protection by Optimum Utilization of Car Air Conditioners
Authors: Sanchita Abrol, Kunal Rana, Ankit Dhir, S. K. Gupta
Abstract:
According to N.R.E.L.’s findings, 700 crore gallons of petrol is used annually to run the air conditioners of passenger vehicles (nearly 6% of total fuel consumption in the USA). Beyond fuel use, the Environmental Protection Agency has reported that refrigerant leaks from auto air conditioning units add a further 5 crore metric tons of carbon emissions to the atmosphere each year. The objective of our project is to address this vital issue by carefully modifying the interior of a car, thereby increasing its mileage and the efficiency of its engine. This would consequently result in decreased tailpipe emissions and pollution, along with improved car performance. An automatic mechanism deployed between the front and rear seats, consisting of a transparent thermal insulating sheet/curtain, would roll down at the driver's request in order to optimize the volume for effective air conditioning when travelling alone or with one passenger. The reduction in effective volume will yield favourable results. Even on a mild sunny day, the temperature inside a parked car can quickly spike to life-threatening levels. For a stationary parked car, insulation would be provided beneath its metal body so as to reduce the rate of heat transfer and increase the transmissivity. As a result, the car would not require a large amount of air conditioning to maintain a lower temperature, which would provide similar benefits. The authors established the feasibility studies, system engineering, and preliminary theoretical and experimental results confirming the idea and the motivation to fabricate and test the actual product.Keywords: automation, car, cooling insulating curtains, heat optimization, insulation, reduction in tail emission, mileage
Procedia PDF Downloads 27798 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the data are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to detect change quickly and deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed in the combination, so that computational efficiency could be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data
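The CUSUM test underlying the decision step accumulates log-likelihood ratios and raises an alarm once the statistic crosses a threshold. The sketch below is the classical single-sensor CUSUM for a Gaussian mean shift, not the authors' evidence-theory scheme; the means, variance, threshold, and sample values are illustrative assumptions:

```python
def cusum(samples, mu0, mu1, sigma, threshold):
    """One-sided CUSUM for a Gaussian mean shift mu0 -> mu1.

    Accumulates the log-likelihood ratio of each sample (clipped at
    zero) and declares a change at the first time the statistic
    exceeds the threshold; returns None if no change is declared.
    """
    stat = 0.0
    for n, x in enumerate(samples, start=1):
        # log f1(x)/f0(x) for two Gaussians with common variance
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        stat = max(0.0, stat + llr)
        if stat > threshold:
            return n  # alarm time (sample index)
    return None

# pre-change mean 0, post-change mean 2; the change occurs at sample 11
pre  = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, -0.3, 0.1, 0.0, -0.1]
post = [2.1, 1.9, 2.2, 2.0, 1.8]
alarm = cusum(pre + post, mu0=0.0, mu1=2.0, sigma=1.0, threshold=5.0)
print(alarm)  # alarm a few samples after the change at sample 10
```

In the framework above, the input to this test would be the ratio of pignistic probabilities after the evidence combination step, rather than raw sensor readings.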
Procedia PDF Downloads 33497 Causes for the Precession of the Perihelion in the Planetary Orbits
Authors: Kwan U. Kim, Jin Sim, Ryong Jin Jang, Sung Duk Kim
Abstract:
It was Leverrier who discovered the precession of the perihelion in the planetary orbits for the first time, while it was Einstein who first explained the astronomical phenomenon. The amount of the precession of the perihelion in Einstein’s theory of gravitation has been explained by means of the inverse fourth power force (inverse third power potential) introduced to the theory of gravitation through the Schwarzschild metric. However, this methodology has a serious shortcoming: it cannot explain the cause of the precession of the perihelion in the planetary orbits. According to our study, without identifying the cause of the precession, six methods can explain the amount of the precession discovered by Leverrier. Therefore, the problem of what causes the perihelion to precess in the planetary orbits must be solved for physics, because it is a profound scientific and technological problem bearing on a basic experiment in the construction of a relativistic theory of gravitation. The scientific solution to the problem proved that Einstein’s explanation of the planetary orbits is an illusion produced by the numerical expressions obtained from fictitious gravitation introduced to the theory of gravitation and by a wrong definition of proper time. The problem of the precession of the perihelion seems already solved by means of the general theory of relativity but, in essence, the cause of the astronomical phenomenon has not yet been successfully explained for astronomy. The right solution to the problem comes from a generalized theory of gravitation. Therefore, in this paper, it is shown, by means of the Schwarzschild field and the physical quantities of the relativistic Lagrangian reflected in it, that fictitious gravitation is not the main factor that can cause the perihelion to precess in the planetary orbits. In addition, it is shown that the main factor that can cause the perihelion to precess in the planetary orbits is the inverse third power force really existing in the relativistic region of the Solar system.Keywords: inverse third power force, precession of the perihelion, fictitious gravitation, planetary orbits
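For reference, the standard general-relativistic expression for the perihelion advance per orbit, whose magnitude the methods discussed above reproduce, is the textbook result

```latex
\Delta\varphi = \frac{6\pi G M}{c^{2}\, a \left(1 - e^{2}\right)}
```

where G is the gravitational constant, M the solar mass, a the semi-major axis, and e the eccentricity of the orbit. For Mercury this evaluates to the well-known advance of about 43 arcseconds per century.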
Procedia PDF Downloads 11