Search results for: Network Time Protocol
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21715

19195 Aromatic Medicinal Plant Classification Using Deep Learning

Authors: Tsega Asresa Mengistu, Getahun Tigistu

Abstract:

Computer vision is an artificial intelligence subfield that allows computers and systems to retrieve meaning from digital images. It is applied in various fields such as self-driving cars, video surveillance, agriculture, quality control, health care, construction, the military, and everyday life. Aromatic and medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, and other natural health products for therapeutic, aromatic, and culinary purposes. Herbal industries depend on these special plants. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but also supply industrial raw materials for export and earn valuable foreign exchange. There is a lack of technologies for the classification and identification of aromatic and medicinal plants in Ethiopia. Manual identification of plants is a tedious, time-consuming, labor-intensive, and lengthy process. For farmers, industry personnel, academics, and pharmacists, it is still difficult to identify plant parts and their usage before ingredient extraction. To solve this problem, this work uses a deep learning approach for the efficient identification of aromatic and medicinal plants by means of a convolutional neural network. The objective of the proposed study is to identify aromatic and medicinal plant parts and usages using computer vision technology. Therefore, this research develops a model for the automatic classification of aromatic and medicinal plants by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants besides the root, flower, fruit, latex, and bark. The study was conducted on aromatic and medicinal plants available at the Ethiopian Institute of Agricultural Research center. An experimental research design is proposed for this study, conducted with convolutional neural networks and transfer learning. Sigmoid activation is employed in the last layer and rectified linear units (ReLU) in the hidden layers. Finally, classification accuracies of 66.4% with a plain convolutional neural network, 67.3% with MobileNet, and 64% with the Visual Geometry Group (VGG) network were obtained.
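
As an illustration of the transfer-learning setup this abstract describes (a pretrained backbone, ReLU hidden layers, and a sigmoid output layer), the following Python sketch shows one way such a classifier could be assembled in Keras. It is not the authors' code: the dataset path, image size, class count, and training schedule are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): transfer learning with a MobileNetV2
# backbone, a ReLU hidden layer and a sigmoid output layer, as described above.
# Dataset directory, image size and class count are illustrative assumptions.
import tensorflow as tf

NUM_CLASSES = 10          # assumed number of plant classes
IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "plants/train", image_size=IMG_SIZE, batch_size=32)  # hypothetical path

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False    # freeze the pretrained backbone for transfer learning

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),               # ReLU hidden layer
    tf.keras.layers.Dense(NUM_CLASSES, activation="sigmoid"),    # sigmoid last layer
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```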

Keywords: aromatic and medicinal plants, computer vision, deep convolutional neural network

Procedia PDF Downloads 402
19194 A Hybrid Fuzzy Clustering Approach for Fertile and Unfertile Analysis

Authors: Shima Soltanzadeh, Mohammad Hosain Fazel Zarandi, Mojtaba Barzegar Astanjin

Abstract:

Diagnosis of male infertility by laboratory tests is expensive and sometimes intolerable for patients. Filling out a questionnaire and then applying a classification method can be the first step in the decision-making process, so laboratory tests are needed only in cases with a high probability of infertility. In this paper, we evaluated the performance of four classification methods, including naive Bayesian, neural network, logistic regression, and fuzzy c-means clustering used as a classifier, in the diagnosis of male infertility due to environmental factors. Since the data are unbalanced, ROC curves are the most suitable method for the comparison. In this paper, we also selected the more important features using a filtering method and examined the impact of this feature reduction on the performance of each method; generally, most of the methods performed better after applying the filter. We have shown that fuzzy c-means clustering used as a classifier performs well according to the ROC curves, and its performance is comparable to that of other classification methods such as logistic regression.
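
The sketch below illustrates the core idea of using fuzzy c-means as a classifier and evaluating it with an ROC curve, as described above. It is not the study's code: the data are synthetic (imbalanced only to mimic the setting), and cluster-to-class alignment is done by a simple majority rule.

```python
# Illustrative sketch (not the authors' code): fuzzy c-means used as a classifier,
# evaluated with ROC/AUC as described above. Data here are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

X, y = make_classification(n_samples=500, n_features=8, weights=[0.8, 0.2],
                           random_state=0)         # imbalanced, like the study data
U, _ = fuzzy_c_means(X, c=2)

# Align the cluster index with the positive class by majority vote, then use the
# membership degree as a score for the ROC analysis.
labels = U.argmax(axis=1)
pos_cluster = 1 if y[labels == 1].mean() > y[labels == 0].mean() else 0
score = U[:, pos_cluster]
print("AUC of fuzzy c-means used as a classifier:", roc_auc_score(y, score))
```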

Keywords: classification, fuzzy c-means, logistic regression, Naive Bayesian, neural network, ROC curve

Procedia PDF Downloads 318
19193 Threshold (K, P) Quantum Distillation

Authors: Shashank Gupta, Carlos Cid, William John Munro

Abstract:

Quantum distillation is the task of concentrating the quantum correlations present in N imperfect copies into M perfect copies (M < N) using free operations involving all P parties sharing the quantum correlation. We present a threshold quantum distillation task where the same objective is achieved but with a smaller number of parties (K < P). In particular, we give exact local filtering operations by the participating parties sharing a high-dimensional multipartite entangled state to distill the perfect quantum correlation. We then establish a connection between threshold quantum entanglement distillation and quantum steering distillation and show that threshold distillation might work in scenarios where a general distillation protocol such as DEJMPS does not.

Keywords: quantum networks, quantum distillation, quantum key distribution, entanglement distillation

Procedia PDF Downloads 25
19192 Theoretical Approach and Proof of Concept Implementation of Adaptive Partition Scheduling Module for Linux

Authors: Desislav Andreev, Veselin Stanev

Abstract:

The Linux operating system continues to gain popularity with every passing year. This is due to its open-source license and a great number of distributions covering users' needs. At first glance, it seems that Linux can be integrated into every type of system; it is already present in personal computers, smartphones, and even in some embedded systems like the Raspberry Pi. However, Linux still does not meet the performance and security requirements to run effectively on a real-time system. Real-time systems are very time-restricted: their processes have to execute and finish within strict time intervals. The Completely Fair Scheduler present in Linux does not have such scheduling capabilities, and it is not able to ensure that time-critical processes will execute on time. One way to solve this problem is to implement an Adaptive Partition Scheduler similar to the one present in the QNX Neutrino operating system. This type of scheduling divides the CPU into multiple adaptive partitions, where each partition holds a percentage of CPU usage called a budget, which allows optimal usage of CPU resources and also provides protection against cyber attacks such as Denial of Service. This approach will also benefit systems where functional safety is highly demanded, such as instrument clusters in the automotive industry. The purpose of this paper is to present a concept of an Adaptive Partition Scheduler designed for Linux-based operating systems.
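
To make the budget idea concrete, the following user-space Python sketch simulates an adaptive-partition policy in which each partition is guaranteed a share of an averaging window and spare time falls back to whatever is runnable. It is a conceptual model only, not the proposed kernel module; partition names, budgets, and tick granularity are assumptions.

```python
# Conceptual sketch (not the proposed kernel module): an adaptive-partition
# scheduler simulated in user space. Each partition holds a CPU budget
# (percentage of an averaging window); unused budget falls back to any
# partition that still has runnable work.
from collections import deque

class Partition:
    def __init__(self, name, budget_pct):
        self.name = name
        self.budget_pct = budget_pct      # guaranteed share of the averaging window
        self.used_ticks = 0
        self.runnable = deque()           # simple FIFO run queue per partition

def schedule(partitions, window=100, total_ticks=1000):
    """Pick, at every tick, a runnable partition that is still within budget."""
    timeline = []
    for tick in range(total_ticks):
        if tick % window == 0:            # new averaging window: reset accounting
            for p in partitions:
                p.used_ticks = 0
        # partitions with runnable work and remaining budget
        candidates = [p for p in partitions
                      if p.runnable and p.used_ticks < p.budget_pct * window / 100]
        if not candidates:                # all budgets spent: fall back to any runnable
            candidates = [p for p in partitions if p.runnable]
        if candidates:                    # pick the partition with the most headroom
            p = max(candidates, key=lambda q: q.budget_pct * window / 100 - q.used_ticks)
            task = p.runnable[0]
            p.runnable.rotate(-1)         # round-robin inside the partition
            p.used_ticks += 1
            timeline.append((tick, p.name, task))
    return timeline

critical = Partition("critical", 70); critical.runnable.extend(["ctrl_loop"])
best_effort = Partition("best_effort", 30); best_effort.runnable.extend(["logger", "ui"])
print(schedule([critical, best_effort], total_ticks=10))
```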

Keywords: adaptive partitions, Linux kernel modules, real-time systems, scheduling

Procedia PDF Downloads 83
19191 Privacy-Preserving Location Sharing System with Client/Server Architecture in Mobile Online Social Network

Authors: Xi Xiao, Chunhui Chen, Xinyu Liu, Guangwu Hu, Yong Jiang

Abstract:

Location sharing is a fundamental service in mobile Online Social Networks (mOSNs), and it has raised significant privacy concerns in recent years. Most location-based service applications currently adopt a client/server architecture. In this paper, a location sharing system named CSLocShare is presented to provide flexible privacy-preserving location sharing with a client/server architecture in mOSNs. CSLocShare enables location sharing between both trusted social friends and untrusted strangers without a third-party server. In CSLocShare, the Location-Storing Social Network Server (LSSNS) provides location-based services but does not know the users' real locations. A thorough analysis indicates that the users' location privacy is protected. Meanwhile, storage and communication costs are reduced, making CSLocShare more suitable and effective in practice.
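
The abstract does not spell out the protocol, so the sketch below shows only one generic way (an assumption for illustration, not the CSLocShare design) in which a server could match nearby friends without seeing raw coordinates: clients upload keyed hashes of quantized grid cells, and the server stores only opaque tokens.

```python
# Hedged sketch, not the CSLocShare protocol: clients quantize their location to
# a grid cell and upload a keyed hash of it, so the server can match cells of
# friends who share the key without ever learning raw coordinates. The grid size
# and key handling are illustrative assumptions (standard library only).
import hmac, hashlib

GRID = 0.01  # roughly 1 km cells at mid latitudes (assumed granularity)

def cell_token(lat, lon, shared_key: bytes) -> str:
    """Quantize the location to a grid cell and return an opaque keyed token."""
    cell = f"{round(lat / GRID)}:{round(lon / GRID)}"
    return hmac.new(shared_key, cell.encode(), hashlib.sha256).hexdigest()

# Two friends share a key out of band; the server only ever sees the tokens.
key = b"friends-shared-secret"
alice = cell_token(31.2304, 121.4737, key)
bob = cell_token(31.2310, 121.4741, key)
print("server-side match without raw locations:", alice == bob)
```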

Keywords: mobile online social networks, client/server architecture, location sharing, privacy-preserving

Procedia PDF Downloads 304
19190 Research on Intercity Travel Mode Choice Behavior Considering Traveler’s Heterogeneity and Psychological Latent Variables

Authors: Yue Huang, Hongcheng Gan

Abstract:

The new urbanization pattern has led to a rapid growth in demand for short-distance intercity travel, and the emergence of new travel modes has also increased the variety of intercity travel options. In previous studies on intercity travel mode choice behavior, the impact of functional amenities of travel mode and travelers’ long-term personality characteristics has rarely been considered, and empirical results have typically been calibrated using revealed preference (RP) or stated preference (SP) data. This study designed a questionnaire that combines the RP and SP experiment from the perspective of a trip chain combining inner-city and intercity mobility, with consideration for the actual condition of the Huainan-Hefei traffic corridor. On the basis of RP/SP fusion data, a hybrid choice model considering both random taste heterogeneity and psychological characteristics was established to investigate travelers’ mode choice behavior for traditional train, high-speed rail, intercity bus, private car, and intercity online car-hailing. The findings show that intercity time and cost exert the greatest influence on mode choice, with significant heterogeneity across the population. Although inner-city cost does not demonstrate a significant influence, inner-city time plays an important role. Service attributes of travel mode, such as catering and hygiene services, as well as free wireless network supply, only play a minor role in mode selection. Finally, our study demonstrates that safety-seeking tendency, hedonism, and introversion all have differential and significant effects on intercity travel mode choice.

Keywords: intercity travel mode choice, stated preference survey, hybrid choice model, RP/SP fusion data, psychological latent variable, heterogeneity

Procedia PDF Downloads 90
19189 Analysis of Ionospheric Variations over Japan during 23rd Solar Cycle Using Wavelet Techniques

Authors: C. S. Seema, P. R. Prince

Abstract:

The characterization of spatio-temporal inhomogeneities occurring in the ionospheric F₂ layer is important since these variations are direct consequences of the electrodynamical coupling between the magnetosphere and solar events. The temporal and spatial variations of the F₂ layer, which occur with periods of several days or even years, are mainly due to geomagnetic and meteorological activities. The hourly F₂ layer critical frequency (foF2) over the 23rd solar cycle (1996-2008) from three ionosonde stations (Wakkanai, Kokunbunji, and Okinawa) in the northern hemisphere, which fall within the same longitudinal span, is analyzed using continuous wavelet techniques. A Morlet wavelet is used to transform the continuous time series of foF2 into a two-dimensional time-frequency space, quantifying the time evolution of the oscillatory modes. The presence of significant periodicities and the time location of each periodicity are detected from the two-dimensional representation of the wavelet power in the plane of scale and period of the time series. The mean strength of each periodicity over the entire period of analysis is studied using the global wavelet spectrum. The quasi-biennial, annual, semiannual, 27-day, diurnal, and 12-hour variations of foF2 are clearly evident in the wavelet power spectra at all three stations. Critical frequency oscillations with multi-day periods (2-3 days and 9 days at the low-latitude station, 6-7 days at all stations, and 15 days at the mid-high-latitude station) are also superimposed on the larger time-scale variations.
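
The following Python sketch shows, on a synthetic hourly series standing in for foF2, how a Morlet continuous wavelet transform and a global wavelet spectrum of the kind described above can be computed directly with NumPy. The signal, scales, and the rough period-to-scale conversion are illustrative assumptions, not the authors' processing chain.

```python
# Minimal sketch (not the authors' processing chain): a continuous wavelet
# transform of an hourly foF2-like series with a complex Morlet wavelet. The
# synthetic signal mixes diurnal, 12-hour and 27-day components as stand-ins
# for the periodicities discussed above.
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Return the wavelet power of x for the given scales (complex Morlet)."""
    power = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)                       # wavelet support
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        psi /= np.sqrt(s)                                       # energy normalization
        power[i] = np.abs(np.convolve(x, np.conj(psi)[::-1], mode="same")) ** 2
    return power

hours = np.arange(0, 24 * 365)                                  # one year, hourly
foF2 = (8 + 2 * np.sin(2 * np.pi * hours / 24)                  # diurnal
          + 1 * np.sin(2 * np.pi * hours / 12)                  # 12-hour
          + 0.5 * np.sin(2 * np.pi * hours / (24 * 27)))        # 27-day
periods_h = np.array([6, 12, 24, 24 * 27])                      # periods of interest
scales = periods_h * 6.0 / (2 * np.pi)                          # rough Morlet period-to-scale
power = morlet_cwt(foF2, scales)
global_spectrum = power.mean(axis=1)                            # global wavelet spectrum
print(dict(zip(periods_h.tolist(), np.round(global_spectrum, 2).tolist())))
```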

Keywords: continuous wavelet analysis, critical frequency, ionosphere, solar cycle

Procedia PDF Downloads 195
19188 Fast Terminal Synergetic Converter Control

Authors: Z. Bouchama, N. Essounbouli, A. Hamzaoui, M. N. Harmas

Abstract:

A new robust finite-time synergetic controller is presented, based on the recently developed synergetic control methodology and a terminal attractor technique. A Fast Terminal Synergetic Control (FTSC) scheme is proposed for controlling a DC-DC buck converter. Unlike Synergetic Control (SC) and sliding mode control, the proposed control scheme has the characteristics of finite-time convergence and chattering-free behavior. Simulations of stabilization and reference tracking for buck converter systems illustrate the effectiveness of the approach, while stability is assured in the Lyapunov sense, and converse Lyapunov results involving scalar differential inequalities are given for finite-time stability.
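
The sketch below is not the paper's buck-converter controller; it only illustrates numerically the finite-time convergence that a terminal attractor term provides, by comparing a linear attractor with a fast terminal attractor on a scalar state. Gains, exponents, tolerance, and the integration step are assumed values.

```python
# Illustrative numerical sketch (not the paper's controller): compare the linear
# attractor dx/dt = -a*x with the fast terminal attractor
# dx/dt = -a*x - b*sign(x)*|x|**(q/p), which reaches the origin in finite time.
import numpy as np

def settle_time(terminal, a=2.0, b=2.0, q=1, p=3, x0=1.0, dt=1e-4, t_end=5.0, tol=1e-3):
    x, t = x0, 0.0
    while t < t_end and abs(x) > tol:
        dx = -a * x
        if terminal:
            dx += -b * np.sign(x) * abs(x) ** (q / p)   # terminal attractor term
        x += dt * dx                                    # explicit Euler step
        t += dt
    return t

print("time to reach |x| < 1e-3, linear attractor   :", round(settle_time(False), 3))
print("time to reach |x| < 1e-3, terminal attractor :", round(settle_time(True), 3))
```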

Keywords: dc-dc buck converter, synergetic control, finite time convergence, terminal synergetic control, fast terminal synergetic control, Lyapunov

Procedia PDF Downloads 444
19187 A Review of Current Practices in Tattooing of Colonic Lesion at Endoscopy

Authors: Dhanashree Moghe, Roberta Bullingham, Rizwan Ahmed, Tarun Singhal

Abstract:

Aim: The NHS Bowel Screening Programme recommends the use of endoscopic tattooing for suspected malignant lesions that later require surgical or endoscopic localisation, using local protocols as guidance. This is in accordance with guidance from the BSG (the British Society of Gastroenterology). We used a well-recognised local protocol as a standard to audit current tattooing practice in a large district general hospital with no current local guidelines. Method: A retrospective quantitative analysis was performed of 50 patients who underwent segmental colonic resection for cancer over a 6-month period in 2021. We reviewed historic electronic endoscopy reports, recording relevant data on tattoo indication and placement. Secondly, we carried out an anonymous survey of 16 independent lower GI endoscopists on self-reported details of their practice. Results: In our study, 28 patients (56%) had a tattoo placed at the time of their colonoscopy. Of these, only 53% (n=15) had the tattoo distal to the lesion, with the measured distance of the tattoo from the lesion documented in only 8 reports. Only seven patients (25%) had a circumferential (4-quadrant) placement of the tattoo. 13 patients had lesions in either the caecum or rectum, locations where tattooing is deemed unnecessary as per BSG guidelines. Of the survey responses collected, four different protocols were being used to guide practice. Only 50% of respondents placed tattoos at the correct distance from the lesion, and 83% placed the correct number of tattoos. Conclusion: Our study demonstrates a lack of standardisation of practices in colonic tattooing, with incomplete compliance with our standard. The inadequate documentation of tattoo location can contribute to confusion and inaccuracy in the intraoperative localisation of lesions. This has the potential to increase operation length and morbidity. There is a need to standardise both technique and documentation in colonoscopic tattooing practice.

Keywords: colorectal cancer, endoscopic tattooing, colonoscopy, NHS BSCP

Procedia PDF Downloads 102
19186 Experimental Study of Impregnated Diamond Bit Wear During Sharpening

Authors: Rui Huang, Thomas Richard, Masood Mostofi

Abstract:

The lifetime of impregnated diamond bits and their drilling efficiency are in part governed by the bit wear conditions: not only the extent of the diamonds' wear but also their exposure or protrusion out of the bonding matrix. As the individual diamonds wear, the bonding matrix also wears, through two-body abrasion (direct matrix-rock contact) and three-body erosion (cuttings trapped in the space between rock and matrix). Although there is some work dedicated to the study of diamond bit wear, there is still a lack of understanding of how matrix erosion and diamond exposure relate to the bit drilling response and drilling efficiency, as well as no literature on the process that governs bit sharpening, a procedure commonly implemented by drillers when extensive diamond polishing yields an extremely low rate of penetration. The aim of this research is (i) to derive a correlation between the wear state of the bit and the drilling performance and (ii) to gain a better understanding of the process associated with tool sharpening. The research effort combines specific drilling experiments and precise mapping of the tool cutting face (impregnated diamond bits and segments). Bit wear is produced by drilling through a rock sample at a fixed rate of penetration for a given period of time. Before and after each wear test, the bit drilling response, and thus its efficiency, is mapped out using a tailored experimental protocol. After each drilling test, the bit or segment cutting face is scanned with an optical microscope. The test results show that, at the fixed rate of penetration, diamond exposure increases with drilling distance but at a decreasing rate, up to a threshold exposure that corresponds to the optimum drilling condition for this feed rate. The data further show that the threshold exposure scales with the rate of penetration up to a point where exposure reaches a maximum, beyond which no more matrix can be eroded under normal drilling conditions. The second phase of this research focuses on the wear process referred to as bit sharpening. Drillers rely on different approaches (increasing the feed rate or decreasing the flow rate) with the aim of tearing worn diamonds away from the bit matrix, wearing out some of the matrix, and thus exposing fresh sharp diamonds and recovering a higher rate of penetration. Although a common procedure, there is no rigorous methodology to sharpen the bit while avoiding excessive wear or bit damage. This paper aims to gain insight into the mechanisms that accompany bit sharpening by carefully tracking diamond fracturing, matrix wear, and erosion, and how they relate to the drilling parameters recorded while sharpening the tool. The results show that there exist optimal conditions (operating parameters and duration of the procedure) for sharpening that minimize overall bit wear and that the extent of bit sharpening can be monitored in real time.

Keywords: bit sharpening, diamond exposure, drilling response, impregnated diamond bit, matrix erosion, wear rate

Procedia PDF Downloads 81
19185 Mitigation of Electromagnetic Interference Generated by GPIB Control-Network in AC-DC Transfer Measurement System

Authors: M. M. Hlakola, E. Golovins, D. V. Nicolae

Abstract:

The field of instrumentation electronics is undergoing explosive growth due to its wide range of applications. Electrical devices operating in close proximity can negatively influence each other's performance. The degradation in performance is due to electromagnetic interference (EMI). This paper investigates the negative effects of electromagnetic interference originating in the General Purpose Interface Bus (GPIB) control network of an ac-dc transfer measurement system. Remedial measures for reducing measurement errors and failures of a range of industrial devices due to EMI have been explored. The ac-dc transfer measurement system was analyzed for common-mode (CM) EMI effects. Further investigation of the coupling path, as well as more accurate identification of the noise propagation mechanism, is outlined. To prevent the occurrence of common-mode ground loops, which were identified between the GPIB system control circuit and the measurement circuit, a microcontroller-driven GPIB switching isolator device was designed, prototyped, programmed, and validated. This mitigation technique has been explored to reduce EMI effectively.

Keywords: CM, EMI, GPIB, ground loops

Procedia PDF Downloads 277
19184 Exploring the Situational Approach to Decision Making: User eConsent on a Health Social Network

Authors: W. Rowan, Y. O’Connor, L. Lynch, C. Heavin

Abstract:

Situation Awareness can offer the potential for conscious dynamic reflection. In an era of online health data sharing, it is becoming increasingly important that users of health social networks (HSNs) have the information necessary to make informed decisions as part of the registration process and in the provision of eConsent. This research aims to leverage an adapted Situation Awareness (SA) model to explore users' decision-making processes in the provision of eConsent. An HSN platform was used to investigate these behaviours. A mixed methods approach was taken, involving the observation of registration behaviours followed by a questionnaire and focus groups. Early results suggest that users are apt to accept eConsent automatically and only later consider the long-term implications of sharing their personal health information. Further steps are required to continue developing knowledge and understanding of this important eConsent process. The next step in this research will be to develop a set of guidelines for the improved presentation of eConsent on the HSN platform.

Keywords: eConsent, health social network, mixed methods, situation awareness

Procedia PDF Downloads 265
19183 The Study about the New Monitoring System of Signal Equipment of Railways Using Radio Communication

Authors: Masahiko Suzuki, Takashi Kato, Masahiro Kobayashi

Abstract:

In our company, a monitoring system for signal equipment has already been implemented, so the state of the signal equipment can be checked from the control room or the maintenance center. However, this system was installed over 20 years ago, and it cannot meet current needs such as 'more stable operation', 'broadband data transfer', and 'easy construction and easy maintenance'. To satisfy these needs, we studied a monitoring system using radio communication as a new method that can operate reliably in the harsh environment along railroads. In these studies, we developed terminals and repeaters based on the ZigBee protocol and implemented an application using two different radio bands simultaneously. Finally, we obtained good results from fundamental tests using the developed equipment.

Keywords: monitoring, radio communication, 2 bands, ZigBee

Procedia PDF Downloads 570
19182 Investigation of Wind Farm Interaction with Ethiopian Electric Power’s Grid: A Case Study at Ashegoda Wind Farm

Authors: Fikremariam Beyene, Getachew Bekele

Abstract:

Ethiopia is currently engaged in various projects to raise the amount of power generated in the country. The progress observed in recent years indicates this fact clearly and indisputably. The rural electrification program, the modernization of the power transmission system, and the development of wind farms are some of the main accomplishments worth mentioning. As is well known, wind power is currently embraced globally as one of the most important sources of energy, mainly for its environmentally friendly characteristics and because, once installed, it is a source available free of charge. However, integration of a wind power plant with an existing network has many challenges that need to be given serious attention. In Ethiopia, a number of wind farms are either installed or under construction, and a series of wind farms is planned to be installed in the near future. Ashegoda Wind Farm (13.2°, 39.6°), which is the subject of this study, is the first large-scale wind farm under construction, with a capacity of 120 MW. The first phase (30 MW) of the 120 MW project has been completed and is expected to be connected to the grid soon. This paper is concerned with the investigation of the wind farm's interaction with the national grid under transient operating conditions. The main concern is the fault ride-through (FRT) capability of the system when the grid voltage drops to exceedingly low values because of a short-circuit fault, as well as the active and reactive power behavior of the wind turbines after the fault is cleared. On the wind turbine side, a detailed dynamic model of a 1 MW variable-speed wind turbine with a squirrel-cage induction generator and full-scale power electronic converters is developed and analyzed using the simulation software DIgSILENT PowerFactory. On the Ethiopian Electric Power Corporation side, the grid network is modeled after collecting sufficient data for the analysis. In the model, the fault ride-through (FRT) capability of the plant is studied by applying a 3-phase short circuit at the grid terminal near the wind farm. The results show that the Ashegoda wind farm can ride through the voltage dip within a short time, and the active and reactive power performance of the wind farm is also promising.

Keywords: squirrel cage induction generator, active and reactive power, DIgSILENT PowerFactory, fault ride-through capability, 3-phase short circuit

Procedia PDF Downloads 146
19181 Organizational Resilience in the Perspective of Supply Chain Risk Management: A Scholarly Network Analysis

Authors: William Ho, Agus Wicaksana

Abstract:

Anecdotal evidence from the last decade shows that the occurrence of disruptive events and uncertainties in the supply chain is increasing. The coupling of these events with the nature of an increasingly complex and interdependent business environment leads to devastating impacts that quickly propagate within and across organizations. For example, the recent COVID-19 pandemic increased the global supply chain disruption frequency by at least 20% in 2020 and is projected to have an accumulated cost of $13.8 trillion by 2024. This crisis has drawn attention to organizational resilience as a way to weather business uncertainty. However, the concept has been criticized for being vague and lacking a consistent definition, thus reducing its significance for practice and research. This study is intended to address that issue by providing a comprehensive review of the conceptualization, measurement, and antecedents of operational resilience as discussed in the supply chain risk management (SCRM) literature. We performed a hybrid Scholarly Network Analysis, combining citation-based and text-based approaches, on 252 articles published from 2000 to 2021 in top-tier journals selected on three parameters: AJG and ABS rankings, the UT Dallas and FT50 lists, and editorial board review. Specifically, we employed a Bibliographic Coupling Analysis in the research cluster formation stage and a Co-words Analysis in the research cluster interpretation and analysis stage. Our analysis reveals three major research clusters of resilience research in the SCRM literature, namely (1) supply chain network design and optimization, (2) organizational capabilities, and (3) digital technologies. We portray the research process in the last two decades in terms of the exemplar studies, problems studied, commonly used approaches and theories, and solutions provided in each cluster. We then provide a conceptual framework on the conceptualization and antecedents of resilience based on studies in these clusters and highlight potential areas that need to be studied further. Finally, we leverage the concept of abnormal operating performance to propose a new measurement strategy for resilience. This measurement overcomes the limitation of most current measurements, which are event-dependent and focus on the resistance or recovery stage without capturing the growth stage. In conclusion, this study provides a robust literature review through a scholarly network analysis that increases the completeness and accuracy of research cluster identification and analysis in understanding the conceptualization, antecedents, and measurement of resilience. It also enables us to perform a comprehensive review of resilience research in the SCRM literature by including research articles published during the pandemic and connecting this development with the plethora of articles published in the last two decades. From the managerial perspective, this study provides practitioners with clarity on the conceptualization and critical success factors of firm resilience from the SCRM perspective.
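
As a toy illustration of the bibliographic coupling step named above (not the authors' pipeline), the Python sketch below links papers by the number of references they share and reads clusters off as connected components; the paper IDs and reference lists are invented.

```python
# Toy sketch (not the authors' pipeline): bibliographic coupling links two papers
# by the number of references they share; connected components then give a first
# cut at research clusters. Paper IDs and reference lists are hypothetical.
import networkx as nx

references = {                       # paper -> set of cited works (hypothetical)
    "P1": {"R1", "R2", "R3"},
    "P2": {"R2", "R3", "R4"},
    "P3": {"R5", "R6"},
    "P4": {"R5", "R6", "R7"},
}

G = nx.Graph()
papers = list(references)
for i, a in enumerate(papers):
    for b in papers[i + 1:]:
        shared = len(references[a] & references[b])   # coupling strength
        if shared:
            G.add_edge(a, b, weight=shared)

clusters = [sorted(c) for c in nx.connected_components(G)]
print("bibliographic-coupling clusters:", clusters)
```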

Keywords: supply chain risk management, organizational resilience, scholarly network analysis, systematic literature review

Procedia PDF Downloads 54
19180 Development of Alternative Fuels Technologies: Compressed Natural Gas Home Refueling Station

Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej

Abstract:

Compressed natural gas (CNG) represents an excellent compromise between cost and the availability of a technology that is proven and relatively easy to use in many areas of the automotive industry. This fuel causes less corrosion due to the lower content of combustion products that create a potential difference on the walls of the engine system. Natural gas vehicles (NGVs) do not emit any substances that can contaminate water or land. The absence of carcinogenic substances in the gaseous fuel extends the life of the engine. In the longer term, it also contributes positively to waste management and waste disposal. The popularization of CNG-powered propulsion systems positively affects the reduction of emissions from heavy-duty transport. For these reasons, CNG as a fuel attracts considerable interest around the world. Over the last few years, technologies related to the use of natural gas as an engine fuel have been developed and improved, and these solutions have evolved from the prototype phase to industrial-scale implementation. The widespread availability of gaseous fuels has led to the development of a technology that allows CNG fuel to be dispensed directly from the urban gas network into the vehicle tank (i.e., HYGEN - CNGHRS). Home refueling installations, although known for many years, are becoming increasingly important today. Until recently, the major obstacle to the sale of this technology was the relatively high capital expenditure compared to the later benefits. Home refueling systems allow the vehicle tank to be refueled with full control of fuel costs and refueling time. CNG home refueling stations (such as HYGEN) allow the gas value chain to overcome the dogma that a lack of refueling infrastructure prevents companies in the gas value chain from participating in the transportation market. The technology is based on a single-stage hydraulic compressor (instead of multistage mechanical compressor technology), which makes it possible to compress low-pressure gas from the distribution network to 200 bar for further use as fuel for NGVs. This boosts the revenues and profits of gas companies by expanding their presence in higher-margin segments of the energy sector.

Keywords: alternative fuels, CNG (compressed natural gas), CNG stations, NGVs (natural gas vehicles), gas value chain

Procedia PDF Downloads 181
19179 A Real-time Classification of Lying Bodies for Care Application of Elderly Patients

Authors: E. Vazquez-Santacruz, M. Gamboa-Zuniga

Abstract:

In this paper, we present a methodology for the real-time classification of lying body postures using HOG descriptors and pressure sensors positioned in a matrix form (14 x 32 sensors) on the surface on which the body lies. Our system is embedded in a care robot that can assist elderly patients and the surrounding medical staff, helping them achieve a better quality of life in and out of hospitals. Due to current technology, a limited number of sensors is used, which results in a low-resolution data array that is treated as a 14 x 32 pixel image. Our work considers the problem of human posture classification with little information (few sensors), applying digital processing to expand the original sensor data and thus obtain more significant data for the classification; this is done with low-cost algorithms to ensure real-time execution.
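
A minimal sketch of the kind of pipeline described above is given below: a 14 x 32 pressure map is upsampled, described with HOG features, and classified with a linear SVM. It is not the authors' system; the posture labels and the random "pressure images" are placeholders.

```python
# Minimal sketch (not the authors' system): upsample a 14 x 32 pressure map,
# extract HOG features and classify with a linear SVM. Labels and frames are
# placeholders standing in for real sensor data.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_samples, classes = 200, 3                       # e.g. supine / left / right (assumed)
frames = rng.random((n_samples, 14, 32))          # stand-in for sensor matrices
labels = rng.integers(0, classes, n_samples)

def features(frame):
    img = resize(frame, (56, 128))                # expand the low-resolution data
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))            # HOG descriptor

X = np.array([features(f) for f in frames])
clf = LinearSVC(max_iter=5000).fit(X, labels)
print("training accuracy on placeholder data:", clf.score(X, labels))
```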

Keywords: real-time classification, sensors, robots, health care, elderly patients, artificial intelligence

Procedia PDF Downloads 845
19178 Transport Mode Selection under Lead Time Variability and Emissions Constraint

Authors: Chiranjit Das, Sanjay Jharkharia

Abstract:

This study focuses on transport mode selection under lead time variability and an emissions constraint. In order to reduce the carbon emissions generated by transportation, organizations often face a dilemma in transport mode selection, since logistics cost and emissions reduction must be traded off against each other. Another important aspect of the transportation decision is lead-time variability, which is rarely considered in the transport mode selection problem. Thus, in this study, we provide a comprehensive mathematical analytical model for transport mode selection under an emissions constraint. We also extend our work by analysing the effect of lead time variability on transport mode selection through a sensitivity analysis. In order to account for lead time variability in the model, two identically normally distributed random variables are incorporated in this study: unit lead time variability and lead time demand variability. Therefore, in this study, we address the following questions: How will transport mode selection decisions be affected by lead time variability? How will lead time variability impact total supply chain cost under carbon emissions constraints? To accomplish these objectives, a total transportation cost function is developed that includes unit purchasing cost, unit transportation cost, emissions cost, holding cost during lead time, and penalty cost for stock-outs due to lead time variability. A set of modes is available to transport between each pair of nodes; in this paper, we consider only four transport modes: air, road, rail, and water. Transportation cost, distance, and emissions level for each transport mode are considered deterministic and static in this paper. Each mode has a different emissions level depending on the distance and product characteristics. Emissions cost is indirectly affected by lead time variability if the transport mode is switched from a lower-emissions mode to a higher-emissions mode in order to reduce penalty cost. We provide a numerical analysis in order to study the effectiveness of the mathematical model. We found that the chance of a stock-out during lead time is higher when the variability of lead time and lead time demand is higher. Numerical results show that the penalty cost of the air transport mode is negative, which means the chance of a stock-out is zero, but it has higher holding and emissions costs. Therefore, the air transport mode is selected only when there is an emergency order, to reduce penalty cost; otherwise, rail and road transport are the most preferred modes of transportation. Thus, this paper contributes to the literature with a novel approach to deciding the transport mode under emissions cost and lead time variability. The model can be extended by studying the effect of lead time variability under other strategic transportation issues such as the modal split option, full truckload strategy, and demand consolidation strategy.
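
The sketch below gives a hedged numerical example of the kind of total-cost comparison described above, not the paper's exact model: per-mode costs, emissions, lead-time parameters, and the emissions cap are invented, and the expected stock-out uses the standard normal loss function for normally distributed lead-time demand.

```python
# Hedged sketch of a per-mode total-cost comparison under an emissions cap.
# All numbers are invented for illustration; this is not the paper's model.
import numpy as np
from scipy.stats import norm

modes = {  # transport cost / unit, emission / unit, mean lead time (days), lead-time std
    "air":   dict(transport=9.0, emission=5.0, lt_mean=1,  lt_std=0.2),
    "road":  dict(transport=4.0, emission=2.0, lt_mean=4,  lt_std=1.0),
    "rail":  dict(transport=2.5, emission=1.0, lt_mean=6,  lt_std=1.5),
    "water": dict(transport=1.5, emission=0.5, lt_mean=12, lt_std=3.0),
}
demand_per_day, demand_std = 100, 20        # assumed demand process
purchase, holding, penalty = 20.0, 0.1, 15.0
emission_cap = 3.0                          # max allowed emission per unit (assumed)

def total_cost(p):
    mu = demand_per_day * p["lt_mean"]                          # lead-time demand mean
    sigma = np.sqrt(p["lt_mean"] * demand_std**2
                    + (demand_per_day * p["lt_std"])**2)        # lead-time demand std
    reorder_point = mu + 1.28 * sigma                           # ~90% service level
    z = (reorder_point - mu) / sigma
    expected_short = sigma * (norm.pdf(z) - z * (1 - norm.cdf(z)))  # normal loss function
    return ((purchase + p["transport"]) * demand_per_day
            + holding * reorder_point + penalty * expected_short)

feasible = {m: total_cost(p) for m, p in modes.items() if p["emission"] <= emission_cap}
best = min(feasible, key=feasible.get)
print({m: round(c, 1) for m, c in feasible.items()}, "-> choose", best)
```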

Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection

Procedia PDF Downloads 410
19177 The Impact of Community Settlement on Leisure Time Use and Body Composition in Determining Physical Lifestyles among Women

Authors: Mawarni Mohamed, Sharifah Shahira A. Hamid

Abstract:

Leisure time is an important component in offsetting people's sedentary lifestyles. Women tend to benefit from leisure activities not only to reduce stress but also because they provide opportunities for well-being and self-satisfaction. This study was conducted to investigate body composition and leisure time use among women in Selangor and the influence of their community settlement. A total of 419 women aged 18-65 years were selected to participate in this study. Descriptive statistics, t-tests, and ANOVA were used to analyze the level of physical activity, and the relationship between leisure-time use and body composition was examined to characterize physical lifestyles. The results showed that women with normal body composition seemed to be involved in more passive activities than underweight or obese women. Thus, the study recommends that the government and other health and recreational agencies develop more places and activities suited to women's leisure preferences in their community settlements so that they become more interested in engaging in active recreational and physical activities.

Keywords: body composition, community settlement, leisure time, physical lifestyles

Procedia PDF Downloads 435
19176 Improving the Residence Time of a Rectangular Contact Tank by Varying the Geometry Using Numerical Modeling

Authors: Yamileth P. Herrera, Ronald R. Gutierrez, Carlos, Pacheco-Bustos

Abstract:

This research aims at the numerical modeling of a rectangular contact tank in order to improve the hydrodynamic behavior and the residence time of the water to be treated with the disinfecting agent. The methodology includes a hydraulic analysis of the tank to observe the fluid velocities, which will reveal low-velocity areas that may allow pathogenic agents to incubate, as well as high-velocity areas that may decrease the optimal contact time between the disinfecting agent and the microorganisms to be eliminated. Based on the results of the numerical model, the efficiency of the tank under the geometric and hydraulic conditions considered will be analyzed. This allows the performance of the tank to be improved before starting a construction process, thus avoiding unnecessary costs.

Keywords: contact tank, numerical models, hydrodynamic modeling, residence time

Procedia PDF Downloads 151
19175 Optimizing Privacy, Accuracy and Calibration in Deep Learning Models

Authors: Rizwan Rizwan

Abstract:

Differentially private (DP) training preserves data privacy but often leads to slower convergence and lower accuracy, along with notable miscalibration compared to non-private training. We analyze DP training through a continuous-time approach based on the neural tangent kernel (NTK). The NTK helps characterize per-sample (PS) gradient clipping and the incorporation of noise during DP training across arbitrary network architectures and loss functions. Our analysis reveals that noise addition impacts privacy risk exclusively, leaving convergence and calibration unaffected. In contrast, PS gradient clipping (in flat or layerwise styles) influences convergence as well as calibration, but not privacy risk. Models with a small clipping norm generally achieve optimal accuracy but exhibit poor calibration, making them less reliable. Conversely, DP models trained with a large clipping norm maintain similar accuracy and the same privacy guarantee, yet demonstrate notably improved calibration.
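
The two mechanisms discussed above, per-sample gradient clipping and Gaussian noise addition, are easy to see in code. The NumPy sketch below applies them to a toy logistic regression; the clipping norm, noise multiplier, and data are illustrative assumptions, not the paper's experimental setup.

```python
# Toy sketch of per-sample (PS) gradient clipping plus Gaussian noise, applied to
# logistic regression in NumPy. Clipping norm, noise multiplier and data are
# illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 10
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.1 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)
clip_norm, noise_mult, lr, batch = 1.0, 1.0, 0.1, 64

for step in range(200):
    idx = rng.choice(n, batch, replace=False)
    p = 1 / (1 + np.exp(-X[idx] @ w))
    per_sample_grads = (p - y[idx])[:, None] * X[idx]                # one gradient per sample
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads / np.maximum(1.0, norms / clip_norm)  # flat PS clipping
    noise = noise_mult * clip_norm * rng.normal(size=d)              # Gaussian mechanism
    grad = (clipped.sum(axis=0) + noise) / batch
    w -= lr * grad

acc = ((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean()
print("accuracy of the DP-trained toy model:", round(float(acc), 3))
```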

Keywords: deep learning, convergence, differential privacy, calibration

Procedia PDF Downloads 21
19174 Comparison of Quality of Life One Year after Bariatric Intervention: Systematic Review of the Literature with Bayesian Network Meta-Analysis

Authors: Piotr Tylec, Alicja Dudek, Grzegorz Torbicz, Magdalena Mizera, Natalia Gajewska, Michael Su, Tanawat Vongsurbchart, Tomasz Stefura, Magdalena Pisarska, Mateusz Rubinkiewicz, Piotr Malczak, Piotr Major, Michal Pedziwiatr

Abstract:

Introduction: Quality of life after bariatric surgery is an important factor when evaluating the final result of the treatment. Considering the vast range of surgical options, we tried to compare the available methods globally in terms of quality of life following surgery. The aim of the study is to compare quality of life one year after bariatric intervention using network meta-analysis methods. Material and Methods: We performed a systematic review according to PRISMA guidelines with a Bayesian network meta-analysis. Inclusion criteria were: studies comparing at least two methods of weight loss treatment, of which at least one is surgical, and assessment of quality of life one year after surgery by validated questionnaires. The primary outcome was quality of life one year after the bariatric procedure. The following aspects of quality of life were analyzed: physical, emotional, general health, vitality, role physical, social, mental, and bodily pain. All questionnaires were standardized and pooled to a single scale. Lifestyle intervention was considered the reference point. Results: An initial reference search yielded 5636 articles, of which 18 studies were evaluated. In the comparison of the total quality of life score, we observed that laparoscopic sleeve gastrectomy (LSG) (median (M): 3.606, Credible Interval 97.5% (CrI): 1.039; 6.191), laparoscopic Roux-en-Y gastric bypass (LRYGB) (M: 4.973, CrI: 2.627; 7.317), and open Roux-en-Y gastric bypass (RYGB) (M: 9.735, CrI: 6.708; 12.760) had better results than the other bariatric interventions in relation to lifestyle interventions. In the analysis of the physical aspects of quality of life, we noticed better results than the control intervention for LSG (M: 3.348, CrI: 0.548; 6.147) and LRYGB (M: 5.070, CrI: 2.896; 7.208), and the worst results for open RYGB (M: -9.212, CrI: -11.610; -6.844). Analyzing emotional aspects, we found better results than the control intervention for LSG, LRYGB, open RYGB, and laparoscopic gastric plication. In general health, better results were found for LSG (M: 9.144, CrI: 4.704; 13.470), LRYGB (M: 6.451, CrI: 10.240; 13.830), and single-anastomosis gastric bypass (M: 8.671, CrI: 1.986; 15.310), and the worst results for open RYGB (M: -4.048, CrI: -7.984; -0.305). In the social and vitality aspects of quality of life, better results than the control intervention were observed for LSG and LRYGB. We did not find any differences between bariatric interventions in the role physical, mental, and bodily pain aspects of quality of life. Conclusion: The network meta-analysis revealed that the best total quality of life scores one year after bariatric intervention were obtained after LSG, LRYGB, and open RYGB. In the physical and general health aspects, the worst quality of life was observed after the open RYGB procedure. The other interventions did not significantly affect quality of life after one year compared to dietary intervention.

Keywords: bariatric surgery, network meta-analysis, quality of life, one year follow-up

Procedia PDF Downloads 146
19173 Knowledge Management Strategies within a Corporate Environment of Papers

Authors: Daniel J. Glauber

Abstract:

Knowledge transfer between personnel could improve an organization's competitive advantage in the marketplace when supported by a strategic approach to knowledge management. A lack of information sharing between personnel can create knowledge transfer gaps while restricting decision-making processes. Knowledge transfer between personnel can potentially improve information sharing when an appropriate knowledge management strategy is implemented. An organization's capacity to gain more knowledge is aligned with the organization's prior or existing captured knowledge. This case study attempted to understand the overall influence of a knowledge management system (KMS) within the corporate environment on knowledge exchange between personnel. The significance of this study was to help understand how organizations can improve the return on investment (ROI) of a knowledge management strategy within a knowledge-centric organization. A qualitative descriptive case study was the research design selected for this study. Developing a knowledge management strategy acceptable at all levels of the organization requires cooperation in support of a common organizational goal, working with management and executive members to develop a protocol whereby knowledge transfer becomes a standard practice across multiple tiers of the organization. The knowledge transfer process could be made measurable by focusing on specific elements of the organizational process, including personnel transitions, to help reduce the time required to understand the job. The organization studied in this research acknowledged the need for improved knowledge management activities to help organize, retain, and distribute information throughout the workforce. Data produced from the study indicate three main themes raised by the participants: information management, organizational culture, and knowledge sharing within the workforce. These themes indicate a possible connection between an organization's KMS, the organization's culture, knowledge sharing, and knowledge transfer.

Keywords: knowledge transfer, management, knowledge management strategies, organizational learning, codification

Procedia PDF Downloads 427
19172 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed within the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift-invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns that must be optimized with respect to a training set that generally needs to be large enough for the model under consideration to generalize effectively. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows a large number of random filters to be used at the cost of one scalar unknown per filter. The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is required for the convolutions in the feed-forward procedure. The use of random kernels of varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between well-known CNN architectures and our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with a smaller number of unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
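
The forward pass of such a layer is sketched below in Python (a conceptual illustration, not the authors' implementation): fixed random kernels of several sizes each produce a response channel weighted by a single scalar, the only quantity that would be trained. Kernel sizes, counts, and the input are assumptions.

```python
# Conceptual sketch (not the authors' implementation): a layer built from fixed
# random convolution kernels of several sizes, each response channel weighted by
# one scalar that would be the only trainable quantity per filter.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.random((64, 64))                     # stand-in input feature map
kernel_sizes = [3, 7, 15]                        # multiple scales in one layer
filters = [rng.normal(size=(k, k)) / k for k in kernel_sizes for _ in range(4)]
scales = np.ones(len(filters))                   # one scalar weight per filter

def random_kernel_layer(x, filters, scales):
    """Each fixed random filter yields a channel scaled by its scalar weight."""
    responses = [w * convolve2d(x, f, mode="same") for w, f in zip(scales, filters)]
    return np.maximum(np.stack(responses), 0.0)  # ReLU over all channels

response = random_kernel_layer(image, filters, scales)
print("layer output shape:", response.shape, "filters used:", len(filters))
```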

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 266
19171 The Analysis of Changes in Urban Hierarchy of Isfahan Province in the Fifty-Year Period (1956-2006)

Authors: Hamidreza Joudaki, Yousefali Ziari

Abstract:

The emergence of cities and urbanism is one of the important processes that have affected human communities. Industrialization and urbanism developed alongside each other throughout history, and they have had a simple relationship for more than six thousand years, that is, since the appearance of the first cities. In the 18th century, with the emergence of industrial capitalism, urbanism underwent progressive development throughout the world. In Iran, each region's main city acted on its own, and the regional capital was the only center, controlling its realm as the regional city without any hierarchy. However, this pattern of governance has changed over the last three decades because of political, social, and economic changes that have altered rural-urban relationships; these changes have also altered the variety of functions of cities and the systematic urban network in Iran. Today, the urban system shows very large imbalances in both space and function. In Isfahan, the trend of urbanization is similar to that in the other parts of Iran, and the systematic urban hierarchy is neither suitable nor normal. This article is quantitative and analytical. The statistical population comprises the cities of Isfahan Province, and the changes in the urban network and its hierarchy over a fifty-year period (1956-2006) have been surveyed. The data have been analyzed using the rank-size model and the entropy index. In this article, the cities of Iran, the entropy factor of the primate city, and the urban hierarchy of Isfahan Province are presented. The urban share of the province's population increased from 55 percent to 83 percent (2006). The analytical data reflect a mismatch and imbalance between cities: the entropy index was 0.91 in 1956 and decreased to 0.63 in 2006. Isfahan city is the primate city throughout these periods. Moreover, the second and third cities show a population gap with respect to the other cities and, finally, do not follow the rank-size rule.
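
A small worked example of the two instruments used above, the rank-size rule and the normalized entropy index, is given below with invented city populations; for a perfectly balanced hierarchy the normalized entropy approaches 1, while primacy pushes it toward 0.

```python
# Worked example with invented populations (not the province's actual data):
# the rank-size rule and the normalized entropy index used in the analysis above.
import numpy as np

populations = np.array([1_900_000, 220_000, 180_000, 150_000, 120_000,
                        90_000, 70_000, 50_000])       # hypothetical city sizes
populations = np.sort(populations)[::-1]

# Rank-size rule: the expected population of the city of rank r is P1 / r.
ranks = np.arange(1, len(populations) + 1)
expected = populations[0] / ranks
print("observed vs rank-size expectation:")
for r, obs, exp in zip(ranks, populations, expected):
    print(f"  rank {r}: {int(obs):>9,}  vs  {int(exp):>9,}")

# Normalized entropy index: H = -sum(p_i * ln p_i) / ln(n), p_i = population share.
shares = populations / populations.sum()
entropy_index = -(shares * np.log(shares)).sum() / np.log(len(shares))
print("entropy index:", round(float(entropy_index), 2))
```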

Keywords: urban network, urban hierarchy, primate city, Isfahan province, urbanism, first cities

Procedia PDF Downloads 236
19170 Structural Protein-Protein Interactions Network of Breast Cancer Lung and Brain Metastasis Corroborates Conformational Changes of Proteins Lead to Different Signaling

Authors: Farideh Halakou, Emel Sen, Attila Gursoy, Ozlem Keskin

Abstract:

Protein–protein interactions (PPIs) mediate major biological processes in living cells. Studying PPIs as networks and analyzing their network properties contribute to the identification of genes and proteins associated with diseases. In this study, we created sub-networks of brain and lung metastasis from the primary tumor in breast cancer. To do so, we used seed genes known to cause metastasis and produced their interactions through a network-topology-based prioritization method named GUILDify. In order to have experimental support for the sub-networks, we further curated them using the STRING database. We proceeded by modeling structures for the interactions lacking complex forms in the Protein Data Bank (PDB). The functional enrichment analysis shows that KEGG pathways associated with the immune system and infectious diseases, particularly the chemokine signaling pathway, are important for lung metastasis. On the other hand, pathways related to genetic information processing are more involved in brain metastasis. The structural analyses of the sub-networks vividly demonstrated their differences in terms of the specific interfaces used in lung and brain metastasis. Furthermore, the topological analysis identified genes such as RPL5, MMP2, CCR5, and DPP4, which are already known to be associated with lung or brain metastasis. Additionally, we found 6 and 9 putative genes that are specific to lung and brain metastasis, respectively. Our analysis suggests that variations in the genes and pathways contributing to these different breast metastasis types may arise due to changes in the tissue microenvironment. To show the benefits of using structural PPI networks instead of the traditional node-and-edge presentation, we inspect two case studies showing the mutual exclusiveness of interactions and the effects of mutations on protein conformation, which lead to different signaling.
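
As a toy illustration of the topological analysis mentioned above (not the study's GUILDify/STRING pipeline), the sketch below ranks nodes of a small invented sub-network by degree and betweenness centrality; a few gene names from the abstract are included only as placeholders.

```python
# Toy sketch (not the study's pipeline): rank nodes of a small, invented PPI
# sub-network by degree and betweenness centrality to flag candidate hub genes.
import networkx as nx

edges = [("MMP2", "CCR5"), ("MMP2", "RPL5"), ("CCR5", "DPP4"),
         ("RPL5", "GENE_A"), ("DPP4", "GENE_A"), ("GENE_B", "GENE_A")]  # hypothetical
G = nx.Graph(edges)

degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)
ranked = sorted(G.nodes, key=lambda g: (degree[g], betweenness[g]), reverse=True)
print("candidate hub genes:", ranked[:3])
```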

Keywords: breast cancer, metastasis, PPI networks, protein conformational changes

Procedia PDF Downloads 223
19169 Analysis of Financial Time Series by Using Ornstein-Uhlenbeck Type Models

Authors: Md Al Masum Bhuiyan, Maria C. Mariani, Osei K. Tweneboah

Abstract:

In the present work, we develop a technique for estimating the volatility of financial time series by using stochastic differential equations. Taking the daily closing prices from developed and emerging stock markets as the basis, we argue that the incorporation of stochastic volatility into the time-varying parameter estimation significantly improves the forecasting performance via maximum likelihood estimation. Using the technique, we observe the long-memory behavior of the data sets and obtain the one-step-ahead predicted log-volatility with ±2 standard errors, despite the observed noise varying according to a normal mixture distribution, because the financial data studied are not fully Gaussian. Also, the Ornstein-Uhlenbeck process followed in this work simulates the financial time series well, and the good convergence properties of the algorithm make our estimation approach suitable for large data sets.
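
A minimal sketch of Ornstein-Uhlenbeck estimation by maximum likelihood is shown below: an OU path is simulated with its exact AR(1) discretization and the parameters are recovered from consecutive observations. The parameter values and sampling interval are arbitrary choices, not the authors' data or code.

```python
# Minimal sketch (not the authors' code): simulate an Ornstein-Uhlenbeck
# log-volatility process dX = theta*(mu - X) dt + sigma dW and recover the
# parameters by maximum likelihood through its exact AR(1) discretization.
import numpy as np

rng = np.random.default_rng(1)
theta_true, mu_true, sigma_true, dt, n = 2.0, -1.0, 0.5, 1 / 252, 5000

x = np.empty(n)
x[0] = mu_true
a = np.exp(-theta_true * dt)
sd = sigma_true * np.sqrt((1 - a**2) / (2 * theta_true))
for t in range(1, n):                      # exact discretization of the OU SDE
    x[t] = mu_true + (x[t - 1] - mu_true) * a + sd * rng.normal()

# Gaussian AR(1) maximum likelihood reduces to least squares on consecutive points.
X, Y = x[:-1], x[1:]
a_hat = np.cov(X, Y, bias=True)[0, 1] / np.var(X)
mu_hat = (Y.mean() - a_hat * X.mean()) / (1 - a_hat)
resid = Y - (mu_hat + a_hat * (X - mu_hat))
theta_hat = -np.log(a_hat) / dt
sigma_hat = np.sqrt(resid.var() * 2 * theta_hat / (1 - a_hat**2))
print(dict(theta=round(theta_hat, 2), mu=round(mu_hat, 2), sigma=round(sigma_hat, 3)))
```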

Keywords: financial time series, maximum likelihood estimation, Ornstein-Uhlenbeck type models, stochastic volatility model

Procedia PDF Downloads 222
19168 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach to deal with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study from the Alzheimer's Disease Neuroimaging Initiative is presented.

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 124
19167 Quality of Romanian Food Products on Rapid Alert System for Food and Feed Notifications

Authors: Silvius Stanciu

Abstract:

Romanian food products sold on European markets have been accused of several non-conformities in quality and safety. Most products incriminated in the recent period were those of animal origin, especially meat and meat products. This study proposes an analysis of the notifications made by network members through the Rapid Alert System for Food and Feed (RASFF) on products originating in Romania. As sources of information, the Rapid Alert System portal and the official communications of the National Sanitary Veterinary and Food Safety Authority were used. The research results showed that nearly a quarter of the notifications were rejected and withdrawn by the European authority. Although national authorities present these cases as success stories of national quality policies, the large number of notifications relative to the volume of exported products is worrying. The paper is of practical and applied importance for both the business environment and academia, laying the basis for wider research on the quality differences between Romanian and imported products.

Keywords: food, quality, RASFF, Rapid Alert System for Food and Feed, Romania

Procedia PDF Downloads 145
19166 A Medical Resource Forecasting Model for Emergency Room Patients with Acute Hepatitis

Authors: R. J. Kuo, W. C. Cheng, W. C. Lien, T. J. Yang

Abstract:

Taiwan is a hyperendemic area for the hepatitis B virus (HBV). The estimated total number of HBsAg carriers in the general population older than 20 years is more than 3 million. Therefore, a case record review was conducted from January 2003 to June 2007 for all patients with a diagnosis of acute hepatitis who were admitted to the Emergency Department (ED) of a well-known teaching hospital. The cost of the use of medical resources is defined as the total medical fee. In this study, principal component analysis (PCA) is first employed to reduce the number of dimensions. Support vector regression (SVR) and an artificial neural network (ANN) are then used to develop the forecasting model. A total of 117 patients met the inclusion criteria, and 61% of the patients involved in this study were hepatitis B related. The computational results show that the proposed PCA-SVR model performs better than the other algorithms compared. In conclusion, the Child-Pugh score and echogram can both be used to predict the cost of medical resources for patients with acute hepatitis in the ED.
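
A minimal sketch of a PCA-SVR pipeline of the kind described above is given below, fitted on synthetic stand-ins for the clinical features (e.g., Child-Pugh score, echogram findings) and the total medical fee; it is not the study's model or data.

```python
# Minimal sketch (not the study's model): a PCA-SVR pipeline predicting a
# cost-like target from placeholder clinical features. Feature count, PCA
# dimension and SVR settings are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_patients, n_features = 117, 12                 # 117 matches the reported sample size
X = rng.normal(size=(n_patients, n_features))    # placeholder clinical variables
cost = 3000 + 800 * X[:, 0] - 300 * X[:, 1] + 100 * rng.normal(size=n_patients)

model = make_pipeline(StandardScaler(), PCA(n_components=5), SVR(C=10.0))
scores = cross_val_score(model, X, cost, cv=5, scoring="r2")
print("cross-validated R^2 of the PCA-SVR sketch:", scores.round(2))
```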

Keywords: acute hepatitis, medical resource cost, artificial neural network, support vector regression

Procedia PDF Downloads 409