Search results for: modeling accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7098

1698 Presenting a Model in the Analysis of Supply Chain Management Components by Using Statistical Distribution Functions

Authors: Ramin Rostamkhani, Thurasamy Ramayah

Abstract:

Supply chain management is one of the most challenging topics facing today's industrial organizations, and scientists and researchers have published numerous practical articles and models in this field, especially in the last decade. This research considers, to the best of our knowledge for the first time, the modeling of supply chain management component data using well-known statistical distribution functions. Describing the behavior of supply chain data through the characteristics of statistical distribution functions is novel research that had not been published before this work was conducted. Through an analytical process that describes the probability density, cumulative distribution, reliability, and failure functions, a suitable statistical distribution function can be identified for each supply chain management component and then applied to predict the future behavior of that component's data. Providing a model that matches the best statistical distribution function to each supply chain management component would substantially change how the behavior of supply chain elements is analyzed in today's industrial organizations. As a final step, the results of the proposed model are demonstrated by comparing process capability indices before and after its implementation, and the approach is verified through the relevant assessment. The introduced approach can save the time and cost required to achieve organizational goals and can increase added value in the organization.
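As a minimal illustration of the kind of analysis described above — fitting a well-known statistical distribution to supply chain component data and deriving its probability density, cumulative distribution, reliability, and failure functions — one might sketch the following in Python with SciPy. The synthetic lead-time data and the choice of a Weibull distribution are assumptions for illustration only, not taken from the paper:

```python
import numpy as np
from scipy import stats

# Synthetic supplier lead-time data (days) -- a stand-in for a real
# supply chain management component's measurements.
rng = np.random.default_rng(0)
lead_times = rng.weibull(1.5, 500) * 10.0

# Fit a candidate distribution (Weibull here) to the component data.
shape, loc, scale = stats.weibull_min.fit(lead_times, floc=0)
dist = stats.weibull_min(shape, loc, scale)

# The four function views mentioned in the abstract:
t = np.linspace(0.1, 30.0, 200)
pdf = dist.pdf(t)                 # probability density function
cdf = dist.cdf(t)                 # cumulative distribution function
reliability = dist.sf(t)          # reliability function R(t) = 1 - F(t)
failure_rate = pdf / reliability  # failure (hazard) function h(t) = f(t)/R(t)
```

A goodness-of-fit test (e.g. Kolmogorov-Smirnov) over several candidate distributions would then select the best-matching function for each component.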

Keywords: analyzing, process capability indices, statistical distribution functions, supply chain management components

Procedia PDF Downloads 80
1697 Normalized Enterprise Architectures: Portugal's Public Procurement System Application

Authors: Tiago Sampaio, André Vasconcelos, Bruno Fragoso

Abstract:

The Normalized Systems Theory, which was designed to be applied to software architectures, provides a set of theorems, elements, and rules whose purpose is to enable evolution in information systems and to ensure that they are ready for change. To make that possible, this work's solution is to apply the Normalized Systems Theory to the domain of enterprise architectures, using ArchiMate. This application is achieved by adapting the elements of the theory into artifacts of the modeling language. The theorems are applied through the identification of the viewpoints to be used in the architectures, as well as the transformation of the theory's encapsulation rules into architectural rules. In this way, it is possible to create normalized enterprise architectures that fulfill the needs and requirements of the business. The solution was demonstrated on the Portuguese Public Procurement System. The Portuguese government aims to make this system as fair as possible, allowing every organization the same business opportunities: every economic operator should have access to all public tenders, which are published on any of the six existing platforms, independently of where the operator is registered. To make this possible, we applied our solution to the construction of two different architectures, both able to fulfill the requirements of the Portuguese government. One of them, TO-BE A, has a message broker that handles communication between the platforms; the other, TO-BE B, represents the scenario in which the platforms communicate with each other directly. Apart from these two architectures, we also present the AS-IS architecture, which captures the current behavior of the Public Procurement System.
Our evaluation is based on a comparison between the AS-IS and TO-BE architectures regarding the fulfillment of the rules and theorems of the Normalized Systems Theory, as well as several quality metrics.

Keywords: archimate, architecture, broker, enterprise, evolvable systems, interoperability, normalized architectures, normalized systems, normalized systems theory, platforms

Procedia PDF Downloads 342
1696 A Facile Nanocomposite of Graphene Oxide Reinforced Chitosan/Poly-Nitroaniline Polymer as a Highly Efficient Adsorbent for Extracting Polycyclic Aromatic Hydrocarbons from Tea Samples

Authors: Adel M. Al-Shutairi, Ahmed H. Al-Zahrani

Abstract:

Tea is a popular beverage drunk by millions of people throughout the globe. Tea has considerable health advantages, including antioxidant, antibacterial, antiviral, chemopreventive, and anticarcinogenic properties. As a result of environmental pollution (atmospheric deposition) and the production process, tea leaves may also contain a variety of dangerous substances, such as polycyclic aromatic hydrocarbons (PAHs). In this study, a graphene oxide reinforced chitosan/poly-nitroaniline polymer was prepared to develop a sensitive and reliable solid phase extraction (SPE) method for extracting PAH7 from tea samples, followed by high-performance liquid chromatography with fluorescence detection. The prepared adsorbent was validated in terms of linearity, limit of detection, limit of quantification, recovery (%), accuracy (%), and precision (%) for the determination of PAH7 (benzo[a]pyrene, benzo[a]anthracene, benzo[b]fluoranthene, chrysene, benzo[k]fluoranthene, dibenzo[a,h]anthracene, and benzo[g,h,i]perylene) in tea samples. Concentrations were determined in two types of tea commercially available in Saudi Arabia: black tea and green tea. The maximum mean Σ7PAHs was 68.23 ± 0.02 µg kg⁻¹ in black tea samples and 26.68 ± 0.01 µg kg⁻¹ in green tea samples; the minimum mean Σ7PAHs was 37.93 ± 0.01 µg kg⁻¹ in black tea samples and 15.26 ± 0.01 µg kg⁻¹ in green tea samples. The mean value of benzo[a]pyrene in black tea samples ranged from 6.85 to 12.17 µg kg⁻¹, with two samples exceeding the standard level (10 µg kg⁻¹) established by the European Union (EU), while in green tea it ranged from 1.78 to 2.81 µg kg⁻¹. Overall, lower levels of Σ7PAHs were detected in green tea samples than in black tea samples.

Keywords: polycyclic aromatic hydrocarbons, CS, PNA and GO, black/green tea, solid phase extraction, Saudi Arabia

Procedia PDF Downloads 86
1695 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review

Authors: Faisal Muhibuddin, Ani Dijah Rahajoe

Abstract:

This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research investigates various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, Naïve Bayes, and K-nearest neighbour. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research: a detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications of crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification.
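To make one of the reviewed techniques concrete, here is a minimal, self-contained sketch of K-means clustering applied to synthetic two-dimensional "incident coordinates". The data, the plain Lloyd's-algorithm implementation, and the two-hotspot scenario are illustrative assumptions, not drawn from any surveyed paper:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated synthetic spatial "incident hotspots".
rng = np.random.default_rng(1)
hotspot_a = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
hotspot_b = rng.normal([10.0, 10.0], 0.5, size=(50, 2))
labels, centroids = kmeans(np.vstack([hotspot_a, hotspot_b]), k=2)
```

On such well-separated clusters the algorithm recovers the two hotspots; real crime data would of course be noisier and higher-dimensional.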

Keywords: data mining, classification algorithm, naïve bayes, k-means clustering, k-nearest neighbor, crime, data analysis, systematic literature review

Procedia PDF Downloads 56
1694 A Vehicle Detection and Speed Measurement Algorithm Based on Magnetic Sensors

Authors: Panagiotis Gkekas, Christos Sougles, Dionysios Kehagias, Dimitrios Tzovaras

Abstract:

Cooperative intelligent transport systems (C-ITS) can greatly improve safety and efficiency in road transport by enabling communication not only between vehicles themselves but also between vehicles and infrastructure. For that reason, traffic surveillance systems on the road are of great importance. This paper focuses on the development of an on-road unit comprising several magnetic sensors for real-time vehicle detection, movement-direction determination, and speed measurement. Magnetic sensors detect and measure changes in the earth's magnetic field, and vehicles are composed of many parts with ferromagnetic properties. Depending on the sensors' sensitivity, the changes in the earth's magnetic field caused by passing vehicles can be detected and analyzed to extract information on the properties of moving vehicles. In this paper, we present a prototype algorithm for real-time, high-accuracy vehicle detection and speed measurement that can be implemented as a portable, low-cost solution, non-invasive to existing infrastructure, with the potential to replace existing high-cost implementations. The paper describes the algorithm and presents results from its preliminary lab testing in a near-real-world environment. Acknowledgments: Work presented in this paper was co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation (call RESEARCH–CREATE–INNOVATE) under contract no. Τ1EDK-03081 (project ODOS2020).
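The paper's own algorithm is not reproduced here, but a common way to estimate speed from two magnetic sensors a known distance apart is to cross-correlate their signatures and convert the peak lag into a transit time. The sensor spacing, sampling rate, and Gaussian pulse shapes below are assumptions for illustration:

```python
import numpy as np

def estimate_speed(sig_a, sig_b, spacing_m, fs_hz):
    """Estimate vehicle speed from the magnetic signatures recorded by two
    sensors placed spacing_m apart along the lane.  The transit time is the
    lag at the peak of the signatures' cross-correlation."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    lag = np.argmax(np.correlate(b, a, mode="full")) - (len(a) - 1)
    if lag <= 0:
        return None  # vehicle moving the other way, or no usable lag
    return spacing_m / (lag / fs_hz)

# Synthetic signatures: the same magnetic pulse arrives at sensor B 50 ms
# after sensor A; with 1 m spacing that corresponds to 20 m/s.
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
sig_a = np.exp(-((t - 0.30) / 0.02) ** 2)
sig_b = np.exp(-((t - 0.35) / 0.02) ** 2)
speed = estimate_speed(sig_a, sig_b, spacing_m=1.0, fs_hz=fs)
```

A positive lag also yields the movement direction for free: swapping the sensor roles flips the sign of the peak lag.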

Keywords: magnetic sensors, vehicle detection, speed measurement, traffic surveillance system

Procedia PDF Downloads 110
1693 Application of Electro-Optical Hybrid Cables in Horizontal Well Production Logging

Authors: Daofan Guo, Dong Yang

Abstract:

For decades, well logging with coiled tubing relied solely on surface data such as pump pressure, wellhead pressure, depth counter, and weight indicator readings. While such data has served the oil industry well, modern smart logging utilizes real-time downhole information, which increases operational efficiency and optimizes intervention quality. For example, downhole pressure, temperature, and depth measurement data can be transmitted in real time through an electro-optical hybrid cable in the coiled tubing to surface operators. This paper introduces the unique structural features and various applications of electro-optical hybrid cables deployed downhole with the help of coiled tubing technology. The fiber optic elements in the cable enable optical communications and distributed measurements, such as distributed temperature and acoustic sensing. The electrical elements provide continuous surface power for downhole tools, eliminating the limitations of traditional batteries, such as temperature, operating time, and safety concerns; they also enable cable telemetry operation of downhole tools. Both power supply and signal transmission are thus integrated into a single electro-optical hybrid cable: downhole information captured by electrical sensors and distributed optical sensing technologies travels up through an optical fiber to the surface, which greatly improves the accuracy of measurement data transmission.

Keywords: electro-optical hybrid cable, underground photoelectric composite cable, seismic cable, coiled tubing, real-time monitoring

Procedia PDF Downloads 129
1692 An Approach for Evolving a Reliable Low-Power Ultra-Wide Band Transmitter with Capacitive Sensing

Authors: N. Revathy, C. Gomathi

Abstract:

This work uses a tunable capacitor as a sensor to vary the control voltage of a voltage-controlled oscillator (VCO) in an ultra-wide band (UWB) transmitter, with a particular focus on power consumption. Capacitive sensing was chosen because it offers low temperature drift, high sensitivity, and robustness. Previous works report resistive sensing in a VCO without targeting power consumption, whereas this work addresses the power consumption of capacitive sensing in a UWB transmitter. The UWB transmitter used here employs direct modulation of pulses. The VCO, which is the heart of the transmitter's pulse generator, works on the principle of voltage-to-frequency conversion. It has an odd number of inverter stages driven by the control-voltage input, which is now supplied by a variable capacitor, and the number of buffer stages is reduced from previous work to maintain the oscillating frequency. The VCO is also designed to consume low power. Attention then turns to the choice of variable capacitor: a compact model of a capacitor with transient characteristics is designed, with a movable dielectric and multiple metal membranes. Previous modeling of capacitor transient characteristics used one movable membrane and one fixed membrane; this work instead aims at a membrane with a wide tuning range suitable for a UWB transmitter, because a capacitive sensor in a UWB transmitter needs to be tuned so that the design satisfies FCC regulations.

Keywords: capacitive sensing, ultra wide band transmitter, voltage control oscillator, FCC regulation

Procedia PDF Downloads 383
1691 Automated Fact-Checking by Incorporating Contextual Knowledge and Multi-Faceted Search

Authors: Wenbo Wang, Yi-Fang Brook Wu

Abstract:

The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, the state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study introduces a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context from general-purpose, comprehensive, and authoritative data; 2) developing a search function to automatically select relevant, new, and credible references; and 3) attending to the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, improves fact-checking performance, and 2) SAC with auto-selected references outperforms existing fact-checking approaches with manually selected references. Future directions of this study include I) exploring knowledge graphs in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and II) exploring semantic relations in claims and references to further enhance fact-checking.

Keywords: fact checking, claim verification, deep learning, natural language processing

Procedia PDF Downloads 48
1690 Evaluation of the Need for Seismic Retrofitting of the Foundation of a Five-Story Steel Building Due to the Addition of a New Story

Authors: Mohammadreza Baradaran, F. Hamzezarghani

Abstract:

Earthquakes occur every year in different parts of the world with varying strengths, and thousands of people lose their lives to this natural phenomenon. Beyond the passage of time, environmental conditions, and general wear, one reason buildings are destroyed by earthquakes is a change in the building's use accompanied by changes to its structure and skeleton. A large number of structures located in earthquake-prone areas were designed according to old, now-outdated seismic design regulations. In addition, many of the major earthquakes of recent years emphasize the need for retrofitting to reduce earthquake hazards. Seismic retrofitting of existing structures is one of the most effective methods for reducing hazards and compensating for the lack of resistance caused by existing weaknesses. In this article, the foundation of a five-story steel building with a moment-frame system was evaluated seismically, and the effect of adding a story to this building was evaluated and analyzed. The considered building has a steel skeleton and a joist and clay-block roof; after the addition of a story it becomes a six-story building with a foundation area of 1416 square meters and a height of 18.95 meters above ground level. After analysis of the foundation model, the behavior of the soil under the foundation and of the foundation elements themselves was evaluated; the foundation's deformations and the stresses in the soil beneath it were determined for several load combinations through repeated modeling in the SAFE software, and finally the need for retrofitting of the building's foundation was determined.

Keywords: seismic, rehabilitation, steel building, foundation

Procedia PDF Downloads 266
1689 StockTwits Sentiment Analysis on Stock Price Prediction

Authors: Min Chen, Rubi Gupta

Abstract:

Understanding and predicting stock market movements is a challenging problem. It is believed that stock markets are partially driven by public sentiment, which has led to numerous research efforts to predict stock market trends using public sentiments expressed on social media such as Twitter, but with limited success. Recently, the microblogging website StockTwits has become increasingly popular for users to share discussions and sentiments about stocks and the financial market. In this project, we analyze the text content of StockTwits tweets and extract financial sentiment using text featurization and machine learning algorithms. StockTwits tweets are first pre-processed using techniques including stopword removal, special character removal, and case normalization to remove noise. Features are extracted from these preprocessed tweets through a text featurization process using bag-of-words, N-gram models, TF-IDF (term frequency-inverse document frequency), and latent semantic analysis. Machine learning models are then trained to classify each tweet's sentiment as positive (bullish) or negative (bearish). The correlation between the aggregated daily sentiment and the daily stock price movement is then investigated using Pearson's correlation coefficient. Finally, the sentiment information is applied together with time series stock data to predict stock price movement. Experiments on five companies (Apple, Amazon, General Electric, Microsoft, and Target) over a nine-month period demonstrate the effectiveness of our approach in improving prediction accuracy.
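The pipeline described above — TF-IDF featurization, a bullish/bearish classifier, and a Pearson correlation between daily sentiment and price moves — can be sketched in a few lines with scikit-learn and SciPy. The tiny hand-made corpus and the toy daily numbers are assumptions for illustration, not the study's data:

```python
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny hand-made StockTwits-like corpus (illustrative only).
tweets = [
    "great earnings beat, going long",
    "terrible guidance, selling everything",
    "breakout incoming, very bullish here",
    "weak chart, bearish, expecting a drop",
]
labels = [1, 0, 1, 0]  # 1 = bullish, 0 = bearish

# Featurize with TF-IDF over unigrams and bigrams, then fit a classifier.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english",
                             ngram_range=(1, 2))
X = vectorizer.fit_transform(tweets)
clf = LogisticRegression().fit(X, labels)
pred = clf.predict(vectorizer.transform(["feeling very bullish on this stock"]))

# Correlate aggregated daily sentiment with daily price moves (toy numbers).
daily_sentiment = [0.8, -0.2, 0.5, -0.6, 0.1]
daily_return    = [1.2, -0.4, 0.9, -1.1, 0.3]
r, p_value = pearsonr(daily_sentiment, daily_return)
```

In the study itself, N-gram and latent-semantic-analysis features and larger corpora would replace this toy setup, but the shape of the computation is the same.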

Keywords: machine learning, sentiment analysis, stock price prediction, tweet processing

Procedia PDF Downloads 138
1688 Comparison of the Factor of Safety and Strength Reduction Factor Values from Slope Stability Analysis of a Large Open Pit

Authors: James Killian, Sarah Cox

Abstract:

Stability criteria in geotechnical engineering are the way the results of analyses are conveyed and sensitivities and risk assessments are performed. Historically, the primary stability criterion for slope design has been the Factor of Safety (FOS) from a limit-equilibrium calculation. Increasingly, the value derived from Strength Reduction Factor (SRF) analysis is being used as the criterion for stability analysis. The purpose of this work was to study in detail the relationship between SRF values produced by a numerical modeling technique and the traditional FOS values produced by Limit Equilibrium Method (LEM) analyses. This study utilized a model of a 3000-foot-high slope with a 45-degree slope angle, assuming a perfectly plastic Mohr-Coulomb constitutive model with the high cohesion and friction angle values typical of a large hard rock mine slope. A number of variables affecting the value of the SRF in a numerical analysis were tested, including zone size, in-situ stress, tensile strength, and dilation angle. This paper demonstrates that in most cases, SRF values are lower than the corresponding LEM FOS values. Modeled zone size has the greatest effect on the estimated SRF value, which can vary as much as 15% to the downside compared to FOS. For consistency when using SRF as a stability criterion, the authors suggest that numerical model zone sizes should not be smaller than about 1% of the overall problem slope height and should not be greater than 2%. Future work could include investigations of the effect of anisotropic strength assumptions or advanced constitutive models.
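The relationship between the two criteria can be illustrated on the simplest possible case, a single planar sliding surface, where the strength reduction factor found by bisection coincides exactly with the limit-equilibrium FOS. All input values below are illustrative assumptions; a real open-pit analysis would use a numerical stress model, which is precisely where the divergences the paper reports arise:

```python
import math

def planar_fos(cohesion, phi_deg, weight, beta_deg, length):
    """Limit-equilibrium FOS for a planar slide: resisting forces
    (cohesion + friction on the plane) over the driving force."""
    beta = math.radians(beta_deg)
    resisting = (cohesion * length
                 + weight * math.cos(beta) * math.tan(math.radians(phi_deg)))
    return resisting / (weight * math.sin(beta))

def strength_reduction_factor(cohesion, phi_deg, weight, beta_deg, length):
    """Bisect for the factor F that brings the reduced Mohr-Coulomb strengths
    c/F and atan(tan(phi)/F) to incipient failure (FOS == 1)."""
    lo, hi = 0.01, 100.0
    for _ in range(200):
        f = 0.5 * (lo + hi)
        phi_red = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / f))
        if planar_fos(cohesion / f, phi_red, weight, beta_deg, length) > 1.0:
            lo = f
        else:
            hi = f
    return 0.5 * (lo + hi)

fos = planar_fos(25e3, 35.0, 1e6, 45.0, 10.0)
srf = strength_reduction_factor(25e3, 35.0, 1e6, 45.0, 10.0)
```

Because both strength components are divided by the same factor, the reduced FOS here is simply FOS/F, so bisection recovers the LEM value exactly; zone size, in-situ stress, and constitutive effects only enter once the problem is solved numerically.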

Keywords: FOS, SRF, LEM, comparison

Procedia PDF Downloads 287
1687 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracies and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the data are conflicting. In this study, we present a framework that exploits the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed for combination, so that computational efficiency could be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
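Two building blocks of the framework — the closed-form KL divergence between Gaussians and a CUSUM test on the log-likelihood ratio — can be sketched for a single sensor. The Gaussian models, the change point at sample 100, and the threshold are illustrative assumptions; the paper's evidence-combination step across sensors is not reproduced here:

```python
import numpy as np

def gaussian_kl(mu0, var0, mu1, var1):
    """Closed-form KL( N(mu0,var0) || N(mu1,var1) )."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def cusum_detect(samples, mu0, mu1, sigma, threshold):
    """Classic CUSUM on the log-likelihood ratio of the post-change vs
    pre-change Gaussian models; returns the first index where the statistic
    crosses the threshold, or -1 if no change is declared."""
    s = 0.0
    for i, x in enumerate(samples):
        llr = (mu1 - mu0) / sigma**2 * (x - 0.5 * (mu0 + mu1))
        s = max(0.0, s + llr)
        if s > threshold:
            return i
    return -1

# 100 pre-change samples from N(0,1), then a shift to N(2,1) at index 100.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
change_idx = cusum_detect(data, mu0=0.0, mu1=2.0, sigma=1.0, threshold=8.0)
```

The expected drift of the CUSUM statistic after the change equals the KL divergence per sample (here 2 nats), so the detection delay is roughly threshold/KL samples.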

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data

Procedia PDF Downloads 320
1686 Prediction of B-Cell Epitope for 24 Mite Allergens: An in Silico Approach towards Epitope-Based Immune Therapeutics

Authors: Narjes Ebrahimi, Soheila Alyasin, Navid Nezafat, Hossein Esmailzadeh, Younes Ghasemi, Seyed Hesamodin Nabavizadeh

Abstract:

Allergy vaccines are of great importance in allergen-specific immunotherapy. In recent years, B-cell epitope-based vaccines have attracted considerable attention, and the prediction of epitopes is crucial to designing these types of allergy vaccines. B-cell epitopes may be linear or conformational, and the prerequisite for identifying conformational epitopes is information about the allergens' tertiary structures. Bioinformatics approaches have paved the way towards the design of epitope-based allergy vaccines through the prediction of tertiary structures and epitopes. Mite allergens are among the major allergy contributors: several mite allergens can elicit allergic reactions, yet their structures and epitopes are not well established. Therefore, B-cell epitopes of various groups of mite allergens (24 allergens in 6 allergen groups) were predicted in the present work. Tertiary structures of the 17 allergens with unknown structure were predicted and refined with the RaptorX and GalaxyRefine servers, respectively. The predicted structures were further evaluated by the Rampage, ProSA-web, ERRAT, and Verify 3D servers. Linear and conformational B-cell epitopes were identified with the ElliPro, Bcepred, and DiscoTope 2 servers. To improve the accuracy level, consensus epitopes were selected: fifty-four conformational and 133 linear consensus epitopes were predicted. Furthermore, overlapping epitopes in each allergen group were defined following sequence alignment of the allergens in each group, and the predicted epitopes were compared with experimentally identified epitopes. The presented results provide valuable information for further studies on allergy vaccine design.

Keywords: B-cell epitope, immunotherapy, in silico prediction, mite allergens, tertiary structure

Procedia PDF Downloads 150
1685 Non-Uniform Filter-Bank-Based Minimum Distance to Riemannian Mean Classification in Motor Imagery Brain-Computer Interface

Authors: Ping Tan, Xiaomeng Su, Yi Shen

Abstract:

The motion intention in a motor imagery brain-computer interface is identified by classifying the event-related desynchronization (ERD) and event-related synchronization (ERS) characteristics of the sensorimotor rhythm (SMR) in EEG signals. When the subject imagines moving different limbs or different body parts, the rhythm components and bandwidth change, and this varies from person to person. Finding each subject's effective sensorimotor frequency band is therefore directly related to the classification accuracy of the brain-computer interface. To solve this problem, this paper proposes a Minimum Distance to Riemannian Mean classification method based on non-uniform filter banks. During the training phase, the EEG signals are first decomposed into multiple signals of different bandwidths using multiple band-pass filters; the spatial covariance characteristics of each frequency band signal are then computed as feature vectors. These feature vectors are classified by the MDRM (Minimum Distance to Riemannian Mean) method, and cross-validation is employed to obtain the effective sensorimotor frequency bands. During the test phase, the test signals are filtered by the band-pass filters of the effective sensorimotor frequency bands, and the extracted spatial covariance feature vectors are classified using MDRM. Experiments on the BCI Competition IV 2a dataset show that the proposed method is superior to other classification methods.
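The classification core of MDRM — the affine-invariant Riemannian distance between symmetric positive-definite covariance matrices, and assignment to the nearest class mean — can be sketched as below. The band-pass filter bank and the iterative geometric (Karcher) mean are omitted, and the toy class-mean matrices are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import eigh

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B:
    sqrt(sum_i log^2 lambda_i), with lambda_i the eigenvalues of A^{-1} B."""
    lam = eigh(B, A, eigvals_only=True)  # generalized eigenvalues of (B, A)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

def mdrm_predict(trial_cov, class_means):
    """Assign the trial covariance matrix to the class whose mean is
    nearest in the Riemannian metric."""
    dists = [riemannian_distance(mean, trial_cov) for mean in class_means]
    return int(np.argmin(dists))

# Toy 3x3 SPD "class means" and a trial covariance close to class 0.
mean_a = np.eye(3)
mean_b = 4.0 * np.eye(3)
trial = 1.1 * np.eye(3)
label = mdrm_predict(trial, [mean_a, mean_b])  # -> 0
```

In the full method, one such classifier runs per frequency band, and cross-validation over the bands picks out the subject's effective sensorimotor range.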

Keywords: non-uniform filter banks, motor imagery, brain-computer interface, minimum distance to Riemannian mean

Procedia PDF Downloads 105
1684 A Study on the Coefficient of Transforming Relative Lateral Displacement under Linear Analysis of Structure to Its Real Relative Lateral Displacement

Authors: Abtin Farokhipanah

Abstract:

In recent years, analysis of structures has been based on ductility design, in contrast to strength design, when assessing earthquake effects on structures. The ASCE 7-10 code specifies that relative drifts calculated from a linear analysis be amplified by Cd, the Deflection Amplification Factor, to obtain the real relative drifts that would otherwise be calculated using nonlinear analysis; this lateral drift must be limited to the code bounds. Calculating this amplification factor for different structures, comparing the results with the ASCE 7-10 code, and offering the best coefficient are the purposes of this research. To that end, short and tall steel building structures with various earthquake-resistant systems are surveyed in linear and nonlinear analyses, so that these questions can be answered: 1. Does the Response Modification Coefficient (R) have a meaningful relation to the Deflection Amplification Factor? 2. Do structure height, seismic zone, response spectrum, and similar parameters affect the conversion coefficient from linear-analysis drift to the real drift of the structure? The procedure used to conduct this research includes: (a) study of earthquake-resistant systems, (b) selection of systems and modeling, (c) analysis of the modeled systems using linear and nonlinear methods, (d) calculation of the conversion coefficient for each system, and (e) comparison of the conversion coefficients with those offered by the code and drawing conclusions.
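The conversion the abstract examines can be stated compactly: ASCE 7-10 (Eq. 12.8-15) estimates the design (inelastic) story drift by amplifying the elastic-analysis drift, delta_x = Cd * delta_xe / Ie, and then checks it against an allowable drift. A small sketch, where the numeric values (Cd = 5.5 for a special steel moment frame, a 0.020 h_sx drift limit, the story height, and the elastic drift) are illustrative:

```python
def design_story_drift(elastic_drift, cd, ie=1.0):
    """ASCE 7-10 Eq. 12.8-15: delta_x = Cd * delta_xe / Ie."""
    return cd * elastic_drift / ie

# Example: special steel moment frame (Cd = 5.5 per ASCE 7-10 Table 12.2-1)
# with a 10 mm story drift from linear analysis and Ie = 1.0.
drift = design_story_drift(10.0, cd=5.5)

# Check against a typical allowable drift of 0.020 * story height
# (ASCE 7-10 Table 12.12-1, Risk Category I/II, "all other structures").
story_height_mm = 3500.0
ok = drift <= 0.020 * story_height_mm
```

The research question then amounts to asking whether this single tabulated Cd, tied to R per system, matches the ratio of nonlinear to linear drifts actually observed across building heights and seismic settings.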

Keywords: ASCE07-10 code, deflection amplification factor, earthquake engineering, lateral displacement of structures, response modification coefficient

Procedia PDF Downloads 344
1683 Drilling Quantification and Bioactivity of Machinable Hydroxyapatite-Yttrium Phosphate Bioceramic Composite

Authors: Rupita Ghosh, Ritwik Sarkar, Sumit K. Pal, Soumitra Paul

Abstract:

The use of hydroxyapatite bioceramics as restorative implants is widely known. These materials can be manufactured to a particular shape by a pressing and sintering route. However, machining processes are still required to give those implants a near-net shape and to ensure dimensional and geometrical accuracy. In this context, optimizing the machining parameters is important both for understanding the machinability of the materials and for reducing production cost. In the present study, a method has been optimized to produce a true particulate drilled composite of hydroxyapatite-yttrium phosphate. The phosphates are used in varying ratios for a comparative study of the effects on flexural strength, hardness, machining (drilling) parameters, and bioactivity. The maximum flexural strength and hardness attained by the composite are 46.07 MPa and 1.02 GPa, respectively. Drilling is done with a conventional radial drilling machine fitted with a dynamometer, using high-speed steel (HSS) and solid carbide (SC) drills. The effects of variation in drilling parameters (cutting speed and feed), cutting tool, and batch composition on torque, thrust force, and tool wear are studied. It is observed that the thrust force and torque vary greatly with increases in speed, feed, and yttrium phosphate content in the composite. Significant differences in thrust and torque are also noticed when the drill is changed. The bioactivity study is done in simulated body fluid (SBF) for up to 28 days. The growth of bone-like apatite becomes denser with the number of days for all compositions of the composite and is comparable to that of pure hydroxyapatite.

Keywords: bioactivity, drilling, hydroxyapatite, yttrium phosphate

Procedia PDF Downloads 285
1682 Blood Flow Simulations to Understand the Role of the Distal Vascular Branches of the Carotid Artery in Stroke Prediction

Authors: Muhsin Kizhisseri, Jorg Schluter, Saleh Gharie

Abstract:

Atherosclerosis is the main cause of stroke, one of the deadliest diseases in the world. The carotid artery supplying the brain is a prominent location for atherosclerotic progression, which hinders blood flow into the brain. Including computational fluid dynamics (CFD) in the diagnosis cycle to understand the hemodynamics of a patient-specific carotid artery can give insights into stroke prediction. Realistic outlet boundary conditions are an essential part of such numerical simulations and a major factor in determining the accuracy of the CFD results. Windkessel-model-based outlet boundary conditions can capture realistic characteristics of the distal vascular branches of the carotid artery, such as their resistance to blood flow and the compliance of the distal arterial walls. This study aims to find the most influential distal branches of the carotid artery by using the Windkessel model parameters in the outlet boundary conditions. A parametric study of the Windkessel model parameters can include the geometrical features of the distal branches, such as radius and length. Incorporating variations of the geometrical features of the major distal branches, such as the middle cerebral artery, anterior cerebral artery, and ophthalmic artery, through the Windkessel model can help identify the most influential distal branch of the carotid artery. The results of this study can help physicians and stroke neurologists make a more detailed and accurate judgment of the patient's condition.
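To show what a Windkessel outlet boundary condition computes, here is a minimal three-element Windkessel sketch: a proximal resistance in series with a parallel distal resistance and compliance, integrated with forward Euler. The parameter values, time step, and constant inflow are illustrative assumptions, not values from the study:

```python
import numpy as np

def windkessel_3element(q_in, dt, r_prox, r_dist, compliance, p0=0.0):
    """Three-element Windkessel outlet model.
    Compliance pressure Pc:  dPc/dt = (Q - Pc/Rd) / C   (forward Euler)
    Outlet pressure:         P = Pc + Q * Rp"""
    pc = p0
    p = np.empty(len(q_in))
    for i, q in enumerate(q_in):
        pc += dt * (q - pc / r_dist) / compliance
        p[i] = pc + q * r_prox
    return p

# Constant inflow: the outlet pressure should settle at Q * (Rp + Rd),
# approached with time constant Rd * C.
q = np.full(20000, 5.0)  # flow, in illustrative units
p = windkessel_3element(q, dt=1e-3, r_prox=0.1, r_dist=1.0, compliance=0.1)
```

In a CFD simulation, q would be the instantaneous flow leaving each outlet, and varying Rp, Rd, and C per branch (for example, from the branch radius and length) is exactly the parametric study the abstract describes.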

Keywords: stroke, carotid artery, computational fluid dynamics, patient-specific, Windkessel model, distal vascular branches

Procedia PDF Downloads 201
1681 A Graph Library Development Based on the Service-Oriented Architecture: Used for Representation of the Biological Systems in the Computer Algorithms

Authors: Mehrshad Khosraviani, Sepehr Najjarpour

Abstract:

Considering the usage of graph-based approaches in systems and synthetic biology, and the various types of graphs they employ, a comprehensive graph library based on the three-tier architecture (3TA) was previously introduced for full representation of biological systems. Despite having proposed that 3TA-based graph library, the following three reasons motivated us to redesign it based on the service-oriented architecture (SOA): (1) Maintaining the accuracy of the data related to an input graph (including its edges, vertices, topology, etc.) without involving the end user: since, in the case of 3TA, the library files are available to end users, they may be used incorrectly, and consequently invalid graph data may be provided to the computer algorithms. With SOA, however, the graph registration operation is specified as a service by encapsulating the library files. In other words, all the control operations needed to register valid data become the responsibility of the services. (2) Partitioning the library product into different parts: with 3TA, the library was provided as a whole, whereas here the product can be divided into smaller ones, such as an AND/OR graph drawing service, each of which can be provided individually. As a result, the end user will be able to add any part of the library product to a project, instead of all features. (3) Reducing complexity: while 3TA requires several additional libraries for connecting to the database, in the SOA-based graph library the responsibility for providing the needed resources rests with the services themselves. Therefore, the end user of the graph library is not exposed to its complexity.
In the end, in order to make the library easier to control in the system, and to restrict end users from accessing the files, the service-oriented architecture (SOA) was preferred over the three-tier architecture (3TA), and the previously proposed graph library was redeveloped based on it.
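A minimal sketch of the encapsulation argument in point (1): graph registration is exposed only as a service, so validation of vertices and edges happens inside the service rather than in end-user code. The class and method names here are hypothetical illustrations, not the library's actual API.

```python
class GraphRegistrationService:
    """Hypothetical service facade: the library internals are encapsulated,
    so invalid graph data can never reach the downstream algorithms."""

    def __init__(self):
        self._graphs = {}

    def register(self, name, vertices, edges):
        # All control operations for registering valid data live here,
        # inside the service, not in end-user code.
        vset = set(vertices)
        if len(vset) != len(vertices):
            raise ValueError("duplicate vertices")
        for u, v in edges:
            if u not in vset or v not in vset:
                raise ValueError(f"edge ({u}, {v}) references unknown vertex")
        self._graphs[name] = (list(vertices), list(edges))
        return name
```

An end user calls only `register`; a malformed graph is rejected with an error instead of silently corrupting the algorithm's input.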

Keywords: Bio-Design Automation, Biological System, Graph Library, Service-Oriented Architecture, Systems and Synthetic Biology

Procedia PDF Downloads 299
1680 Preparing a Library of Abnormal Masses for Designing a Long-Lasting Anatomical Breast Phantom for Ultrasonography Training

Authors: Nasibullina A., Leonov D.

Abstract:

The ultrasonography method is actively used for the early diagnosis of various lesions in the human body, including the mammary gland. The incidence of breast cancer has increased by more than 20%, and mortality by 14%, since 2008. The correctness of the diagnosis often directly depends on the qualifications and experience of the diagnostic medical sonographer. That is why special attention should be paid to the practical training of future specialists. Anatomical phantoms are excellent teaching tools because they accurately imitate the characteristics of real human tissues and organs. The purpose of this work is to create a breast phantom for practicing ultrasound diagnostic skills in grayscale and elastography imaging, as well as ultrasound-guided biopsy sampling. We used silicone-like compounds ranging from 3 to 17 Shore hardness units to simulate soft tissue and lesions. Impurities at experimentally selected concentrations were added to give the phantom the necessary attenuation and reflection parameters. We used 3D modeling programs and 3D printing with PLA plastic to create the casting mold. We developed a breast phantom with inclusions of varying shape, elasticity, and echogenicity. After testing the created phantom in B-mode and elastography mode, we surveyed 19 participants on how realistic the sonograms of the phantom were. The results showed that the closest to real was the model of the cyst, scoring 9.5 on a 0-10 similarity scale. Thus, the developed breast phantom can be used for ultrasonography, elastography, and ultrasound-guided biopsy training.

Keywords: breast ultrasound, mammary gland, mammography, training phantom, tissue-mimicking materials

Procedia PDF Downloads 76
1679 Evaluation System of Spatial Potential Under Bridges in High Density Urban Areas of Chongqing Municipality and Applied Research on Suitability

Authors: Xvelian Qin

Abstract:

Urban "organic renewal" based on the development of existing resources in high-density urban areas has become the mainstream of urban development in the new era. As an important stock resource of public space in high-density urban areas, the space under bridges can, through value remodeling, effectively alleviate the shortage of public space resources. However, due to the lack of an evaluation step in the process of under-bridge space renewal, a large number of under-bridge space resources have been left idle, facing the problems of low space conversion efficiency, lack of accuracy in development decision-making, and poor fit between functional positioning and citizens' needs. Therefore, it is of great practical significance to construct an evaluation system for under-bridge space renewal potential and to explore renewal modes. In this paper, some of the under-bridge spaces in the main urban area of Chongqing are selected as the research object. Through questionnaire interviews with users of well-built under-bridge spaces, three categories of potential evaluation factors (objective demand, construction feasibility, and construction suitability) are selected, comprising six levels (land resources, infrastructure, accessibility, safety, space quality, and ecological environment) and twenty-two indexes. The analytic hierarchy process and expert scoring method are used to determine the index weights, construct the potential evaluation system for under-bridge space in high-density urban areas of Chongqing, and explore suitable directions for its renewal and utilization.
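The analytic hierarchy process step described above can be sketched as follows: given a pairwise-comparison matrix over the evaluation indexes, the geometric-mean method approximates the principal-eigenvector priority weights. The matrix values below are illustrative, not the paper's survey data.

```python
import math

def ahp_weights(pairwise):
    """Geometric-mean approximation of AHP priority weights from a square
    pairwise-comparison matrix (entry [i][j] = how much index i outranks j)."""
    n = len(pairwise)
    gms = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]
```

For a perfectly consistent matrix the geometric-mean result coincides with the exact eigenvector; for real expert-scoring matrices it is a close, standard approximation.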

Keywords: space under bridge, potential evaluation, high density urban area, renewal and utilization

Procedia PDF Downloads 64
1678 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements

Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono

Abstract:

The centre of rotation of the hip joint is needed for an accurate simulation of joint performance in many applications, such as pre-operative planning simulation, human gait analysis, and hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location using functional methods is soft tissue artefact, due to the relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact, as the accuracy of functional methods depends upon it. Various studies have measured soft tissue artefact invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the skin marker artefact displacements are non-linear and larger in areas closer to the hip joint. Marker displacements are also dependent on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can serve as a basis for a correction procedure for hip joint kinematics.

Keywords: hip joint center, motion capture, soft tissue artefact, ultrasound depth measurement

Procedia PDF Downloads 267
1677 Geo-Collaboration Model between a City and Its Inhabitants to Develop Complementary Solutions for Better Household Waste Collection

Authors: Abdessalam Hijab, Hafida Boulekbache, Eric Henry

Abstract:

According to several research studies, the city as a whole is a complex, spatially organized system; its modeling must take into account several socio-economic, political, and geographical factors, acting at multiple scales of observation and according to varied temporalities. Sustainable management and protection of the environment in this complex system require significant human and technical investment, particularly for monitoring and maintenance. The objective of this paper is to propose an intelligent approach based on the coupling of Geographic Information System (GIS) and Information and Communications Technology (ICT) tools in order to involve the inhabitants in the processes of sustainable management and protection of the urban environment, specifically in household waste collection in urban areas. We are proposing a collaborative 'city/inhabitant' space. Indeed, it is a geo-collaborative approach, based on the spatialization and real-time geo-localization of topological and multimedia data captured by the 'active' inhabitant, in the form of geo-localized alerts related to household waste issues in their city. Our proposal provides a good understanding of the extent to which civil society (the inhabitants) can help and contribute to the development of complementary solutions for the collection of household waste and the protection of the urban environment. Moreover, it allows the inhabitants to contribute to the enrichment of a data bank for future uses. Our geo-collaborative model will be tested in the Lamkansa sampling district of the city of Casablanca in Morocco.

Keywords: geographic information system, GIS, information and communications technology, ICT, geo-collaboration, inhabitants, city

Procedia PDF Downloads 101
1676 Development of Alpha Spectroscopy Method with Solid State Nuclear Track Detector Using Aluminium Thin Films

Authors: Nidal Dwaikat

Abstract:

This work presents the development of an alpha spectroscopy method with solid-state nuclear track detectors using aluminium thin films. The resolution of this method is high, and it is able to discriminate between alpha particles at different incident energies. It can measure the exact number of alpha particles at a specific energy without needing a calibration of alpha track diameter versus alpha energy. This method was tested using a Cf-252 alpha standard source at energies of 5.11 MeV, 3.86 MeV, and 2.70 MeV, produced by varying the detector-to-source distance. On the front side, two detectors were covered with aluminium thin films, and a third detector was kept uncovered. The thicknesses of the aluminium thin films were selected carefully (using SRIM 2013) such that one of the films blocks the two lower-energy alpha particles (3.86 MeV and 2.70 MeV) while the higher-energy alpha particles (5.11 MeV) can penetrate the film and reach the detector's surface. The second thin film blocks the alpha particles at the lowest energy, 2.70 MeV, and allows the alpha particles at the two higher energies (5.11 MeV and 3.86 MeV) to penetrate and produce tracks. On the uncovered detector, alpha particles at all three energies can produce tracks. For quality assurance and accuracy, the detectors were mounted on sufficiently thick copper substrates to block exposure from the backside. The tracks on the first detector are due to alpha particles at 5.11 MeV. The difference between the number of tracks on the second detector and on the first detector is due to alpha particles at 3.86 MeV. Finally, by subtracting the number of tracks on the second detector from the number on the third (uncovered) detector, we can find the number of tracks due to alpha particles at 2.70 MeV. Once the efficiency calibration factor is known, the activity of the standard source can be calculated exactly.
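The subtraction scheme described above reduces to simple arithmetic over the three detectors' track counts; a sketch (the counts below are made-up illustrations, not measured data):

```python
def tracks_per_energy(d1, d2, d3):
    """Recover per-energy track counts from the three detectors:
    d1: under the film blocking 3.86 and 2.70 MeV (records 5.11 MeV only),
    d2: under the film blocking 2.70 MeV (records 5.11 + 3.86 MeV),
    d3: uncovered (records all three energies)."""
    n_511 = d1
    n_386 = d2 - d1
    n_270 = d3 - d2
    return n_511, n_386, n_270
```

With the counts separated this way, dividing each by the detection efficiency factor yields the source activity per energy line.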

Keywords: aluminium thin film, alpha particles, copper substrate, CR-39 detector

Procedia PDF Downloads 352
1675 An In-Situ Integrated Micromachining System for Intricate Micro-Parts Machining

Authors: Shun-Tong Chen, Wei-Ping Huang, Hong-Ye Yang, Ming-Chieh Yeh, Chih-Wei Du

Abstract:

This study presents a novel, versatile, high-precision integrated micromachining system that combines contact and non-contact micromachining techniques to machine intricate micro-parts precisely. Two broad methods of microfabrication, (1) volume additive (micro co-deposition) and (2) volume subtractive (nanometric flycutting, ultrafine wire electrical discharge machining (w-EDM), and micro honing), are integrated in the developed micromachining system, and their effectiveness is verified. A multidirectional headstock that supports various machining orientations is designed to evaluate the feasibility of multifunctional micromachining. An exchangeable working tank that allows for various machining mechanisms is also incorporated into the system. Hence, the micro tool and workpiece need not be unloaded or repositioned until all the planned tasks have been completed. Using the designed servo rotary mechanism, a nanometric flycutting approach with a concentric rotary accuracy of 5 nm is constructed and used with the system to machine a diffraction-grating element with a nanometric-scale V-groove array. To improve the wear resistance of the micro tool, the micro co-deposition function is used to apply a micro-abrasive coating by an electrochemical method. The construction of ultrafine w-EDM facilitates the fabrication of micro slots with a width of less than 20 µm on a hardened tool. The hardened tool can thus be employed as a micro honing tool to hone a micro hole with an internal diameter of 200 µm in SKD-11 mold steel. Experimental results prove that intricate micro-parts can be manufactured in situ with high precision by the developed integrated micromachining system.

Keywords: integrated micromachining system, in-situ micromachining, nanometric flycutting, ultrafine w-EDM, micro honing

Procedia PDF Downloads 401
1674 Modelling Heat Transfer Characteristics in the Pasteurization Process of Medium Long Necked Bottled Beers

Authors: S. K. Fasogbon, O. E. Oguegbu

Abstract:

Pasteurization is one of the most important steps in the preservation of beer products; it improves shelf life by inactivating almost all the spoilage organisms present. However, it is notoriously difficult to determine the slowest heating zone, the temperature profile, and the pasteurization units inside bottled beer during pasteurization, and hence there have been significant experimental and ANSYS Fluent approaches to the problem. This work developed a computational fluid dynamics (CFD) model using COMSOL Multiphysics. The model was simulated to determine the slowest heating zone, the temperature profile, and the pasteurization units inside the bottled beer during the pasteurization process. The results of the simulation were compared with existing data in the literature. The results showed that the location and size of the slowest heating zone depend on the time-temperature combination of each zone. The results also showed that the temperature profile of the bottled beer is affected by natural convection resulting from density variation during the pasteurization process, and that the pasteurization units increase with time, subject to the temperature reached by the beer. Although the results of this work agree with the literature with respect to the slowest heating zone and temperature profiles, the results for pasteurization units do not agree. It is suspected that this is greatly affected by the bottle geometry and by the specific heat capacity and density of the beer in question. The work concludes that for effective pasteurization to be achieved, the spray water temperature and the time spent by the bottled product in each of the pasteurization zones need to be optimized.
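For reference, pasteurization units for beer are conventionally computed as PU = t x 1.393^(T - 60), where 1 PU is defined as one minute at 60 degrees C; a minimal sketch of that standard industry convention (the formula is general background, not taken from this abstract):

```python
def pasteurization_units(temp_c, minutes, z_factor=1.393, ref_temp_c=60.0):
    """Lethality accumulated in `minutes` at a constant temperature:
    PU = t * 1.393**(T - 60), the conventional beer pasteurization formula."""
    return minutes * z_factor ** (temp_c - ref_temp_c)
```

In a simulation, the total PU at a point in the bottle is the time integral of this expression along that point's temperature history, which is why the slowest heating zone accumulates the fewest units.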

Keywords: modeling, heat transfer, temperature profile, pasteurization process, bottled beer

Procedia PDF Downloads 196
1673 Power Production Performance of Different Wave Energy Converters in the Southwestern Black Sea

Authors: Ajab G. Majidi, Bilal Bingölbali, Adem Akpınar

Abstract:

This study aims to investigate the amount of energy (economic wave energy potential) that can be obtained from existing wave energy converters in the high-wave-energy-potential region of the Black Sea, in terms of wave energy potential and converter performance at different depths in the region. The data needed for this purpose were obtained using the calibrated, nested, layered SWAN wave modeling program, version 41.01AB, forced with Climate Forecast System Reanalysis (CFSR) winds from 1979 to 2009. A wave dataset at a time interval of 2 hours was accumulated for a sub-grid domain around Karaburun beach in Arnavutkoy, a district of Istanbul. The annual sea-state characteristic matrices for five different depths along a line perpendicular to the coastline were calculated for 31 years. From the power matrices of different wave energy converter systems and the characteristic matrices for each possible installation depth, probability distribution tables of the mean wave period (or wave energy period) and significant wave height were calculated. Then, using the relationship between these distribution tables under the present wave climate, the energy that the wave energy converter systems can produce at each depth was determined. Thus, the economically feasible potential of the relevant coastal zone was revealed, and the effect of different depths on energy converter systems is presented. The Oceantic at 50, 75, and 100 m depths and Oyster at 5 and 25 m depths present the best performance. Within the 31-year period, 1998 was the most dynamic year and 1989 the least.
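The step that combines a converter's power matrix with the sea-state occurrence statistics is, in essence, a bin-wise product summed over all (significant wave height, energy period) bins; a sketch with made-up numbers:

```python
def annual_energy_production(power_matrix_kw, occurrence_hours):
    """Annual energy in kWh: for each (Hs, Te) bin, multiply the converter's
    rated power [kW] by the hours per year that sea state occurs, then sum."""
    return sum(
        p * h
        for p_row, h_row in zip(power_matrix_kw, occurrence_hours)
        for p, h in zip(p_row, h_row)
    )
```

Repeating this with the occurrence matrix of each candidate depth gives the depth-by-depth performance comparison the abstract describes.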

Keywords: annual power production, Black Sea, efficiency, power production performance, wave energy converter

Procedia PDF Downloads 126
1672 An Analytical Study of Selected Physical and Mechanical Variables in Wrist Joint Injury

Authors: Nabeel Abdulkadhim Athab

Abstract:

The purpose of this research is to conduct a comparative analytical study of selected physical and mechanical variables in wrist joint injury. Through this research, it is possible to distinguish the amount of variation in the function of the joint after the sample underwent a rehabilitation program designed to improve the effectiveness of the joint and naturally restore its function. The researcher hypothesized that there are statistically significant differences between the results of the pre- and post-tests of the research sample members as a result of subjecting the sample to the rehabilitation program, which developed the activity of the muscles acting on the wrist joint and thus produced the observed differences between the pre- and post-test results. The researcher used the descriptive method. The research sample included six players with wrist joint injuries, with a mean age of 21.68 years (standard deviation 1.13) and a mean height of 178 cm (standard deviation 2.08); the sample proved to be homogeneous. The collected data were entered into a statistical processing program to reach the most important conclusions and recommendations, namely: 1- The commitment of the sample to the rehabilitation program improved the studied variables, reflecting the change in the activity and effectiveness of the wrist joint of the injured players. 2- The programmed analysis has high accuracy in measuring the research variables, which made it possible to discriminate differences in the motor ability of the intact and the injured wrist joint. The research recommendations include: 1- The use of computer systems in scientific research, for the possibility of obtaining accurate research results. 2- Programming rehabilitation exercises according to an expert system, so that they can be used by patients without referring to a human therapist.

Keywords: analysis of joint wrist injury, physical and mechanical variables, wrist joint, wrist injury

Procedia PDF Downloads 425
1671 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad Daba, Jean-Pierre Dubois

Abstract:

Multipath fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of the fading noise are not analytically tractable and pose a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, the noise is multiplicative and is referred to as stochastically local fading. In many analytical investigations of multiplicative noise, exponential or Gamma statistics are invoked. More recent advances by the author of this paper have utilized Poisson-modulated and weighted generalized Laguerre polynomials with controlling parameters and uncorrelated-noise assumptions. In this paper, we investigate the statistics of a multi-diversity, stochastically local area fading channel in which the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent, specular, Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.
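One layer of this model, a doubly stochastic (Cox) Poisson process with lognormal intensity and Rayleigh-distributed marks, can be simulated as below; the Nakagami line-of-sight component and the filtering stage are omitted, and all parameter values are illustrative.

```python
import math
import random

def cox_lognormal_rayleigh(mu=0.0, sigma=1.0, mark_scale=1.0, rng=None):
    """One realization: draw a lognormal intensity, then a Poisson number
    of scatterers given that intensity, each marked with a Rayleigh
    amplitude. Returns (intensity, list_of_marks)."""
    rng = rng or random.Random()
    lam = rng.lognormvariate(mu, sigma)          # stochastic intensity
    # Poisson sample by Knuth's multiplication method (fine for moderate lam)
    threshold, count, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            break
        count += 1
    # Rayleigh marks via inverse CDF: x = scale * sqrt(-2 ln(1 - U))
    marks = [mark_scale * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
             for _ in range(count)]
    return lam, marks
```

Averaging receiver statistics over many such realizations is how the intractable small-scatterer-count regime is typically explored numerically when closed forms are unavailable.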

Keywords: cellular communication, femto and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process

Procedia PDF Downloads 439
1670 Lessons of Passive Environmental Design in the Sarabhai and Shodan Houses by Le Corbusier

Authors: Juan Sebastián Rivera Soriano, Rosa Urbano Gutiérrez

Abstract:

The Shodan House and the Sarabhai House (Ahmedabad, India, 1954 and 1955, respectively) are considered among the most important works Le Corbusier produced in the last stage of his career. Some academic publications study the compositional and formal aspects of their architectural design, but there is no in-depth investigation into how the climatic conditions of this region were a determining factor in the design decisions implemented in these projects. This paper argues that Le Corbusier developed a specific architectural design strategy for these buildings based on scientific research on climate in the Indian context. This new language was informed by a pioneering study and interpretation of climatic data as a design methodology that would even involve the development of new design tools. This study investigated whether his use of climatic data meets the values and levels of accuracy obtained with contemporary instruments and tools, such as EnergyPlus weather data files and Climate Consultant. It also intended to find out whether the intentions and decisions of Le Corbusier's office were indeed appropriate and efficient for those climatic conditions, by assessing these projects using BIM models and energy performance simulations in DesignBuilder. Accurate models were built from original historical data obtained through archival research. The outcome is a new understanding of the environment of these houses through the combination of modern building science and architectural history. The results confirm that these houses achieved a model of low energy consumption. This paper contributes new evidence not only on exemplary modern architecture concerned with environmental performance but also on how it developed progressive thinking in this direction.

Keywords: bioclimatic architecture, Le Corbusier, Shodan, Sarabhai Houses

Procedia PDF Downloads 52
1669 Using an Empathy Intervention Model to Enhance Empathy and Socially Shared Regulation in Youth with Autism Spectrum Disorder

Authors: Yu-Chi Chou

Abstract:

The purpose of this study was to establish a logical path for an instructional model of empathy and social regulation, providing feasibility evidence on the model's implementation in students with autism spectrum disorder (ASD). The newly developed Emotional Bug-Out Bag (BoB) curriculum was designed to enhance the empathy and socially shared regulation of students with ASD. The BoB model encompasses three instructional phases: basic theory lessons (BTL), action plan practices (APP), and final theory practices (FTP). In addition, a learning flow (teacher-directed instruction, student self-directed problem-solving, group-based task completion, and group-based reflection) was infused into the progression of the instructional phases to deliberately promote the social regulatory process in group-working activities. A total of 23 junior high school students with ASD received the BoB curriculum. To examine the logical path of the model implementation, data were collected from the participating students' self-reported scores on the learning nodes and understanding questions. Path analysis using structural equation modeling (SEM) was used to analyze scores on 10 learning nodes and 41 understanding questions across the three phases of the BoB model. Results showed that (a) all participants progressed throughout the implementation of the BoB model, and (b) the models of learning nodes and phases were positive and significant as expected, confirming the hypothesized logical path of this curriculum.
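As a toy illustration of the path-analysis step, the standardized coefficient of a single-predictor path equals the Pearson correlation between the two score variables; full SEM fits the whole multi-path system jointly, which this sketch does not attempt, and the data below are invented.

```python
def path_coefficient(x, y):
    """Standardized path coefficient for a single-predictor path (equal to
    the Pearson correlation), a minimal stand-in for one SEM path estimate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

A positive, significant coefficient on each hypothesized link is what "the models of learning nodes and phases were positive and significant" refers to.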

Keywords: autism spectrum disorder, empathy, regulation, socially shared regulation

Procedia PDF Downloads 56