Search results for: invasive weed optimization algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6693

1533 Reliability Analysis of Computer Centre at Yobe State University Using LRU Algorithm

Authors: V. V. Singh, Yusuf Ibrahim Gwanda, Rajesh Prasad

Abstract:

In this paper, we focus on the reliability and performance analysis of the Computer Centre (CC) at Yobe State University, Damaturu, Nigeria. The CC consists of three servers: a database mail server, a redundant server, and a server shared with the client computers in the CC (called the local server). Considering the different possible modes of operation of the CC, an analysis has been carried out to evaluate several popular reliability measures: availability, reliability, mean time to failure (MTTF), and the profit arising from operation of the system. The system fails completely if the router fails, if the redundant server fails before the mail server has been repaired, or if the switch fails; it fails partially when the local server fails. Failed devices are restored according to the Least Recently Used (LRU) technique. The system can also fail entirely due to a failure of the server cooling, an electricity failure, or a natural calamity such as an earthquake, fire or tsunami. All failure rates are assumed to be constant and exponentially distributed, while repair follows two types of distribution: general and the Gumbel-Hougaard family copula.
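
As a hedged illustration of the measures involved, the sketch below computes MTTF and steady-state availability for a series system of repairable units with constant (exponential) failure and repair rates; the numeric rates are hypothetical and the paper's copula-based repair model is not reproduced.

```python
# Illustrative sketch only: constant (exponential) failure and repair rates
# are assumed; all numeric rates below are hypothetical.

failure_rates = {          # failures per hour (hypothetical)
    "router": 1e-4,
    "switch": 2e-4,
    "mail_server": 5e-4,
}
repair_rate = 0.1          # repairs per hour (hypothetical, same for all units)

# For a series system of independent exponential units, MTTF = 1 / sum(lambda_i).
total_rate = sum(failure_rates.values())
mttf_hours = 1.0 / total_rate

# Steady-state availability of one repairable unit: A = mu / (lambda + mu);
# the series system is up only if every unit is up.
availability = 1.0
for lam in failure_rates.values():
    availability *= repair_rate / (lam + repair_rate)

print(f"System MTTF  : {mttf_hours:.0f} h")
print(f"Availability : {availability:.4f}")
```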

Keywords: reliability, availability, Gumbel-Hougaard family copula, MTTF, internet data centre

Procedia PDF Downloads 523
1532 Tape-Shaped Multiscale Fiducial Marker: A Design Prototype for Indoor Localization

Authors: Marcell Serra de Almeida Martins, Benedito de Souza Ribeiro Neto, Gerson Lima Serejo, Carlos Gustavo Resque Dos Santos

Abstract:

Indoor positioning systems use sensors such as Bluetooth, ZigBee and Wi-Fi, as well as cameras for image capture, which can be fixed or mobile. These computer-vision-based positioning approaches are low-cost to implement, especially when a mobile camera is used. The present study aims to design a fiducial marker for a low-cost indoor localization system. The marker is tape-shaped and is read continuously by two detection algorithms, one for greater distances and another for smaller distances, so that the location service remains operational even as the capture distance varies. A minimal localization and reading algorithm was implemented to validate the proposed marker design. The accuracy tests consider readings with the capture distance varying between 0.5 and 10 meters, comparing the proposed marker with others. The tests showed that the proposed marker has a broader capture range than ArUco and QR Code markers of the same size, thereby reducing visual pollution and maximizing tracking, since the environment can be covered entirely.
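
A minimal sketch of the two-detector switching idea, with hypothetical placeholder detectors and threshold rather than the authors' algorithms, is shown below.

```python
# Sketch of "two detectors, one reading pipeline": pick the detector from the
# previously estimated capture distance so the service stays operational over
# the 0.5-10 m range. The detector functions and threshold are placeholders.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    marker_id: int
    distance_m: float

def detect_far(frame) -> Optional[Detection]:
    """Placeholder for the coarse, long-range detector."""
    return None

def detect_near(frame) -> Optional[Detection]:
    """Placeholder for the fine, short-range detector."""
    return None

def read_marker(frame, last_distance_m: float,
                threshold_m: float = 3.0) -> Optional[Detection]:
    primary, fallback = ((detect_far, detect_near)
                         if last_distance_m > threshold_m
                         else (detect_near, detect_far))
    return primary(frame) or fallback(frame)
```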

Keywords: multiscale recognition, indoor localization, tape-shaped marker, fiducial marker

Procedia PDF Downloads 126
1531 On the Construction of Some Optimal Binary Linear Codes

Authors: Skezeer John B. Paz, Ederlina G. Nocon

Abstract:

Finding an optimal binary linear code is a central problem in coding theory. A binary linear code C = [n, k, d] is called optimal if there is no linear code with a higher minimum distance d for the given length n and dimension k. There are bounds limiting the minimum distance d of a linear code of fixed length n and dimension k. The lower bound, which can be obtained by a construction process, indicates that a linear code with this minimum distance is known. The upper bound is given by theoretical results such as the Griesmer bound. One way to find an optimal binary linear code is to make the lower bound of d equal to its upper bound, that is, to construct a binary linear code which achieves the highest possible value of its minimum distance d for the given n and k. Some optimal binary linear codes were presented by Andries Brouwer in his published table on bounds of the minimum distance d of binary linear codes for 1 ≤ n ≤ 256 and k ≤ n. This was further improved by Markus Grassl, who gave a detailed construction process for each code exhibiting the lower bound. In this paper, we construct new optimal binary linear codes by applying construction processes to existing binary linear codes. In particular, we developed an algorithm, applied to the codes already constructed, to extend the list of optimal binary linear codes to the range 257 ≤ n ≤ 300 for k ≤ 7.
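
As a worked illustration of the upper bound mentioned above, the sketch below evaluates the Griesmer bound, n >= ceil(d/2^0) + ceil(d/2^1) + ... + ceil(d/2^(k-1)), and reports the largest d consistent with it for given n and k (the bound is not always attained).

```python
# Griesmer bound check for binary linear [n, k, d] codes.

from math import ceil

def griesmer_length(k: int, d: int) -> int:
    """Minimum length n allowed by the Griesmer bound for a binary [n, k, d] code."""
    return sum(ceil(d / 2**i) for i in range(k))

def griesmer_d_upper(n: int, k: int) -> int:
    """Largest d consistent with the Griesmer bound for given n and k."""
    d = 1
    while griesmer_length(k, d + 1) <= n:
        d += 1
    return d

# Example in the paper's range of interest (k <= 7, 257 <= n <= 300):
print(griesmer_d_upper(257, 7))   # Griesmer upper bound on d for n = 257, k = 7
```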

Keywords: bounds of linear codes, Griesmer bound, construction of linear codes, optimal binary linear codes

Procedia PDF Downloads 748
1530 Improving Cryptographically Generated Address Algorithm in IPv6 Secure Neighbor Discovery Protocol through Trust Management

Authors: M. Moslehpour, S. Khorsandi

Abstract:

As the transition to widespread use of IPv6 addresses has gained momentum, IPv6 has been shown to be vulnerable to certain security attacks, such as those targeting the Neighbor Discovery Protocol (NDP), which provides the address resolution functionality in IPv6. To protect this protocol, Secure Neighbor Discovery (SEND) was introduced. This protocol uses Cryptographically Generated Addresses (CGA) and asymmetric cryptography as a defense against threats to the integrity and identity of NDP. Although SEND protects NDP against attacks, it is computationally intensive due to the Hash2 condition in CGA. To improve the CGA computation speed, we parallelized the CGA generation process and used the available resources in a trusted network. Furthermore, we focused on the influence of malicious nodes on the overall load of non-malicious nodes in the network. According to the evaluation results, malicious nodes have an adverse impact on the average CGA generation time and on the average number of tries. We utilized a trust management system capable of detecting and isolating malicious nodes to remove possible incentives for malicious behavior. We have demonstrated the effectiveness of the trust management system in detecting malicious nodes and hence improving the overall system performance.
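
The cost targeted here comes from the brute-force modifier search imposed by the Hash2 condition. The sketch below shows a simplified version of that search and a naive multiprocess parallelization; field layouts are simplified relative to RFC 3972 and the public key is a stand-in, so this is not a compliant SEND/CGA implementation.

```python
# Simplified Hash2 brute-force search (the expensive part of CGA generation)
# split across worker processes, roughly as the abstract proposes.

import hashlib
from multiprocessing import Pool

SEC = 1                       # security parameter: Hash2 must start with 16*Sec zero bits
PUBLIC_KEY = b"\x01" * 64     # stand-in for the DER-encoded public key

def hash2_ok(modifier: bytes) -> bool:
    digest = hashlib.sha1(modifier + b"\x00" * 9 + PUBLIC_KEY).digest()
    return int.from_bytes(digest, "big") >> (160 - 16 * SEC) == 0

def search(seed: int, tries: int = 200_000):
    """Scan a disjoint block of modifier values; return the first hit (or None)."""
    for i in range(seed * tries, (seed + 1) * tries):
        modifier = i.to_bytes(16, "big")
        if hash2_ok(modifier):
            return modifier
    return None

if __name__ == "__main__":
    with Pool(4) as pool:                        # e.g. 4 trusted nodes / cores
        for result in pool.imap_unordered(search, range(4)):
            if result is not None:
                print("modifier found:", result.hex())
                pool.terminate()
                break
```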

Keywords: CGA, ICMPv6, IPv6, malicious node, modifier, NDP, overall load, SEND, trust management

Procedia PDF Downloads 179
1529 The Influence of Incorporating Coffee Grounds on Enhancing the Engineering Properties of Expansive Soils: Experimental Approach and Optimization

Authors: Bencheikh Messaouda, Aidoud Assia, Salima Boukour, Benamara Fatima Zohra, Boukhatem Ghania, Zegueur Chaouki Salah Eddine

Abstract:

The utilization of waste materials in civil engineering has gained widespread attention in recent years because of the adverse effects such materials have on the environment. One such waste material is coffee grounds, a black residue generated daily across the country after coffee brewing. Instead of disposing of them, there is growing interest in repurposing them for various agricultural and industrial applications. Utilizing coffee grounds in geotechnical engineering, such as in road embankments, presents an opportunity for their valorization. The study aims to contribute to the valorization of coffee grounds by enhancing the physical and mechanical properties of clayey soils through their incorporation at varying weight percentages (3%, 6%, 9%, 12%) as partial replacements in these soils. This not only addresses the issue of coffee ground waste but also makes a tangible contribution to sustainable development. The findings demonstrate that incorporating coffee grounds generally has positive effects on the physical and mechanical properties of clayey soil. However, the extent of these effects depends on factors such as the quantity of coffee grounds added, the particle size of the grounds, and the characteristics of the soil. Additionally, coffee grounds can improve the compressive and tensile strength of clayey soil, resulting in increased stability and reduced susceptibility to deformation under external forces.

Keywords: clay soil, coffee grounds, optimizing, improvement, valorization, waste

Procedia PDF Downloads 37
1528 Optimization of the Dental Direct Digital Imaging by Applying the Self-Recognition Technology

Authors: Mina Dabirinezhad, Mohsen Bayat Pour, Amin Dabirinejad

Abstract:

This paper introduces a technology intended to address some of the deficiencies of direct digital radiology. Nowadays, digital radiology is the latest advance in dental imaging and has become an essential part of dentistry. Direct digital radiology comprises two main parts: an intraoral X-ray machine and a sensor (digital image receptor). Dentists and dental nurses experience difficulties during image acquisition with the direct digital X-ray machine. For instance, they sometimes need to readjust the sensor in the patient's mouth and take the X-ray image again because of its low quality. Another problem is that the sensor may move in the patient's mouth, producing an image that is unsuitable for the dentist; the process is therefore time-consuming for dentists and dental nurses. On the other hand, taking several X-ray images causes problems for the patient, such as harm to their health and pain in the mouth due to the pressure of the sensor against the jaw. The authors propose a technology to solve the above-mentioned issues, called Self-Recognition Direct Digital Radiology (SDDR). This technology is based on the principle that the intraoral X-ray machine is capable of automatically detecting the location of the sensor in the patient's mouth. In addition to solving the aforementioned problems, SDDR technology has a smaller environmental impact than the previous version.

Keywords: dental direct digital imaging, digital image receptor, digital X-ray machine, environmental impacts

Procedia PDF Downloads 135
1527 AutoML: Comprehensive Review and Application to Engineering Datasets

Authors: Parsa Mahdavi, M. Amin Hariri-Ardebili

Abstract:

The development of accurate machine learning and deep learning models traditionally demands hands-on expertise and a solid background to fine-tune hyperparameters. With the continuous expansion of datasets in various scientific and engineering domains, researchers increasingly turn to machine learning methods to unveil hidden insights that may elude classic regression techniques. This surge in adoption raises concerns about the adequacy of the resultant meta-models and, consequently, the interpretation of the findings. In response to these challenges, automated machine learning (AutoML) emerges as a promising solution, aiming to construct machine learning models with minimal intervention or guidance from human experts. AutoML encompasses crucial stages such as data preparation, feature engineering, hyperparameter optimization, and neural architecture search. This paper provides a comprehensive overview of the principles underpinning AutoML, surveying several widely-used AutoML platforms. Additionally, the paper offers a glimpse into the application of AutoML on various engineering datasets. By comparing these results with those obtained through classical machine learning methods, the paper quantifies the uncertainties inherent in the application of a single ML model versus the holistic approach provided by AutoML. These examples showcase the efficacy of AutoML in extracting meaningful patterns and insights, emphasizing its potential to revolutionize the way we approach and analyze complex datasets.
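
As a minimal stand-in for the hyperparameter-optimization stage discussed above, the sketch below runs a randomized search over a gradient-boosting regressor with scikit-learn; real AutoML platforms additionally automate data preparation, feature engineering, model selection and neural architecture search, none of which is shown here.

```python
# Hyperparameter optimization sketch on a synthetic regression dataset.

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

search = RandomizedSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_distributions={
        "n_estimators": [100, 200, 400],
        "learning_rate": [0.01, 0.05, 0.1],
        "max_depth": [2, 3, 4],
    },
    n_iter=10, cv=5, scoring="r2", random_state=0,
)
search.fit(X, y)
print("best CV R^2:", round(search.best_score_, 3))
print("best params:", search.best_params_)
```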

Keywords: automated machine learning, uncertainty, engineering dataset, regression

Procedia PDF Downloads 55
1526 Intelligent Rheumatoid Arthritis Identification System Based on Image Processing and Neural Classifier

Authors: Abdulkader Helwan

Abstract:

Rheumatoid arthritis is characterized as a chronic inflammatory disorder which affects the joints by damaging body tissues. Therefore, there is an urgent need for an effective intelligent identification system for knee rheumatoid arthritis, especially in its early stages. This paper develops a new intelligent system for the identification of rheumatoid arthritis of the knee utilizing image processing techniques and a neural classifier. The system involves two principal stages. The first is the image processing stage, in which the images are processed using techniques such as RGB-to-grayscale conversion, rescaling, median filtering, background extraction, image subtraction, segmentation using Canny edge detection, and feature extraction using pattern averaging. The extracted features are then used as inputs to the neural network, which classifies the X-ray knee images as normal or abnormal (arthritic) based on a backpropagation learning algorithm and training of the network on 400 normal and abnormal X-ray knee images. The system was tested on 400 X-ray images, and the network showed good performance during that phase, resulting in a good identification rate of 97%.
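
A hedged sketch of the pre-processing chain, built from standard OpenCV calls on a synthetic stand-in image (the sizes and filter settings are placeholders, and the neural classifier itself is not shown), follows.

```python
import cv2
import numpy as np

# Stand-in for a knee X-ray; in practice: cv2.imread("knee.png", cv2.IMREAD_GRAYSCALE)
gray = np.random.randint(0, 256, (512, 512), dtype=np.uint8)

gray = cv2.resize(gray, (256, 256))          # rescaling to a fixed size
den = cv2.medianBlur(gray, 5)                # median filtering
edges = cv2.Canny(den, 50, 150)              # segmentation via Canny edge detection

# "Pattern averaging": average non-overlapping 8x8 blocks into a compact
# feature vector that would feed the backpropagation network's input layer.
block = 8
features = edges.reshape(256 // block, block, 256 // block, block).mean(axis=(1, 3))
feature_vector = features.flatten() / 255.0  # 1024 inputs scaled to [0, 1]
print(feature_vector.shape)                  # (1024,)
```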

Keywords: rheumatoid arthritis, intelligent identification, neural classifier, segmentation, backpropagation

Procedia PDF Downloads 527
1525 Evaluation of Cyclic Thermo-Mechanical Responses of an Industrial Gas Turbine Rotor

Authors: Y. Rae, A. Benaarbia, J. Hughes, Wei Sun

Abstract:

This paper describes an elasto-visco-plastic computational modelling method which can be used to assess the cyclic plasticity response of high-temperature structures operating under thermo-mechanical loadings. The material constitutive equation used is an improved unified multi-axial Chaboche-Lemaitre model, which takes into account non-linear kinematic and isotropic hardening. The computational methodology is a three-dimensional framework following an implicit formulation and based on a radial return mapping algorithm. The associated user material (UMAT) code is developed and calibrated against isothermal hold-time low cycle fatigue tests for a typical turbine rotor steel for use in finite element (FE) implementation. The model is applied to a realistic industrial gas turbine rotor, where the study focuses on the deformation heterogeneities and critical high-stress areas within the rotor structure. The potential improvements of such an FE visco-plastic approach are discussed. An integrated life assessment procedure based on R5 and visco-plasticity modelling is also briefly addressed.
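
As a hedged, one-dimensional illustration of the radial return mapping idea, the sketch below implements an elastic-predictor/plastic-corrector update with linear isotropic and kinematic hardening; the actual UMAT is multi-axial and uses non-linear Chaboche-Lemaitre hardening, and the material constants below are hypothetical.

```python
# 1D return mapping with linear isotropic + kinematic hardening (textbook form).

E, H_iso, H_kin, sigma_y = 200e3, 2.0e3, 5.0e3, 250.0   # MPa (hypothetical)

def radial_return(eps_total, eps_p, alpha, q):
    """One strain-driven update: returns (stress, eps_p, alpha, q)."""
    sig_trial = E * (eps_total - eps_p)          # elastic predictor
    xi = sig_trial - q                           # relative stress (backstress q)
    f = abs(xi) - (sigma_y + H_iso * alpha)      # trial yield function
    if f <= 0.0:
        return sig_trial, eps_p, alpha, q        # elastic step
    dgamma = f / (E + H_iso + H_kin)             # plastic corrector (consistency)
    n = 1.0 if xi > 0 else -1.0
    eps_p += dgamma * n
    alpha += dgamma
    q += H_kin * dgamma * n
    return E * (eps_total - eps_p), eps_p, alpha, q

# Drive one load-unload strain cycle.
state = (0.0, 0.0, 0.0)
for eps in [i * 1e-4 for i in range(41)] + [4e-3 - i * 1e-4 for i in range(41)]:
    sigma, *state = radial_return(eps, *state)
print(f"stress after cycle: {sigma:.1f} MPa")
```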

Keywords: unified visco-plasticity, thermo-mechanical, turbine rotor, finite element modelling

Procedia PDF Downloads 125
1524 Investigations of the Crude Oil Distillation Preheat Section in Unit 100 of Abadan Refinery and Its Recommendation

Authors: Mahdi GoharRokhi, Mohammad H. Ruhipour, Mohammad R. ZamaniZadeh, Mohsen Maleki, Yusef Shamsayi, Mahdi FarhaniNejad, Farzad FarrokhZadeh

Abstract:

Possessing massive resources of natural gas and petroleum, Iran has a special place among oil-producing countries, according to international energy institutions. In order to use these resources, the development and operational optimization of refineries and industrial units are mandatory. The heat exchanger is one of the most important and strategic pieces of equipment, and its key role in the production process is clear to everyone. For instance, if the temperature of a process fluid is not set as needed by the heat exchangers, the specifications of the desired product can change profoundly. Crude oil enters a network of heat exchangers in the atmospheric distillation section before getting into the distillation tower; well-functioning heat exchangers can therefore significantly affect the operation of the distillation tower. In this paper, different scenarios for the preheating of the oil are studied using oil and gas simulation software, and the results are discussed. Among the scenarios reviewed, adding a heat exchanger to the preheat network is proposed as the most efficient measure for improving the governing parameters of the tower, i.e. temperature, pressure and reflux rate. This exchanger is embedded in the crude oil path: crude oil enters the exchanger after E-101 and exchanges heat with the kerosene pump-around discharged from E-136. As the results show, this will efficiently improve process operation and reduce associated expenses.
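
For orientation, the sketch below shows the kind of back-of-the-envelope duty and area check used when adding an exchanger to a preheat train (energy balance plus the LMTD method); all stream data, the overall coefficient U and the counter-current arrangement are hypothetical and are not taken from Unit 100.

```python
from math import log

m_crude, cp_crude = 120.0, 2.1            # kg/s, kJ/(kg*K)  (hypothetical)
t_crude_in, t_crude_out = 120.0, 150.0    # degC, desired crude preheat
t_hot_in, t_hot_out = 210.0, 165.0        # degC, kerosene pump-around
U = 0.35                                  # kW/(m^2*K), order-of-magnitude guess

duty_kw = m_crude * cp_crude * (t_crude_out - t_crude_in)   # energy balance

dt1 = t_hot_in - t_crude_out              # hot-end approach (counter-current)
dt2 = t_hot_out - t_crude_in              # cold-end approach
lmtd = (dt1 - dt2) / log(dt1 / dt2)

area = duty_kw / (U * lmtd)               # required surface from Q = U*A*LMTD
print(f"duty ~ {duty_kw:.0f} kW, LMTD ~ {lmtd:.1f} K, area ~ {area:.0f} m^2")
```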

Keywords: atmospheric distillation unit, heat exchanger, preheat, simulation

Procedia PDF Downloads 651
1523 Introduction to Various Innovative Techniques Suggested for Seismic Hazard Assessment

Authors: Deepshikha Shukla, C. H. Solanki, Mayank K. Desai

Abstract:

Amongst all natural hazards, earthquakes have the potential to cause the greatest damage. Since earthquake forces are random in nature and unpredictable, quantification of the hazard is important in order to assess it. The time and place of a future earthquake are both uncertain. Since earthquakes can neither be prevented nor predicted, engineers have to design and construct in such a way that the damage to life and property is minimized. Seismic hazard analysis plays an important role in the earthquake-resistant design of structures by providing rational values of the input parameters. In this paper, both mathematical and computational methods adopted by researchers globally in the past five years are discussed. Mathematical approaches involving the concepts of Poisson's ratio, convex set theory, the empirical Green's function, Bayesian probability estimation applied to seismic hazard, and the FOSM (first-order second-moment) algorithm are covered. Computational approaches and the numerical model SSIFiBo, developed in MATLAB to study the dynamic soil-structure interaction problem, are discussed. A GIS-based tool predominantly used in the assessment of seismic hazards is also discussed.
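
As a worked example of the probabilistic reasoning behind such assessments, the sketch below evaluates the Poisson occurrence model commonly used in probabilistic seismic hazard analysis; the numbers are illustrative only.

```python
# With an annual exceedance rate lambda, the probability of at least one
# exceedance in t years under a Poisson model is P = 1 - exp(-lambda * t).
# The 10%-in-50-years design level corresponds to a ~475-year return period.

from math import exp, log

def exceedance_prob(rate_per_year: float, years: float) -> float:
    return 1.0 - exp(-rate_per_year * years)

def return_period(prob: float, years: float) -> float:
    return -years / log(1.0 - prob)

print(exceedance_prob(1 / 475, 50))   # ~0.10
print(return_period(0.10, 50))        # ~475 years
```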

Keywords: computational methods, MATLAB, seismic hazard, seismic measurements

Procedia PDF Downloads 332
1522 Quantum Decision Making with Small Sample for Network Monitoring and Control

Authors: Tatsuya Otoshi, Masayuki Murata

Abstract:

With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also a matter of the time required to grasp changes in network conditions. The tradeoff between accuracy and measurement time is a challenge in network control. People make countless decisions all the time, and these decisions seem to resolve tradeoffs between time and accuracy. When making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called "quantum decision-making," has recently attracted much attention. However, the modeling of small samples has not been examined much so far. In this paper, we extend the model of quantum decision-making to model decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.
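
To illustrate why amplitude amplification can reach a confident choice after few iterations, the sketch below evaluates the standard Grover success probability sin^2((2k+1)θ) with sin(θ) = sqrt(M/N); the numbers are illustrative and the paper's value-based amplification model is not reproduced.

```python
from math import asin, sin, sqrt, pi

N, M = 64, 1                          # search space size, number of "good" options
theta = asin(sqrt(M / N))

for k in range(8):
    p = sin((2 * k + 1) * theta) ** 2
    print(f"iterations={k}  success probability={p:.3f}")

print("near-optimal iteration count:", int(pi / (4 * theta)))
```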

Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm

Procedia PDF Downloads 72
1521 Ultra-Rapid and Efficient Immunomagnetic Separation of Listeria Monocytogenes from Complex Samples in High-Gradient Magnetic Field Using Disposable Magnetic Microfluidic Device

Authors: L. Malic, X. Zhang, D. Brassard, L. Clime, J. Daoud, C. Luebbert, V. Barrere, A. Boutin, S. Bidawid, N. Corneau, J. Farber, T. Veres

Abstract:

The incidence of infections caused by foodborne pathogens such as Listeria monocytogenes (L. monocytogenes) poses a great potential threat to public health and safety. These issues are further exacerbated by legal repercussions due to “zero tolerance” food safety standards adopted in developed countries. Unfortunately, a large number of related disease outbreaks are caused by pathogens present in extremely low counts, currently undetectable by available techniques. The development of highly sensitive and rapid detection of foodborne pathogens is therefore crucial and requires robust and efficient pre-analytical sample preparation. Immunomagnetic separation is a popular approach to sample preparation. Microfluidic chips combined with external magnets have emerged as viable high-throughput methods. However, external magnets alone are not suitable for the capture of nanoparticles, as very strong magnetic fields are required. Devices that incorporate an externally applied magnetic field and microstructures of a soft magnetic material have thus been used for local field amplification. Unfortunately, the very complex and costly fabrication processes used for integration of soft magnetic materials in the reported proof-of-concept devices would prohibit their use as disposable tools for food and water safety or diagnostic applications. We present a sample preparation magnetic microfluidic device implemented in low-cost thermoplastic polymers using fabrication techniques suitable for mass production. The developed magnetic capture chip (M-chip) was employed for rapid capture and release of L. monocytogenes conjugated to immunomagnetic nanoparticles (IMNs) in buffer and beef filtrate. The M-chip relies on a dense array of nickel-coated high-aspect-ratio pillars for capture with controlled magnetic field distribution and a microfluidic channel network for sample delivery, waste, wash and recovery. The developed nickel-coating process and passivation allow the generation of switchable local perturbations within the uniform magnetic field generated by a pair of permanent magnets placed at the opposite edges of the chip. This leads to a strong and reversible trapping force, wherein high local magnetic field gradients allow efficient capture of IMNs conjugated to L. monocytogenes flowing through the microfluidic chamber. The experimental optimization of the M-chip was performed using commercially available magnetic microparticles and fabricated silica-coated iron-oxide nanoparticles. The fabricated nanoparticles were optimized to achieve the desired magnetic moment, and the surface functionalization was tailored to allow efficient capture antibody immobilization. The integration, validation and further optimization of the capture and release protocol are demonstrated using both dead and live L. monocytogenes through fluorescence microscopy and the plate-culture method. The capture efficiency of the chip was found to vary as a function of the Listeria-to-nanoparticle concentration ratio. A maximum capture efficiency of 30% was obtained, and the 24-hour plate-culture method allowed the detection of an initial sample concentration of only 16 cfu/ml. The device was also very efficient at concentrating the sample from a 10 ml initial volume: specifically, a concentration efficiency of 280% was achieved in only 17 minutes, demonstrating the suitability of the system for food safety applications. In addition, the flexible design and low-cost fabrication process will allow rapid sample preparation for applications beyond food and water safety, including point-of-care diagnosis.

Keywords: array of pillars, bacteria isolation, immunomagnetic sample preparation, polymer microfluidic device

Procedia PDF Downloads 268
1520 Efficiency-Based Model for Solar Urban Planning

Authors: M. F. Amado, A. Amado, F. Poggi, J. Correia de Freitas

Abstract:

Today it is widely understood that global energy consumption patterns are directly related to the ongoing urban expansion and development process. This expansion is based on the natural growth of human activities and has left most urban areas totally dependent on fossil-fuel-derived external energy inputs. This status quo of energy production, transportation, storage and consumption has become inefficient and is set to become even more so when the continuous increases in energy demand are factored in. The territorial management of land use and related activities is a central component in the search for more efficient models of energy use, models that can meet current and future regional, national and European goals. In this paper, a methodology is developed and discussed with the aim of improving energy efficiency at the municipal level. The development of this methodology is based on the monitoring of energy consumption and its use patterns resulting from the natural dynamism of human activities in the territory, and it can be utilized to assess sustainability at the local scale. A set of parameters and indicators are defined with the objective of constructing a systemic model based on the optimization, adaptation and innovation of the current energy framework and the associated energy consumption patterns. The use of the model will enable local governments to strike the necessary balance between human activities, economic development, and the local and global environment while safeguarding fairness in the energy sector.

Keywords: solar urban planning, solar smart city, urban development, energy efficiency

Procedia PDF Downloads 323
1519 Alternate Optical Coherence Tomography Technologies in Use for Corneal Diseases Diagnosis in Dogs and Cats

Authors: U. E. Mochalova, A. V. Demeneva, A. G. Shilkin, J. Yu. Artiushina

Abstract:

Objective. In medical ophthalmology, OCT has been actively used in the last decade. It is a modern, non-invasive method of high-precision hardware examination which gives a detailed cross-sectional image of eye tissue structure at a high level of resolution, providing in vivo morphological information at the microscopic level about corneal tissue, the structures of the anterior segment, the retina and the optic nerve. The purpose of this study was to explore the possibility of using OCT technology in the complex ophthalmological examination of dogs and cats, and to characterize the pathological structural changes revealed in corneal tissue in cats and dogs with some of the most common corneal diseases. Procedures. Optical coherence tomography of the cornea was performed in 112 animals: 68 dogs and 44 cats. In total, 224 eyes were examined. Pathologies of the organ of vision included dystrophy and degeneration of the cornea, endothelial corneal dystrophy, dry eye syndrome, chronic superficial vascular keratitis, pigmented keratitis, corneal erosion, ulcerative stromal keratitis, corneal sequestration, chronic glaucoma, and the postoperative period after keratoplasty. When performing OCT, we used the certified medical devices "Huvitz HOCT-1/1F", "Optovue iVue 80" and "SOCT Copernicus Revo (60)". Results. The results of a clinical study on the use of optical coherence tomography (OCT) of the cornea in cats and dogs, performed by the authors in the complex diagnosis of keratopathies of various origins, are presented: endothelial corneal dystrophy, pigmented keratitis, chronic keratoconjunctivitis, chronic herpetic keratitis, ulcerative keratitis, traumatic corneal damage, feline corneal sequestration, and chronic keratitis complicating the course of glaucoma. The characteristics of OCT scans of corneas of cats and dogs without corneal pathologies are given. OCT scans of various corneal pathologies in dogs and cats are presented with a description of the revealed pathological changes. Of great clinical interest are the data obtained during OCT of the cornea of animals that underwent keratoplasty operations using various forms of grafts. Conclusions. OCT makes it possible to assess the thickness and pathological structural changes of the corneal surface epithelium, the corneal stroma and the Descemet membrane; these can be measured, their exact localization determined, and pathological changes recorded. Clinical observation of the dynamics of the pathological process in the cornea using OCT makes it possible to evaluate the effectiveness of drug treatment. In the case of negative dynamics of corneal disease, OCT helps determine the indications for surgical treatment (assessing the thickness of the cornea, localizing its thinning zones, and characterizing the depth and area of pathological changes). Based on corneal OCT, it is possible to choose the optimal surgical treatment for the patient, along with the technique and depth of optically reconstructive surgery (penetrating or anterior lamellar keratoplasty), and to determine the depth and diameter of the planned microsurgical trepanation of corneal tissue, which will ensure good adaptation of the edges of the donor material.

Keywords: optical coherence tomography, corneal sequestration, optical coherence tomography of the cornea, corneal transplantation, cat, dog

Procedia PDF Downloads 62
1518 Minimizing Students' Learning Difficulties in Mathematics

Authors: Hari Sharan Pandit

Abstract:

Mathematics teaching in Nepal has been centralized and guided by the notion of transfer of knowledge and skills from teachers to students. The overemphasis on an ‘algorithm-centric’ approach to mathematics teaching and the focus on rote learning as the ultimate way of solving mathematical problems since the early years of schooling have created severe problems in school-level mathematics in Nepal. In this context, the author argues that students should learn real-world mathematical problems through various interesting, creative and collaborative, as well as artistic and alternative, ways of knowing. Collaboration-incorporated pedagogy is a distinct pedagogical approach that offers a better alternative as an integrated and interdisciplinary approach to learning, encouraging students to think more broadly and critically about real-world problems. The paper, as a summarized report of action research designed, developed and implemented by the author, focuses on the need for and usefulness of collaboration-incorporated pedagogy in the Nepali context to make mathematics teaching more meaningful for producing creative and critical citizens. This paper is useful for mathematics teachers, teacher educators and researchers who argue for arts integration in mathematics teaching.

Keywords: peer teaching, metacognitive approach, mitigating, action research

Procedia PDF Downloads 11
1517 Analyzing Medical Workflows Using Market Basket Analysis

Authors: Mohit Kumar, Mayur Betharia

Abstract:

With the emergence of the Electronic Medical Record (EMR), the healthcare domain collects a lot of data, which has been attracting data mining experts' interest. In the past, doctors have relied on their intuition while making critical clinical decisions. This paper presents the means to analyze medical workflows to extract business insights from huge medical databases. Market Basket Analysis (MBA), a special data mining technique, has been widely used in the marketing and e-commerce fields to discover associations between products bought together by customers. It helps businesses increase their sales by analyzing the purchasing behavior of customers and pitching the right product to the right customer. This paper is an attempt to demonstrate Market Basket Analysis applications in healthcare. In particular, it discusses applications of the Market Basket Analysis algorithm ‘Apriori’ within healthcare in major areas such as analyzing the workflow of diagnostic procedures, up-selling and cross-selling of healthcare systems, and designing healthcare systems to be more user-friendly. In the paper, we have demonstrated the MBA applications using angiography systems, but they can be extrapolated to other modalities as well.
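
As a toy illustration of the support and confidence measures behind Apriori-style analysis, the sketch below scores co-occurring procedures in hypothetical exam "baskets"; the procedure names and data are invented.

```python
from itertools import combinations
from collections import Counter

baskets = [
    {"angiography", "contrast_injection", "fluoroscopy"},
    {"angiography", "contrast_injection"},
    {"angiography", "fluoroscopy"},
    {"ultrasound", "contrast_injection"},
    {"angiography", "contrast_injection", "fluoroscopy"},
]

pair_counts = Counter()
item_counts = Counter()
for b in baskets:
    item_counts.update(b)
    pair_counts.update(combinations(sorted(b), 2))

n = len(baskets)
for (a, c), cnt in pair_counts.most_common(3):
    support = cnt / n                      # fraction of baskets containing both
    confidence = cnt / item_counts[a]      # confidence of the rule a -> c
    print(f"{a} -> {c}: support={support:.2f}, confidence={confidence:.2f}")
```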

Keywords: data mining, market basket analysis, healthcare applications, knowledge discovery in healthcare databases, customer relationship management, healthcare systems

Procedia PDF Downloads 167
1516 Algorithmic Generation of Carbon Nanochimneys

Authors: Sorin Muraru

Abstract:

Computational generation of carbon nanostructures is still a very demanding process. This work provides an alternative to manual molecular modeling through an algorithm meant to automate the design of such structures. Specifically, carbon nanochimneys are obtained through the bonding of a carbon nanotube with the smaller edge of an open carbon nanocone. The methods of connection rely on mathematical, geometrical and chemical properties. Non-hexagonal rings are used in order to perform the correct bonding of dangling bonds. Once obtained, they are useful for thermal transport, gas storage or other applications such as gas separation. The carbon nanochimneys are meant to produce a less steep connection between structures such as the carbon nanotube and the graphene sheet, as in pillared graphene, but can also provide functionality on their own. The method relies on connecting dangling bonds at the edges of the two carbon nanostructures, employing two different types of auxiliary structures on a case-by-case basis. The code is implemented in Python 3.7 and generates an output file in the .pdb format containing all of the system's coordinates. Acknowledgment: This work was supported by a grant of the Executive Agency for Higher Education, Research, Development and Innovation Funding (UEFISCDI), project number PN-III-P1-1.1-TE-2016-24-2, contract TE 122/2018.
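
As a hedged sketch of the output step, the snippet below writes a placeholder ring of carbon atoms as simplified PDB ATOM records; the geometry helper and the column layout are approximations, not the authors' generator.

```python
from math import cos, sin, pi

def ring(n_atoms: int, radius: float, z: float):
    """Hypothetical helper: one circular ring of carbon atoms at height z (angstroms)."""
    return [(radius * cos(2 * pi * k / n_atoms),
             radius * sin(2 * pi * k / n_atoms), z) for k in range(n_atoms)]

atoms = ring(20, 3.9, 0.0) + ring(20, 3.9, 1.42)   # two stacked placeholder rings

with open("nanochimney.pdb", "w") as fh:
    for i, (x, y, z) in enumerate(atoms, start=1):
        # Simplified approximation of the fixed-column PDB ATOM record.
        fh.write(f"ATOM  {i:5d}  C   UNK A{i:4d}    "
                 f"{x:8.3f}{y:8.3f}{z:8.3f}  1.00  0.00           C\n")
    fh.write("END\n")
```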

Keywords: carbon nanochimneys, computational, carbon nanotube, carbon nanocone, molecular modeling, carbon nanostructures

Procedia PDF Downloads 164
1515 A Character Detection Method for Ancient Yi Books Based on Connected Components and Regressive Character Segmentation

Authors: Xu Han, Shanxiong Chen, Shiyu Zhu, Xiaoyu Lin, Fujia Zhao, Dingwang Wang

Abstract:

Character detection is an important issue for character recognition of ancient Yi books. The accuracy of detection directly affects the recognition effect for ancient Yi books. Considering the complex layout, the lack of standard typesetting and the mixed arrangement of images and text, we propose a character detection method for ancient Yi books based on connected components and regressive character segmentation. First, the scanned images of ancient Yi books are preprocessed with non-local means filtering, and then a modified local adaptive threshold binarization algorithm is used to obtain binary images that separate the foreground from the background. Second, the non-text areas are removed by a method based on connected components. Finally, the single characters in the ancient Yi books are segmented by our method. The experimental results show that the method can effectively separate the text areas from the non-text areas in ancient Yi books, achieves higher accuracy and recall in the character detection experiments, and effectively solves the problem of character detection and segmentation in character recognition of ancient books.
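
The pre-processing steps named above map onto standard OpenCV calls, as in the hedged sketch below; the thresholds and size limits are placeholders, and the authors' modified binarization and regressive segmentation are not reproduced.

```python
import cv2
import numpy as np

# Stand-in for a scanned page; in practice: cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
page = np.random.randint(0, 256, (600, 400), dtype=np.uint8)

den = cv2.fastNlMeansDenoising(page, h=10)                      # non-local means filtering
binary = cv2.adaptiveThreshold(den, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, blockSize=31, C=10)

n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
candidates = [i for i in range(1, n_labels)
              if 20 < stats[i, cv2.CC_STAT_AREA] < 5000]         # drop noise / image regions
print(f"{len(candidates)} candidate character components")
```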

Keywords: CCS concepts, computing methodologies, interest point, salient region detections, image segmentation

Procedia PDF Downloads 124
1514 Investigation of Extreme Gradient Boosting Model Prediction of Soil Strain-Shear Modulus

Authors: Ehsan Mehryaar, Reza Bushehri

Abstract:

One of the principal parameters defining the dynamic response of clay soil is the strain-shear modulus relation. Predicting the strain and, subsequently, the shear modulus reduction of the soil is essential for the performance analysis of structures exposed to earthquake and dynamic loadings. Many soil properties affect the soil's dynamic behavior. In order to capture those effects, a database containing 1193 data points, consisting of maximum shear modulus, strain, moisture content, initial void ratio, plastic limit, liquid limit and initial confining pressure from dynamic laboratory testing of 21 clays, is collected in this study for predicting the shear modulus vs. strain curve of the soil. A model based on the extreme gradient boosting technique is proposed. A tree-structured Parzen estimator hyper-parameter tuning algorithm is utilized simultaneously to find the best hyper-parameters for the model. The performance of the model is compared to existing empirical equations using the coefficient of correlation and the root mean square error.
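
A minimal sketch of this combination, assuming Optuna's TPE sampler as the tree-structured Parzen estimator and synthetic data in place of the clay database, is shown below.

```python
import optuna
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

# Synthetic placeholder for the 1193-point, 7-feature clay database.
X, y = make_regression(n_samples=1000, n_features=7, noise=0.1, random_state=0)

def objective(trial):
    model = XGBRegressor(
        n_estimators=trial.suggest_int("n_estimators", 100, 600),
        max_depth=trial.suggest_int("max_depth", 2, 8),
        learning_rate=trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        subsample=trial.suggest_float("subsample", 0.6, 1.0),
    )
    # Minimise cross-validated RMSE.
    return -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()

study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print("best RMSE:", round(study.best_value, 4), "best params:", study.best_params)
```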

Keywords: XGBoost, hyper-parameter tuning, soil shear modulus, dynamic response

Procedia PDF Downloads 196
1513 Skills Needed Amongst Secondary School Students for Artificial Intelligence Development in Southeast Nigeria

Authors: Chukwuma Mgboji

Abstract:

Since the advent of artificial intelligence, robots have become a mainstay in developing societies. Robots are deployed in education, health, food and other spheres of life. Nigeria, a country in West Africa, has a very low profile in the advancement of artificial intelligence, especially at the grassroots. The benefits of artificial intelligence are not fully maximised and harnessed. Advances in artificial intelligence are perceived as impossible or regarded as irrelevant. This study seeks to ascertain the skills needed for the development of artificial intelligence amongst secondary school students in Nigeria. The study focused on South East Nigeria, with five states, namely Imo, Abia, Ebonyi, Anambra and Enugu. The sample size is 1000 students drawn from five government-owned universities offering Computer Science, Computer Education and Electronics Engineering across the five South East states. A survey method was used to solicit responses from respondents. The findings of the study identified mathematical skills, analytical skills, problem-solving skills, computing skills, programming skills and algorithm skills, amongst others. The results of this study will, to the best of the author's knowledge, be highly beneficial to all stakeholders involved in the advancement and development of artificial intelligence.

Keywords: artificial intelligence, secondary school, robotics, skills

Procedia PDF Downloads 140
1512 Modelling and Simulation of a Commercial Thermophilic Biogas Plant

Authors: Jeremiah L. Chukwuneke, Obiora E. Anisiji, Chinonso H. Achebe, Paul C. Okolie

Abstract:

This paper developed a mathematical model of a commercial biogas plant for the clean energy requirements of an urban area. It identified biodegradable waste materials, such as domestic/city refuse, as an economically viable alternative source of energy. The mathematical formulation of the proposed gas plant follows the fundamental principles of thermodynamics, and further analyses were carried out to develop an algorithm for evaluating the plant performance, preferably in terms of daily production capacity. In addition, the capacity of the plant is estimated for a given cycle of operation and presented as time histories. A nominal 1500 m3 gas plant was studied and its performance efficiency evaluated. It was observed that the rate of biogas production is essentially a function of the reactor temperature, pH, substrate concentration, the rate of degradation of the biomass, and the accumulation of matter in the system due to bacterial growth. The results of this study conform to a very large extent with the reported empirical data of some existing plants, and further model validation was conducted in line with classical records found in the literature.
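
As a generic illustration of a daily-production algorithm of this kind, the sketch below uses a first-order substrate degradation model with hypothetical rate, yield and feed values; the paper's actual thermodynamic formulation is not reproduced.

```python
# First-order kinetic sketch of daily biogas production (all values hypothetical).

k = 0.25          # 1/day, first-order degradation rate (temperature/pH dependent)
Y = 0.35          # m^3 biogas per kg volatile solids degraded
S = 2000.0        # kg volatile solids initially in the digester
feed = 150.0      # kg volatile solids fed per day

for day in range(1, 11):
    degraded = k * S                 # kg/day degraded this day
    gas = Y * degraded               # m^3/day produced
    S = S - degraded + feed          # substrate balance
    print(f"day {day:2d}: biogas ~ {gas:6.1f} m^3/day, substrate {S:7.1f} kg")
```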

Keywords: energy and mass conservation, specific growth rate, thermophilic bacteria, temperature, rate of bio gas production

Procedia PDF Downloads 435
1511 Application of Machine Learning Models to Predict Couchsurfers on Free Homestay Platform Couchsurfing

Authors: Yuanxiang Miao

Abstract:

Couchsurfing is a free homestay and social networking service accessible via a website and mobile app. Couchsurfers can directly request free accommodation from others and receive offers from each other. However, it is typically difficult to decide whether to accept or decline a request, because host and guest do not know each other at all. People expect to meet Couchsurfers who are kind, generous and interesting, but it is unavoidable that they sometimes meet someone unfriendly. This paper applied classification algorithms from machine learning to help people distinguish good Couchsurfers from not-so-good ones on the Couchsurfing website. By knowing prior information, such as the Couchsurfer's profile, the latest references and other factors, it becomes possible to recognize what kind of Couchsurfer is making the request and, furthermore, to decide whether or not to host them. The value of this research lies in a case study in Kyoto, Japan, where the author has hosted 54 Couchsurfers; the author collected relevant data on these 54 Couchsurfers and finally built a model based on classification algorithms for predicting Couchsurfers. Lastly, the author offers some feasible suggestions for future research.
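
A hedged sketch of such a classifier, using scikit-learn with invented profile features and synthetic labels in place of the author's 54-guest dataset, is shown below.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 54
# Hypothetical features: number of references, account age (years),
# profile completeness (0-1), verified flag.
X = np.column_stack([
    rng.poisson(8, n),
    rng.uniform(0, 10, n),
    rng.uniform(0, 1, n),
    rng.integers(0, 2, n),
])
y = rng.integers(0, 2, n)     # 1 = "good" guest, 0 = otherwise (labels assigned by the host)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```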

Keywords: Couchsurfing, Couchsurfers prediction, classification algorithm, hospitality tourism platform, hospitality sciences, machine learning

Procedia PDF Downloads 121
1510 Preparation and Characterization of Phosphate-Nickel-Titanium Composite Coating Obtained by Sol Gel Process for Corrosion Protection

Authors: Khalidou Ba, Abdelkrim Chahine, Mohamed Ebn Touhami

Abstract:

A strong industrial interest is focused on the development of coatings for anticorrosion protection. In this context, phosphate composite materials are expanding strongly due to their chemical characteristics and their interesting physicochemical properties. Sol-gel coatings offer high homogeneity and purity, which may lead to coatings presenting good adhesion to the metal surface. The goal of this work is to develop efficient coatings for the corrosion protection of steel in order to extend its life. In this context, a sol-gel process allowing thin-film coatings with high corrosion resistance to be obtained on carbon steel has been developed. Several experimental parameters, such as the hydrolysis time, the temperature, the coating technique, the molar ratio between precursors, the number of layers and the drying mode, have been optimized in order to obtain the coating showing the best anti-corrosion properties. The effect of these parameters on the microstructure and anticorrosion performance of the sol-gel coating films has been investigated using different characterization methods (FTIR, XRD, Raman, XPS, SEM, profilometry, salt spray test, etc.). An optimized coating presenting good adhesion and very stable anticorrosion properties has been obtained in the salt spray test, which consists of a corrosive attack accelerated by an artificial salt spray made of a 5% NaCl solution at neutral pH, under precise conditions of temperature (35 °C) and pressure.

Keywords: sol gel, coating, corrosion, XPS

Procedia PDF Downloads 124
1509 Continuous Production of Prebiotic Pectic Oligosaccharides from Sugar Beet Pulp in a Continuous Cross Flow Membrane Bioreactor

Authors: Neha Babbar, S. Van Roy, W. Dejonghe, S. Sforza, K. Elst

Abstract:

Pectic oligosaccharides (a class of prebiotics) are non-digestible carbohydrates which benefit the host by stimulating the growth of healthy gut microflora. Production of prebiotic pectic oligosaccharides (POS) from pectin-rich agricultural residues involves cutting the long-chain pectin polymer into pectin oligomers while avoiding the formation of monosaccharides. The objective of the present study is to develop a two-step continuous biocatalytic membrane reactor (MER) for the continuous production of POS (from sugar beet pulp) in which conversion is combined with separation. Optimization of the POS/monosaccharide ratio, the stability and the productivity of the process was done by testing various residence times (RT) in the reactor vessel with diluted (10 RT, 20 RT and 30 RT) and undiluted (30 RT, 40 RT and 60 RT) substrate. The results show that the most stable (steady-state) processes were 20 RT and 30 RT for the diluted substrate and 40 RT and 60 RT for the undiluted substrate. The highest volumetric and specific productivities, of 20 g/L/h and 11 g/gE/h, and 17 g/L/h and 9 g/gE/h, were obtained with 20 RT (diluted substrate) and 40 RT (undiluted substrate), respectively. Under these conditions, the permeate of the reactor test with 20 RT (diluted substrate) consisted of 80% POS fractions, while that of 40 RT (undiluted substrate) resulted in 70% POS fractions. A two-step continuous biocatalytic MER looks very promising for the continuous production of tailor-made POS. Although both processes, i.e. 20 RT (diluted substrate) and 40 RT (undiluted substrate), gave the best results, for an industrial application it is preferable to use an undiluted substrate.

Keywords: pectic oligosaccharides, membrane reactor, residence time, specific productivity, volumetric productivity

Procedia PDF Downloads 431
1508 Effects of Process Parameters on the Yield of Oil from Coconut Fruit

Authors: Ndidi F. Amulu, Godian O. Mbah, Maxwel I. Onyiah, Callistus N. Ude

Abstract:

The properties of coconut (Cocos nucifera) and its oil were evaluated in this work using standard analytical techniques. The analyses carried out include the proximate composition of the fruit, extraction of oil from the fruit using different process parameters, and physicochemical analysis of the extracted oil. The results showed the percentage (%) moisture, crude lipid, crude protein, ash and carbohydrate content of the coconut as 7.59, 55.15, 5.65, 7.35 and 19.51, respectively. The oil from the coconut fruit was an odourless, yellowish liquid at room temperature (30 °C). The treatment combinations used (leaching time, leaching temperature and solute:solvent ratio) showed significant differences (P < 0.05) in the yield of oil from coconut flour. The oil yield ranged between 36.25% and 49.83%. Lipid indices of the coconut oil indicated an acid value (AV) of 10.05 NaOH/g of oil, free fatty acid (FFA) of 5.03%, saponification value (SV) of 183.26 mg KOH/g of oil, iodine value (IV) of 81.00 I2/g of oil, peroxide value (PV) of 5.00 ml/g of oil and viscosity (V) of 0.002. The statistical package Minitab version 16.0 was used for the regression analysis and the analysis of variance (ANOVA). The same software was used to generate various plots, such as single-effect, interaction-effect and contour plots. The response, or yield of oil from the coconut flour, was used to develop a mathematical model that correlates the yield to the process variables studied. The conditions that gave the highest yield of coconut oil were a leaching time of 2 h, a leaching temperature of 50 °C and a solute/solvent ratio of 0.05 g/ml.
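
As an illustration of the kind of regression model described, the sketch below fits a second-order response surface of yield against the three process variables on synthetic placeholder data; the data points are invented, not the experimental results.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(0.5, 2.0, 30),    # leaching time, h
    rng.uniform(30, 50, 30),      # temperature, degC
    rng.uniform(0.02, 0.08, 30),  # solute/solvent ratio, g/ml
])
# Invented response surface with noise, standing in for the measured yields.
yield_pct = 30 + 5 * X[:, 0] + 0.2 * X[:, 1] + 50 * X[:, 2] + rng.normal(0, 1, 30)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, yield_pct)
print("predicted yield at 2 h, 50 degC, 0.05 g/ml:",
      model.predict([[2.0, 50.0, 0.05]]).round(2))
```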

Keywords: coconut, oil-extraction, optimization, physicochemical, proximate

Procedia PDF Downloads 346
1507 Comparative Study of Heat Transfer Capacity Limits of Heat Pipe

Authors: H. Shokouhmand, A. Ghanami

Abstract:

A heat pipe is a simple heat transfer device which combines conduction and phase change phenomena to control heat transfer without any need for an external power source. At the hot surface of the heat pipe, the liquid phase absorbs heat and changes to the vapor phase. The vapor phase flows to the condenser region and, with the loss of heat, changes back to the liquid phase. Due to gravitational force, the liquid phase flows back to the evaporator section. In HVAC systems, the working fluid is chosen based on the operating temperature. The heat pipe has a significant capability to reduce the humidity in HVAC systems; any HVAC system which uses a heater, humidifier or dryer is a suitable candidate for the utilization of heat pipes. Generally, heat pipes have three main sections: condenser, adiabatic region and evaporator. Performance investigation and optimization of heat pipe operation in order to increase efficiency are crucial. In the present article, a parametric study is performed to improve the heat pipe performance; the heat transfer capacity of the heat pipe is therefore investigated with respect to geometrical and confining parameters. For a better observation of heat pipe operation in HVAC systems, a CFD simulation using an Eulerian-Eulerian multiphase approach is also performed. The results show that the heat pipe heat transfer capacity is higher for water as the working fluid at an operating temperature of 340 K. It is also observed that the vertical orientation of the heat pipe enhances its heat transfer capacity.
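
The keyword "heat pipe limits" refers to transport limits such as the capillary limit; as a rough, hedged illustration, the sketch below evaluates the standard approximate capillary-limit expression for a water heat pipe, neglecting the vapour pressure drop. The wick geometry and the water properties near 340 K are hypothetical order-of-magnitude inputs, not values from the paper.

```python
# Capillary limit estimate:
# Q_max = (rho_l * K * A_w * h_fg / (mu_l * L_eff)) * (2*sigma/r_eff - rho_l*g*L_t*sin(phi))
# A negative result means the wick cannot overcome gravity at that tilt.

from math import sin, radians

# Water properties near 340 K (approximate)
rho_l, mu_l = 980.0, 4.2e-4          # kg/m^3, Pa*s
sigma, h_fg = 0.066, 2.33e6          # N/m, J/kg
g = 9.81

# Hypothetical grooved-wick geometry
K = 1.0e-10               # wick permeability, m^2
A_w = 3.0e-5              # wick cross-section, m^2
L_eff, L_t = 0.25, 0.30   # effective and total lengths, m
r_eff = 0.5e-3            # effective capillary radius, m

def q_capillary(tilt_deg: float) -> float:
    dp_cap = 2 * sigma / r_eff
    dp_grav = rho_l * g * L_t * sin(radians(tilt_deg))   # >0: evaporator above condenser
    return rho_l * K * A_w * h_fg / (mu_l * L_eff) * (dp_cap - dp_grav)

for tilt in (0, 30, 90, -90):   # -90: gravity-assisted vertical orientation
    print(f"tilt {tilt:+4d} deg: Q_max ~ {q_capillary(tilt):7.1f} W")
```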

Keywords: heat pipe, HVAC system, grooved heat pipe, heat pipe limits

Procedia PDF Downloads 365
1506 Heat Pipe Thermal Performance Improvement in H-VAC Systems Using CFD Modeling

Authors: H. Shokouhmand, A. Ghanami

Abstract:

A heat pipe is a simple heat transfer device which combines conduction and phase change phenomena to control heat transfer without any need for an external power source. At the hot surface of the heat pipe, the liquid phase absorbs heat and changes to the vapor phase. The vapor phase flows to the condenser region and, with the loss of heat, changes back to the liquid phase. Due to gravitational force, the liquid phase flows back to the evaporator section. In HVAC systems, the working fluid is chosen based on the operating temperature. The heat pipe has a significant capability to reduce the humidity in HVAC systems; any HVAC system which uses a heater, humidifier or dryer is a suitable candidate for the utilization of heat pipes. Generally, heat pipes have three main sections: condenser, adiabatic region, and evaporator. Performance investigation and optimization of heat pipe operation in order to increase efficiency are crucial. In the present article, a parametric study is performed to improve the heat pipe performance; the heat transfer capacity of the heat pipe is therefore investigated with respect to geometrical and confining parameters. For a better observation of heat pipe operation in HVAC systems, a CFD simulation using an Eulerian-Eulerian multiphase approach is also performed. The results show that the heat pipe heat transfer capacity is higher for water as the working fluid at an operating temperature of 340 K. It is also shown that the vertical orientation of the heat pipe enhances its heat transfer capacity.

Keywords: heat pipe, HVAC system, grooved heat pipe, CFD simulation

Procedia PDF Downloads 490
1505 Use of DNA Barcoding and UPLC-MS to Authenticate Agathosma spp. in South African Herbal Products

Authors: E. Pretorius, A. M. Viljoen, M. van der Bank

Abstract:

Introduction: The phytochemistry of Agathosma crenulata and A. betulina has been studied extensively, while their molecular analysis through DNA barcoding remains virtually unexplored. This technique can confirm the identity of the plant species included in a herbal product, thereby ensuring the efficacy of the herbal product and the accuracy of its label. Materials and methods: Authentic Agathosma reference material of A. betulina (n=16) and A. crenulata (n=10) was obtained. Thirteen commercial products were purchased from various health shops around Johannesburg, South Africa, using the search term “Agathosma” or “Buchu.” The plastid regions matK and ycf1 were used to barcode the Buchu products, and BRONX analysis confirmed the taxonomic identity of the samples. UPLC-MS analyses were also performed. Results: Only 30 of the 60 traded samples tested from 13 suppliers contained A. betulina in their herbal products. Similar results were obtained for the UPLC-MS analysis. Conclusion: In this study, we demonstrate the application of DNA barcoding in combination with phytochemical analysis to authenticate herbal products claiming to contain Agathosma plants as an ingredient. This supports manufacturing efforts to ensure herbal products that are safe for the consumer.

Keywords: Buchu, substitution, barcoding, BRONX algorithm, matK, ycf1, UPLC-MS

Procedia PDF Downloads 123
1504 Heat Pipes Thermal Performance Improvement in H-VAC Systems Using CFD Modeling

Authors: M. Heydari, A. Ghanami

Abstract:

A heat pipe is a simple heat transfer device which combines conduction and phase change phenomena to control heat transfer without any need for an external power source. At the hot surface of the heat pipe, the liquid phase absorbs heat and changes to the vapor phase. The vapor phase flows to the condenser region and, with the loss of heat, changes back to the liquid phase. Due to gravitational force, the liquid phase flows back to the evaporator section. In HVAC systems, the working fluid is chosen based on the operating temperature. The heat pipe has a significant capability to reduce the humidity in HVAC systems; any HVAC system which uses a heater, humidifier or dryer is a suitable candidate for the utilization of heat pipes. Generally, heat pipes have three main sections: condenser, adiabatic region and evaporator. Performance investigation and optimization of heat pipe operation in order to increase efficiency are crucial. In the present article, a parametric study is performed to improve the heat pipe performance; the heat transfer capacity of the heat pipe is therefore investigated with respect to geometrical and confining parameters. For a better observation of heat pipe operation in HVAC systems, a CFD simulation using an Eulerian-Eulerian multiphase approach is also performed. The results show that the heat pipe heat transfer capacity is higher for water as the working fluid at an operating temperature of 340 K. It is also shown that the vertical orientation of the heat pipe enhances its heat transfer capacity.

Keywords: heat pipe, HVAC system, grooved heat pipe, heat pipe limits

Procedia PDF Downloads 438