Search results for: traditional techniques
9206 Comparing Image Processing and AI Techniques for Disease Detection in Plants
Authors: Luiz Daniel Garay Trindade, Antonio De Freitas Valle Neto, Fabio Paulo Basso, Elder De Macedo Rodrigues, Maicon Bernardino, Daniel Welfer, Daniel Muller
Abstract:
Agriculture plays an important role in society since it is one of the main sources of food in the world. To support crop production and yield, precision agriculture makes use of technologies aimed at improving the productivity and quality of agricultural commodities. One of the problems hampering the quality of agricultural production is disease affecting crops. Failure to detect diseases quickly can result in minor or major damage to production, causing financial losses to farmers. In order to provide a map of the contributions devoted to the early detection of plant diseases and a comparison of the accuracy of the selected studies, a systematic literature review was performed, covering techniques based on digital image processing and neural networks. We found 35 tool-support alternatives for detecting disease in 19 plants. Our comparison of these studies resulted in an overall average accuracy of 87.45%, with two studies coming very close to 100%.
Keywords: pattern recognition, image processing, deep learning, precision agriculture, smart farming, agricultural automation
9205 Process for Separating and Recovering Materials from Kerf Slurry Waste
Authors: Tarik Ouslimane, Abdenour Lami, Salaheddine Aoudj, Mouna Hecini, Ouahiba Bouchelaghem, Nadjib Drouiche
Abstract:
Slurry waste is a byproduct generated from the slicing of multi-crystalline silicon ingots. This waste can be used as a secondary resource to recover high-purity silicon, which has great economic value. From a management perspective, the ever-increasing generation of kerf slurry waste poses significant challenges for the photovoltaic industry because only a small share of the slurry is currently used for silicon recovery. Slurry waste, in most cases, contains silicon, silicon carbide, metal fragments, and a mineral-oil-based or glycol-based slurry vehicle. As a result of the global scarcity of high-purity silicon supply, the high-purity silicon content of slurry has increasingly attracted research interest. This paper presents a critical overview of the current techniques employed for high-purity silicon recovery from kerf slurry waste. Hydrometallurgy remains a continuing subject of study and research; in addition, this review introduces several new techniques for recovering high-purity silicon from slurry waste. The information presented is intended to support the development of a clean and effective process for recovering high-purity silicon from slurry waste.
Keywords: kerf-loss, slurry waste, silicon carbide, silicon recovery, photovoltaic, high purity silicon, polyethylene glycol
9204 Network Conditioning and Transfer Learning for Peripheral Nerve Segmentation in Ultrasound Images
Authors: Harold Mauricio Díaz-Vargas, Cristian Alfonso Jimenez-Castaño, David Augusto Cárdenas-Peña, Guillermo Alberto Ortiz-Gómez, Alvaro Angel Orozco-Gutierrez
Abstract:
Precise identification of nerves is a crucial task performed by anesthesiologists for effective Peripheral Nerve Blocking (PNB). Anesthesiologists currently use ultrasound imaging equipment to guide the PNB and detect nervous structures. However, visual identification of nerves in ultrasound images is difficult, even for trained specialists, due to artifacts and low contrast. Recent advances in deep learning make neural networks a potential tool for accurate nerve segmentation systems, addressing the above issues from raw data. The widely used U-Net yields pixel-by-pixel segmentation by encoding the input image and decoding the attained feature vector into a semantic image. This work proposes a conditioning approach and encoder pre-training to enhance the nerve segmentation of traditional U-Nets. Conditioning is achieved by one-hot encoding the kind of target nerve at the network input, while the pre-training considers five well-known deep networks for image classification. The proposed approach is tested on a collection of 619 US images, where the best C-UNet architecture yields an 81% Dice coefficient, outperforming the 74% of the best traditional U-Net. The results prove that pre-trained models with the conditioning approach outperform their equivalent baselines by supporting the learning of new features and enriching the discriminant capability of the tested networks.
Keywords: nerve segmentation, U-Net, deep learning, ultrasound imaging, peripheral nerve blocking
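A minimal sketch of the conditioning idea described above, assuming a PyTorch-style U-Net: the target-nerve class is one-hot encoded as extra constant channels concatenated to the ultrasound image before the encoder. The function name and tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import torch

def condition_input(image, nerve_id, num_nerve_types):
    """Concatenate one-hot nerve-type channels to a grayscale ultrasound image.

    image:     (batch, 1, H, W) tensor
    nerve_id:  (batch,) integer tensor identifying the target nerve
    Returns a (batch, 1 + num_nerve_types, H, W) tensor usable as U-Net input.
    """
    b, _, h, w = image.shape
    onehot = torch.zeros(b, num_nerve_types, h, w, dtype=image.dtype)
    onehot[torch.arange(b), nerve_id] = 1.0  # broadcast the one-hot code over the image plane
    return torch.cat([image, onehot], dim=1)

# Example: a batch of two images, targeting nerve classes 0 and 2 out of 4.
x = torch.rand(2, 1, 256, 256)
ids = torch.tensor([0, 2])
conditioned = condition_input(x, ids, num_nerve_types=4)
print(conditioned.shape)  # torch.Size([2, 5, 256, 256])
```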
9203 A Cloud Computing System Using Virtual Hyperbolic Coordinates for Services Distribution
Authors: Telesphore Tiendrebeogo, Oumarou Sié
Abstract:
Cloud computing technologies have attracted considerable interest in recent years and have become increasingly important for many existing database applications. Cloud computing provides a new mode of use and provision of IT resources in general; such resources can be used "on demand" by anybody who has access to the internet. In particular, the Cloud platform provides an easy-to-use interface between providers and users, allowing providers to develop and offer software and databases to users across locations. Currently, many Cloud platform providers support large-scale database services. However, most of them only support simple keyword-based queries and cannot answer complex queries efficiently, due to the lack of efficient multi-attribute indexing techniques. Existing Cloud platform providers therefore seek to improve the performance of indexing techniques for complex queries. In this paper, we define a new cloud computing architecture based on a Distributed Hash Table (DHT) and design a prototype system. We then build and evaluate a cloud computing indexing structure based on a hyperbolic tree using virtual coordinates taken in the hyperbolic plane. Our experimental results, compared with other cloud systems, show that our solution ensures consistency and scalability for the Cloud platform.
Keywords: virtual coordinates, cloud, hyperbolic plane, storage, scalability, consistency
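One common way such virtual hyperbolic addressing works is greedy routing in the Poincaré disk model: each node holds a virtual coordinate in the unit disk, and a query is forwarded to the neighbor hyperbolically closest to the target key. The sketch below illustrates that idea under this assumption; it is not necessarily the authors' exact scheme.

```python
import math

def poincare_distance(u, v):
    """Hyperbolic distance between two points of the open unit (Poincare) disk."""
    du = 1.0 - (u[0]**2 + u[1]**2)
    dv = 1.0 - (v[0]**2 + v[1]**2)
    euclid2 = (u[0] - v[0])**2 + (u[1] - v[1])**2
    return math.acosh(1.0 + 2.0 * euclid2 / (du * dv))

def greedy_next_hop(current, neighbors, target):
    """Forward towards the neighbor that is hyperbolically closest to the target key."""
    return min(neighbors, key=lambda n: poincare_distance(n, target))

# Example: route from the origin towards a target through two candidate neighbors.
hop = greedy_next_hop((0.0, 0.0), [(0.3, 0.1), (-0.2, 0.4)], target=(0.6, 0.2))
print(hop)  # (0.3, 0.1)
```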
9202 Control Flow around NACA 4415 Airfoil Using Slot and Injection
Authors: Imine Zakaria, Meftah Sidi Mohamed El Amine
Abstract:
One of the most vital aerodynamic organs of a flying machine is the wing, which allows it to fly efficiently. The flow around the wing is very sensitive to changes in the angle of attack. Beyond a certain value, the boundary layer separates on the upper surface, causing instability and a total degradation of aerodynamic performance known as stall. Controlling the flow around an airfoil has therefore become a major concern in the aeronautics field. There are two families of techniques for controlling the flow around a wing to improve its aerodynamic performance: passive and active control. Blowing and suction are among the active techniques that control boundary layer separation around an airfoil. Their objective is to energize the air particles in the separation zones and to create vortex structures that homogenize the velocity near the wall and enable control. Blowing and suction have long been used as flow-control actuators around obstacles; in 1904, Prandtl applied steady blowing to a cylinder to delay boundary layer separation. In the present study, several numerical investigations were carried out to predict the turbulent flow around an aerodynamic profile. A CFD code was used at several angles of attack in order to validate the present work against the literature for the clean profile. The variation of the lift coefficient CL with the momentum coefficient is also examined.
Keywords: CFD, control flow, lift, slot
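For context, the blowing momentum coefficient referred to above is conventionally defined as Cmu = m_dot * Vj / (q_inf * S): the jet momentum flux normalized by the freestream dynamic pressure times a reference area (the chord per unit span in 2D). A small sketch of this standard definition, with illustrative numbers that are not values from the study:

```python
def momentum_coefficient(m_dot, v_jet, rho, u_inf, ref_area):
    """Blowing momentum coefficient: jet momentum flux over freestream dynamic pressure x area."""
    q_inf = 0.5 * rho * u_inf**2
    return (m_dot * v_jet) / (q_inf * ref_area)

# Illustrative numbers only: 1 m chord (per unit span), sea-level air, 40 m/s freestream.
print(momentum_coefficient(m_dot=0.05, v_jet=80.0, rho=1.225, u_inf=40.0, ref_area=1.0))
```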
9201 Understanding the Reasons for Flooding in Chennai and Strategies for Making It Flood Resilient
Authors: Nivedhitha Venkatakrishnan
Abstract:
Flooding in urban areas in India has become a recurrent phenomenon and a nightmare for most cities, a consequence of man-made disruption that ends in disaster. City planning in India falls short of withstanding hydro-generated disasters. This has become a barrier and a challenge in the development process driven by urbanization: high population density, expanding informal settlements, and environmental degradation from uncollected and untreated waste that flows into natural drains and water bodies have disrupted natural hazard-protection mechanisms such as drainage channels, wetlands, and floodplains. The magnitude and impact of these mishaps are high because of the failure of the development policies, strategies, and plans that cities have adopted. In the current scenario, cities are becoming the home of the future, with economic diversification bringing more investment into cities, especially in urban infrastructure, planning, and design. The uncertain urban future of these low-elevation coastal zones faces unprecedented risk and threat. The study focuses on three major pillars of resilience: recover, resist, and restore. Preparing to handle such situations bridges the gap between disaster response management and risk reduction and requires a paradigm shift. The study involved qualitative research and a system design approach (framework). The initial stage involved mapping the urban water morphology with respect to spatial growth, which gave an insight into the water bodies that have gone missing over the years during urbanization. The major finding of the study was that missing links in the traditional water-harvesting network were a major factor in this man-made disaster. The research conceptualized a sponge-city framework that would guide growth through institutional frameworks at different levels. The next stage examined the implementation process at various stages to ensure the paradigm shift, demonstrating the concepts at a neighborhood level: where, how, and what the functions and benefits of each component are. The design decisions are quantified in terms of rainwater harvested and surface runoff, that is, how much water can be collected and how it can be collected, stored, and reused (a first-pass estimate of this kind is sketched below). The study closes with recommendations for water mitigation spaces that will revive the traditional harvesting network.
Keywords: flooding, man-made disaster, resilient city, traditional harvesting network, water bodies
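A first-pass surface-runoff estimate of the kind the study quantifies can be made with the rational method, Q = C * i * A. The sketch below is a generic illustration with assumed coefficients, not the study's actual calculation.

```python
def peak_runoff_m3s(runoff_coeff, intensity_mm_per_hr, area_ha):
    """Rational method Q = C*i*A, converted to SI units (m^3/s)."""
    intensity_m_s = intensity_mm_per_hr / 1000.0 / 3600.0  # mm/h -> m/s
    area_m2 = area_ha * 10_000.0                           # ha -> m^2
    return runoff_coeff * intensity_m_s * area_m2

# Illustrative: a 50 ha dense neighborhood (C ~ 0.8) under a 60 mm/h storm.
print(round(peak_runoff_m3s(0.8, 60.0, 50.0), 2), "m^3/s")  # ~6.67 m^3/s
```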
9200 Data Science-Based Key Factor Analysis and Risk Prediction of Diabetic
Authors: Fei Gao, Rodolfo C. Raga Jr.
Abstract:
This research ascertains the major risk factors for diabetes and designs a predictive model for risk assessment. The project aims to improve early detection and management of diabetes by utilizing data science techniques, which may improve patient outcomes and healthcare efficiency. Using the Diabetes Health Indicators Dataset from Kaggle as the research data, the phase relation values of each attribute were used to analyze and choose the attributes that might influence a subject's risk outcome. We compare and evaluate eight machine learning algorithms. Our investigation begins with comprehensive data preprocessing, including feature engineering and dimensionality reduction, aimed at enhancing data quality. The dataset, comprising health indicators and medical data, serves as a foundation for training and testing these algorithms. A rigorous cross-validation process is applied, and we assess performance using five key metrics: accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC). After analyzing the data characteristics, we investigate their impact on the likelihood of diabetes and develop corresponding risk indicators.
Keywords: diabetes, risk factors, predictive model, risk assessment, data science techniques, early detection, data analysis, Kaggle
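A hedged sketch of the evaluation loop described above, assuming scikit-learn: a synthetic dataset stands in for the Kaggle diabetes health-indicator data, and only two of the eight algorithms are shown.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Stand-in for the Kaggle diabetes health-indicator features and labels.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.85], random_state=0)

scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]
models = {"logreg": LogisticRegression(max_iter=1000),
          "random_forest": RandomForestClassifier(n_estimators=200, random_state=0)}

for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5, scoring=scoring)  # 5-fold cross-validation
    summary = {m: cv[f"test_{m}"].mean().round(3) for m in scoring}
    print(name, summary)
```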
9199 A U-Net Based Architecture for Fast and Accurate Diagram Extraction
Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal
Abstract:
In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Document analysis therefore requires extracting the diagrams from such images so that text and diagrams can be processed separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., satisfies the need for real-time processing with the high accuracy required by multiple applications. In the education domain, diagrams can have varied characteristics, e.g., line-based content such as geometric diagrams, chemical bonds, and mathematical formulas. Two broad categories of approaches address similar problems: traditional computer-vision-based approaches and deep learning approaches. The traditional computer-vision approaches mainly leverage connected components and distance-transform-based processing and hence perform well only in limited scenarios. The existing deep learning approaches leverage either YOLO or Faster R-CNN architectures, and they suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the mentioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
Keywords: computer vision, deep-learning, educational data mining, faster-RCNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO
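A minimal sketch of the post-processing step this segmentation formulation enables: labeling connected foreground regions of the predicted mask and reading off their bounding boxes (here with SciPy; the threshold and helper name are illustrative assumptions).

```python
import numpy as np
from scipy import ndimage

def extract_diagram_boxes(mask, min_area=100):
    """Label connected foreground regions of a mask and return their bounding boxes."""
    labeled, num = ndimage.label(mask > 0.5)
    boxes = []
    for sl in ndimage.find_objects(labeled):
        region = labeled[sl]
        if region.size >= min_area:  # filter tiny regions by bounding-box area
            boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))  # (top, left, bottom, right)
    return boxes

# Toy mask with two separate "diagrams".
mask = np.zeros((200, 300))
mask[20:80, 30:120] = 1.0
mask[120:190, 180:280] = 1.0
print(extract_diagram_boxes(mask))  # [(20, 30, 80, 120), (120, 180, 190, 280)]
```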
9198 A Nucleic Acid Extraction Method for High-Viscosity Floricultural Samples
Authors: Harunori Kawabe, Hideyuki Aoshima, Koji Murakami, Minoru Kawakami, Yuka Nakano, David D. Ordinario, C. W. Crawford, Iri Sato-Baran
Abstract:
With recent advances in gene-editing technologies allowing the rewriting of genetic sequences, additional growth in the global floriculture market beyond previous trends is anticipated through increasingly sophisticated plant-breeding techniques. As a prerequisite for gene editing, the gene sequence of the target plant must first be identified. This necessitates the genetic analysis of plants with unknown gene sequences, the extraction of RNA, and comprehensive expression analysis. Consequently, a technology capable of consistently and effectively extracting high-purity DNA and RNA from plants is of paramount importance. Although model plants, such as Arabidopsis and tobacco, have established methods for DNA and RNA extraction, floricultural species such as roses present unique challenges. Different techniques to extract DNA and RNA from various floricultural species were investigated. Upon sampling and grinding the petals of several floricultural species, it was observed that nucleic acid extraction from ground petal solutions of low viscosity was straightforward, whereas solutions of high viscosity presented a significant challenge. It is postulated that substantial quantities of polysaccharides and polyphenols in the plant tissue inhibit nucleic acid extraction. Consequently, attempts were made to extract high-purity DNA and RNA by improving the CTAB method and combining it with commercially available nucleic acid extraction kits. The quality of the total extracted DNA and RNA was evaluated using standard methods. Finally, the effectiveness of the extraction method was assessed by determining whether it was possible to create a library suitable as a template for a next-generation sequencer. In conclusion, a method was developed for consistent and accurate nucleic acid extraction from high-viscosity floricultural samples. These results demonstrate improved techniques for DNA and RNA extraction from flowers, help facilitate gene editing of floricultural species, and expand the boundaries of research and commercial opportunities.
Keywords: floriculture, gene editing, next-generation sequencing, nucleic acid extraction
9197 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques
Authors: Stefan K. Behfar
Abstract:
The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights.
Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data.
Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden.
Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis.
Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis.
Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain.
Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios.
Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications.
Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing
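A toy sketch of the first two components, graph representation and probabilistic sampling, assuming NetworkX; the transaction fields and the consecutive-transaction edge rule are illustrative stand-ins for the paper's representation.

```python
import random
import networkx as nx

# Toy stand-ins for decoded Ethereum transactions (fields illustrative).
txs = [{"hash": f"0x{i:04x}", "sender": f"addr{i % 7}", "to": f"addr{(i * 3) % 7}",
        "block": i // 10} for i in range(1000)]

# Graph representation: transactions as nodes, edges encoding temporal succession.
G = nx.DiGraph()
for prev, cur in zip(txs, txs[1:]):
    G.add_edge(prev["hash"], cur["hash"], dt=cur["block"] - prev["block"])

# Probabilistic sampling: keep each transaction node independently with probability p.
p = 0.1
sampled_nodes = [n for n in G.nodes if random.random() < p]
subgraph = G.subgraph(sampled_nodes)
print(G.number_of_nodes(), "->", subgraph.number_of_nodes(), "nodes after sampling")
```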
9196 Diffusion Magnetic Resonance Imaging and Magnetic Resonance Spectroscopy in Detecting Malignancy in Maxillofacial Lesions
Authors: Mohamed Khalifa Zayet, Salma Belal Eiid, Mushira Mohamed Dahaba
Abstract:
Introduction: Malignant tumors may not be easily detected by traditional radiographic techniques, especially in an anatomically complex area like the maxillofacial region. At the same time, the advent of biological functional MRI was a significant footstep in the diagnostic imaging field. Objective: The purpose of this study was to define the malignant metabolic profile of maxillofacial lesions using diffusion MRI and magnetic resonance spectroscopy as adjunctive aids for diagnosing such lesions. Subjects and Methods: Twenty-one patients with twenty-two lesions were enrolled in this study. Both morphological and functional MRI scans were performed: T1- and T2-weighted images and diffusion-weighted MRI with four apparent diffusion coefficient (ADC) maps were constructed for analysis, and magnetic resonance spectroscopy with qualitative and semi-quantitative analyses of the choline and lactate peaks was applied. All patients then underwent incisional or excisional biopsies within two weeks of the MR scans. Results: Statistical analysis revealed that not all parameters had the same diagnostic performance; lactate had the highest area under the curve (AUC), 0.9, while choline had the lowest, with insignificant diagnostic value. The best cut-off value suggested for lactate was 0.125, above which a lesion is predicted to be malignant with 90% sensitivity and 83.3% specificity. Although the ADC maps had comparable AUCs, the decisive statistical measure was the interpretation of likelihood ratios. As expected, lactate again showed the best combination of positive and negative likelihood ratios, whereas among the maps, the ADC map with b-values of 500 and 1000 showed the most realistic combination of likelihood ratios, albeit with lower sensitivity and specificity than lactate. Conclusion: Diffusion-weighted imaging and magnetic resonance spectroscopy are state of the art in the diagnostic arena, and they have shown themselves to be key players in the differentiation of orofacial tumors. The complete biological profile of malignancy can be decoded as low ADC values, high choline, and/or high lactate, whereas that of benign entities can be read as high ADC values, low choline, and no lactate.
Keywords: diffusion magnetic resonance imaging, magnetic resonance spectroscopy, malignant tumors, maxillofacial
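The likelihood ratios leaned on above follow directly from sensitivity and specificity: LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity. Applied to the reported lactate figures as a worked example:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Reported lactate performance at the 0.125 cut-off: 90% sensitivity, 83.3% specificity.
lr_pos, lr_neg = likelihood_ratios(0.90, 0.833)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")  # LR+ = 5.39, LR- = 0.12
```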
9195 Multiobjective Optimization of a Pharmaceutical Formulation Using Regression Method
Authors: J. Satya Eswari, Ch. Venkateswarlu
Abstract:
The formulation of a commercial pharmaceutical product involves several composition factors and response characteristics. When the formulation must satisfy multiple conflicting response characteristics, an optimal solution requires an efficient multiobjective optimization technique. In this work, regression is combined with a non-dominated sorting differential evolution (NSDE) involving the Naïve & Slow and ε-constraint techniques to derive different multiobjective optimization strategies, which are then evaluated on a trapidil pharmaceutical formulation. The analysis of the results shows the effectiveness of the strategy that combines the regression model and NSDE with the integration of both the Naïve & Slow and ε-constraint techniques for Pareto optimization of the trapidil formulation. With this strategy, the optimal formulation at pH 6.8 is obtained with the decision variables of microcrystalline cellulose, hydroxypropyl methylcellulose, and compression pressure; the corresponding response characteristics, the rate constant and release order, are also reported. The comparison of these results with the experimental data, and with those of other multiple-regression-model-based multiobjective evolutionary optimization strategies, demonstrates the better performance of the proposed strategy for optimal trapidil formulation.
Keywords: pharmaceutical formulation, multiple regression model, response surface method, radial basis function network, differential evolution, multiobjective optimization
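At the core of NSDE is non-dominated sorting. A minimal sketch of Pareto dominance and the first front, assuming minimization of both objectives; the population values are illustrative only.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """Return the first (non-dominated) front of a set of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy two-objective population (e.g., deviation from target rate constant vs release order).
pop = [(0.2, 3.0), (0.5, 1.0), (0.4, 2.5), (0.6, 2.0), (0.45, 2.7)]
print(nondominated_front(pop))  # [(0.2, 3.0), (0.5, 1.0), (0.4, 2.5)]
```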
9194 Subsea Processing: Deepwater Operation and Production
Authors: Md Imtiaz, Sanchita Dei, Shubham Damke
Abstract:
In recent years, there has been a rapidly accelerating shift from traditional surface processing operations to subsea processing operations. This shift has been driven by a number of factors, including the depletion of shallow fields around the world, technological advances in subsea processing equipment, the need for production from marginal fields, and lower upfront investment costs compared to traditional production facilities. Moving production facilities to the seafloor offers a number of advantages, including a reduction in field development costs, increased production rates from subsea wells, a reduced need for chemical injection, minimized risks to workers, fewer spills due to hurricane damage, and increased oil production by enabling output from marginal fields. Subsea processing consists of a range of separation, pumping, and compression technologies that enable production from offshore wells without the need for surface facilities. At present, two primary technologies are used for subsea processing: subsea multiphase pumping and subsea separation. Multiphase pumping is the most basic subsea processing technology; it involves the use of boosting systems to transport the multiphase mixture through pipelines to floating production vessels. In a separation system, the separator is combined with single-phase pumps, and the removed water can be pumped to the surface, re-injected, or discharged to the sea. Subsea processing can allow an entire topside facility to be decommissioned and the processed fluids to be tied back to a new, more distant host. This type of application reduces costs and increases both overall facility integrity and recoverable reserves. In the future, full subsea processing may become possible, eliminating the need for surface facilities altogether.
Keywords: FPSO, marginal field, subsea processing, SWAG
9193 Effects of Performance Appraisal on Employee Productivity in Yobe State University, Damaturu (A Case Study of the Department of Islamic Studies)
Authors: Adam Abdullahi Mohammed
Abstract:
Performance appraisal is an assessment made to ascertain a worker's level of productivity over a given period of time. Appraisal systems fall into two categories, traditional methods and modern methods, with modern methods placing emphasis on the evaluation of work results. In the traditional approach to staff appraisal, which puts more emphasis on individual traits, supervisors are required to measure employees through interactions based on what they achieved with reference to job descriptions, as well as rating them using questionnaires without staff interaction. These methods are not effective because staff may give biased information. This study attempts to assess the effect of performance appraisal on employee productivity at Yobe State University, Damaturu. It assesses the process, methods, and objectives of performance appraisal and its feedback to determine how they affect the success of the appraisal, its results, and employee productivity. A quantitative research method is adopted for collecting and analyzing data, with a questionnaire as the data collection instrument. As this is a case study, the target population is the staff of the Department of Islamic Studies. The research employs a census sampling technique in which all subjects in the target population are given a chance to participate; this sampling method was chosen because the entire target population is researchable. The expected finding is that staff performance appraisal in the Department of Islamic Studies affects employee productivity, and that if appraisal is given due consideration and acted upon, employee productivity will improve.
Keywords: performance appraisal, employee productivity, Yobe State University, appraisal feedback
9192 Synthesis and Characterization of Functionalized Carbon Nanorods/Polystyrene Nanocomposites
Authors: M. A. Karakassides, M. Baikousi, A. Kouloumpis, D. Gournis
Abstract:
Nanocomposites of carbon nanorods (CNRs) with polystyrene (PS) have been successfully synthesized by means of an in situ polymerization process and characterized. First, carbon nanorods with a graphitic structure were prepared by the standard synthetic procedure for CMK-3, using MCM-41 as the template instead of SBA-15 and sucrose as the carbon source. In order to create an organophilic surface on the CNRs, a two-part modification was carried out: surface chemical oxidation (CNRs-ox) according to Staudenmaier's method, and the attachment of octadecylamine molecules to the functional groups of CNRs-ox (CNRs-ODA). The nanocomposite materials of polystyrene with CNRs-ODA were prepared by a solution-precipitation method at three nanoadditive-to-polymer loadings (1, 3, and 5 wt.%). The derived nanocomposites were studied with a combination of characterization and analytical techniques. In particular, Fourier-transform infrared (FT-IR) and Raman spectroscopies were used for the chemical and structural characterization of the pristine materials and the derived nanocomposites, while the morphology of the nanocomposites and the dispersion of the carbon nanorods were analyzed by atomic force and scanning electron microscopy. Tensile testing and thermogravimetric analysis (TGA), along with differential scanning calorimetry (DSC), were used to examine the mechanical properties, thermal stability, and glass transition temperature of PS after the incorporation of the CNRs-ODA nanorods. The results showed that the thermal and mechanical properties of the PS/CNRs-ODA nanocomposites gradually improved with increasing CNRs-ODA loading.
Keywords: nanocomposites, polystyrene, carbon, nanorods
9191 In vivo Antidiarrheal and ex-vivo Spasmolytic Activities of the Aqueous Extract of the Roots of Echinops kebericho Mesfin in Rodents and Isolated Guinea-Pig Ileum
Authors: Fisseha Shiferie (Bpharm, Mpharm)
Abstract:
Diarrhea is a common gastrointestinal disorder characterized by an increase in stool frequency and a change in stool consistency. In spite of the availability of many antidiarrheal drugs, the search for a drug with affordable cost and better efficacy is essential to overcoming diarrheal problems. The root extract of Echinops kebericho is used by traditional practitioners for the treatment of diarrhea; however, the scientific basis for this usage has not yet been established. The purpose of the present study was to evaluate the antidiarrheal and spasmolytic activities of the aqueous extract of the roots of E. kebericho in rodents and isolated guinea-pig ileum preparations. In the castor oil-induced intestinal transit test, E. kebericho produced a significant (p < 0.01) dose-dependent decrease in propulsion, with peristaltic index values of 45.05±3.3, 42.71±2.25, and 33.17±3.3% at doses of 100, 200, and 400 mg/kg, respectively, compared with 63.43±7.3% for the control. In the castor oil-induced diarrhea test, mean defecation was reduced from 1.81±0.18 to 0.99±0.21, compared with 2.59±0.81 for the control. The extract (at the doses stated above) significantly decreased the volume of intestinal fluid secretion induced by castor oil (from 2.31±0.1 to 2.01±0.2, relative to 3.28±0.3 for the control). When tested on guinea-pig ileum, the root extract of Echinops kebericho exhibited a dose-dependent spasmolytic effect, 23.07% being its highest inhibitory effect. The results obtained in this study give some scientific support to the use of Echinops kebericho as an antidiarrheal agent, owing to its inhibitory effects on the different diarrheal parameters used in this study.
Keywords: antidiarrheal activity, E. kebericho, traditional medicine, diarrhea, enteropooling, intestinal transit
9190 Diagnosis of Rotavirus Infection among Egyptian Children by Using Different Laboratory Techniques
Authors: Mohamed A. Alhammad, Hadia A. Abou-Donia, Mona H. Hashish, Mohamed N. Massoud
Abstract:
Background: Rotavirus is the leading etiologic agent of severe diarrheal disease in infants and young children worldwide. The present study aimed (1) to detect rotavirus infection as a cause of diarrhea among children under 5 years of age using two serological methods (ELISA and LA) and the PCR technique, and (2) to evaluate the three methodologies used for human RV detection in stool samples. Materials and Methods: This study was carried out on 247 children less than 5 years old, diagnosed clinically with acute gastroenteritis and attending Alexandria University Children's Hospital at EL-Shatby. Rotavirus antigen was screened by ELISA and LA tests in all stool samples, whereas only 100 samples were subjected to the RT-PCR method for detection of rotavirus RNA. Results: Of the 247 studied cases with diarrhea, rotavirus antigen was detected in 83 (33.6%) by ELISA and 73 (29.6%) by LA, while 44% of the 100 cases tested by RT-PCR carried rotavirus RNA. Rotavirus diarrhea showed a marked seasonal peak during autumn and winter (61.4%). Conclusion: The present study confirms the huge burden of rotavirus as a major cause of acute diarrhea in Egyptian infants and young children. It was concluded that LA is equal in sensitivity to ELISA, ELISA is more specific than LA, and RT-PCR is more specific than both ELISA and LA in the diagnosis of rotavirus infection.
Keywords: rotavirus, diarrhea, immunoenzyme techniques, latex fixation tests, RT-PCR
9189 Evaluation of a Data Fusion Algorithm for Detecting and Locating a Radioactive Source through Monte Carlo N-Particle Code Simulation and Experimental Measurement
Authors: Hadi Ardiny, Amir Mohammad Beigzadeh
Abstract:
Through the utilization of a combination of various sensors and data fusion methods, the detection of potential nuclear threats can be significantly enhanced by extracting more information from different data sources. In this research, an experimental and modeling approach was employed to track a radioactive source by combining a surveillance camera and a radiation detector (NaI). To run this experiment, three mobile robots were utilized, one of which was equipped with a radioactive source. An algorithm was developed to identify the contaminated robot through correlation between the camera images and the detector data. The computer vision method extracts the movements of all robots in the XY-plane coordinate system, and the detector system records the gamma-ray counts. The positions of the robots and the corresponding counts from the moving source were modeled using the MCNPX simulation code while reproducing the experimental geometry. The results demonstrated a high level of accuracy in finding and locating the target in both the simulation model and the experimental measurement. The modeling techniques prove to be valuable for designing different scenarios and intelligent systems before initiating any experiments.
Keywords: nuclear threats, radiation detector, MCNPX simulation, modeling techniques, intelligent systems
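One simple way to realize such camera-detector fusion, offered here as an illustrative assumption rather than the authors' exact algorithm, is to correlate each tracked robot's inverse-square distance profile with the recorded count-rate series and attribute the source to the best match.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 120)
detector_xy = np.array([0.0, 0.0])

# Tracked XY paths for three robots (stand-ins for the computer-vision output).
paths = {"robot_a": np.c_[2 + np.cos(t / 8), 2 + np.sin(t / 8)],
         "robot_b": np.c_[4 - t / 30, 1 + 0 * t],
         "robot_c": np.c_[1 + t / 40, 3 - t / 50]}

# Counts follow the source robot (robot_b here) via the inverse-square law, plus Poisson noise.
d_src = np.linalg.norm(paths["robot_b"] - detector_xy, axis=1)
counts = rng.poisson(500.0 / d_src**2)

# Attribute the source to the robot whose 1/d^2 profile best correlates with the counts.
scores = {name: np.corrcoef(1.0 / np.linalg.norm(p - detector_xy, axis=1)**2, counts)[0, 1]
          for name, p in paths.items()}
print(max(scores, key=scores.get))  # expected: robot_b
```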
9188 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KGs) and their relation to Graph Embeddings (GEs) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we also observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications; they are also limited in scope to next-node/link prediction. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the model's performance on the WN18 benchmark. This model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
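For reference, the TransE baseline mentioned above scores a triple (h, r, t) by how well the relation acts as a translation between entity embeddings, f(h, r, t) = -||h + r - t||. A minimal sketch with random (untrained) embeddings standing in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 50
entities = {name: rng.normal(size=dim) for name in ["paris", "france", "tokyo", "japan"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t):
    """TransE plausibility: higher (less negative) means h + r is closer to t."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# Link prediction = ranking candidate tails by score (embeddings here are untrained).
for tail in ["france", "japan"]:
    print("paris capital_of", tail, "->", round(transe_score("paris", "capital_of", tail), 3))
```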
9187 Islamic Finance: What is the Outlook for Italy?
Authors: Paolo Pietro Biancone
Abstract:
The spread of Islamic financial instruments is an opportunity to offer integration for the immigrant population and to attract, through specific products, the wealth of sovereign funds from the "Arab" countries. However, it is important to consider the possibility of comparing a traditional finance model, which in recent times has given rise to many doubts, with an "alternative" finance model, in which the ethical dimension arising from religious principles is very important.
Keywords: banks, Europe, Islamic finance, Italy
9186 Determinants of Selenium Intake in a High HIV Prevalence Fishing Community in Bondo District, Kenya
Authors: Samwel Boaz Otieno, Fred Were, Ephantus Kabiru, Kaunda Waza
Abstract:
A study was done to establish the determinants of selenium intake in a high-HIV-prevalence fishing community in the Pala area of Bondo District, Kenya. It was established that most of the respondents (61%) were smallholder farmers and fishermen {χ2 (1, N=386), p<0.001}, that most of them (91.2%) had up to college-level education {χ2 (1, N=386), p<0.001}, that the numbers of males and females were not significantly different {χ2 (1, N=386), p=0.263}, and that 83.5% of respondents were married {χ2 (1, N=386), p<0.001}. The study showed that adults take on average 2.68 meals a day (N=382, SD=0.603), while children take 3.02 meals a day (N=386, SD=1.031); in most households (82.6%) food is prepared by the women {χ2 (1, N=386), p<0.001}, and 50% of foods eaten in the community are purchased {χ2 (1, N=386)=0.1818, p=0.6698}. The foods eaten by 75.2% of the respondents were Oreochromis niloticus, Lates niloticus, and Sorghum bicolor; 64.1% ate vegetables; both children and adults eat the same types of food; and the traditional foods that have become extinct are mainly vegetables (46%). The study established that selenium levels in foods eaten in the Pala sub-locations vary, with traditional vegetables having higher levels of selenium, for example, Launaea cornuta (148.5 mg/kg), Cleome gynandra (121.5 mg/kg), and Vigna unguiculata (21.97 mg/kg), against Rastrineobola argentea (51 mg/kg), Lates niloticus (0), Oreochromis niloticus (0), Sorghum bicolor grain (19.97 mg/kg), and Sorghum bicolor (0). The study showed an inverse relationship between the foods most eaten and their selenium levels {RR=1.21, p<0.001}, with the foods eaten by 75.2% of respondents (Oreochromis niloticus/Lates niloticus) having no detectable selenium. The four soil types identified in the study area had varying selenium levels: peat loam (13.3 mg/kg), sandy loam (10.7 mg/kg), clay (2.8 mg/kg), and loam (4.8 mg/kg). It was concluded from this study that, for the foods eaten by most of the respondents, the selenium levels were below the Daily Reference Intake.
Keywords: determinants, HIV, food, fishing, selenium
9185 In Vitro Hepatoprotective and Anti-Hepatitis B Activities of Cyperus rotundus Rhizome Fractions
Authors: Mohammad K. Parvez, Ahmed H. Arbab, Mohammed S. Al-Dosari
Abstract:
Cyperus rotundus rhizomes are used in traditional medicine, including Ayurveda, for chronic liver diseases and hepatitis B. We investigated the in vitro hepatoprotective and anti-hepatitis B virus (HBV) potential of organic and aqueous fractions of Cyperus rotundus rhizome. Of these, the n-butanol and aqueous fractions showed the most promising, dose-dependent hepatoprotection in DCFH-injured HepG2 cells at 48 h: DCFH-intoxicated cells recovered to about 88% and 96% upon treatment with the n-butanol and aqueous fractions (200 µg/ml), respectively, compared to cells treated with DCFH alone. The C. rotundus fractions were further tested for anti-HBV activity by measuring the expression levels of viral antigens (HBsAg and HBeAg) in HepG2.2.15 culture supernatants. At 48 h post-treatment, the ethyl acetate, n-butanol, and aqueous fractions showed dose-dependent inhibition, wherein at the higher dose (100 µg/ml) HBsAg production was reduced to 60.27%, 46.87%, and 42.76%, respectively. In a time-course study, HBsAg production was inhibited by up to 50% and 40% by the ethyl acetate and n-butanol fractions (100 µg/ml), respectively, on day 5. The three active fractions were further assessed for time-dependent inhibition of HBeAg expression, an indirect measure of active HBV DNA replication. At day 5 post-treatment, the ethyl acetate and n-butanol fractions downregulated HBV replication by 44.14% and 24.70%, respectively. In conclusion, our results show very promising hepatoprotective and anti-HBV potential of C. rotundus rhizome fractions in vitro. Our data could therefore provide a basis for the claimed traditional use of C. rotundus for jaundice and hepatitis.
Keywords: anti-hepatitis B, Cyperus rotundus, hepatitis B virus, hepatoprotection
9184 The Use of Medicinal Plants among Middle Aged People in Rural Area, West Java, Indonesia
Authors: Rian Diana, Naufal Muharam Nurdin, Faisal Anwar, Hadi Riyadi, Ali Khomsan
Abstract:
The use of traditional medicine (herbs and medicinal plants) is common among Indonesian people, especially the elderly, but few studies explore the use of medicinal plants among middle-aged people. This study aims to collect information on the use of medicinal plants by middle-aged people in rural areas. This cross-sectional study included 224 subjects aged 45-59 years old and was conducted in Cianjur District, West Java, in 2014. Semi-structured questionnaires were used to collect information about preferences in the treatment of illness, the use of medicinal plants, and their purposes. The information also recorded plant names, parts used, mode of preparation, and dosage. Buying drugs in a stall (83.9%) was the first preference in the treatment of illness, followed by modern treatment (doctors, 19.2%) and traditional treatment (herbs/medicinal plants, 17.0%). Eighty-seven subjects (38.8%) were using herbs and medicinal plants for curative (66.7%), preventive (31.2%), and rehabilitative (2.1%) purposes. In this study, 48 species were used by the subjects. Physalis minima L. 'cecenet', Orthosiphon aristatus Mic. 'kumis kucing', and Annona muricata 'sirsak' were commonly used for the treatment of hypertension and stiffness. Leaves (64.6%) were the most commonly used part. The medicinal plants were washed and boiled in hot water, and subjects drank the herbs at different dosages: doses varied between 1-3 glasses/day for treatment and 1-2 glasses/month for the prevention of disease. One in three middle-aged people used herbal and medicinal plants for curative and preventive treatment, particularly of hypertension and stiffness. Increasing knowledge about herbal and medicinal plant dosages and their interactions with medical drugs is important.
Keywords: herbs, hypertension, medicinal plants, middle age, rural
9183 Integrating Knowledge Distillation of Multiple Strategies
Authors: Min Jindong, Wang Mingxia
Abstract:
With the widespread use of artificial intelligence in daily life, computer vision, and especially deep convolutional neural network models, has developed rapidly. With the increasing complexity of real visual target detection tasks and improvements in recognition accuracy, target detection network models have also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, comprehensively transferring the knowledge contained in the complex network model to another lightweight network model. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the soft-target outputs of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the teacher network's hidden layers are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the large capacity gap between them. Finally, this paper adds an exploration module to the traditional teacher-student knowledge distillation model, so that the student network not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in both speed and accuracy.
Keywords: object detection, knowledge distillation, convolutional network, model compression
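A minimal sketch of the soft-target component that this method builds on, the standard temperature-scaled teacher-student loss; M-KD's additional layer-relation and attention-map terms are not reproduced here, and the hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target KD loss: temperature-scaled KL to the teacher plus hard-label CE."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)  # T^2 keeps gradient scale comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy batch: 8 samples, 10 classes.
s = torch.randn(8, 10)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y).item())
```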
9182 Bacterial Flora of the Anopheles Fluviatilis S. L. in an Endemic Malaria Area in Southeastern Iran for Candidate Paratransgenesis Strains
Authors: Seyed Hassan Moosa-kazemi, Jalal Mohammadi Soleimani, Hassan Vatandoost, Mohammad Hassan Shirazi, Sara Hajikhani, Roonak Bakhtiari, Morteza Akbari, Siamak Hydarzadeh
Abstract:
Malaria is an infectious disease and one of the most important health problems in southeastern Iran. Iran is in the malaria elimination phase, and new tools are needed for vector control. Paratransgenesis is a new way to interrupt the life cycle of the malaria parasite. In this study, the microflora of the surface and gut of various stages of Anopheles fluviatilis James, one of the important malaria vectors, was studied using biochemical and molecular techniques during 2013-2014. Twelve bacterial species were found: Providencia rettgeri, Morganella morganii, Enterobacter aerogenes, Pseudomonas oryzihabitans, Citrobacter braakii, Citrobacter freundii, Aeromonas hydrophila, Klebsiella oxytoca, Citrobacter koseri, Serratia fonticola, Enterobacter sakazakii, and Yersinia pseudotuberculosis. The species Alcaligenes faecalis, Providencia vermicola, and Enterobacter hormaechei were identified in various stages of the vector and confirmed by biochemical and molecular techniques. We found Providencia rettgeri to be a proper candidate for paratransgenesis.
Keywords: Anopheles fluviatilis, bacteria, malaria, paratransgenesis, southern Iran
9181 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms
Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee
Abstract:
Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques are required to assess the condition of the composites and prevent the continual growth of fiber damage. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting changes in the static or dynamic behavior of isotropic structures have been developed over the last two decades. These methods, based on analytical approaches, are limited in dealing with complex systems, primarily because of their limitations in handling different loading and boundary conditions. More recently, investigators have introduced direct search methods based on metaheuristics and artificial intelligence, such as genetic algorithms (GAs), simulated annealing (SA), and neural networks (NNs), and have promisingly applied these methods to the field of structural identification. Among them, GAs attract our attention because they do not require a considerable amount of data in advance when dealing with complex problems, and they make a global solution search possible, as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of Glass Fiber-Reinforced Polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to describe degraded stiffness characteristics. In addition, this study presents a method to detect fiber property variation of laminated composite plates from the micromechanical point of view. A finite element model is used to study the free vibrations of laminated composite plates under fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only the first mode shapes of a structure as the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences
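A minimal sketch of the GA search loop for such an inverse problem; here a toy objective stands in for the ABAQUS/finite-element mismatch between predicted and measured modal data, and all parameter names and values are illustrative.

```python
import random

def fitness(params):
    """Toy stand-in for the FE-model mismatch between predicted and measured mode shapes."""
    target = (0.4, 0.7, 0.55)  # hypothetical 'true' stiffness-degradation parameters
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=40, generations=60, n_params=3, mut=0.05):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # rank by fitness
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(1.0, max(0.0, g + random.gauss(0, mut))) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print([round(g, 2) for g in evolve()])  # should approach (0.4, 0.7, 0.55)
```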
9180 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when their algorithm performed better than the best-known classical algorithm of the time for Max-Cut graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover's or Shor's, shows the world the potential that quantum computing holds and points to the reality of a true quantum advantage: if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which caps the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm searches the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
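For reference, the classical objective both optimizers target is the Max-Cut value of a bitstring partition. The sketch below pairs it with a simple (1+1) EA acting directly on bitstrings as an illustration of the evolutionary side; in the paper's hybrid, the EA instead tunes the QAOA circuit angles and the cut value comes from circuit sampling.

```python
import random

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy 4-node graph

def cut_value(bits):
    """Number of edges crossing the partition encoded by the bitstring."""
    return sum(bits[u] != bits[v] for u, v in edges)

def one_plus_one_ea(n=4, steps=200):
    best = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        cand = [b ^ (random.random() < 1.0 / n) for b in best]  # flip each bit w.p. 1/n
        if cut_value(cand) >= cut_value(best):
            best = cand
    return best, cut_value(best)

print(one_plus_one_ea())  # the optimum cut for this graph is 4
```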
9179 Retrofitting Adaptive Reuse into Palaces of Northern India
Authors: Shefali Nayak
Abstract:
The architectural appeal, familiarity, and idiom of culturally significant structures reflect societal attachment to various movements, historical associations, or deviations from them. Generally, the urge to preserve a building in the northern part of India is driven either by emotional dogma or by rational thinking, but it is also influenced by traditional affinity. The northern region of India has an assortment of palaces and havelis belonging to various time periods and families, with vernacular yet signature styles of architecture. Many of them have been successfully conserved by being put into adaptive reuse, while others have been mired in controversy and remain in ruins. The research compares successful examples of adaptive reuse, such as the Neemrana and Mehrangarh fort palaces, with merchant havelis converted into heritage hotels. Furthermore, it evaluates the architectural aspects of structure, materials, and plumbing and electrical installations, as well as the specific challenges faced by heritage professionals practicing sustainability while respecting the traditional feelings of the various stakeholders. Through the analysis of these case studies, the paper concludes that sustainable design cannot be used as a stand-alone application for heritage structures or cities; it needs the support of architectural conservation to be put into practice. However, it is often demanding to fit a new use into an aged structure. This paper records the generic modern-day requirements that reflect the challenges faced by different architects while conserving a heritage structure and retrofitting it to today's requisites. The research objective is to establish how conservation, restoration, and urban regeneration are closely related to sustainable architecture in historical cities.
Keywords: architecture conservation, architecture heritage, adaptive reuse, retrofitting, sustainability, urban regeneration
9178 Comparison of Budgeting Reforms: A Case Study of Thailand and OECD Member Countries
Authors: Nattapol Pourprasert, Siriwan Manowan
Abstract:
This study aims to find out what budget problems Thailand is facing and how the results of a comparison between Thailand's budgeting reform and the reforms of OECD member countries can be used to carry out budgeting reform in Thailand. The findings on the budget problems Thailand faces reveal that the Thai budgeting system lacks an assessment of the cost-effectiveness of expenditures of borrowed money and budgets, one that would determine whether the expenses are worth the taxes collected from the people. Most populist policies have unlimited budgets, which can lead to fiscal risks; these policies also create great tax burdens for future generations and affect the fair distribution of incomes, yet the Parliament of Thailand never considers these facts. The findings from the comparison between the Thai budgeting reform and those of OECD member countries show that the traditional budgeting system of Thailand is department-based budgeting, which is still used without being changed or adjusted to fit new administrative regimes. Under this traditional budgeting system, a single department is responsible for budgeting tasks. Meanwhile, in OECD member countries, budgeting reforms are carried out simultaneously with reforms of the civil service system, so that both are driven in the same direction. Budgeting reforms that rely only on analyses of the economic or technical dimension can hardly succeed. The budgeting systems of OECD member countries are designed to deal with the unique problems each member country faces, rather than adopting modern systems developed by other countries. A budgeting system with a complicated concept and practice has to be implemented under a flexible strategy, so that the departments implementing it can learn about and adjust to the system. Continuous and consistent development and training for staff members are also necessary.
Keywords: budgeting reforms, Thailand, OECD member countries, budget problems
9177 Vision Zero for the Caribbean Using the Systemic Approach for Road Safety: A Case Study Analyzing Jamaican Road Crash Data (Ongoing)
Authors: Rachelle McFarlane
Abstract:
The Second Decade of Action for Road Safety has begun with an increased focus on countries that are disproportionately affected by road fatalities. Researchers highlight the low effectiveness of road safety campaigns in Latin America and the Caribbean (LAC), which still reports approximately 130,000 deaths and six million injuries annually. The regional fatality rate is 19.2 per 100,000, with heightened concern for persons aged 15 to 44 years. In 2021, 483 Jamaicans died in 435 crashes, with 33% of these fatalities occurring during Covid-19 curfew hours. The study objective is to conduct a systemic safety review of Jamaican road crashes and provide a framework for its use in complementing traditional methods. The methodology involves the FHWA Systemic Safety Project Selection Tool, which reviews systemwide data in order to identify the risk factors associated with severe and fatal crashes across the network, rather than only at hotspots. A total of 10,379 crashes with 745 fatalities and serious injuries were reviewed. Of the focus crash types listed, 50% of 'Pedestrian' crashes resulted in fatalities and serious injuries, followed by 32% of 'Bicycle', 24% of 'Single', and 12% of 'Head-on' crashes. This study seeks to understand the risk factors associated with these priority crash types across the network and to recommend cost-effective countermeasures for common sites. As we press towards Vision Zero, the inclusion of the systemic safety review method, complementing traditional methods, may create a wider impact in reducing road fatalities and serious injuries by targeting issues that recur across the network: focus crash types and their contributing factors.
Keywords: systemic safety review, risk factors, road crashes, crash types