Search results for: DNA extraction methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16669

15709 Review and Evaluation of Trending Canonical Correlation Analyses-Based Brain Computer Interface Methods

Authors: Bayar Shahab

Abstract:

The rapid development of technologies connecting neuroscience with human-computer interaction has enabled solutions to problems that could not be addressed in any earlier era. The brain-computer interface (BCI) has opened the door to several new research areas and has provided solutions to critical issues, such as enabling a paralyzed patient to interact with the outside world, controlling a robotic arm, playing VR games with the brain, and driving a wheelchair or even a car; neurotechnology has even enabled rehabilitation of lost memory. This review presents state-of-the-art methods and improvements of canonical correlation analysis (CCA), an SSVEP-based BCI method. These methods are used to extract EEG signal features or, put differently, the features of interest sought in EEG analyses. Each method, from oldest to newest, is discussed, and their advantages and disadvantages are compared. This provides useful context and helps researchers understand the most advanced methods available in this field, with their pros and cons, along with their mathematical representations and usage. This work makes a vital contribution to the existing field of study and differs from similar recently published works by providing the following: (1) presenting most of the prominent methods used in this field hierarchically; (2) explaining the pros, cons, and performance of each method; and (3) identifying the gaps that remain at the end of each method, which can open the understanding and doors to new research and improvements.
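
As a minimal illustration of the core idea surveyed here (not drawn from the paper itself), standard CCA scores each candidate stimulus frequency by the canonical correlation between an EEG window and sine/cosine reference signals at that frequency; the EEG data, channel count, and frequencies below are placeholders:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_score(eeg, freq, fs, n_harmonics=2):
    """Largest canonical correlation between a multi-channel EEG window
    (samples x channels) and sin/cos references at `freq` and harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack(
        [f(2 * np.pi * freq * (h + 1) * t)
         for h in range(n_harmonics) for f in (np.sin, np.cos)]
    )
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Classify a window by the stimulus frequency with the highest correlation
fs, window = 250, np.random.randn(500, 8)   # placeholder 2-second EEG window
stimuli = [8.0, 10.0, 12.0, 15.0]           # hypothetical flicker frequencies (Hz)
detected = max(stimuli, key=lambda f: ssvep_cca_score(window, f, fs))
```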

Keywords: BCI, CCA, SSVEP, EEG

Procedia PDF Downloads 143
15708 TessPy – Spatial Tessellation Made Easy

Authors: Jonas Hamann, Siavash Saki, Tobias Hagen

Abstract:

Discretization of urban areas is a crucial aspect of many spatial analyses. The process of discretizing space into subspaces without overlaps or gaps is called tessellation. It aids the understanding of spatial structure and provides a framework for analyzing geospatial data. Tessellation methods can be divided into two groups: regular and irregular tessellations. While regular tessellation methods, such as square grids or hexagonal grids, are suitable for addressing purely geometric problems, they cannot take the unique characteristics of different subareas into account. Irregular tessellation methods, by contrast, allow the borders between subareas to be defined more realistically based on urban features like the road network or Points of Interest (POI). Even though Python is one of the most widely used programming languages for spatial analysis, there is currently no library that combines different tessellation methods to enable users and researchers to compare techniques. To close this gap, we propose TessPy, an open-source Python package that combines all the above-mentioned tessellation methods and makes them easily accessible to everyone. The core functions of TessPy implement the five tessellation methods: squares, hexagons, adaptive squares, Voronoi polygons, and city blocks. With the regular methods, users can set the resolution of the tessellation, which defines the fineness of the discretization and the desired number of tiles. The irregular tessellation methods allow users to define which spatial data to consider (e.g., amenity, building, office) and how fine the tessellation should be. The spatial data used are open-source and provided by OpenStreetMap, and can be easily extracted and used for further analyses. Besides the methodology of the different techniques, the state of the art, including examples and future work, is discussed. All dependencies can be installed using conda or pip; the former is recommended.
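
For readers who want to try the package, a minimal usage sketch follows; the city name is arbitrary, and the method names reflect the TessPy documentation at the time of writing, so treat them as assumptions rather than a guaranteed API:

```python
# pip install tesspy
from tesspy import Tessellation

ts = Tessellation("Frankfurt am Main")  # city boundary fetched from OpenStreetMap
squares = ts.squares(resolution=14)     # regular grid; resolution sets tile size
hexagons = ts.hexagons(resolution=8)    # regular H3 hexagon grid
# Irregular methods (adaptive_squares, voronoi, city_blocks) accept OSM feature
# arguments such as poi_categories=["amenity", "building"].
```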

Keywords: geospatial data science, geospatial data analysis, tessellations, urban studies

Procedia PDF Downloads 126
15707 Weighted Risk Scores Method Proposal for Occupational Safety Risk Assessment

Authors: Ulas Cinar, Omer Faruk Ugurlu, Selcuk Cebi

Abstract:

Occupational safety risk management is the most important element of a safe working environment. Effective risk management is only possible with accurate analyses and evaluations. Scoring-based risk assessment methods offer considerable ease of application, as they convert linguistic expressions into numerical results, and they can easily be adapted to any field. Despite these advantages, important problems with scoring-based methods are frequently discussed; effective measurability is one of the most critical. Existing methods let experts choose a single score for each parameter, so experts select the score of the most likely outcome of a risk while all other possible consequences are neglected. As a result, assessments made with the existing methods express the most probable level of risk, not the real risk faced by the enterprise. This study aims to develop a method that provides a more comprehensive evaluation than existing methods by weighting the probability and severity scores across all sub-parameters and potential outcomes, and a new scoring-based method is proposed.
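
The idea can be illustrated with a toy calculation (hypothetical numbers, not the authors' scoring tables): instead of scoring a hazard by its single most likely consequence, every possible consequence is weighted by its probability.

```python
# One hazard with three possible outcomes: (probability, severity score 1-5)
outcomes = [(0.70, 2), (0.25, 4), (0.05, 5)]
likelihood = 3                                  # ordinal likelihood score

modal_severity = max(outcomes, key=lambda o: o[0])[1]    # classic: most likely only
weighted_severity = sum(p * s for p, s in outcomes)      # weights every outcome

print("classic risk score :", likelihood * modal_severity)               # 6
print("weighted risk score:", round(likelihood * weighted_severity, 2))  # 7.95
```

The weighted score is higher here because the low-probability, high-severity outcomes are no longer neglected.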

Keywords: occupational health and safety, risk assessment, scoring based risk assessment method, underground mining, weighted risk scores

Procedia PDF Downloads 133
15706 Analysis of the Aquifer Vulnerability of a Mio-Pliocene Arid Area Using DRASTIC and SI Models

Authors: H. Majour, L. Djabri

Abstract:

Many groundwater vulnerability assessment methods have been developed around the world (e.g., PRAST, DRIST, APRON/ARAA, PRASTCHIM, GOD). In this study, we chose two recent, complementary methods based on index category mapping with weighted criteria (point count system models): the standard DRASTIC method and the SI (Susceptibility Index) method. At present, these two methods are the most widely used for mapping the intrinsic vulnerability of groundwater. Two classes of groundwater vulnerability in the Biskra sandy aquifer were identified by the DRASTIC method (average and high) and by the SI method (very high and high). Integrated analysis revealed that the high class predominates under the DRASTIC method, whereas under SI the very high class predominates. Furthermore, the SI method estimates vulnerability to nitrate pollution better, with 85% agreement between groundwater nitrate concentrations and the established vulnerability classes, against 75% for the DRASTIC method. By including the land use parameter, the SI method produced more realistic results.
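
The DRASTIC index itself is a simple weighted sum of parameter ratings; a sketch with the standard weights follows (the ratings below are illustrative, not the Biskra values). The SI method works the same way but uses its own weights and adds a land use parameter.

```python
# Standard DRASTIC weights (Aller et al., 1987); ratings run from 1 to 10 per cell
weights = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}
ratings = {"D": 9, "R": 6, "A": 8, "S": 9, "T": 10, "I": 8, "C": 6}  # example cell

drastic_index = sum(weights[p] * ratings[p] for p in weights)
print(drastic_index)  # 179 here; a higher index means a more vulnerable cell
```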

Keywords: DRASTIC, SI, GIS, Biskra sandy aquifer, Algeria

Procedia PDF Downloads 483
15705 Off-Line Detection of "Pannon Wheat" Milling Fractions by Near-Infrared Spectroscopic Methods

Authors: E. Izsó, M. Bartalné-Berceli, Sz. Gergely, A. Salgó

Abstract:

The aims of this investigation are to elaborate near-infrared methods for testing and recognizing chemical components and quality in "Pannon wheat" allied (i.e., true-to-variety or variety-identified) milling fractions, to develop spectroscopic methods for following the milling process, and to evaluate the stability of the milling technology across different types of milling products and sampling times. The wheat was processed under industrial conditions, and samples were collected over time at maximum and minimum yields. Changes in the main chemical components (such as starch, protein, and lipid) and in the physical properties of the fractions (particle size) were analysed with dispersive spectrophotometers using the visible (VIS) and near-infrared (NIR) regions of the electromagnetic spectrum. Close correlations were obtained between the spectroscopic data, processed by various chemometric methods (e.g., principal component analysis (PCA) and cluster analysis (CA)), and the operating conditions of the milling technology. It is evident that NIR methods can detect deviations in the yield parameters and differences between sampling times across a wide variety of fractions. NIR technology can therefore be used for sensitive monitoring of milling technology.
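
A minimal sketch of the chemometric step, assuming the spectra have already been exported as a samples-by-wavelengths matrix (the data here are random placeholders, not the study's measurements):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# spectra: rows = milling-fraction samples, columns = absorbance per wavelength
spectra = np.random.rand(60, 700)                # placeholder VIS/NIR scans

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(spectra))
# Plotting PC1 vs PC2 typically separates fractions by composition/particle size
print(scores.shape)  # (60, 3)
```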

Keywords: near infrared spectroscopy, wheat categories, milling process, monitoring

Procedia PDF Downloads 405
15704 Damage Assessment and Repair for Older Brick Buildings

Authors: Tim D. Sass

Abstract:

The experience of engineers and architects practicing today is typically limited to current building code requirements and modern construction methods and materials. However, many cities have a mix of new and old buildings, with many buildings constructed over one hundred years ago, when building codes and construction methods were much different. When a brick building sustains damage, a structural engineer is often hired to determine the cause of the damage as well as the necessary repairs. Forensic studies of dozens of brick buildings show that an appreciation of historical building methods and materials is needed to correctly identify the cause of damage and design an appropriate repair. Damage to an older brick building can be mistakenly attributed to storms or seismic events when the real source of the damage is deficient original construction. Assessing and remediating damaged brickwork on older brick buildings requires an understanding of the original construction, of older repair methods, and of current building code requirements.

Keywords: brick, damage, deterioration, facade

Procedia PDF Downloads 225
15703 Multi-Stream Graph Attention Network for Recommendation with Knowledge Graph

Authors: Zhifei Hu, Feng Xia

Abstract:

In recent years, graph neural networks have been widely used in knowledge graph recommendation. Existing recommendation methods based on graph neural networks extract information from the knowledge graph through entities and relations, which may not be an efficient way to extract information. To better surface useful entity information for the current recommendation task, we propose an end-to-end neural network model based on a multi-stream graph attention mechanism (MSGAT), which can effectively integrate the knowledge graph into the recommendation system by evaluating the importance of entities from the perspectives of both users and items. Specifically, we use an attention mechanism from the user's perspective to distill the neighborhood entity information of the predicted item in the knowledge graph, enhancing the user's information about items and generating the feature representation of the predicted item. Because a user's historically clicked items reflect the user's interest distribution, we propose a multi-stream attention mechanism that, based on the user's preference for entities and relations and on the similarity between the item to be predicted and entities, aggregates the neighborhood entity information of the user's historically clicked items in the knowledge graph and generates the user's feature representation. We evaluate our model on three real recommendation datasets: MovieLens-1M (ML-1M), LFM-1B 2015 (LFM-1B), and Amazon-Book (AZ-book). Experimental results show that, compared with the most advanced models, our proposed model better captures the entity information in the knowledge graph, demonstrating its validity and accuracy.
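
Not the authors' MSGAT itself, but the generic attention-weighted neighbor aggregation it builds on can be sketched in a few lines (embedding sizes and data are placeholders):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_neighbors(user_vec, entity_vecs):
    """Attention-weighted sum of an item's neighbor-entity embeddings,
    scored from the user's perspective (one stream of the idea)."""
    scores = entity_vecs @ user_vec        # relevance of each entity to the user
    alpha = softmax(scores)                # attention weights over neighbors
    return alpha @ entity_vecs             # aggregated item-side representation

user = np.random.randn(16)                 # placeholder user embedding
neighbors = np.random.randn(5, 16)         # 5 linked knowledge-graph entities
item_context = aggregate_neighbors(user, neighbors)
```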

Keywords: graph attention network, knowledge graph, recommendation, information propagation

Procedia PDF Downloads 115
15702 Experimental Investigation on Flexural Properties of Bamboo Fibres Polypropylene Composites

Authors: Tigist Girma Kidane, Yalew Dessalegn Asfaw

Abstract:

The current investigation aims to measure the longitudinal and transverse three-point bending properties of bamboo fibre polypropylene composites (BFPPCs) for applications in the automobile industry. The properties of Ethiopian bamboo fibres have not previously been studied for composite development. Bamboo samples were harvested in three age groups, two harvesting seasons, and three regions of bamboo species. The fibres were extracted with a roll-milling machine developed by the authors, and the chemical constituents were measured using gravimetric methods. Unidirectional bamboo fibre prepregs were produced with PP in a hot press, and BFPPCs were then produced from six layers of prepreg in an automatic hot press. Age, harvesting month, and bamboo species have a statistically significant effect on the longitudinal and transverse flexural strength (FS), modulus of elasticity (MOE), and failure strain at α = 0.05, as evaluated by one-way ANOVA. BFPPCs from 2-year-old fibres have the highest FS and MOE, and November harvesting gives the highest flexural properties. The FS and MOE of BFPPCs were highest in Injibara, followed by Mekaneselam and Kombolcha. FS and MOE in the transverse three-point bending test are lower than in the longitudinal direction. The chemical constituent contents of Injibara, Mekaneselam, and Kombolcha rank from highest to lowest, respectively, and 2-year-old bamboo fibres have the highest chemical constituent contents; the chemical constituents improve the flexural properties. Bamboo fibres in Ethiopia are thus suitable for composite development in applications requiring high flexural properties.
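
For reference, the standard three-point-bending formulas behind FS and MOE for a rectangular specimen can be computed directly; the numbers below are illustrative, not the study's measurements:

```python
def flexural_props(F_max, slope, L, b, d):
    """Three-point bending of a rectangular specimen:
    F_max peak load (N), slope dF/d(deflection) of the linear region (N/mm),
    L span (mm), b width (mm), d thickness (mm)."""
    fs = 3 * F_max * L / (2 * b * d**2)      # flexural strength (MPa)
    moe = slope * L**3 / (4 * b * d**3)      # modulus of elasticity (MPa)
    return fs, moe

# Hypothetical specimen: 288 MPa strength, ~51.9 GPa modulus
print(flexural_props(F_max=180, slope=95, L=64, b=15, d=2))
```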

Keywords: age, bamboo species, flexural properties, harvesting season, polypropylene

Procedia PDF Downloads 50
15701 Is Hormone Replacement Therapy Associated with Age-Related Macular Degeneration? A Systematic Review and Meta-Analysis

Authors: Hongxin Zhao, Shibing Yang, Bingming Yi, Yi Ning

Abstract:

Background: A few studies have found evidence that exposure to endogenous or postmenopausal exogenous estrogens may be associated with a lower prevalence of age-related macular degeneration (AMD), but dispute over this association is ongoing due to inconsistent results reported by different studies. Objectives: To conduct a systematic review and meta-analysis to investigate the association between hormone replacement therapy (HRT) use and AMD. Methods: Relevant studies that assessed the association between HRT and AMD were searched through four databases (PubMed, Web of Science, Cochrane Library, EMBASE) and the reference lists of retrieved studies. Study selection, data extraction and quality assessment were conducted by three independent reviewers. Fixed-effect meta-analyses were performed to estimate the association between HRT ever-use and AMD by pooling the risk ratio (RR) or odds ratio (OR) across studies. Results: The review identified 2 prospective and 7 cross-sectional studies with 93,992 female participants that reported an estimate of the association between HRT ever-use and the presence of early or late AMD. Meta-analyses showed no statistically significant association between HRT ever-use and early AMD (pooled RR for cohort studies 1.04, 95% CI 0.86 - 1.24; pooled OR for cross-sectional studies 0.91, 95% CI 0.82 - 1.01). The pooled results from cross-sectional studies also showed no statistically significant association between HRT ever-use and late AMD (OR 1.01; 95% CI 0.89 - 1.15). Conclusions: The pooled effects from observational studies published to date indicate that HRT use is associated with neither early nor late AMD. Exposure to HRT may not protect women from developing AMD.
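
The fixed-effect pooling used here is standard inverse-variance weighting on the log scale; a sketch with made-up study estimates (not the reviewed data):

```python
import numpy as np

def pooled_fixed_effect(or_values, ci_low, ci_high):
    """Inverse-variance fixed-effect pooling of odds ratios; CIs are used
    to back out standard errors on the log scale."""
    log_or = np.log(or_values)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1 / se**2
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return np.exp([pooled, lo, hi])          # pooled OR with 95% CI

# Illustrative per-study ORs and confidence intervals
print(pooled_fixed_effect([0.85, 0.95, 1.05],
                          [0.70, 0.80, 0.85],
                          [1.03, 1.13, 1.30]))
```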

Keywords: hormone replacement therapy, age-related macular degeneration, meta-analysis, systematic review

Procedia PDF Downloads 348
15700 Edible and Ecofriendly Packaging – A Trendsetter of the Modern Era – Standardization and Properties of Films and Cutleries from Food Starch

Authors: P. Raajeswari, S. M. Devatha, R. Pragatheeswari

Abstract:

Edible packaging is a new trendsetter in the era of modern packaging. Researchers and food scientists recognise edible packaging as a useful alternative or addition to conventional packaging, reducing waste and creating novel applications that improve product stability. Starch was extracted from sources in which it is abundant, such as potato, tapioca, rice, wheat, and corn. Starch-based edible films and cutleries were developed as an alternative to conventional packaging, providing a nutritional benefit when consumed along with the food. The edible films were developed from starch extracted from these raw ingredients at laboratory scale, with plasticiser employed at concentrations of 1.5 ml and 2 ml. Glycerol was used as the plasticiser in the filmogenic solution to increase the flexibility and plasticity of the films: it reduces intra- and intermolecular forces in starch and increases the mobility of the starch chains. The films were tested for functional properties such as thickness, tensile strength, elongation at break, moisture permeability, moisture content, and puncture strength. Cutleries such as spoons and cups were prepared by making a dough of starch and water and rolling it into shape. Overall, the starch-based edible films absorbed little moisture and combined low moisture permeability with high tensile strength. Food colorants extracted from red onion peel, pumpkin, and red amaranth add nutritive value, colour, and visual appeal when incorporated into edible cutleries, and they do not affect the functional properties. Adding a small quantity of glycerol to the edible films and colour extracts from onion peel, pumpkin, and red amaranth enhances biodegradability and provides a good quantity of nutrients when consumed. Therefore, owing to its multiple advantages, food starch can serve as the basis for eco-friendly products aimed at replacing single-use plastics at low cost.

Keywords: edible films, edible cutleries, plasticizer, glycerol, starch, functional property

Procedia PDF Downloads 183
15699 Comparative Evaluation of EBT3 Film Dosimetry Using Flatbed Scanner, Densitometer and Spectrophotometer Methods and Its Applications in Radiotherapy

Authors: K. Khaerunnisa, D. Ryangga, S. A. Pawiro

Abstract:

Over the past few decades, film dosimetry has become a tool used in various radiotherapy modalities, either for clinical quality assurance (QA) or for dose verification. The response of the film to irradiation is usually expressed as optical density (OD) or net optical density (netOD). Because the film's response to radiation is not linear, the use of film as a dosimeter requires a calibration process. This study aimed to compare the calibration-curve response functions obtained with different measurement devices: a flatbed scanner, a point densitometer, and a spectrophotometer. For every response function, a radiochromic film calibration curve was generated for each method, and accuracy, precision, and sensitivity analyses were performed. netOD is obtained by measuring the change in the optical density (OD) of the film between before and after irradiation: with the film scanner, ImageJ is used to extract the pixel value of the film in the red channel of the three RGB channels; with the point densitometer, the change in OD is read directly; and with the spectrophotometer, the change in absorbance is measured. The results showed that all three calibration methods gave netOD readings with dose precision below 3% at an uncertainty of 1σ (one sigma). The sensitivities of the three methods follow the same trend in response to the film readings but differ in magnitude. The accuracy of the three methods was below 3% for doses of 100 cGy and 200 cGy and above, but for doses below 100 cGy it exceeded 3% with the point densitometer and the spectrophotometer. In clinical implementation, accuracy and precision were below 2% for the scanner and the spectrophotometer and above 3% for the point densitometer.
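
The netOD computation itself is a one-liner once pixel values have been extracted; a sketch assuming red-channel pixel values exported from ImageJ (the numbers are illustrative 16-bit scan values):

```python
import numpy as np

def net_od(pv_before, pv_after, pv_background=0.0):
    """Net optical density from red-channel pixel values of the film
    scanned before and after irradiation."""
    return np.log10((pv_before - pv_background) / (pv_after - pv_background))

print(net_od(pv_before=41500.0, pv_after=26300.0))  # ~0.198
```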

Keywords: calibration methods, EBT3 film dosimetry, flatbed scanner, densitometer, spectrophotometer

Procedia PDF Downloads 132
15698 Breast Cancer Early Recognition, New Methods of Screening, and Analysis

Authors: Sahar Heidary

Abstract:

Breast cancer is a major public health problem globally and the second leading cause of cancer death among women. Understanding breast cancer treatment options can help physicians care for their patients through the course of cancer treatment. This article reviews usual management centered on stage, histology, and biomarkers. The development of breast cancer is a multi-stage process involving numerous cell types, and its prevention remains challenging worldwide. Early detection of breast cancer is one of the best ways to prevent deaths from this illness. All major medical organizations recommend screening mammography for women aged 40 years and older. Breast cancer metastasis accounts for the majority of deaths from breast cancer, so detection of metastasis at an early stage is essential for the management and prognosis of breast cancer progression. Emerging methods based on the analysis of circulating tumor cells show promising results in predicting and classifying the early stages of breast cancer metastasis in patients. In general, mammography remains the key screening tool, while the effectiveness of clinical breast examination and self-examination is lower. Novel screening methods are unlikely to replace mammography in the near future for screening the general population.

Keywords: breast cancer, screening, metastasis, methods

Procedia PDF Downloads 166
15697 Application of Deep Learning and Ensemble Methods for Biomarker Discovery in Diabetic Nephropathy through Fibrosis and Propionate Metabolism Pathways

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Diabetic nephropathy (DN) is a major complication of diabetes, with fibrosis and propionate metabolism playing critical roles in its progression. Identifying biomarkers linked to these pathways may provide novel insights into DN diagnosis and treatment. This study aims to identify biomarkers associated with fibrosis and propionate metabolism in DN, to analyze the biological pathways and regulatory mechanisms of these biomarkers, and to develop a machine learning model that predicts DN-related biomarkers and validates their functional roles. Publicly available transcriptome datasets related to DN (GSE96804 and GSE104948) were obtained from the GEO database (https://www.ncbi.nlm.nih.gov/gds), and 924 propionate metabolism-related genes (PMRGs) and 656 fibrosis-related genes (FRGs) were identified. The analysis began with the extraction of DN-differentially expressed genes (DN-DEGs) and propionate metabolism-related DEGs (PM-DEGs), followed by the intersection of these with the fibrosis-related genes to identify key intersected genes. Instead of relying on traditional models, we employed a combination of deep neural networks (DNNs) and ensemble methods such as Gradient Boosting Machines (GBM) and XGBoost to enhance feature selection and biomarker discovery. Recursive feature elimination (RFE) was coupled with these advanced algorithms to refine the selection of the most critical biomarkers. Functional validation was conducted using convolutional neural networks (CNNs) for gene set enrichment and immune-infiltration analysis, revealing seven significant biomarkers: SLC37A4, ACOX2, GPD1, ACE2, SLC9A3, AGT, and PLG. These biomarkers are involved in critical biological processes such as fatty acid metabolism and glomerular development, providing a mechanistic link to DN progression. Furthermore, a TF–miRNA–mRNA regulatory network was constructed using natural language processing models, identifying 8 transcription factors and 60 miRNAs that regulate these biomarkers, while a drug–gene interaction network revealed potential therapeutic targets such as UROKINASE–PLG and ATENOLOL–AGT. This integrative approach, leveraging deep learning and ensemble models, not only enhances the accuracy of biomarker discovery but also offers new perspectives on DN diagnosis and treatment, specifically targeting the fibrosis and propionate metabolism pathways.
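
A minimal sketch of the RFE-plus-XGBoost feature-selection step described above, run on synthetic stand-in data rather than the GEO expression matrices:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

# Placeholder expression matrix: samples x genes (stand-in for the GEO data)
X, y = make_classification(n_samples=120, n_features=300, n_informative=10,
                           random_state=0)

# Recursively drop the least important 10% of features until 7 remain
selector = RFE(XGBClassifier(n_estimators=200, eval_metric="logloss"),
               n_features_to_select=7, step=0.1)
selector.fit(X, y)
print("candidate biomarker columns:", selector.get_support(indices=True))
```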

Keywords: diabetic nephropathy, deep neural networks, gradient boosting machines (GBM), XGBoost

Procedia PDF Downloads 6
15696 A Review of Travel Data Collection Methods

Authors: Muhammad Awais Shafique, Eiji Hato

Abstract:

Household trip data is of crucial importance for managing present transportation infrastructure as well as for planning and designing future facilities. It also provides a basis for new policies implemented under Transportation Demand Management. The methods used for household trip data collection have changed with the passage of time, starting with conventional face-to-face or paper-and-pencil interviews and arriving at the recent approach of employing smartphones. This study summarizes the step-wise evolution of travel data collection methods and provides a comprehensive review of the topic for readers interested in the changing trends in the data collection field.

Keywords: computer, smartphone, telephone, travel survey

Procedia PDF Downloads 310
15695 Physical and Chemical Alternative Methods of Fresh Produce Disinfection

Authors: Tuji Jemal Ahmed

Abstract:

Fresh produce is an essential component of a healthy diet. However, it can also be a potential source of pathogenic microorganisms that cause foodborne illnesses. Traditional disinfection methods, such as washing with water and chlorine, have limitations and may not effectively remove or inactivate all microorganisms. This has led to the development of alternative methods of fresh produce disinfection, both physical and chemical. In this paper, we explore these newer physical and chemical disinfection methods, their advantages and disadvantages, and their suitability for different types of produce. Physical methods of disinfection, such as ultraviolet (UV) radiation and high-pressure processing (HPP), are crucial in ensuring the microbiological safety of fresh produce. UV radiation uses short-wavelength UV-C light to damage the DNA and RNA of microorganisms, and HPP applies high levels of pressure to fresh produce to reduce the microbial load. These physical methods are highly effective in killing a wide range of microorganisms, including bacteria, viruses, and fungi. However, they may not penetrate deep enough into the produce to kill all microorganisms, and they can alter the sensory characteristics of the produce. Chemical methods of disinfection, such as acidic electrolyzed water (AEW), ozone, and peroxyacetic acid (PAA), are also important in ensuring the microbiological safety of fresh produce. AEW uses a low concentration of hypochlorous acid and a high concentration of hydrogen ions to inactivate microorganisms, ozone uses ozone gas to damage the cell membranes and DNA of microorganisms, and PAA uses a combination of hydrogen peroxide and acetic acid to inactivate microorganisms. These chemical methods are highly effective in killing a wide range of microorganisms, but they may cause discoloration or changes in the texture and flavor of some produce, and they may require specialized equipment and trained personnel to prepare and apply. In conclusion, the selection of the most suitable method of fresh produce disinfection should take into consideration the type of produce, the level of microbial contamination, the effectiveness of the method in reducing the microbial load, and any potential negative impacts on the sensory characteristics, nutritional composition, and safety of the produce.

Keywords: fresh produce, pathogenic microorganisms, foodborne illnesses, disinfection methods

Procedia PDF Downloads 72
15694 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail. This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
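
As one concrete, hypothetical starting point for the geometric-reconstruction stage (not the authors' algorithm), RANSAC plane segmentation in Open3D can pull candidate wall or floor surfaces out of a point cloud; the input file name is a placeholder:

```python
# pip install open3d
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")        # hypothetical scan file
plane, inliers = pcd.segment_plane(distance_threshold=0.02,  # 2 cm tolerance
                                   ransac_n=3, num_iterations=1000)
wall = pcd.select_by_index(inliers)              # candidate wall/floor surface
print("plane ax+by+cz+d=0 coefficients:", plane)
```

Running the segmentation repeatedly on the remaining outliers yields the set of dominant planes from which parametric BIM elements can then be hypothesized.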

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 61
15693 Wet Processing of Algae for Protein and Carbohydrate Recovery as Co-Product of Algal Oil

Authors: Sahil Kumar, Rajaram Ghadge, Ramesh Bhujade

Abstract:

Historically, lipid extraction from dried algal biomass has been a focus area of algal research. It has been realized over the past few years that this lipid-centric approach, and conversion technologies that require dry algal biomass, face several challenges. Algal culture in cultivation systems contains more than 99% water, with algal concentrations of just a few hundred milligrams per liter (< 0.05 wt%), which makes harvesting and drying energy intensive. Drying the algal biomass followed by extraction also entails the loss of water and nutrients. In view of these challenges, focus has shifted toward developing processes that enable oil production from wet algal biomass without drying. Hydrothermal liquefaction (HTL), an emerging technology, is a thermo-chemical conversion process that converts wet biomass to oil and gas using water as a solvent at high temperature and high pressure. HTL processes wet algal slurry containing more than 80% water and significantly reduces the adverse cost impact of drying the algal biomass. HTL is also inherently feedstock-agnostic: it can convert carbohydrates and proteins to fuels as well, and it recovers water and nutrients. It is most effective with low-lipid (10-30%) algal biomass, and the bio-crude yield is two to four times higher than the lipid content of the feedstock. In the early 2010s, research remained focused on increasing the oil yield by optimizing the process conditions of HTL. However, various techno-economic studies showed that simply converting algal biomass to oil alone does not make economic sense, particularly in view of low crude oil prices. Making the best use of every component of algae is key to the economic viability of the algae-to-oil process. Investigation of HTL reactions at the molecular level has shown that sequential HTL has the potential to recover value-added products along with biocrude and to improve the overall economics of the process. This potential makes sequential HTL a most promising technology for converting wet waste to wealth. In this presentation, we share our experience of the techno-economic and engineering aspects of sequential HTL for the conversion of algal biomass to algal bio-oil and co-products.

Keywords: algae, biomass, lipid, protein

Procedia PDF Downloads 213
15692 Artificial Intelligence Based Abnormality Detection System and Real Valu™ Product Design

Authors: Junbeom Lee, Jaehyuck Cho, Wookyeong Jeong, Jonghan Won, Jungmin Hwang, Youngseok Song, Taikyeong Jeong

Abstract:

This paper investigates and analyzes meta-learning technologies that use multiple cameras to monitor and detect abnormal behavior in people in real time in the healthcare field. Advances in artificial intelligence and computer vision have confirmed that cameras can be useful for individual health monitoring and abnormal behavior detection, making it possible to build systems that respond early by automatically detecting abnormal behavior in vulnerable groups such as patients and the elderly. In this paper, we use a technique called meta-learning to analyze image data collected from cameras and develop a commercial product that identifies abnormal behavior. Meta-learning applies machine learning algorithms to help systems learn and adapt quickly to new real data, which improves the accuracy and reliability of the abnormal behavior discrimination system. In addition, this study proposes a meta-learning-based abnormal behavior detection system comprising steps such as data collection and preprocessing, feature extraction and selection, and classification model development. The performance of the proposed system is analyzed in various healthcare scenarios and experiments, demonstrating superiority over existing methods. Through this study, we show that camera-based meta-learning technology can be useful for monitoring and detecting abnormal behavior in the healthcare area.

Keywords: artificial intelligence, abnormal behavior, early detection, health monitoring

Procedia PDF Downloads 85
15691 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19; this underuse is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. Research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays labelled COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 separate images; the training images are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture and is used to extract the lung mask from the chest X-ray image; it is trained on 8,577 images with a 20% validation split. The models' accuracy, precision, recall, F1-score, IoU, and loss are calculated on an external validation dataset. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to improve the experience of medical professionals and provide insight into the future of the methods used.
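
A sketch of the transfer-learning classifier head described above, using Keras' DenseNet201; the dense-layer sizes are assumptions, and the paper's autoencoder stage is omitted:

```python
import tensorflow as tf

# Frozen ImageNet-pretrained feature extractor (transfer learning)
base = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```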

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 72
15690 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment

Authors: Isabela Moreira Queiroz

Abstract:

Slope stability analyses are largely carried out by deterministic methods and evaluated through a single safety factor. Although it is known that geotechnical parameters can show great dispersion, such analyses treat them as fixed and known. Probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of safety factor values and thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (Point Estimates) and Monte Carlo. This paper presents a comparison between the results from deterministic and probabilistic analyses (FOSM, Rosenblueth and Monte Carlo) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and perform the consequent risk analysis, which is used to calculate the risk and to examine mitigation and control solutions. The results obtained by the three probabilistic methods were quite close. It should be noted that calculating the risk makes it possible to prioritize the implementation of mitigation measures. It is therefore recommended to carry out a sound assessment of the geological-geotechnical model, incorporating uncertainty into viability, design, construction, operation and closure by means of risk management.
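
A Monte Carlo run on a textbook infinite-slope model shows how a probability of failure falls out of the same calculation a deterministic analysis performs once; all parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Random variables: cohesion (kPa) and friction angle (degrees)
c = rng.normal(12.0, 2.0, n)
phi = np.radians(rng.normal(30.0, 3.0, n))
gamma, H, beta = 18.0, 5.0, np.radians(35)   # unit weight, depth, slope angle

# Infinite-slope factor of safety (dry case, no seepage)
fs = (c + gamma * H * np.cos(beta)**2 * np.tan(phi)) \
     / (gamma * H * np.sin(beta) * np.cos(beta))

pf = np.mean(fs < 1.0)                        # probability of failure
print(f"mean FS = {fs.mean():.2f}, probability of failure = {pf:.4f}")
```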

Keywords: probabilistic methods, risk assessment, risk management, slope stability

Procedia PDF Downloads 389
15689 Structural Health Monitoring and Damage Structural Identification Using Dynamic Response

Authors: Reza Behboodian

Abstract:

Monitoring structural health and diagnosing damage in its early stages has always been a topic of concern. Nowadays, research on structural damage detection methods based on vibration analysis is very extensive; such methods can serve as permanent, timely inspection tools and prevent further damage to structures. Non-destructive methods are low-cost, economical ways of determining the damage of structures. In this research, a non-destructive method is proposed for detecting and identifying the failure location in structures based on the dynamic responses obtained from time history analysis. When a structure is damaged, its stiffness is reduced and, under the applied loads, the displacements in different parts of the structure increase. In the proposed method, the damage position is determined from the difference in strain energy in each member between the damaged structure and the healthy structure at any time; defective members are indicated by their strain energy relative to the healthy state. The results indicated the good accuracy and performance of the proposed method in identifying failure in structures.
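
A sketch of the elementwise strain-energy comparison, assuming element stiffness matrices and displacement snapshots are available from the structural model (the single-element demo is illustrative only):

```python
import numpy as np

def element_strain_energy(u, k_elems, dofs):
    """0.5 * u_e^T K_e u_e per element, given the global displacement vector
    `u`, element stiffness matrices, and each element's DOF indices."""
    return np.array([0.5 * u[d] @ K @ u[d] for K, d in zip(k_elems, dofs)])

def damage_index(u_healthy, u_damaged, k_elems, dofs):
    """Relative strain-energy change per element; large values flag
    candidate damage locations."""
    e_h = element_strain_energy(u_healthy, k_elems, dofs)
    e_d = element_strain_energy(u_damaged, k_elems, dofs)
    return (e_d - e_h) / e_h

# Toy single-element check: increased displacement raises the index
K = np.array([[2.0, -2.0], [-2.0, 2.0]])
print(damage_index(np.array([0.0, 1.0]), np.array([0.0, 1.3]),
                   [K], [np.array([0, 1])]))  # [0.69]
```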

Keywords: failure, time history analysis, dynamic response, strain energy

Procedia PDF Downloads 131
15688 Musical Instruments Classification Using Machine Learning Techniques

Authors: Bhalke D. G., Bormane D. S., Kharate G. K.

Abstract:

This paper presents the classification of musical instruments using machine learning techniques. The classification has been carried out using temporal, spectral, cepstral, and wavelet features, and a detailed feature analysis is performed with separate and combined features. Further, an instrument model has been developed using K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) classifiers. The benchmark McGill University database has been used to test the performance of the system. Experimental results show that SVM performs better than the KNN classifier.
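
A minimal sketch of a cepstral-feature pipeline of the kind described, assuming labelled recordings on disk; the file names are placeholders, and the paper's actual feature set (temporal, spectral, cepstral, wavelet) is richer:

```python
# pip install librosa scikit-learn
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path):
    """Mean MFCC vector for one recording (a simple cepstral feature set)."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

# Hypothetical labelled instrument corpus
X = np.array([mfcc_features(p) for p in ["flute1.wav", "violin1.wav"]])
y = np.array(["flute", "violin"])
clf = SVC(kernel="rbf").fit(X, y)    # SVM classifier, as in the paper
```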

Keywords: feature extraction, SVM, KNN, musical instruments

Procedia PDF Downloads 479
15687 Optimization of Machine Learning Regression Results: An Application on Health Expenditures

Authors: Songul Cinaroglu

Abstract:

Machine learning regression methods are recommended as an alternative to classical regression methods in the presence of variables that are difficult to model. Health expenditure data are typically non-normal and heavily skewed. This study aims to compare machine learning regression methods, tuned via their hyperparameters, in predicting health expenditure per capita. A multiple regression model was fitted, and the performance of Lasso Regression, Random Forest Regression, and Support Vector Machine Regression was recorded as different hyperparameter values were assigned: the lambda (λ) value for Lasso Regression, the number of trees for Random Forest Regression, and the epsilon (ε) value for Support Vector Regression. Results obtained using k-fold cross-validation, with k varied from 5 to 50, indicate that the differences between the machine learning regression results in terms of R², RMSE, and MAE are statistically significant (p < 0.001). The results reveal that Random Forest Regression (R² ˃ 0.7500, RMSE ≤ 0.6000, and MAE ≤ 0.4000) outperforms the other machine learning regression methods. It is highly advisable to use machine learning regression methods for modelling health expenditures.
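
Hyperparameter tuning of this kind is commonly done with a cross-validated grid search; a sketch for the Random Forest case on synthetic stand-in data (the grid values and fold count are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Placeholder data standing in for per-capita health expenditure records
X, y = make_regression(n_samples=200, n_features=8, noise=10.0, random_state=1)

search = GridSearchCV(
    RandomForestRegressor(random_state=1),
    param_grid={"n_estimators": [100, 300, 500]},   # the tuned hyperparameter
    scoring="r2",
    cv=10,                                          # k-fold cross-validation
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```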

Keywords: machine learning, lasso regression, random forest regression, support vector regression, hyperparameter tuning, health expenditure

Procedia PDF Downloads 224
15686 Fermented Fruit and Vegetable Discard as a Source of Feeding Ingredients and Functional Additives

Authors: Jone Ibarruri, Mikel Manso, Marta Cebrián

Abstract:

A high amount of food is lost or discarded worldwide every year. In addition, in recent decades an increasing demand for new alternative and sustainable sources of proteins and other valuable compounds has been observed in the food and feed sectors; therefore, the use of food by-products as nutrients for these purposes is very attractive from the environmental and economic points of view. However, the direct use of discarded fruit and vegetables, which in general present a low protein content, is of limited interest as a feed ingredient unless they are used as a source of fibre for ruminants. Especially in aquaculture, several alternatives to fish meal and other vegetable protein sources have been extensively explored due to the scarcity of fish stocks and the unsustainability of fishing for these purposes. Fish mortality is also of great concern in this sector, as it greatly reduces economic feasibility, so the development of new functional, natural ingredients that could reduce the need for vaccination is of great interest. In this work, several fermentation tests were carried out at lab scale using a selected mixture of fruit and vegetable discards from a wholesale market located in the Basque Country, to increase their protein content and to produce bioactive extracts that could be used as additives in aquaculture. Fruit and vegetable mixtures (60/40, w/w) were centrifuged to reduce humidity and crushed to a 2-5 mm particle size. Samples were inoculated with a selected Rhizopus oryzae strain and fermented for 7 days under controlled conditions (humidity between 65% and 75%, 28 ºC) in Petri plates (120 mm), in triplicate. The results indicated that the final fermented product presented a twofold protein content (from 13% to 28% d.w.). The fermented product was further processed to determine its possible functionality as a feed additive. Extraction tests were carried out to obtain an ethanolic extract (60:40 ethanol:water, v/v) and a remaining biomass that could also find applications in the food or feed sectors. The extract presented a polyphenol content of about 27 mg GAE/g d.w., with an antioxidant activity of 8.4 mg TEAC/g d.w. The remaining biomass is mainly composed of fibre (51%), protein (24%) and fat (10%). The extracts also presented antibacterial activity according to agar diffusion and Minimum Inhibitory Concentration (MIC) tests against several food and fish pathogen strains. In vitro digestibility was also assessed to obtain preliminary information about the expected effect of the extraction procedure on the digestibility of the fermented product; first results indicate that the biomass remaining after extraction does not seem to improve digestibility compared with the initial fermented product. These preliminary results show that fermented fruit and vegetables can be a useful source of functional ingredients for aquaculture applications and a substitute for other protein sources in the feed sector. Further validation will be carried out through "in vivo" tests with trout and bass.

Keywords: fungal solid state fermentation, protein increase, functional extracts, feed ingredients

Procedia PDF Downloads 63
15685 Method To Create Signed Word - Application In Teaching And Learning Vietnamese Sign Language

Authors: Nguyen Thi Kim Thoa

Abstract:

Vietnam currently has about 2.5 million deaf and hard-of-hearing people. Although the issue of Vietnamese Sign Language (VSL) education has received attention from the State, many issues still need to be resolved, such as policies, teacher training in both knowledge and teaching methods, education programs, and textbook compilation. Furthermore, research on VSL has not yet attracted the attention of linguists. Using the quantitative descriptive method, the article analyzes, synthesizes, and compares the ways signed words are created in VSL, such as those based on external shape characteristics, operational characteristics, operating methods, and basic meanings. From these we can see the special nature of signed words, the classification of word types, and the morphological significance of creating new words through signing methods. From the results of this research, the aspect of 'visual culture' in Vietnamese Deaf Culture will be clarified. Building on that, we also develop a number of vocabulary teaching methods (such as teaching vocabulary through groups of signed-word formation methods, teaching vocabulary using mind maps, and teaching vocabulary through culture), with the aim of further improving the effectiveness of teaching and learning VSL in Vietnam. The research results also provide deaf people in Vietnam with a scientific and effective method of learning vocabulary, helping them integrate quickly into the community. The article will be a useful reference for linguists who want to research VSL.

Keywords: Vietnamese sign language (VSL), signed word, teaching, method

Procedia PDF Downloads 34
15684 Optimization of Shale Gas Production by Advanced Hydraulic Fracturing

Authors: Fazl Ullah, Rahmat Ullah

Abstract:

This paper presents a comprehensive study focused on the optimization of gas production in shale gas reservoirs through hydraulic fracturing. Shale gas has emerged as an important unconventional energy resource, necessitating innovative techniques to enhance its extraction. The key objective of this study is to examine the influence of fracture parameters on reservoir productivity and to formulate strategies for production optimization. A sophisticated model integrating gas flow dynamics and in-situ stress considerations is developed for hydraulic fracturing in multi-stage shale gas reservoirs. This model encompasses distinct zones: a single-porosity medium region, a dual-porosity region, and a hydraulic fracture region. The apparent permeability of the matrix and fracture system is modeled using principles such as effective stress mechanics, porous elastic medium theory, fractal dimension evolution, and fluid transport mechanisms. The developed model is then validated using field data from the Barnett and Marcellus formations, enhancing its reliability and accuracy. By solving the governing partial differential equations with COMSOL software, the research yields valuable insights into optimal fracture parameters. The findings reveal the influence of fracture length, conductivity, and width on gas production. For reservoirs with higher permeability, extending hydraulic fracture lengths proves beneficial, while complex fracture geometries offer potential for low-permeability reservoirs. Overall, this study contributes to a deeper understanding of hydraulic fracturing dynamics in shale gas reservoirs and provides essential guidance for optimizing gas production. The research findings are instrumental for energy industry professionals, researchers, and policymakers alike, shaping the future of sustainable energy extraction from unconventional resources.

Keywords: fluid-solid coupling, apparent permeability, shale gas reservoir, fracture property, numerical simulation

Procedia PDF Downloads 70
15683 Internet of Things Networks: Denial of Service Detection in Constrained Application Protocol Using Machine Learning Algorithm

Authors: Adamu Abdullahi, On Francisca, Saidu Isah Rambo, G. N. Obunadike, D. T. Chinyio

Abstract:

The paper discusses the potential threat of Denial of Service (DoS) attacks on the Constrained Application Protocol (CoAP) in Internet of Things (IoT) networks. As billions of IoT devices are expected to be connected to the internet in the coming years, the security of these devices is vulnerable to attacks that disrupt their functioning. This research aims to tackle this issue by applying mixed qualitative and quantitative methods for feature selection and extraction, together with clustering algorithms, to detect DoS attacks on CoAP using machine learning algorithms (MLAs). The main objective of the research is to enhance the security scheme for CoAP in the IoT environment by analyzing the nature of DoS attacks and identifying a new set of features for detecting them in the IoT network environment. The aim is to demonstrate the effectiveness of the MLA in detecting DoS attacks and to compare it with conventional intrusion detection systems for securing CoAP in the IoT environment. Findings: The research identifies the appropriate node for detecting DoS attacks in the IoT network environment and demonstrates how to detect the attacks through the MLA. Detection accuracy in both the classification and network simulation environments shows that the k-means algorithm scored the highest percentage in the training and testing of the evaluation; the network simulation platform achieved the highest overall accuracy of 99.93%. This work also reviews conventional intrusion detection systems for securing CoAP in the IoT environment and discusses the DoS security issues associated with CoAP.
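
A sketch of k-means-based flagging on synthetic traffic-window features; the features, their distributions, and the smaller-cluster heuristic are assumptions for illustration, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-window CoAP features: [msg_rate, retransmissions, mean_size]
normal = np.random.normal([20, 1, 60], [5, 1, 10], size=(500, 3))
attack = np.random.normal([400, 8, 20], [50, 2, 5], size=(50, 3))
X = StandardScaler().fit_transform(np.vstack([normal, attack]))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Flag the smaller cluster as the anomalous (DoS-like) traffic
flagged = labels == np.argmin(np.bincount(labels))
print("flagged windows:", flagged.sum())
```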

Keywords: algorithm, CoAP, DoS, IoT, machine learning

Procedia PDF Downloads 79
15682 Evaluation of Progesterone and Estradiol 17-β Levels in Ewes Induced with Different Methods

Authors: E. Sinem Ozdemir Salci, Nazmiye Gunes, Guven Ozkaya, Gulsen Goncagul, Kamil Seyrek Intas

Abstract:

The aim of this study was to show the effects of different parturition induction methods on progesterone and estrogen concentrations in ewes. Twenty-four healthy ewes (n=24) on the 138th day of gestation were randomly assigned to induction groups: group I (n=6, 0.09% NaCl), group II (n=6, dexamethasone, 16 mg im.), group III (n=6, aglepristone 5 mg/kg sc.) and group IV (n=6, aglepristone 2.5 mg/kg sc. + dexamethasone 8 mg im.). Blood samples were collected at 12-hour intervals from the time of induction to the second postpartum day in order to determine progesterone and estradiol 17-β levels. These hormone concentrations were determined by ELISA, and the results were statistically analyzed with the Kruskal-Wallis and Dunn tests between groups and the Friedman and Wilcoxon tests within groups. The results showed no significant difference within the groups in estradiol 17-β (group 1, p=0.508; group 2, p=0.054; group 3, p=0.672; group 4, p=0.170), and a significant difference between groups only on day 138 (p=0.019), namely between groups II and IV (p=0.010). There was a significant difference in progesterone concentration within groups 1, 2 and 4 (p=0.000), and between the groups at all times except day 138 (p<0.05). In conclusion, induction of parturition could be performed successfully with these methods. The methods have no effect on estradiol 17-β concentration but do alter progesterone concentrations, as seen in groups 3 and 4.

Keywords: ewe, estradiol 17-β, induction of parturition, progesterone

Procedia PDF Downloads 214
15681 A Study of Teachers’ Views on Modern Methods of Teaching Regarding the Quality of Instruction in Shiraz High Schools

Authors: Nasrin Badrkhani

Abstract:

Teaching is an interaction between the teacher, student, and the concept being taught, especially within the classroom setting. As society increasingly values thoughtful and creative individuals, there is a growing need to adopt modern, active teaching methods. These methods should engage students in activities that foster problem-solving, creativity, cooperation, and scientific thinking skills. Modern teaching methods emphasize student involvement, gradual and continuous learning (process-centered approaches), and holistic evaluation of students' abilities and talents. A shift from teacher-centered to student-centered teaching is crucial. Among these modern methods are group work, role-playing, group discussions, and activities that engage students in evaluating societal values. This research employs a survey and a 38-question Likert scale questionnaire to explore teachers' perspectives on the impact of modern teaching methods on the quality of education. The study also examines the relationship between these perspectives and variables such as gender, major, and teaching experience. The statistical population consists of high school teachers in Shiraz, Iran, with sampling done using the Morgan table. Discriminant analysis was used for the initial analysis of the questions, and Cronbach's Alpha test was employed for the final examination. SPSS Software was used for statistical analysis, including T-tests and one-way ANOVA. The results indicate that teachers in this city generally have positive attitudes towards the use of modern teaching methods, except when it comes to engaging in judgments concerning societal values. There is no significant difference in viewpoints based on gender or educational background. The findings are consistent with similar studies conducted both within Iran and internationally.

Keywords: learning, modern methods, student, teacher, teaching

Procedia PDF Downloads 22
15680 Weak Instability in Direct Integration Methods for Structural Dynamics

Authors: Shuenn-Yih Chang, Chiu-Li Huang

Abstract:

Three structure-dependent integration methods have been developed for solving the equations of motion, which are second-order ordinary differential equations, in structural dynamics and earthquake engineering applications. Although they generally share the same numerical properties, such as explicit formulation, unconditional stability and second-order accuracy, they perform differently in solving the free vibration response of linear elastic and nonlinear systems with high frequency modes. The root cause of this different performance in the free vibration responses is analytically explored herein. It is verified that a weak instability is responsible for the different performance of the integration methods. In general, a weak instability results in an inaccurate solution, or even numerical instability, in the free vibration responses of high frequency modes. Consequently, weak instability must be avoided in time integration methods.

Keywords: dynamic analysis, high frequency, integration method, overshoot, weak instability

Procedia PDF Downloads 221