Search results for: artificial reefs
496 Seroprevalence of Bovine Brucellosis and its Public Health Significance in Selected Sites of Central High Land of Ethiopia
Authors: Temesgen Kassa Getahun, Gezahegn Mamo, Beksisa Urge
Abstract:
A cross-sectional study was conducted from December 2019 to May 2020 with the aim of determining the seroprevalence of brucellosis in dairy cows and their owners in the central highland of Oromia, Ethiopia. A total of 352 blood samples from dairy cattle, 149 from animal owners, and 17 from farm workers were collected and initially screened using the Rose Bengal Plate test and confirmed by the Complement Fixation test. Overall seroprevalence was 0.6% (95% CI: 0.0016–0.0209) in bovines and 1.2% (95% CI: 0.0032–0.0427) in humans. Market-based stock replacement (OR=16.55, p=0.002), breeding by artificial insemination (OR=7.58, p=0.05), and use of a parturition pen (OR=11.511, p=0.027) were found to be significantly associated with seropositivity for Brucella infection in dairy cattle. Human housing (OR=1.8, p=0.002), contact with an aborted fetus (OR=21.19, p=0.017), and drinking raw milk from non-aborted (OR=24.99, p=0.012) or aborted (OR=5.72, p=0.019) cows or cows with retained fetal membranes (OR=4.22, p=0.029) had a significant influence on human brucellosis. A structured interview questionnaire was administered to 284 respondents. Most respondents (93.3%) had no knowledge of brucellosis, yet 90% of them consumed raw milk. In conclusion, the present study revealed that seroprevalence of brucellosis was low among dairy cattle and exposed individuals in the study areas. However, since no control strategies were implemented in the study areas, there remains a potential risk of brucellosis transmission among dairy cattle and the exposed human population. Implementation of a test-and-slaughter strategy with compensation to farmers is recommended, while for human brucellosis, continuous social training and a One Health approach framework should be applied.
Keywords: abortion, bovine brucellosis, human brucellosis, risk factors, seroprevalence
Procedia PDF Downloads 103
495 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning
Authors: Joseph George, Anne Kotteswara Roa
Abstract:
Skin disease is one of the most common kinds of health issue faced by people today. Skin cancer (SC) is one among them, and its detection relies on skin biopsy outputs and the expertise of doctors, which consumes considerable time and can yield inaccurate results. Detecting skin cancer at an early stage is a challenging task, yet if undetected it easily spreads to the whole body and increases the mortality rate; skin cancer is curable when it is detected early. To classify skin cancer correctly and accurately, the critical tasks are identification and classification, which are based on disease features such as shape, size, color, and symmetry. Many skin diseases share similar characteristics, which makes selecting important features from skin cancer dataset images a challenging issue. Hence, skin cancer diagnostic accuracy can be improved by an automated detection and classification framework, which also mitigates the scarcity of human experts. Recently, deep learning techniques such as the Convolutional neural network (CNN), Deep belief neural network (DBN), Artificial neural network (ANN), Recurrent neural network (RNN), and Long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measures are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, classification accuracy increases along with mitigation of computational complexity and time consumption.
Keywords: skin cancer, deep learning, performance measures, accuracy, datasets
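The evaluation metrics named in this survey all follow from binary confusion-matrix counts, so their definitions can be sketched directly. The counts below are illustrative, not from any reviewed paper:

```python
# Minimal sketch of the survey's evaluation metrics, computed from
# binary confusion-matrix counts (tp/fp/tn/fn are illustrative).

def classification_metrics(tp, fp, tn, fn):
    """Return the common classification metrics as a dict."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # recall is the same as sensitivity
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    specificity = tn / (tn + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return {
        "precision": precision,
        "recall": recall,
        "sensitivity": recall,
        "accuracy": accuracy,
        "specificity": specificity,
        "f_measure": f_measure,
    }

metrics = classification_metrics(tp=80, fp=20, tn=90, fn=10)
print(metrics["precision"])  # 0.8
```

Reporting all six together matters because, on imbalanced lesion datasets, accuracy alone can look high while sensitivity stays poor.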
Procedia PDF Downloads 126
494 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images
Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn
Abstract:
The detection and segmentation of mitochondria from fluorescence microscopy are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make the tasks of detection and segmentation challenging. In the literature, a number of open-source software tools and artificial intelligence (AI) methods have been described for analyzing mitochondrial images, achieving remarkable classification and quantitation results. However, the combined expertise in the medical field and in AI required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using low-cost, open-source Python and the OpenCV library, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated on a mitochondrial fluorescence dataset. Ground truth labels generated with Labkit were also used to evaluate the performance of our detection and segmentation model. The study produces good detection and segmentation results and reports the challenges encountered during the image analysis of mitochondrial morphology from the fluorescence dataset. A discussion of the methods and future perspectives for fully automated frameworks concludes the paper.
Keywords: 2D, binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation
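The binarization stage of such a pipeline can be illustrated without OpenCV. Otsu's method is one common choice of global threshold; the abstract does not state which binarization the authors use, so this pure-Python sketch is an assumption, with a toy pixel list standing in for a fluorescence frame:

```python
# Otsu's method: pick the grey level that maximises the between-class
# variance of background vs. foreground pixels, then binarize.
# (Illustrative sketch; the paper's exact binarization is unspecified.)

def otsu_threshold(pixels):
    """Return the 8-bit grey level maximising between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += hist[t]                  # background pixel count (<= t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg              # foreground pixel count (> t)
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, threshold):
    """Map pixels above the threshold to 255 and the rest to 0."""
    return [255 if p > threshold else 0 for p in pixels]

frame = [10] * 10 + [200] * 10           # toy bimodal "image"
t = otsu_threshold(frame)
print(t)  # 10
```

In the real pipeline this step would run per image after CLAHE pre-processing, feeding the binary mask into the coarse-to-fine segmentation stage.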
Procedia PDF Downloads 356
493 Impact of Climate Change on Sea Level Rise along the Coastline of Mumbai City, India
Authors: Chakraborty Sudipta, A. R. Kambekar, Sarma Arnab
Abstract:
Sea-level rise is one of the most important impacts of anthropogenic climate change, resulting from global warming and the melting of ice at the Arctic and Antarctic. This paper reviews investigations carried out by various researchers during the last decade, both on the Indian coast and elsewhere, and aims to ascertain the consistency of different suggested methods for predicting near-accurate future sea level rise along the coast of Mumbai. Case studies at the East Coast, Southern Tip, and West and South West coasts of India have been reviewed. The Coastal Vulnerability Index of several important international places has been compared and found to match Intergovernmental Panel on Climate Change forecasts. The application of Geographic Information System mapping and remote sensing technology, with both Multi Spectral Scanner and Thematic Mapper data from Landsat classified through the Iterative Self-Organizing Data Analysis Technique, for deriving high, moderate, and low Coastal Vulnerability Index values at various important coastal cities has been observed. In place of purely data-driven approaches, hindcast-based forecasts of Significant Wave Height incorporating the additional impact of sea level rise have been suggested. The efficacy and limitations of numerical methods vis-à-vis Artificial Neural Networks have been assessed, and the importance of the Root Mean Square error of numerical results is noted. Among the computerized methods compared, forecast results obtained from MIKE 21 are opined to be more reliable than those from the Delft3D model.
Keywords: climate change, Coastal Vulnerability Index, global warming, sea level rise
Procedia PDF Downloads 130
492 Unsupervised Echocardiogram View Detection via Autoencoder-Based Representation Learning
Authors: Andrea Treviño Gavito, Diego Klabjan, Sanjiv J. Shah
Abstract:
Echocardiograms serve as pivotal resources for clinicians in diagnosing cardiac conditions, offering non-invasive insights into a heart’s structure and function. When echocardiographic studies are conducted, no standardized labeling of the acquired views is performed. Employing machine learning algorithms for automated echocardiogram view detection has emerged as a promising solution to enhance efficiency in echocardiogram use for diagnosis. However, existing approaches predominantly rely on supervised learning, necessitating labor-intensive expert labeling. In this paper, we introduce a fully unsupervised echocardiographic view detection framework that leverages convolutional autoencoders to obtain lower dimensional representations and the K-means algorithm for clustering them into view-related groups. Our approach focuses on discriminative patches from echocardiographic frames. Additionally, we propose a trainable inverse average layer to optimize decoding of average operations. By integrating both public and proprietary datasets, we obtain a marked improvement in model performance when compared to utilizing a proprietary dataset alone. Our experiments show boosts of 15.5% in accuracy and 9.0% in the F-1 score for frame-based clustering, and 25.9% in accuracy and 19.8% in the F-1 score for view-based clustering. Our research highlights the potential of unsupervised learning methodologies and the utilization of open-sourced data in addressing the complexities of echocardiogram interpretation, paving the way for more accurate and efficient cardiac diagnoses.
Keywords: artificial intelligence, echocardiographic view detection, echocardiography, machine learning, self-supervised representation learning, unsupervised learning
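The clustering stage of such a framework can be sketched with plain Lloyd's K-means. The 2-D vectors below are hypothetical stand-ins for the lower-dimensional autoencoder embeddings of echo frames; the real system clusters much higher-dimensional representations:

```python
# Minimal Lloyd's K-means sketch for grouping (toy) frame embeddings
# into view-related clusters. Points and initial centres are illustrative.

def kmeans(points, centers, iterations=10):
    """Return (assignments, centers) after a few Lloyd iterations."""
    assign = []
    for _ in range(iterations):
        # assignment step: nearest centre by squared Euclidean distance
        assign = []
        for pt in points:
            dists = [sum((a - b) ** 2 for a, b in zip(pt, c))
                     for c in centers]
            assign.append(dists.index(min(dists)))
        # update step: move each centre to the mean of its members
        for ci in range(len(centers)):
            members = [pt for pt, a in zip(points, assign) if a == ci]
            if members:
                centers[ci] = [sum(dim) / len(members)
                               for dim in zip(*members)]
    return assign, centers

embeddings = [[0.1, 0.2], [0.0, 0.1], [5.0, 5.1], [5.2, 4.9]]
assign, centers = kmeans(embeddings, centers=[[0.0, 0.0], [5.0, 5.0]])
print(assign)  # [0, 0, 1, 1]
```

Each resulting cluster would then be inspected and mapped to an echocardiographic view, which is the only labeling effort the unsupervised approach leaves to the expert.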
Procedia PDF Downloads 28
491 Evaluating the Satisfaction of Chinese Consumers toward Influencers at TikTok
Authors: Noriyuki Suyama
Abstract:
The progress and spread of digitalization have led to the provision of a variety of new services, driven by rapid developments in science and technology. First, research on and diffusion of artificial intelligence (AI) has made dramatic progress. Around 2000, the third wave of AI research, following roughly 50 years of earlier work, arrived. Specifically, machine learning and deep learning became possible, and AI's ability to acquire knowledge, define that knowledge, and update it quantitatively made the use of big data practical even on commercial PCs. Meanwhile, with the spread of social media, information exchange has become more common in our daily lives, and the lending and borrowing of goods and services, in other words, the sharing economy, has become widespread. This trend is not limited to any one industry, and its momentum is growing as the SDGs take root. In addition, social networking services (SNS), a part of social media, have brought about an evolution of the retail business; in the past few years, SNS involving users or companies have especially flourished. The People's Republic of China (hereinafter referred to as "China") is a country that stimulates enormous consumption through its own unique SNS, which differ from those used in developed countries around the world. This paper focuses on the effectiveness and challenges of influencer marketing by examining the influence of influencers on users' behavior and satisfaction with Chinese SNS. Specifically, we conducted a quantitative survey of TikTok users living in China, with the aim of gaining new insights from the analysis and discussion. As a result, several important findings emerged.
Keywords: customer satisfaction, social networking services, influencer marketing, Chinese consumers’ behavior
Procedia PDF Downloads 87
490 Integrating Knowledge Distillation of Multiple Strategies
Authors: Min Jindong, Wang Mingxia
Abstract:
With the widespread use of artificial intelligence in daily life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As the complexity of real visual target detection tasks and the required recognition accuracy increase, target detection network models have also grown very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge, complex deep neural network model and comprehensively transfer the knowledge it contains to a lightweight network model. Unlike traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the soft-target outputs of the teacher network, the relationships between the teacher network's layers, and the feature attention maps of the teacher network's hidden layers are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the large gap between them. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model, so that the student network not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics.
Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in speed and accuracy.
Keywords: object detection, knowledge distillation, convolutional network, model compression
Procedia PDF Downloads 275
489 Efficient Chess Board Representation: A Space-Efficient Protocol
Authors: Raghava Dhanya, Shashank S.
Abstract:
This paper delves into the intersection of chess and computer science, specifically focusing on the efficient representation of chess game states. We propose two methods: the Static Method and the Dynamic Method, each offering unique advantages in terms of space efficiency and computational complexity. The Static Method represents the game state using a fixed-length encoding, allocating 192 bits to capture the positions of all pieces on the board. This method introduces a protocol for ordering and encoding piece positions, ensuring efficient storage and retrieval. However, it faces challenges in representing pieces no longer in play. In contrast, the Dynamic Method adapts to the evolving game state by dynamically adjusting the encoding length based on the number of pieces in play. By incorporating Alive Bits for each piece kind, this method achieves greater flexibility and space efficiency. Additionally, it includes provisions for encoding additional game state information such as castling rights and en passant squares. Our findings demonstrate that the Dynamic Method offers superior space efficiency compared to traditional Forsyth-Edwards Notation (FEN), particularly as the game progresses and pieces are captured. However, it comes with increased complexity in the encoding and decoding processes. In conclusion, this study provides insights into optimizing the representation of chess game states, offering potential applications in chess engines, game databases, and artificial intelligence research. The proposed methods offer a balance between space efficiency and computational overhead, paving the way for further advancements in the field.
Keywords: chess, optimisation, encoding, bit manipulation
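The arithmetic behind the Static Method's 192 bits is simple: 32 pieces times a 6-bit square index (0–63). A sketch of that packing is below; the fixed piece ordering used here, and the omission of any convention for captured pieces, are illustrative assumptions, since the paper defines its own protocol for both:

```python
# Sketch of the Static Method's fixed-length idea: pack 32 six-bit
# square indices into one 192-bit integer. The ordering convention and
# the handling of captured pieces are simplified assumptions.

def encode_positions(squares):
    """Pack 32 square indices (0-63) into a single 192-bit integer."""
    assert len(squares) == 32
    state = 0
    for sq in squares:
        assert 0 <= sq <= 63
        state = (state << 6) | sq        # 6 bits per piece
    return state                         # fits in 32 * 6 = 192 bits

def decode_positions(state):
    """Recover the 32 square indices from the packed integer."""
    squares = [0] * 32
    for i in range(31, -1, -1):
        squares[i] = state & 0x3F        # low 6 bits
        state >>= 6
    return squares

start = list(range(16)) + list(range(48, 64))   # e.g. both back areas
packed = encode_positions(start)
print(packed.bit_length() <= 192)  # True
```

The Dynamic Method's saving comes from prefixing per-piece Alive Bits and then emitting 6-bit indices only for pieces still in play, so the payload shrinks as captures accumulate.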
Procedia PDF Downloads 45
488 Automated Detection of Targets and Retrieve the Corresponding Analytics Using Augmented Reality
Authors: Suvarna Kumar Gogula, Sandhya Devi Gogula, P. Chanakya
Abstract:
Augmented reality is defined as the overlay of digital (or computer-generated) information such as images, audio, video, and 3D models onto the real-time environment. Augmented reality can be thought of as a blend between the completely synthetic and the completely real. It offers scope in a wide range of industries such as manufacturing, retail, gaming, advertisement, and tourism, and brings new dimensions to the modern digital world. Because it overlays content blended with the real world, it helps users enhance their knowledge. In this application, we integrated augmented reality with data analytics and with the cloud, so that virtual content is generated on the basis of the data present in the database; we used marker-based augmented reality, where every marker is stored in the database with a corresponding unique ID. This application can be used in a wide range of industries for different business processes, but in this paper, we mainly focus on the marketing industry, helping customers gain knowledge about products in the market, chiefly their prices, customer feedback, quality, and other benefits. The application also provides better market-strategy information for marketing managers, who obtain data about stocks, sales, customer responses to products, and so on. We also include reports on the feedback gathered from different people after the demonstration, and finally, we present the future scope of augmented reality in different business processes through integration with new technologies such as cloud, big data, and artificial intelligence.
Keywords: augmented reality, data analytics, catch room, marketing and sales
Procedia PDF Downloads 234
487 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms
Authors: Bliss Singhal
Abstract:
Machine learning (ML) is a branch of Artificial Intelligence (AI) in which computers analyze data and find patterns in it. This study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage at which cancer has spread to other parts of the body, and it is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying tumors as benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and emphasizes the importance of being aware of human error and other inefficiencies. ML is a good candidate for improving the correct identification of metastatic cancer, potentially saving thousands of lives, and can also improve the speed and efficiency of the process, requiring fewer resources and less time. So far, deep learning methodologies have been used in research to detect cancer. This study takes a novel approach by determining the potential of combining preprocessing algorithms with classification algorithms to detect metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and a genetic algorithm, to reduce the dimensionality of the dataset, and then three classification algorithms, logistic regression, a decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy of 71.14% was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression
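The final stage of the best-performing pipeline can be sketched with a from-scratch k-nearest-neighbors classifier. The 2-D feature vectors and labels below are hypothetical stand-ins for the PCA- and GA-reduced scan features; the study's actual features, distance metric, and k are not specified here:

```python
# From-scratch k-NN sketch of the pipeline's classification stage.
# Toy 2-D vectors stand in for PCA/GA-reduced pathology features.
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Majority vote among the k training points nearest to the query."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_x, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

features = [[0.1, 0.0], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]]
labels = ["benign", "benign", "metastatic", "metastatic"]
print(knn_predict(features, labels, [0.15, 0.05]))  # benign
print(knn_predict(features, labels, [0.95, 0.95]))  # metastatic
```

In the full pipeline, PCA and the genetic algorithm would shrink each scan's feature vector first, which is what makes the nearest-neighbor distances meaningful and cheap to compute.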
Procedia PDF Downloads 79
486 Mammographic Multi-View Cancer Identification Using Siamese Neural Networks
Authors: Alisher Ibragimov, Sofya Senotrusova, Aleksandra Beliaeva, Egor Ushakov, Yuri Markin
Abstract:
Mammography plays a critical role in screening for breast cancer in women, and artificial intelligence has enabled the automatic detection of diseases in medical images. Many current techniques for mammogram analysis focus on a single view (mediolateral or craniocaudal), while in clinical practice, radiologists consider multiple views of mammograms from both breasts to make a correct decision. Consequently, computer-aided diagnosis (CAD) systems could benefit from incorporating information gathered from multiple views. In this study, we introduce a method based on a Siamese neural network (SNN) model that simultaneously analyzes three mammographic views: bilateral and ipsilateral. In this way, when a decision is made on a single image of one breast, attention is also paid to two other images: a view of the same breast in a different projection and an image of the other breast. Consequently, the algorithm closely mimics the radiologist's practice of attending to the entire examination of a patient rather than to a single image. Additionally, to the best of our knowledge, this research represents the first experiments conducted on the recently released Vietnamese dataset of digital mammography (VinDr-Mammo). On an independent test set of images from this dataset, the best model achieved an AUC of 0.87 per image. This suggests the approach can offer a valuable automated second opinion in the interpretation of mammograms and breast cancer diagnosis, which in the future may help to alleviate the burden on radiologists and serve as an additional layer of verification.
Keywords: breast cancer, computer-aided diagnosis, deep learning, multi-view mammogram, siamese neural network
Procedia PDF Downloads 133
485 Clustering for Detection of the Population at Risk of Anticholinergic Medication
Authors: A. Shirazibeheshti, T. Radwan, A. Ettefaghian, G. Wilson, C. Luca, Farbod Khanizadeh
Abstract:
Anticholinergic medication has been associated with events such as falls, delirium, and cognitive impairment in older patients. To assess this risk, anticholinergic burden scores have been developed to quantify it. A clustering-based risk model was deployed in a healthcare management system to group patients into multiple risk groups according to the anticholinergic burden scores of the medicines prescribed to them, in order to facilitate clinical decision-making. To do so, anticholinergic burden scores of drugs were extracted from the literature, which categorizes the risk on a scale of 1 to 3. Given the patients’ prescription data in the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. This study was conducted on over 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters representing different levels of anticholinergic risk. To further evaluate the performance of the model, associations between the average risk score within each group and other factors, such as socioeconomic status (i.e., Index of Multiple Deprivation) and an index of health and disability, were investigated. The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medications. Our findings also show that this group of patients is located within more deprived areas of London compared to the populations of the other risk groups. Furthermore, the prescription of anticholinergic medicines is skewed more toward female than male patients, indicating that females are more at risk from this kind of polypharmacy.
The risk may be monitored and controlled in healthcare management systems that are well equipped with artificial intelligence.
Keywords: anticholinergic medicines, clustering, deprivation, socioeconomic status
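Since each patient is summarised by a single weighted risk score, the mean-shift step can be sketched in one dimension: every score is repeatedly shifted to the mean of the scores within a bandwidth until it settles on a mode, and patients sharing a mode form one risk group. The scores, bandwidth, and flat kernel below are illustrative assumptions, not the study's configuration:

```python
# 1-D flat-kernel mean-shift sketch over (illustrative) weighted
# anticholinergic risk scores. Patients converging to the same mode
# form one risk cluster.

def mean_shift_1d(scores, bandwidth=1.0, iterations=50):
    """Return (per-score modes, distinct cluster centres)."""
    modes = list(scores)
    for _ in range(iterations):
        new_modes = []
        for m in modes:
            window = [s for s in scores if abs(s - m) <= bandwidth]
            new_modes.append(sum(window) / len(window))  # shift to mean
        modes = new_modes
    # collapse modes that coincide (to a tolerance) into cluster centres
    groups = []
    for m in modes:
        if not any(abs(g - m) < 1e-6 for g in groups):
            groups.append(m)
    return modes, groups

risk_scores = [1.0, 1.2, 1.1, 6.0, 6.2, 6.1]
modes, groups = mean_shift_1d(risk_scores)
print(len(groups))  # 2
```

Unlike K-means, mean shift needs no preset cluster count, which fits this use case: the number of distinct risk levels in the population is exactly what the model is meant to discover.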
Procedia PDF Downloads 208
484 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network
Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin
Abstract:
The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm for time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a forecast horizon of t+10 days. Performance metrics such as Root Mean Square Error (RMSE), the coefficient of determination (R²), Nash-Sutcliffe Efficiency (NSE), and Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days: the values of these metrics remain stable for horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on the evaluation indices used to assess the model's performance, the forecast horizon of t+3 days is chosen for predicting future daily water levels.
Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake
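The metrics used to select the forecast horizon all have closed forms, so they can be sketched directly. The observed and forecast water levels below are made-up values, not data from the Nokoué Lake study:

```python
# RMSE, MAE, and Nash-Sutcliffe Efficiency on a toy series of daily
# water levels (values in metres are illustrative).
import math

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect forecast; 0 means the
    forecast is no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

observed = [2.0, 2.1, 2.4, 2.6, 2.5]
forecast = [2.0, 2.2, 2.3, 2.6, 2.4]
print(round(nse(observed, forecast), 3))  # 0.888
```

In practice, one would compute these per horizon (t+1, t+2, ...) and, as the paper does, pick the longest horizon at which they remain stable.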
Procedia PDF Downloads 60
483 Decision-Making Strategies on Smart Dairy Farms: A Review
Authors: L. Krpalkova, N. O' Mahony, A. Carvalho, S. Campbell, G. Corkery, E. Broderick, J. Walsh
Abstract:
Farm management and operations will change drastically due to access to real-time data, real-time forecasting, and tracking of physical items, in combination with Internet of Things developments that further automate farm operations. Dairy farms have embraced technological innovations and procured vast amounts of permanent data streams during the past decade; however, the integration of this information to improve whole-farm management and decision-making does not yet exist. It is now imperative to develop a system that can collect, integrate, manage, and analyse on-farm and off-farm data in real time for practical and relevant environmental and economic action. The systems developed, based on machine learning and artificial intelligence, need to be connected to produce useful output, a better understanding of the whole farming issue, and its environmental impact. Evolutionary computing can be very effective in finding the optimal combination of sets of objects and, ultimately, in determining strategy. The system of the future should be able to manage a dairy farm as well as an experienced dairy farm manager supported by a team of the best agricultural advisors. All these changes should bring resilience and sustainability to dairy farming as well as improve and maintain good animal welfare and the quality of dairy products. This review aims to provide an insight into the state of the art of big data applications and evolutionary computing in relation to smart dairy farming and to identify the most important research and development challenges to be addressed in the future. Smart dairy farming influences every area of management, and its uptake has become a continuing trend.
Keywords: big data, evolutionary computing, cloud, precision technologies
Procedia PDF Downloads 189
482 Effects of Adding Sodium Nitroprusside in Semen Diluents on Motility, Viability and Lipid Peroxidation of Sperm of Holstein Bulls
Authors: Leila Karshenas, Hamid Reza Khodaei, Behnaz Mahdavi
Abstract:
Nitric oxide (NO) plays an important role in all sexual activities of animals. It is produced in the body by the NO synthase enzyme from the L-arginine molecule. NO can bind to sulfur-iron complexes, and because the production of steroid sexual hormones depends on enzymes that contain this complex, NO can change the activity of these enzymes. NO affects many cells, including the endothelial cells of veins, macrophages, and mast cells. These cells are found among testicular Leydig cells and are therefore an important source of NO in testis tissue. Minimizing damage to sperm during freezing and thawing is very important. The goal of this study was to determine the effect of NO applied before freezing on the quality and viability of sperm after thawing and incubation. Four Holstein bulls were selected from the age of 4, and semen was collected for 3 weeks (2 times a week). Treatments were 0, 10, 50, and 100 nM of sodium nitroprusside (SNP). Data analysis was performed with the SAS98 program, and mean comparison was done using Duncan's multiple range test (P<0.05). The concentrations used were found to significantly increase the motility and viability of spermatozoa at 1, 2, and 3 hours after thawing (P<0.05), but there was no significant difference at time zero. SNP reduced the amount of lipid peroxidation in the sperm membrane, increased acrosome health, and improved membrane integrity, especially in the 50 and 100 nM treatments. According to these results, adding SNP to semen diluents increases the motility and viability of spermatozoa, reduces lipid peroxidation in the sperm membrane, and improves sperm function.
Keywords: sperm motility, nitric oxide, lipid peroxidation, spermatozoa
Procedia PDF Downloads 357
481 Optimizing the Readability of Orthopaedic Trauma Patient Education Materials Using ChatGPT-4
Authors: Oscar Covarrubias, Diane Ghanem, Christopher Murdock, Babar Shafiq
Abstract:
Introduction: ChatGPT is an advanced language AI tool designed to understand and generate human-like text. The aim of this study is to assess the ability of ChatGPT-4 to rewrite orthopaedic trauma patient education materials at the recommended 6th-grade reading level. Methods: Two independent reviewers accessed ChatGPT-4 (chat.openai.com) and gave identical instructions to simplify the readability of provided text to a 6th-grade level. All trauma-related articles by the Orthopaedic Trauma Association (OTA) and the American Academy of Orthopaedic Surgeons (AAOS) were provided sequentially. The academic grade level was determined using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE). Paired t-tests and Wilcoxon rank-sum tests were used to compare the FKGL and FRE between the ChatGPT-4-revised and original text. The intraclass correlation coefficient (ICC) was used to assess variability in ChatGPT-4-generated text between the two reviewers. Results: ChatGPT-4 significantly reduced FKGL and increased FRE scores in the OTA (FKGL: 5.7±0.5 compared to the original 8.2±1.1; FRE: 76.4±5.7 compared to the original 65.5±6.6; p < 0.001) and AAOS articles (FKGL: 5.8±0.8 compared to the original 8.9±0.8; FRE: 76±5.5 compared to the original 56.7±5.9; p < 0.001). On average, 14.6% of OTA and 28.6% of AAOS articles required at least two revisions by ChatGPT-4 to achieve a 6th-grade reading level. The ICC demonstrated poor reliability for FKGL (OTA 0.24, AAOS 0.45) and moderate reliability for FRE (OTA 0.61, AAOS 0.73). Conclusion: This study provides a novel, simple, and efficient method of using language AI to optimize the readability of patient education content, which may require only the surgeon's final proofreading. This method would likely be as effective for other medical specialties.
Keywords: artificial intelligence, AI, chatGPT, patient education, readability, trauma education
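Both readability scores used in the study have standard closed-form definitions over counted text statistics, so their behaviour is easy to check. The word, sentence, and syllable counts below are illustrative; a production tool would also need a syllable counter:

```python
# Flesch-Kincaid Grade Level and Flesch Reading Ease from counted
# text statistics. Counts are illustrative inputs, not study data.

def fkgl(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: lower means easier text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def fre(words, sentences, syllables):
    """Flesch Reading Ease: higher means easier text (roughly 0-100)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# 100 words in 10 sentences with 150 syllables:
print(fkgl(100, 10, 150))  # ≈ 6.01, i.e. roughly 6th grade
print(fre(100, 10, 150))   # ≈ 69.8
```

The formulas make the study's target concrete: shortening sentences (words per sentence) and choosing shorter words (syllables per word) are the only two levers that move either score.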
Procedia PDF Downloads 70
480 The Intersection of Art and Technology: Innovations in Visual Communication Design
Authors: Sareh Enjavi
Abstract:
In recent years, the field of visual communication design has seen a significant shift in the way that art is created and consumed, with the advent of new technologies like virtual reality, augmented reality, and artificial intelligence. This paper explores the ways in which technology is changing the landscape of visual communication design, and how designers are incorporating new technological tools into their artistic practices. The primary objective of this research paper is to investigate the ways in which technology is influencing the creative process of designers and artists in the field of visual communication design. The paper also aims to examine the challenges and limitations that arise from the intersection of art and technology in visual communication design, and to identify strategies for overcoming these challenges. Drawing on examples from a range of fields, including advertising, fine art, and digital media, this paper highlights the exciting innovations that are emerging as artists and designers use technology to push the boundaries of traditional artistic expression. The paper argues that embracing technological innovation is essential for the continued evolution of visual communication design. By exploring the intersection of art and technology, designers can create new and exciting visual experiences that engage and inspire audiences in new ways. The research also contributes to the theoretical and methodological understanding of the intersection of art and technology, a topic that has gained significant attention in recent years. 
Ultimately, this paper argues that innovation and experimentation are essential to the continued evolution of visual communication design, and highlights the exciting work emerging from the intersection of art and technology.
Keywords: visual communication design, art and technology, virtual reality, interactive art, creative process
Procedia PDF Downloads 114
479 A Virtual Reality Simulation Tool for Reducing the Risk of Building Content during Earthquakes
Authors: Ali Asgary, Haopeng Zhou, Ghassem Tofighi
Abstract:
The use of virtual reality (VR), augmented reality (AR), and extended reality (XR) technologies for training and education has increased in recent years as more hardware and software tools have become available and accessible to larger groups of users. Similarly, the applications of these technologies in earthquake-related training and education are on the rise. Several studies have reported promising results for the use of VR and AR for evacuation behaviour and training under earthquake situations. They simulate the impacts that an earthquake has on buildings and their contents, and how building occupants and users can find safe spots or open paths to the outside. Considering that a considerable number of earthquake injuries and fatalities are linked to the behaviour of building contents, our goal is to use these technologies to reduce the impacts of building contents on people. Building on our artificial intelligence (AI) based indoor earthquake risk assessment application, which enables users to assess the risks associated with building contents during earthquakes from their mobile devices, we develop a virtual reality application to demonstrate the behaviour of different building contents during earthquakes, their associated moving, spreading, falling, and collapsing risks, and their risk mitigation methods. We integrate realistic seismic models and building-content behaviour, with and without risk mitigation measures, into the virtual reality environment. The application can be used for training architects, interior design experts, and building users to enhance the indoor safety of buildings that can sustain earthquakes. This paper describes and demonstrates the application's development background, structure, components, and usage.
Keywords: virtual reality, earthquake damage, building content, indoor risks, earthquake risk mitigation, interior design, Unity game engine, Oculus
Procedia PDF Downloads 102
478 A Review on the Hydrologic and Hydraulic Performances in Low Impact Development-Best Management Practices Treatment Train
Authors: Fatin Khalida Abdul Khadir, Husna Takaijudin
Abstract:
The bioretention system is one of the low impact development (LID) best management practice (BMP) alternatives to conventional stormwater management. Incorporating both filtration and infiltration, initial research on bioretention systems has shown that this practice extensively decreases runoff volumes and peak flows. The LID-BMP treatment train is one of the latest LID-BMPs for stormwater treatment in urbanized watersheds. The treatment train was developed to overcome the drawbacks of conventional LID-BMPs and aims to enhance the performance of existing practices. It is also used to improve both water quality and water quantity controls, as well as to maintain the natural hydrology of an area despite ongoing large-scale development. The objective of this paper is to review the effectiveness of conventional LID-BMPs on hydrologic and hydraulic performance through column studies in different configurations. Previous studies on applications of the LID-BMP treatment train are reviewed and used as guidelines for implementing this system at Universiti Teknologi Petronas (UTP) and elsewhere. Analyses of hydrologic and hydraulic performance using artificial neural network (ANN) models are also reviewed for use in this study. In this study, the role of the LID-BMP treatment train is tested by arranging bioretention cells in series, in order to control floods that occur currently and that may occur once construction of the new buildings at UTP is completed. A summary of research findings on the performance of the system is provided, including proposed modifications to the designs.
Keywords: bioretention system, LID-BMP treatment train, hydrological and hydraulic performance, ANN analysis
Procedia PDF Downloads 116477 Air Pollution: The Journey from Single Particle Characterization to in vitro Fate
Authors: S. Potgieter-Vermaak, N. Bain, A. Brown, K. Shaw
Abstract:
It is well known from public news media that air pollution is a health hazard and is responsible for early deaths. The quantification of the relationship between air quality and health is a pressing question not easily answered. It is known that airborne particulate matter (APM) <2.5 µm deposits in the tracheal and alveolar zones, and our research probes the possibility of quantifying pulmonary injury by linking reactive oxygen species (ROS) in these particles to DNA damage. Currently, APM mass concentration is linked to early deaths, and limited studies probe the influence of other properties on human health. To predict the full extent and type of impact, particles need to be characterised for chemical composition and structure. APMs are routinely analysed for their bulk composition, but of late, analysis on a micro level probing single-particle character, using micro-analytical techniques, is being considered. The latter, single particle analysis (SPA), permits one to obtain detailed information on chemical character from nano- to micron-sized particles. This paper aims to provide a snapshot of studies linking chemical characterisation data with in-vitro studies to inform on personal health risks. For this purpose, two studies are compared, namely, the bioaccessibility of the inhalable fraction of urban road dust versus total suspended particulates (TSP) collected in the same urban environment. The significant influence of metals such as Cu and Fe in TSP on DNA damage is illustrated. The speciation of Hg (determined by SPA) in different urban environments proved to dictate its bioaccessibility in artificial lung fluids, rather than its concentration.
Keywords: air pollution, human health, in-vitro studies, particulate matter
Procedia PDF Downloads 223
476 Characteristics of Tremella fuciformis and Annulohypoxylon stygium for Optimal Cultivation Conditions
Authors: Eun-Ji Lee, Hye-Sung Park, Chan-Jung Lee, Won-Sik Kong
Abstract:
We analyzed the DNA sequence of the ITS (Internal Transcribed Spacer) region of the 18S ribosomal gene and compared it with T. fuciformis and Hypoxylon sp. sequences in the BLAST database; the collected T. fuciformis and Hypoxylon sp. isolates showed over 99% homology. To select the optimal medium for T. fuciformis, five media were used: Potato Dextrose Agar (PDA), Mushroom Complete Medium (MCM), Malt Extract Agar (MEA), yeast extract medium (YM), and Compost Extract Dextrose Agar (CDA). T. fuciformis showed the best growth on PDA medium, and Hypoxylon sp. showed the best growth on MCM. To investigate the optimum pH and temperature, the pH range was set to pH 4 to pH 8 and the temperature range to 15°C to 35°C (5°C intervals). Optimal culture conditions were pH 5 at 25°C for T. fuciformis and pH 6 at 25°C for Hypoxylon sp. To identify the most suitable carbon source, we used fructose, galactose, saccharose, soluble starch, inositol, glycerol, xylose, dextrose, lactose, dextrin, Na-CMC, adonitol, mannitol, mannose, maltose, raffinose, cellobiose, ethanol, salicin, glucose, and arabinose. The optimum carbon source was xylose for T. fuciformis and arabinose for Hypoxylon sp. Using the column test, we identified sawdust suitable for T. fuciformis, since the composition of the sawdust affects the growth of T. fuciformis fruiting bodies. The sawdusts used were oak tree, pine tree, poplar, birch, cottonseed meal, and cottonseed hull. In artificial cultivation of T. fuciformis on sawdust medium, T. fuciformis and Hypoxylon sp. showed fast mycelial growth on a mixture of oak tree sawdust, cottonseed hull, and wheat bran.
Keywords: cultivation, optimal condition, Tremella fuciformis, nutritional source
Procedia PDF Downloads 209
475 Development of Noninvasive Method to Analyze Dynamic Changes of Matrix Stiffness and Elasticity Characteristics
Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Dobdin, Anatoly Skripal, Andrey Usanov, Dmitry Usanov
Abstract:
One of the most important unsolved problems in modern medicine is the increase of chronic diseases that lead to organ dysfunction or even complete loss of function. Current methods of treatment do not result in decreased mortality and disability statistics. Currently, the best treatment for many patients is still transplantation of organs and/or tissues. Therefore, finding a way of correct artificial matrix biofabrication, given the limited number of natural organs available for transplantation, is a critical task. One important problem that needs to be solved is the development of a nondestructive and noninvasive method to analyze dynamic changes in the mechanical characteristics of a matrix with minimal side effects on the growing cells. This research focused on investigating the properties of the matrix as a marker of graft condition. In this study, collagen gel with human primary dermal fibroblasts in suspension (60, 120, and 240 × 10³ cells/mL) and collagen gel with cell spheroids were used as model objects. The stiffness and elasticity characteristics were evaluated by a semiconductor laser autodyne. The dependence of stiffness and elasticity on time and cell concentration was investigated. It was shown that these properties changed in a non-linear manner with respect to cell concentration. The maximum matrix stiffness was observed in the collagen gel with a cell concentration of 120 × 10³ cells/mL. This study proved the opportunity to use the mechanical properties of the matrix as a marker of graft condition, which can be measured by the noninvasive semiconductor laser autodyne technique.
Keywords: graft, matrix, noninvasive method, regenerative medicine, semiconductor laser autodyne
Procedia PDF Downloads 343
474 Analysis of Truck Drivers’ Distraction on Crash Risk
Authors: Samuel Nderitu Muchiri, Tracy Wangechi Maina
Abstract:
Truck drivers face a myriad of challenges in their profession, and enhancements in logistics effectiveness can be pivotal in propelling economic development. The specific objective of the study was to assess the influence of driver distraction on crash risk. The study is significant as it elucidates best practices that truck drivers can embrace to enhance road safety. These include combining behaviors that enable drivers to successfully execute multifaceted functions such as finding and following routes, evading collisions, monitoring speed, adhering to road regulations, and evaluating vehicle systems' conditions. The analysis involved an empirical review of ten previous studies related to the research topic. The articles revealed that driver distraction plays a substantial role in road accidents and other serious road safety incidents across the globe. Africa depends immensely on the freight transport sector to facilitate supply chain operations. Several studies indicate that drivers who operate primarily on rural roads, such as those found in Sub-Saharan Africa, have an increased propensity to engage in distracting activities such as cell phone usage while driving. The findings also identified the need for digitalization in truck driving operations, including carrier management techniques such as fatigue management, artificial intelligence, and automating functions like cell phone usage controls. The recommendations can aid policymakers and commercial truck carriers in deepening their understanding of driver distraction and enforcing mitigations to foster road safety.
Keywords: truck drivers, distraction, digitalization, crash risk, road safety
Procedia PDF Downloads 46
473 Fabrication of Hybrid Scaffolds Consisting of Cell-laden Electrospun Micro/Nanofibers and PCL Micro-structures for Tissue Regeneration
Authors: MyungGu Yeo, JongHan Ha, Gi-Hoon Yang, JaeYoon Lee, SeungHyun Ahn, Hyeongjin Lee, HoJun Jeon, YongBok Kim, Minseong Kim, GeunHyung Kim
Abstract:
Tissue engineering is a rapidly growing interdisciplinary research area that may provide options for treating damaged tissues and organs. As a promising technique for regenerating various tissues, this technology requires biomedical scaffolds, which serve as an artificial extracellular matrix (ECM) to support neotissue growth. Electrospun micro/nanofibers have been used widely in tissue engineering because of their high surface-area-to-volume ratio and structural similarity to the extracellular matrix. However, low mechanical sustainability, low 3D shape-ability, and low cell infiltration have been major limitations to their use. In this work, we propose new hybrid scaffolds interlayered with cell-laden electrospun micro/nanofibers and poly(caprolactone) (PCL) microstructures. We also applied various concentrations of alginate and electric field strengths to determine optimal conditions for the cell-electrospinning process. The combination of a cell-laden bioink (2 × 10⁵ osteoblast-like MG63 cells/mL, 2 wt% alginate, 2 wt% poly(ethylene oxide), and 0.7 wt% lecithin) and a 0.16 kV/mm electric field showed the highest cell viability and fiber formation in this process. Using these conditions and PCL microstructures, we achieved mechanically stable hybrid scaffolds. In addition, the cells embedded in the fibrous structure were viable and proliferated. We suggest that cell-embedded hybrid scaffolds fabricated using the cell-electrospinning process may be useful for various soft- and hard-tissue regeneration applications.
Keywords: bioink, cell-laden scaffold, micro/nanofibers, poly(caprolactone)
Procedia PDF Downloads 378
472 Stress-Strain Relation for Human Trabecular Bone Based on Nanoindentation Measurements
Authors: Marek Pawlikowski, Krzysztof Jankowski, Konstanty Skalski, Anna Makuch
Abstract:
The nanoindentation or depth-sensing indentation (DSI) technique has proven very useful for measuring the mechanical properties of various tissues at a micro-scale. Bone tissue, both trabecular and cortical, is one of the tissues most commonly tested by means of DSI. Most often, such tests on bone samples are carried out to compare the mechanical properties of lamellar and interlamellar bone, osteonal bone, as well as compact and cancellous bone. In this paper, a relation between stress and strain for human trabecular bone is presented. The relation, and the formulation of a constitutive model for human trabecular bone, is based on the results of nanoindentation tests. In the study, the approach proposed by Oliver and Pharr is adapted. The tests were carried out on samples of trabecular tissue extracted from human femoral heads harvested during artificial hip joint implantation surgeries. Before sample preparation, the heads were kept in 95% alcohol at 4°C, and the cubic samples cut out of the heads were stored in the same conditions. The dimensions of the specimens were 25 mm x 25 mm x 20 mm. A total of 20 samples were tested. The age range of the donors was 56 to 83 years. The tests were conducted with a spherical indenter tip of diameter 0.200 mm. The maximum load was P = 500 mN and the loading rate 500 mN/min. The data obtained from DSI tests allow one to determine bone behaviour only in terms of nanoindentation force vs. nanoindentation depth. However, it is more interesting and useful to know the characteristics of trabecular bone in the stress-strain domain, as this allows one to simulate trabecular bone behaviour in a more realistic way. The stress-strain curves obtained in the study show a relation between age and the mechanical behaviour of trabecular bone. It was also observed that the bone matrix of trabecular tissue exhibits an ability to absorb energy.
Keywords: constitutive model, mechanical behaviour, nanoindentation, trabecular bone
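For context, the Oliver-Pharr approach named above extracts a reduced (contact) modulus from the unloading stiffness S and the projected contact area A, then separates out the sample modulus. A minimal sketch; the indenter modulus and Poisson ratio below are typical diamond values, not figures from the study:

```python
import math

def reduced_modulus(S, A):
    # Oliver-Pharr relation: Er = (sqrt(pi) / 2) * S / sqrt(A)
    # S: unloading contact stiffness, A: projected contact area
    return (math.sqrt(math.pi) / 2.0) * S / math.sqrt(A)

def sample_modulus(Er, nu_sample, E_indenter=1141e9, nu_indenter=0.07):
    # 1/Er = (1 - nu_s^2)/E_s + (1 - nu_i^2)/E_i, solved for the sample modulus E_s
    sample_term = 1.0 / Er - (1.0 - nu_indenter ** 2) / E_indenter
    return (1.0 - nu_sample ** 2) / sample_term
```

Because the indenter is far stiffer than trabecular bone, the sample modulus comes out only slightly below the reduced modulus.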
Procedia PDF Downloads 218
471 Design and Characterization of Ecological Materials Based on Demolition and Concrete Waste, Casablanca (Morocco)
Authors: Mourad Morsli, Mohamed Tahiri, Azzedine Samdi
Abstract:
Cities are the urbanized territories most prone to heavy resource consumption (materials, energy). In Morocco, the economic capital Casablanca is one of them, with its 4M inhabitants and its 60% share of the kingdom's economic and industrial activity. In the absence of a legal status in force, urban development has favored the generation of millions of tons of demolition and construction waste scattered in open spaces, causing a significant nuisance to the environment and citizens. Hence, the main objective of our work is to valorize concrete waste. The representative wastes are mainly concrete, fired clay bricks, ceramic tiles, marble panels, gypsum, and scrap metal. The work carried out includes: geolocation combining artificial intelligence, GIS, and Google Earth, which allowed the estimation of the quantity of these wastes per site; then the sorting, crushing, grinding, and physicochemical characterization of the collected samples, which allowed the definition of exploitation routes for each extracted fraction for integrated management of the said wastes. In the present work, we exploited the fractions obtained after sieving the representative samples to incorporate them in the manufacture of new ecological construction materials. The prepared formulations were tested and characterized in terms of physical criteria (specific surface, resistance to flexion and compression) and appearance (cracks, deformation). We present in detail the main results of our research work and also describe the specific properties of each material developed.
Keywords: demolition and construction waste, GIS combination software, inert waste recovery, ecological materials, Casablanca, Morocco
Procedia PDF Downloads 131
470 Effect of Rice Cultivars and Water Regimes Application as Mitigation Strategy for Greenhouse Gases in Paddy Fields
Authors: Mthiyane Pretty, Mitsui Toshiake, Aycan Murat, Nagano Hirohiko
Abstract:
Methane (CH₄) is one of the most dangerous greenhouse gases (GHG) emitted into the atmosphere by terrestrial ecosystems, with a global warming potential (GWP) 25-34 times that of CO₂ on a centennial scale. Paddy rice cultivation is a major source of methane emission and a major driver of climate change. Thus, it is necessary to find GHG emission mitigation strategies for rice cultivation. A study was conducted at Niigata University, and its prime objective was to determine the CH₄ emissions of lowland (NU1, YNU, Nipponbare, Koshihikari) and upland (Norin 1, Norin 24, Hitachihatamochi) japonica rice varieties grown in two media: paddy field soil and artificial soil. The treatments were laid out in a split-plot design. The soil moisture was kept at 40-50% and 70%, respectively. The CH₄ emission rates were determined by collecting air samples using the closed chamber technique and measuring CH₄ concentrations with a gas chromatograph. CH₄ emission rates varied with the growth and development of the rice varieties and with the type of growth medium. The soil moisture was monitored at a soil depth of 5-10 cm with a HydraGO portable soil sensor system every three days for each pot, and temperatures were recorded by a sensitive thermometer. The lowest cumulative CH₄ emission rate was observed in Norin 24, particularly under 40-50% soil moisture. Across the rice genotypes, 40-50% soil moisture significantly reduced the cumulative CH₄, followed by irrigation at 70% soil moisture. During the tillering stage, no significant variation in tillering and plant height was observed between the 40-50% and 70% soil moisture treatments. This study suggests that the cultivation of Norin 24 and Norin 1 under 70% soil irrigation could be effective at reducing CH₄ emissions in rice fields.
Keywords: methane, paddy fields, rice varieties, soil moisture
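The chamber measurements described above yield point-in-time fluxes; cumulative emission is commonly obtained by trapezoidal integration between sampling dates, and the GWP factor then converts the total to CO₂-equivalent. A minimal sketch under assumed units (the flux values and the GWP of 28 below are illustrative; the abstract cites a range of 25-34):

```python
def cumulative_emission(days, fluxes):
    """Trapezoidal integration of chamber-measured CH4 fluxes over sampling days."""
    total = 0.0
    for i in range(1, len(days)):
        total += (fluxes[i - 1] + fluxes[i]) / 2.0 * (days[i] - days[i - 1])
    return total

def co2_equivalent(ch4_mass, gwp=28):
    # scale CH4 mass by a centennial global warming potential
    return ch4_mass * gwp

# Constant flux of 2 units/day sampled on days 0, 1, 2 -> 4 units of CH4 total
total_ch4 = cumulative_emission([0, 1, 2], [2.0, 2.0, 2.0])
```

Sampling every three days, as in the study, simply widens the trapezoids; the same formula applies.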
Procedia PDF Downloads 90
469 An Intelligent Prediction Method for Annular Pressure Driven by Mechanism and Data
Authors: Zhaopeng Zhu, Xianzhi Song, Gensheng Li, Shuo Zhu, Shiming Duan, Xuezhe Yao
Abstract:
Accurate calculation of wellbore pressure is of great significance for preventing wellbore risk during drilling. The traditional mechanism model requires many iterative solving procedures, which reduces calculation efficiency and makes it difficult to meet the demand for dynamic control of wellbore pressure. In recent years, many scholars have introduced artificial intelligence algorithms into wellbore pressure calculation, significantly improving its efficiency and accuracy. However, due to the 'black box' property of intelligent algorithms, existing intelligent models of wellbore pressure perform poorly outside the scope of their training data and overreact to data noise, often producing abnormal calculation results. In this study, the multiphase flow mechanism is embedded into the objective function of a neural network model as a constraint condition, and an intelligent prediction model of wellbore pressure under this constraint is established based on more than 400,000 sets of pressure measurement while drilling (MPD) data. The multiphase flow constraint makes the predictions of the neural network model more consistent with the distribution law of wellbore pressure, overcoming the black-box attribute of the model to some extent. In particular, the accuracy on an independent test data set is further improved, and abnormal calculated values essentially disappear. This prediction method, driven jointly by MPD data and the multiphase flow mechanism, is the main way to predict wellbore pressure accurately and efficiently in the future.
Keywords: multiphase flow mechanism, pressure while drilling data, wellbore pressure, mechanism constraints, combined drive
Procedia PDF Downloads 171
468 Influence of Pine Wood Ash as Pozzolanic Material on Compressive Strength of a Concrete
Authors: M. I. Nicolas, J. C. Cruz, Ysmael Verde, A.Yeladaqui-Tello
Abstract:
The manufacture of Portland cement has revolutionized the construction industry since the nineteenth century; however, its high cost and the large amount of energy required for its manufacture have encouraged, since the seventies, the search for alternative materials to replace it partially or completely. Among the materials studied as cement replacements are ashes. In the city of Chetumal, in the south of the Yucatan Peninsula in Mexico, there are no natural sources of pozzolanic ash. In the present study, the cementitious properties of artificial ash resulting from the combustion of waste pine wood were analyzed. The ash obtained was sieved through a No. 200 screen, and a fraction was analyzed using X-ray diffraction, with the aim of identifying the crystalline phases and the particle sizes of the pozzolanic material via the Debye-Scherrer equation. From the characterization of the materials, mixtures for a concrete of f'c = 250 kg/cm² were designed with the ACI 211.1 method: a control mixture and mixtures with partial replacements of Portland cement by 5%, 10%, and 12% pine wood ash. Resistance to simple axial compression of specimens prepared with each concrete mixture was evaluated at 3, 14, and 28 days of curing. Pozzolanic activity was observed in the ash obtained, confirming the presence of crystalline silica (SiO₂, 40.24 nm) and alumina (Al₂O₃, 35.08 nm). At 28 days of curing, the specimens prepared with 5% ash reached a compressive strength 63% higher than the design value; for specimens with 10% ash, it was 45%, and for specimens with 12% ash, only 36%. Compared to the control mixture, which after 28 days showed f'c = 423.13 kg/cm², the specimens reached only 97%, 86%, and 82% of the compressive strength for mixtures containing 5%, 10%, and 12% ash, respectively. The pozzolanic activity of pine wood ash influences the compressive strength, which indicates that up to 12% of Portland cement can be replaced by ash without compromising the design strength; however, there is a decrease in strength compared to the control concrete.
Keywords: concrete, pine wood ash, pozzolanic activity, X-ray
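The crystallite sizes quoted above (40.24 nm for SiO2, 35.08 nm for Al2O3) come from the Debye-Scherrer relation, tau = K * lambda / (beta * cos(theta)), applied to X-ray diffraction peak widths. A minimal sketch; the wavelength, peak width, and angle in the example are illustrative values, not the study's measurements:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size tau = K * lambda / (beta * cos(theta))."""
    beta = math.radians(fwhm_deg)              # peak FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle is half of 2-theta
    return K * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha radiation (0.15406 nm), a 0.2-degree-wide peak near 2-theta = 26.6 degrees
size_nm = scherrer_size(0.15406, 0.2, 26.6)
```

Narrower diffraction peaks (smaller beta) indicate larger crystallites, which is why peak broadening is the measured quantity.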
Procedia PDF Downloads 452
467 Supervisory Controller with Three-State Energy Saving Mode for Induction Motor in Fluid Transportation
Authors: O. S. Ebrahim, K. O. Shawky, M. O. S. Ebrahim, P. K. Jain
Abstract:
An induction motor (IM) driving a pump is the main consumer of electricity in a typical fluid transportation system (FTS). It has been shown that changing the connection of the stator windings from delta to star at no load can achieve noticeable active and reactive energy savings. This paper proposes a supervisory hysteresis liquid-level control with a three-state energy saving mode (ESM) for the IM in an FTS that includes a storage tank. The IM pump drive comprises a modified star/delta switch and a hydraulic coupler. Three ESM states are defined alongside normal running and named by analogy with computer power-saving modes: a sleeping mode, in which the motor runs at no load with delta stator connection; a hibernate mode, in which the motor runs at no load with a star connection; and motor shutdown, the third energy saver mode. A logic flowchart is synthesized to select the motor state at no load for the best energetic cost reduction, considering the motor thermal capacity used. An artificial neural network (ANN) state estimator, based on a recurrent architecture, is constructed and trained in order to provide fault-tolerant capability for the supervisory controller. Wald's sequential test is used for sensor fault detection. Theoretical analysis, preliminary experimental testing, and computer simulations are performed to show the effectiveness of the proposed control in terms of reliability, power quality, and energy/coenergy cost reduction, with the suggestion of power factor correction.
Keywords: ANN, ESM, IM, star/delta switch, supervisory control, FTS, reliability, power quality
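Wald's sequential test mentioned above accumulates a log-likelihood ratio sample by sample and stops at the first threshold crossing, which suits online sensor fault detection. A minimal sketch for a Gaussian mean-shift fault model; the fault magnitude and error rates are illustrative assumptions, not parameters from the paper:

```python
import math

def wald_sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Sequential probability ratio test: H0 mean mu0 (healthy) vs H1 mean mu1 (fault)."""
    upper = math.log((1 - beta) / alpha)   # crossing upward -> declare fault
    lower = math.log(beta / (1 - alpha))   # crossing downward -> declare healthy
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Gaussian log-likelihood ratio increment for one observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        if llr >= upper:
            return "fault", n
        if llr <= lower:
            return "healthy", n
    return "undecided", len(samples)
```

Unlike a fixed-size test, the decision time adapts to the evidence: a large mean shift is flagged after only a few samples, while borderline readings keep the test running.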
Procedia PDF Downloads 186