Search results for: automatic built-in-stabilizers
176 Identifying Artifacts in SEM-EDS of Fouled RO Membranes Used for the Treatment of Brackish Groundwater Through Raman and ICP-MS Analysis
Authors: Abhishek Soti, Aditya Sharma, Akhilendra Bhushan Gupta
Abstract:
Fouled reverse osmosis membranes are primarily characterized by Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS) for a detailed investigation of foulants; however, this approach has severe limitations on several counts. Apart from inaccuracy in spectral properties and inevitable interferences and interactions between sample and instrument, misidentification of elements due to overlapping peaks is a significant drawback of EDS. This paper discusses this limitation by analyzing fouled polyamide RO membranes derived from community RO plants of Rajasthan treating brackish water, using a combination of results obtained from EDS and Raman spectroscopy and cross-corroborating them with ICP-MS analysis of water samples prepared by dissolving the deposited salts. The anomalous behavior of different morphic forms of CaCO₃ in aqueous suspensions tends to introduce false reporting of the presence of certain heavy metals and rare earth metals in the scales of fouled RO membranes used for treating brackish groundwater when analyzed using commonly adopted techniques like SEM-EDS or Raman spectrometry. Peaks of CaCO₃ reflected in the EDS spectra of the membrane were found to be misinterpreted as scandium due to the automatic assignment of elements by the software. Similarly, the morphic forms merged with the dominant peak of CaCO₃ might be reflected as a single peak of molybdenum in the Raman spectrum. A subsequent ICP-MS analysis of the deposited salts showed that both Sc and Mo were below detectable levels. It is therefore essential to cross-confirm the results through a destructive analysis method to avoid such interferences. It is further recommended to study the different morphic forms of CaCO₃ scales, as they exhibit anomalous properties like reverse solubility with temperature and hence altered precipitation tendencies, for an accurate description of the composition of scales, which is vital for the smooth functioning of RO systems.
Keywords: reverse osmosis, foulant analysis, groundwater, EDS, artifacts
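The scandium misassignment described above is a peak-overlap problem: the Ca K-beta emission line sits within a typical EDS detector's resolution of the Sc K-alpha line. A small numeric illustration (the line energies are standard tabulated values quoted approximately; the 130 eV resolution is an assumed, typical figure):

```python
# Approximate tabulated X-ray emission-line energies (keV); the 0.130 keV
# detector resolution (FWHM) is an assumed, typical figure for EDS.
CA_KBETA_KEV = 4.013
SC_KALPHA_KEV = 4.090
RESOLUTION_KEV = 0.130

gap = abs(SC_KALPHA_KEV - CA_KBETA_KEV)
if gap < RESOLUTION_KEV:
    print(f"Ca K-beta / Sc K-alpha gap is {gap * 1000:.0f} eV: unresolved, "
          "so automatic peak assignment can report spurious Sc")
```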
175 The Effect of Penalizing Wrong Answers in the Computerized Modified Multiple Choice Testing System
Authors: Min Hae Song, Jooyong Park
Abstract:
Even though assessment using information and communication technology will most likely lead the future of educational assessment, there is little research on this topic. Computerized assessment will not only cut costs but also measure students' performance in ways not possible before. In this context, this study introduces a tool which can overcome the problems of multiple-choice tests. Multiple-choice (MC) tests are efficient for automatic grading; however, their structure allows students to find the correct answer among the options even when they do not know the answer. A computerized modified multiple-choice testing system (CMMT) was developed using the interactivity of computers, which presents the question first and the options later, for a short time, when the student requests them. This study was conducted to find out whether penalizing wrong answers in CMMT could reduce random guessing. We checked whether students knew the answers by having them respond to short-answer tests before choosing among the given options in the CMMT or MC format. Ninety-four students were tested with the direction that they would be penalized for wrong answers, but not for no response. There were four experimental conditions: two conditions of a high or low penalty percentage, each in the traditional multiple-choice or CMMT format. In the low-penalty condition, the penalty rate was the probability of getting the correct answer by random guessing. In the high-penalty condition, students were penalized at twice the rate of the low-penalty condition. The results showed that the number of no-responses was significantly higher and the number of random guesses significantly lower for the CMMT format. There were no significant differences between the two penalty conditions. This result may be due to the fact that the actual score difference between the two conditions was too small. In the discussion, the possibility of applying CMMT-format tests while penalizing wrong answers in actual testing settings is addressed.
Keywords: computerized modified multiple choice test format, multiple-choice test format, penalizing, test format
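A minimal sketch of the penalty scheme as we read it from the abstract: the per-wrong-answer deduction equals the chance probability 1/k in the low-penalty condition and 2/k in the high-penalty condition (this reading, and the example scores, are assumptions, not the paper's exact scoring code):

```python
def penalized_score(num_correct, num_wrong, num_options, high_penalty=False):
    """Score a test where wrong answers are penalized but blanks are not.

    Low-penalty condition: each wrong answer costs 1/k of a point, the
    chance probability of guessing correctly among k options; the high
    condition doubles that rate (both rates per the abstract).
    """
    rate = 1.0 / num_options
    if high_penalty:
        rate *= 2.0
    return num_correct - rate * num_wrong

# A hypothetical student: 20 right, 8 wrong, 12 blank on 4-option items.
print(penalized_score(20, 8, 4))                     # 20 - 8*0.25 = 18.0
print(penalized_score(20, 8, 4, high_penalty=True))  # 20 - 8*0.50 = 16.0
```

Under the low rate, leaving an item blank and guessing have similar expected value, so the no-response option only becomes attractive once the penalty is felt to be real, which is consistent with the higher no-response counts reported for CMMT.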
174 Analyzing Safety Incidents Using the Fatigue Risk Index Calculator as an Indicator of Fatigue within a UK Rail Franchise
Authors: Michael Scott Evans, Andrew Smith
Abstract:
The feeling of fatigue at work can have potentially devastating consequences. The aim of this study was to investigate whether a well-established objective indicator of fatigue, the Fatigue Risk Index (FRI) calculator used by the rail industry, is an effective indicator of the number of safety incidents in which fatigue could have been a contributing factor. The study received ethics approval from Cardiff University's Ethics Committee (EC.16.06.14.4547). A total of 901 safety incidents were recorded from a single British rail franchise between 1 June 2010 and 31 December 2016 in the Safety Management Information System (SMIS). The safety incident types for which fatigue could have been a contributing factor were: Signal Passed at Danger (SPAD), Train Protection & Warning System (TPWS) activation, Automatic Warning System (AWS) slow to cancel, failed to call, and station overrun. For the 901 recorded safety incidents, the scheduling system CrewPlan was used to extract the Fatigue Index (FI) score and Risk Index (RI) score of all train drivers on the day of the safety incident. Only the working rosters of 64.2% (N = 578) of drivers (550 male and 28 female), ranging in age from 24 to 65 years (M = 47.13, SD = 7.30), were accessible for analysis. Analysis of all 578 train drivers who were involved in safety incidents revealed that 99.8% (N = 577) of Fatigue Index (FI) scores fell within or below the guideline threshold of 45, and 97.9% (N = 566) of Risk Index (RI) scores fell below the 1.6 threshold. These scores represent good practice within the rail industry. The findings indicate that the current objective indicator, i.e., the FRI calculator used by this British rail franchise, was not an effective predictor of safety incidents: incidents in which fatigue could have been a contributing factor were associated with above-threshold FI scores in only 0.2% of cases and above-threshold RI scores in only 2.1% of cases. Further research is needed to determine whether other contributing factors could better explain why such a large proportion of train drivers involved in safety incidents, in which fatigue could have been a contributing factor, have such low FI and RI scores.
Keywords: fatigue risk index calculator, objective indicator of fatigue, rail industry, safety incident
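A minimal sketch of the threshold screening implied above. The FI 45 and RI 1.6 cut-offs come from the abstract; the driver records below are hypothetical:

```python
FI_THRESHOLD = 45.0   # Fatigue Index guideline threshold (from the abstract)
RI_THRESHOLD = 1.6    # Risk Index guideline threshold (from the abstract)

# Hypothetical (driver_id, fatigue_index, risk_index) records on incident days.
incident_scores = [
    ("D001", 32.1, 1.02),
    ("D002", 47.6, 1.71),   # would be flagged on both indices
    ("D003", 41.0, 1.44),
]

flagged = [rec for rec in incident_scores
           if rec[1] > FI_THRESHOLD or rec[2] > RI_THRESHOLD]
share = 100.0 * len(flagged) / len(incident_scores)
print(f"{share:.1f}% of incident-day rosters exceed a fatigue threshold")
```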
173 Detection of Safety Goggles on Humans in Industrial Environment Using Faster Region-Based Convolutional Neural Network with Rotated Bounding Box
Authors: Ankit Kamboj, Shikha Talwar, Nilesh Powar
Abstract:
To successfully deliver products to the market, employees need to be in a safe environment, especially in an industrial and manufacturing environment. The consequences of failing to wear safety glasses while working in industrial plants can pose a high risk to employees, hence the need to develop a real-time automatic detection system which detects persons (violators) not wearing safety glasses. In this study, a convolutional neural network (CNN) algorithm called Faster Region-Based CNN (Faster R-CNN) with rotated bounding boxes has been used for detecting safety glasses on persons; the algorithm has the advantage of detecting safety glasses at different orientation angles. The proposed method of rotated bounding boxes with a convolutional neural network first detects a person in the images, and then detects whether the person is wearing safety glasses or not. The video data are captured at the entrance of restricted zones of the industrial environment (manufacturing plant) and converted into images at 2 frames per second. In the first step, a CNN with weights pre-trained on the COCO dataset is used for person detection, and the detections are cropped as images. The safety goggles are then labelled on the cropped images using the image labelling tool roLabelImg, which annotates the ground truth of rotated objects more accurately; the annotations obtained are further modified to describe the four corner coordinates of the rotated rectangular bounding box. Next, Faster R-CNN with rotated bounding boxes is used to detect safety goggles and is compared with traditional bounding-box Faster R-CNN in terms of detection accuracy (average precision), which shows the effectiveness of the proposed method for the detection of rotated objects. The deep learning benchmarking is done on a Dell workstation with a 16GB Nvidia GPU.
Keywords: CNN, deep learning, faster RCNN, roLabelImg rotated bounding box, safety goggle detection
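A minimal sketch of the centre-width-height-angle to four-corner conversion that such annotation post-processing typically performs (the (cx, cy, w, h, theta) parameterisation is an assumption; the paper's exact encoding may differ):

```python
import numpy as np

def rotated_box_corners(cx, cy, w, h, angle_deg):
    """Return the four corner coordinates of a rotated bounding box.

    (cx, cy) is the box centre, (w, h) its width and height, and
    angle_deg the rotation angle, one common parameterisation of
    rotated boxes; the paper's exact encoding may differ.
    """
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    # Axis-aligned corner offsets, rotated then translated to the centre.
    offsets = np.array([[-w/2, -h/2], [w/2, -h/2], [w/2, h/2], [-w/2, h/2]])
    return offsets @ rot.T + np.array([cx, cy])

print(rotated_box_corners(100.0, 50.0, 60.0, 20.0, 30.0))
```

Training on such corners lets the detector score goggles whose long axis is not aligned with the image axes, which is the advantage the abstract claims over axis-aligned boxes.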
172 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model
Authors: Anshika Kankane, Dongshik Kang
Abstract:
Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sectors. A functional, cost-effective, and automatic approach has been adopted to address this problem. Computer vision combined with a deep learning-based model is proposed to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The proposed model detects seven categories of litter using a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape-matching network called HOGShape; the detected debris can then be cleared in time by clean-up organizations using the warning notifications of the system. The manually constructed dataset for this system was created by annotating images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven category labels. A HOG feature extractor pre-trained with LIBSVM is used along with multiple template matching between the HOG maps of images and the HOG maps of templates to improve the predicted mask images obtained via Mask R-CNN training. The system is intended to alert clean-up organizations in a timely manner with warning notifications based on live recorded beach debris data. The suggested network improves the misclassified debris masks of objects with different illuminations, shapes, and viewpoints, and of litter with occlusions and vague visibility.
Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching
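A toy sketch (not the authors' HOGShape network) of the HOG-map template-matching idea, using scikit-image's HOG implementation on synthetic arrays; the patch sizes and HOG parameters are assumptions:

```python
import numpy as np
from skimage.feature import hog

def hog_descriptor(image):
    # Dense HOG descriptor of a grayscale image patch.
    return hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
template = rng.random((64, 64))                        # stand-in debris template
candidate = template + rng.normal(0, 0.05, (64, 64))   # noisy re-observation
unrelated = rng.random((64, 64))

print(cosine_similarity(hog_descriptor(template), hog_descriptor(candidate)))
print(cosine_similarity(hog_descriptor(template), hog_descriptor(unrelated)))
```

The candidate that actually contains the template shape scores much higher, which is the signal used to correct dubious Mask R-CNN masks.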
171 Changes in Blood Pressure in a Longitudinal Cohort of Vietnamese Women
Authors: Anh Vo Van Ha, Yun Zhao, Luat Cong Nguyen, Tan Khac Chu, Phung Hoang Nguyen, Minh Ngoc Pham, Colin W. Binns, Andy H. Lee
Abstract:
This study aims to examine longitudinal changes in blood pressure (BP) during the 1-year postpartum period and to evaluate the influence of parity, maternal age at delivery, prepregnancy BMI, gestational weight gain, gestational age at delivery, and postpartum maternal weight. A prospective longitudinal cohort study of 883 singleton Vietnamese women was conducted in Hanoi, Haiphong, and Ho Chi Minh City, Vietnam during 2015-2017. Women diagnosed with gestational diabetes mellitus at 24-28 weeks of gestation, pre-eclampsia, or hypoglycemia were excluded from the analysis. BP was repeatedly measured at discharge and at 6 and 12 months postpartum using automatic blood pressure monitors. A linear mixed model with repeated measures was used to describe the changes occurring from pregnancy to 1 year postpartum. Parity, self-reported prepregnancy BMI, gestational weight gain, maternal age, and gestational age at delivery were treated as time-invariant variables, and measured maternal weight was treated as a time-varying variable in the models. Women with higher measured postpartum weight had a higher mean systolic blood pressure (SBP), 0.20 mmHg, 95% CI [0.12, 0.28]. Similarly, women with higher measured postpartum weight had a higher mean diastolic blood pressure (DBP), 0.15 mmHg, 95% CI [0.08, 0.23]. These differences were both statistically significant, P < 0.001. There were no differences in SBP or DBP depending on parity, maternal age at delivery, prepregnancy BMI, gestational weight gain, or gestational age at delivery. Compared with the discharge measurement, SBP was significantly higher at 6 months postpartum, 6.91 mmHg, 95% CI [6.22, 7.59], and at 12 months postpartum, 6.39 mmHg, 95% CI [5.64, 7.15]. Similarly, DBP was also significantly higher at 6 and 12 months postpartum than at discharge, 10.46 mmHg, 95% CI [9.75, 11.17], and 11.33 mmHg, 95% CI [10.54, 12.12]. In conclusion, BP measured repeatedly during the postpartum period (6 and 12 months postpartum) showed a statistically significant increase compared with discharge from the hospital. Maternal weight was a significant predictor of postpartum blood pressure over the 1-year postpartum period.
Keywords: blood pressure, maternal weight, postpartum, Vietnam
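A minimal sketch of this kind of repeated-measures analysis using a linear mixed model in statsmodels; the data below are synthetic stand-ins (30 hypothetical women, invented effect sizes), not the cohort's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for woman in range(30):                      # 30 hypothetical women
    base = rng.normal(105.0, 8.0)            # woman-specific baseline SBP
    weight = rng.normal(55.0, 6.0)           # weight at discharge (kg)
    for j, visit in enumerate(["discharge", "6m", "12m"]):
        w_j = weight - 1.5 * j               # gradual postpartum weight loss
        rows.append({
            "id": woman,
            "visit": visit,
            "weight": w_j,
            "sbp": base + 6.5 * (j > 0) + 0.2 * w_j + rng.normal(0, 3.0),
        })
df = pd.DataFrame(rows)

# Random intercept per woman; visit and time-varying weight as fixed effects.
model = smf.mixedlm("sbp ~ C(visit, Treatment('discharge')) + weight",
                    df, groups=df["id"])
print(model.fit().summary())
```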
170 Developing a Self-Healing Concrete Filler Using Poly(Methyl Methacrylate) Based Two-Part Adhesive
Authors: Shima Taheri, Simon Clark
Abstract:
Concrete is an essential building material used in the majority of structures. Degradation of concrete over time increases the life-cycle cost of an asset, with an estimated annual cost of billions of dollars to national economies. Most concrete failures occur due to cracks, which propagate through a structure and cause weakening leading to failure. Stopping crack propagation is thus the key to protecting concrete structures from failure and is the best way to prevent inconveniences and catastrophes. Furthermore, the majority of cracks occur deep within the concrete in inaccessible areas and are invisible to normal inspection. Few materials intrinsically possess self-healing ability, but one that does is concrete. However, self-healing in concrete is limited to small dormant cracks in a moist environment and is difficult to control. In this project, we developed a method for the self-healing of nascent fractures in concrete components through the automatic release of self-curing healing agents encapsulated in breakable nano- and micro-structures. The poly(methyl methacrylate) (PMMA) based two-part adhesive is encapsulated in core-shell structures with a brittle/weak inert shell, synthesized via miniemulsion/solvent evaporation polymerization. Stress fields associated with propagating cracks can break these capsules, releasing the healing agents at the point where they are needed. The shell thickness plays an important role in preserving the content until the final setting of the concrete. The capsules can also be surface-functionalized with carboxyl groups to overcome homogeneous mixing issues. Currently, this formulated self-healing system can replace up to 1% of cement in a concrete formulation. Increasing this amount to 5-7% of the concrete formulation, without compromising compressive strength and shrinkage properties, is still under investigation. This self-healing system will not only increase the durability of structures by stopping crack propagation but also allow the use of less cement in concrete construction, thereby adding to the global effort for CO₂ emission reduction.
Keywords: self-healing concrete, concrete crack, concrete deterioration, durability
169 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks
Authors: Ahmed Abdullah Ahmed
Abstract:
The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination, and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made in previous studies to come up with various methods, their performance, especially in terms of accuracy, has fallen short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to split the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create a beginning cluster, and similarly, the ending strokes are grouped to create an ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. Writings under study are then represented by computing the probability of occurrence of the codebook patterns, and this probability distribution is used to characterize each writer. Two writings are then compared by computing the distance between their respective probability distributions. Evaluations were carried out on the standard ICFHR dataset of 206 writers, using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments
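A minimal sketch of the final comparison step: each writing becomes a histogram of codebook-pattern occurrences, and writings are compared by a histogram distance. The chi-square distance below is a common choice for occurrence histograms, used here as an assumption since the abstract does not name its exact measure:

```python
import numpy as np

def codebook_histogram(fragment_ids, codebook_size):
    """Probability of occurrence of each codebook pattern in one writing."""
    counts = np.bincount(fragment_ids, minlength=codebook_size)
    return counts / counts.sum()

def chi2_distance(p, q, eps=1e-12):
    # Chi-square distance between two occurrence histograms.
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Hypothetical codebook-pattern indices for two writings, codebook size 8.
writing_a = codebook_histogram(np.array([0, 3, 3, 7, 1, 3]), 8)
writing_b = codebook_histogram(np.array([2, 3, 5, 7, 7, 7]), 8)
print(chi2_distance(writing_a, writing_b))
```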
168 A Bayesian Approach for Analyzing Academic Article Structure
Authors: Jia-Lien Hsu, Chiung-Wen Chang
Abstract:
Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, considering extended abstracts, we observe that an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, which is an abstract with distinct and labeled sections (e.g., Introduction, Methods, Results, Discussions) for rapid comprehension. This paper introduces a method for the computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in the abstracts and introductions of research documents, instead of a manual, time-consuming, and labor-intensive analysis process. In our approach, sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (i.e., B-P-M-R-C in this paper) to reveal their rhetorical status. As a result, it is expected that the automatic analytical tool for move structures will help non-native speakers or novice writers to be aware of appropriate move structures and internalize relevant knowledge to improve their writing. In this paper, we propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a couple of given initial patterns and the corpus, a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns. Subsequently, with respect to the corpus, we process each document one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts. In our experiments, the proposed approach reaches a promising accuracy of 56%.
Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach
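A minimal sketch of Bayesian move tagging in this spirit, using a multinomial naive Bayes text classifier from scikit-learn; the training sentences, bag-of-words features, and classifier choice are illustrative assumptions, not the paper's model:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# One toy example sentence per move tag (B-P-M-R-C).
train_sentences = [
    "Little is known about X in Y settings.",          # Background
    "This paper aims to investigate X.",               # Purpose
    "We conducted a randomized experiment with ...",   # Method
    "Accuracy reached 56% on the held-out set.",       # Result
    "These findings suggest that X is feasible.",      # Conclusion
]
train_tags = ["B", "P", "M", "R", "C"]

tagger = make_pipeline(CountVectorizer(), MultinomialNB())
tagger.fit(train_sentences, train_tags)
print(tagger.predict(["The goal of this study is to examine Z."]))
# With these toy data the expected output is ['P'].
```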
167 Pollution Associated with Combustion in Firewood (Eucalyptus) and Pellet (Radiata Pine) Stoves: Effect of UVA Irradiation
Authors: Y. Vásquez, F. Reyes, P. Oyola, M. Rubio, J. Muñoz, E. Lissi
Abstract:
In several cities in Chile there is significant urban pollution, particularly in Santiago and in cities in the south, where biomass is used as fuel for heating and cooking in a large proportion of homes. This has generated interest in knowing which factors can be modulated to control the level of pollution. In this project, a photochemical chamber (14 m³) was conditioned and set up, equipped with gas monitors (e.g., CO, NOx, O₃) and PM monitors (e.g., DustTrak, DMPS, Harvard impactors). This volume could be exposed to UVA lamps, producing a spectrum similar to that generated by the sun. In this chamber, the PM and gas emissions associated with biomass burning were studied in the presence and absence of radiation. From the comparative analysis of a wood stove (Eucalyptus globulus) and a pellet stove (radiata pine), it can be concluded, to a first approximation, that 9-nitroanthracene, 4-nitropyrene, levoglucosan, water-soluble potassium, and CO present the characteristics of tracers. However, some of them show properties that interfere with this possibility. For example, levoglucosan is decomposed by radiation. 9-Nitroanthracene and 4-nitropyrene are both emitted and formed under radiation. 9-Nitroanthracene has a vapor pressure that leads to partitioning between the gas phase and particulate matter. From this analysis, it can be concluded that K⁺ is the compound that best meets the known properties of a tracer. The PM2.5 emission measured from the automatic pellet stove used in this project was two orders of magnitude smaller than that registered from the manual wood stove. This has encouraged the use of pellet stoves for indoor heating, particularly in south-central Chile. However, the use of pellets is not without problems, since pellet stoves generate high concentrations of nitro-PAHs (secondary organic contaminants), in particular 4-nitropyrene, a compound of high toxicity. Moreover, the primary and secondary particulate matter associated with pellet burning shows a decrease in the size distribution of the PM, which leads to deeper penetration of the particles and their toxic components into the respiratory system.
Keywords: biomass burning, photochemical chamber, particulate matter, tracers
166 Automatic Generation of Census Enumeration Area and National Sampling Frame to Achieve Sustainable Development Goals
Authors: Sarchil H. Qader, Andrew Harfoot, Mathias Kuepie, Sabrina Juran, Attila Lazar, Andrew J. Tatem
Abstract:
The need for high-quality, reliable, and timely population data, including demographic information, to support the achievement of the Sustainable Development Goals (SDGs) in all countries was recognized by the United Nations' 2030 Agenda for Sustainable Development. However, many low- and middle-income countries lack reliable and recent census data. To achieve reliable and accurate census and survey outputs, up-to-date census enumeration areas and digital national sampling frames are critical. Census enumeration areas (EAs) are the smallest geographic units for the collection, dissemination, and analysis of census data and are often used as a national sampling frame to serve various socio-economic surveys. Even for countries that are wealthy and stable, creating and updating EAs is a difficult yet crucial step in preparing for a national census. Such a process is commonly done manually, either by digitizing small geographic units on high-resolution satellite imagery or by walking the boundaries of units, both of which are extremely expensive. We have developed a user-friendly tool that can be employed to generate draft EA boundaries automatically. The tool is based on high-resolution gridded population and settlement datasets, GPS household locations, and building footprints, and uses publicly available natural, man-made, and administrative boundaries. Initial outputs were produced in Burkina Faso, Paraguay, Somalia, Togo, Niger, Guinea, and Zimbabwe. The results indicate that the EAs are in line with international standards, including boundaries that are easily identifiable and follow ground features, have no overlaps, are compact and free of pockets and disjoints, and are nested within administrative boundaries.
Keywords: enumeration areas, national sampling frame, gridded population data, preEA tool
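A toy illustration (not the preEA tool itself) of one core constraint in EA delineation: recursively partitioning a gridded population raster until every candidate unit falls under a population cap. The synthetic grid and the cap are assumptions for the sketch; the real tool additionally snaps boundaries to roads, rivers, and administrative limits:

```python
import numpy as np

def split_into_units(pop, max_pop):
    """Recursively split a gridded population array into rectangular
    units whose population does not exceed max_pop, a toy stand-in
    for delineating enumeration areas on a population raster."""
    units = []
    def recurse(r0, r1, c0, c1):
        block = pop[r0:r1, c0:c1]
        if block.sum() <= max_pop or (r1 - r0 <= 1 and c1 - c0 <= 1):
            units.append((r0, r1, c0, c1))
            return
        if r1 - r0 >= c1 - c0:               # split the longer axis in half
            mid = (r0 + r1) // 2
            recurse(r0, mid, c0, c1); recurse(mid, r1, c0, c1)
        else:
            mid = (c0 + c1) // 2
            recurse(r0, r1, c0, mid); recurse(r0, r1, mid, c1)
    recurse(0, pop.shape[0], 0, pop.shape[1])
    return units

rng = np.random.default_rng(1)
pop = rng.poisson(5, size=(16, 16))          # synthetic 16x16 population grid
print(len(split_into_units(pop, max_pop=120)), "candidate units")
```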
165 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data
Authors: Jian-Heng Wu, Bor-Shen Lin
Abstract:
The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with respect to geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of areas of the ocean. In this paper, the Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model can model the distribution of a random vector and is formulated as a weighted sum of a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of the pairwise Bhattacharyya distances among the Gaussian distributions. Consequently, the distance between two water masses may be measured quickly, which allows the automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior knowledge of an assumed regression family, but may also restrict the complexity by controlling the number of mixture components when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage, and share the temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system of ocean data, which may facilitate the comparison of water masses by aggregating the data without degrading the discriminating capabilities. This system provides an interface for querying geographic locations with similar temperature-salinity-depth characteristics interactively and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
Keywords: water mass, Gaussian mixture model, data visualization, system framework
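A minimal sketch of the mixture-to-mixture distance described above, assuming GMMs fitted with scikit-learn on synthetic temperature-salinity-depth samples. The Bhattacharyya formula for two Gaussians is standard; the synthetic data and component count are our assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def gmm_distance(gmm_a, gmm_b):
    """Weighted sum of pairwise Bhattacharyya distances between the
    Gaussian components of two fitted mixtures."""
    d = 0.0
    for wa, ma, ca in zip(gmm_a.weights_, gmm_a.means_, gmm_a.covariances_):
        for wb, mb, cb in zip(gmm_b.weights_, gmm_b.means_, gmm_b.covariances_):
            d += wa * wb * bhattacharyya(ma, ca, mb, cb)
    return d

rng = np.random.default_rng(0)
# Synthetic temperature (°C), salinity (PSU), depth (m) samples per location.
loc_a = rng.normal([15.0, 34.5, 200.0], [2.0, 0.3, 80.0], size=(500, 3))
loc_b = rng.normal([18.0, 34.8, 150.0], [2.0, 0.3, 60.0], size=(500, 3))
gmm_a = GaussianMixture(n_components=3, random_state=0).fit(loc_a)
gmm_b = GaussianMixture(n_components=3, random_state=0).fit(loc_b)
print(gmm_distance(gmm_a, gmm_b))
```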
164 Artificial Intelligence Protecting Birds against Collisions with Wind Turbines
Authors: Aleksandra Szurlej-Kielanska, Lucyna Pilacka, Dariusz Górecki
Abstract:
The dynamic development of wind energy requires the simultaneous implementation of effective systems minimizing the risk of collisions between birds and wind turbines. Wind turbines are installed in more and more challenging locations, often close to the natural environment of birds, and more and more countries and organizations are defining guidelines for the necessary functionality of such systems. The minimum bird detection distance, trajectory tracking, and shutdown time are key factors in eliminating collisions. Since 2020, we have continued the survey on the validation of subsequent versions of the BPS detection and reaction system. The bird protection system (BPS) is a fully automatic camera system which allows one to estimate the distance of a bird to the turbine, classify its size, and autonomously undertake various actions depending on the bird's distance and flight path. The BPS was installed and tested in a real environment at wind turbines in northern Poland and central Spain. The validation showed that at a distance of up to 300 m the BPS performs at least as well as a skilled ornithologist, and large bird species are successfully detected from over 600 m. In addition, data collected by the BPS systems installed in Spain showed that 60% of the detections of birds of prey were of individuals approaching the turbine, and these detections met the turbine shutdown criteria. Less than 40% of the detections of birds of prey took place at wind speeds below 2 m/s, while the turbines were not working. As shown by the analysis of the data collected by the system over 12 months, the system correctly classified the size of birds with a wingspan of more than 1.1 m in 90% of cases and the size of birds with a wingspan of 0.7-1 m in 80% of cases. The collected data also allow the conclusion that some species keep a certain distance from the turbines at wind speeds of over 8 m/s (Aquila sp., Buteo sp., Gyps sp.), but Gyps sp. and Milvus sp. remained active at this wind speed in the test area. The data collected so far indicate that BPS is effective in detecting and stopping wind turbines in response to the presence of birds of prey with a wingspan of more than 1 m.
Keywords: protecting birds, birds monitoring, wind farms, green energy, sustainable development
163 Analysis of the Impact of Suez Canal on the Robustness of Global Shipping Networks
Abstract:
The Suez Canal plays an important role in global shipping networks and is one of the most frequently used waterways in the world. The canal obstruction by the ship Ever Given in March 2021, however, completely blocked the Suez Canal for a week and caused significant disruption to world trade. Therefore, it is very important to quantitatively analyze the impact of the accident on the robustness of the global shipping network. However, current research on maritime transportation networks is usually limited to local or small-scale networks in a certain region. Based on complex network theory, this study establishes a global shipping complex network covering 2713 nodes and 137830 edges by using the real trajectory data of the global marine transport ship automatic identification system (AIS) in 2018. At the same time, two attack modes, deliberate (Suez Canal blocking) and random, are defined to calculate the changes in network node degree, eccentricity, clustering coefficient, network density, network isolated nodes, betweenness centrality, and closeness centrality under the two attack modes, and to quantitatively analyze the actual impact of the Suez Canal blocking on the robustness of the global shipping network. The results of the network robustness analysis show that the Suez Canal blocking was more destructive to the shipping network than random attacks of the same scale. The network connectivity and accessibility decreased significantly, and the decline decreased with the distance between the port and the canal, showing a distance-attenuation phenomenon. This study further analyzes the impact of the blocking of the Suez Canal on Chinese ports and finds that the blocking significantly interferes with China's shipping network and seriously affects China's normal trade activities. Finally, the impact on the global supply chain is analyzed, and it is found that blocking the canal seriously damages the normal operation of the global supply chain.
Keywords: global shipping networks, ship AIS trajectory data, main channel, complex network, eigenvalue change
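A toy sketch of the deliberate-versus-random attack comparison on a synthetic scale-free graph, a stand-in for the AIS-derived network (which is not public), using networkx. The graph size and attack scale are assumptions:

```python
import random
import networkx as nx

G = nx.barabasi_albert_graph(500, 3, seed=0)   # stand-in scale-free network
N = G.number_of_nodes()

def largest_cc_fraction(graph):
    """Largest connected component as a share of the original node
    count, a simple connectivity/robustness measure."""
    return max(len(c) for c in nx.connected_components(graph)) / N

# Deliberate attack: remove the 10 highest-degree hubs (canal-like nodes).
hubs = [n for n, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]]
deliberate = G.copy()
deliberate.remove_nodes_from(hubs)

# Random attack of the same scale.
randomised = G.copy()
randomised.remove_nodes_from(random.Random(0).sample(list(G.nodes), 10))

print("after deliberate attack:", largest_cc_fraction(deliberate))
print("after random attack:   ", largest_cc_fraction(randomised))
```

On such graphs the hub attack fragments the network far more than the random one, mirroring the paper's finding that blocking a heavily used waterway is worse than a random disruption of equal size.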
162 YOLO-Based Object Detection for the Automatic Classification of Intestinal Organoids
Authors: Luana Conte, Giorgio De Nunzio, Giuseppe Raso, Donato Cascio
Abstract:
The intestinal epithelium serves as a pivotal model for studying stem cell biology and diseases such as colorectal cancer. Intestinal epithelial organoids, which replicate many in vivo features of the intestinal epithelium, are increasingly used as research models. However, manual classification of organoids is labor-intensive and prone to subjectivity, limiting scalability. In this study, we developed an automated object-detection algorithm to classify intestinal organoids in transmitted-light microscopy images. Our approach utilizes the YOLOv10 medium model (YOLOv10m), a state-of-the-art object-detection algorithm, to predict and classify objects within labeled bounding boxes. The model was fine-tuned on a publicly available dataset containing 840 manually annotated images with 23,066 total annotations, averaging 28.2 annotations per image (median: 21; range: 1–137). It was trained to identify four categories: cysts, early organoids, late organoids, and spheroids, using a 90:10 train-validation split over 150 epochs. Model performance was assessed using mean average precision (mAP), precision, and recall metrics. The mAP, a standard metric ranging from 0 to 1 (with 1 indicating perfect agreement with manual labeling), was calculated at a 50% overlap threshold (mAP@0.5). Optimal performance was achieved at epoch 80, with an mAP of 0.85, precision of 0.78, and recall of 0.80 on the validation dataset. Class-specific mAP values were highest for cysts (0.87), followed by late organoids (0.83), early organoids (0.76), and spheroids (0.68). Additionally, the model demonstrated the ability to measure organoid sizes and classify them with accuracy comparable to expert scientists, while operating significantly faster. This automated pipeline represents a robust tool for large-scale, high-throughput analysis of intestinal organoids, paving the way for more efficient research in organoid biology and related fields.
Keywords: intestinal organoids, object detection, YOLOv10, transmitted-light microscopy
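A minimal sketch of this kind of fine-tuning using the ultralytics package, which distributes YOLOv10 checkpoints. The dataset YAML name, class names, and image size are assumptions, and the authors' exact training configuration may differ:

```python
from ultralytics import YOLO

# Fine-tune the YOLOv10 medium checkpoint on a four-class organoid
# dataset described by a (hypothetical) organoids.yaml, e.g.
# names: [cyst, early_organoid, late_organoid, spheroid]
model = YOLO("yolov10m.pt")
model.train(data="organoids.yaml", epochs=150, imgsz=640)

metrics = model.val()        # evaluate on the 10% validation split
print(metrics.box.map50)     # mAP at the 50% IoU overlap threshold
```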
161 Design of a Simple Smart Greenhouse for Optimized Pak Choi Cultivation in Rural Tropical Areas
Authors: Dedie Tooy, Rio Kolibu, Rio Putra, Herry Frits Pinatik, Daniel P. M. Ludong
Abstract:
This study presents the design and development of a smart greenhouse prototype tailored to optimize pak choi (Brassica chinensis L.) cultivation in tropical rural climates. Pak choi, a high-demand leafy vegetable in Indonesia, often experiences suboptimal growth due to elevated temperatures and humidity. A smart greenhouse provides a controlled environment to stabilize these conditions, but managing fluctuating temperature, humidity, and light in tropical regions remains challenging. The developed system regulates critical environmental factors, including temperature, humidity, irrigation, and light, creating optimal conditions for pak choi. The prototype's effectiveness was evaluated by monitoring growth indicators such as leaf weight, freshness, and moisture content, alongside the consistency of the internal climate compared to external conditions. Results indicate that the smart greenhouse supports superior crop growth, enhances yield quality, and reduces environmental resource consumption. The irrigation control system was tested for 40 days, during which the automatic system was observed to work according to the sensor readings. The temperature control system also worked as designed: when the air temperature in the greenhouse rises above 33 °C, the condensation pump turns on, and when the temperature falls below 32 °C, the pump automatically turns off; this cycle repeats continuously. With this system, the pak choi survived the full 40 days. As part of our ongoing research, we are actively considering integrating double-layered roofs to improve insulation and reduce external temperature fluctuations, which could further enhance the effectiveness of the smart greenhouse.
Keywords: smart greenhouse, horticulture, rural tropical climate, sustainable agriculture
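A minimal sketch of the on/off control with hysteresis described above (pump on above 33 °C, off below 32 °C); the temperature trace is simulated, and real sensor and pump I/O are replaced by a print:

```python
PUMP_ON_ABOVE = 33.0    # °C: threshold to switch the condensation pump on
PUMP_OFF_BELOW = 32.0   # °C: threshold to switch it off again

# Simulated greenhouse temperature readings, one per control period.
trace = [31.5, 32.4, 33.2, 33.6, 32.8, 32.1, 31.8, 33.4]

pump_on = False
for t in trace:
    if not pump_on and t > PUMP_ON_ABOVE:
        pump_on = True          # too hot: start the pump
    elif pump_on and t < PUMP_OFF_BELOW:
        pump_on = False         # cooled down: stop the pump
    print(f"{t:5.1f} °C -> pump {'ON' if pump_on else 'off'}")
```

The 1 °C gap between the two thresholds is what prevents the pump from rapidly toggling when the temperature hovers near a single set point.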
160 Off-Body Sub-GHz Wireless Channel Characterization for Dairy Cows in Barns
Authors: Said Benaissa, David Plets, Emmeric Tanghe, Jens Trogh, Luc Martens, Leen Vandaele, Annelies Van Nuffel, Frank A. M. Tuyttens, Bart Sonck, Wout Joseph
Abstract:
Herd monitoring and management, in particular the detection of 'attention animals' that require care, treatment, or assistance, is crucial for the effective reproduction status, health, and overall well-being of dairy cows. On large farms, traditional methods based on direct observation or analysis of video recordings become labour-intensive and time-consuming. Thus, automatic monitoring systems using sensors have become increasingly important to continuously and accurately track the health status of dairy cows. Wireless sensor networks (WSNs) and the internet of things (IoT) can be effectively used in the health tracking of dairy cows to facilitate herd management and enhance cow welfare. Since on-cow measuring devices are energy-constrained, a proper characterization of the off-body wireless channel between the on-cow sensor nodes and the back-end base station is required for a power-optimized deployment of these networks in barns. The aim of this study was to characterize the off-body wireless channel in an indoor (barn) environment at 868 MHz using LoRa nodes. LoRa is an emerging wireless technology mainly targeted at WSNs and IoT networks. Both large-scale fading (i.e., path loss) and temporal fading were investigated. The obtained path loss values as a function of the transmitter-receiver separation were well fitted by a lognormal path loss model. The path loss showed an additional increase of 4 dB when the wireless node was actually worn by the cow. The temporal fading due to the movement of other cows was well described by Rician distributions with a K-factor of 8.5 dB. Based on this characterization, network planning and energy consumption optimization of the on-body wireless nodes can be performed, which enables the deployment of reliable dairy cow monitoring systems.
Keywords: channel, channel modelling, cow monitoring, dairy cows, health monitoring, IoT, LoRa, off-body propagation, PLF, propagation
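A minimal sketch of fitting the log-distance (lognormal) path loss model, PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma, by least squares. The distances, exponent, and measurements below are synthetic, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
d = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])    # Tx-Rx separations (m)
n_true, pl0_true = 2.3, 40.0                       # "true" synthetic channel
pl = pl0_true + 10 * n_true * np.log10(d) + rng.normal(0, 2.5, d.size)

# Least-squares fit of the path loss exponent n and intercept PL(d0 = 1 m).
A = np.column_stack([10 * np.log10(d), np.ones_like(d)])
theta, *_ = np.linalg.lstsq(A, pl, rcond=None)
n_hat, pl0_hat = theta
sigma_hat = np.std(pl - A @ theta)                 # shadowing std dev (dB)
print(f"n = {n_hat:.2f}, PL(d0) = {pl0_hat:.1f} dB, sigma = {sigma_hat:.1f} dB")
```

The 4 dB on-cow increase reported above would appear in such a fit as a shift of the intercept PL(d0) between the worn and not-worn measurement sets.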
159 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions
Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez
Abstract:
In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (approximately 0 s (anechoic), 1, 2, and 3 s), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval
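A toy sketch of a quarter-tone (24 steps per octave) triangular filterbank with each filter normalized over its own support. This is our reading of the LNQT idea; the authors' exact local-normalization scheme may differ:

```python
import numpy as np

def quarter_tone_filterbank(sr=44100, n_fft=8192, f_min=65.4, n_filters=96):
    """Triangular filters centred at quarter-tone (24 per octave) spacing."""
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    centers = f_min * 2 ** (np.arange(n_filters + 2) / 24.0)
    fb = np.zeros((n_filters, freqs.size))
    for i in range(n_filters):
        lo, mid, hi = centers[i], centers[i + 1], centers[i + 2]
        up = (freqs - lo) / (mid - lo)
        down = (hi - freqs) / (hi - mid)
        fb[i] = np.clip(np.minimum(up, down), 0.0, None)
        s = fb[i].sum()
        if s > 0:            # local normalization: unit area per filter
            fb[i] /= s       # (narrow low filters may be empty at this n_fft)
    return fb

fb = quarter_tone_filterbank()
spectrum = np.abs(np.fft.rfft(np.random.randn(8192)))
lnqt_frame = fb @ spectrum   # one 96-band quarter-tone spectrogram frame
print(lnqt_frame.shape)
```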
158 Development of a Robot Assisted Centrifugal Casting Machine for Manufacturing Multi-Layer Journal Bearing and High-Tech Machine Components
Authors: Mohammad Syed Ali Molla, Mohammed Azim, Mohammad Esharuzzaman
Abstract:
A centrifugal casting machine is used in manufacturing special machine components like the multi-layer journal bearings used in all internal combustion engines, steam and gas turbines, and aircraft turbo-engines, where isotropic properties and high precision are desired. Moreover, this machine can be used in manufacturing thin-walled high-tech machine components like the cylinder liners and piston rings of IC engines, and other machine parts like sleeves and bushes. Heavy-duty machine components like railway wheels can also be prepared by centrifugal casting. Many technological developments are required in the casting process for the production of good cast machine bodies and machine parts. Usually, defects like blowholes, surface roughness, chilled surfaces, etc. are found in sand-cast machine parts, but these can be removed by a centrifugal casting machine using a rotating metallic die. Moreover, die rotation, its temperature control, and good pouring practice can contribute to the quality of the casting, because the soundness of a casting in large part depends upon how the metal enters the mold or die and solidifies. Poor pouring practice leads to a variety of casting defects such as temperature loss, low-quality casting, excessive turbulence, over-pouring, etc. Besides this, the handling of molten metal is very unsafe and dangerous for workers. In order to get rid of all these problems, the need for an automatic pouring device arises. In this research work, a robot-assisted pouring device and a centrifugal casting machine are designed, developed, constructed, and tested experimentally, and are found to work satisfactorily. The robot-assisted pouring device is further modified and developed for use in the actual metal casting process. Many settings and tests are required to control the system, and ultimately it can be used in the automation of centrifugal casting machines to produce high-tech machine parts with the desired precision.
Keywords: bearing, centrifugal casting, cylinder liners, robot
157 Quality of So-Called Organic Fertilizers in Vietnam's Market
Authors: Hoang Thi Quynh, Shima Kazuto
Abstract:
Organic farming is gaining interest in Vietnam. However, organic fertilizer production is not sufficiently regulated, resulting in products of unknown quality. This study investigated the characteristics of so-called organic fertilizers in Vietnam's market and their mineralization in the soil-plant system. We collected 15 commercial products (11 domestic and 4 imported) labelled 'organic fertilizer' in the market and analyzed their nutrient composition. A 20-day incubation experiment was carried out with 80 g of sandy-textured soil, amended with fertilizer at a rate of 109.4 mg N kg⁻¹ soil, in a 150 mL glass bottle at 25 °C. We categorized the products according to nutrient content and mineralization rate, and then selected 8 samples for a cultivation experiment. The experiment was conducted by growing komatsuna (Brassica campestris) in sandy-textured soil using an automatic watering apparatus in a greenhouse. The fertilizers were applied to the top one-third of the soil stratum at a rate of 200 mg N kg⁻¹ soil. Our study also analyzed the material flow of coffee husk compost in the Central Highlands of Vietnam. Total N, P, K, Ca, Mg and the C:N ratio varied greatly across the domestic products, whereas they were quite similar among the imported materials. The proportion of inorganic N to total N in the domestic products was higher than 25% in 8 of 11 samples, indicating that the N concentration had increased dramatically in most domestic products compared with their raw materials. Additionally, most domestic products contained less P, and their proportions of Truog-P to total P differed greatly, implying that some manufacturers adjusted the P concentration while others did not. Furthermore, the compost was made by mixing in chemical substances to increase the nutrient content (N, P), and surplus construction soil was added to gain weight before packing the product for sale in the market as 'organic fertilizer'. There was a negative correlation between the C:N ratio and the mineralization rate of the fertilizers. There was a significant difference in N efficiency among the fertilizer treatments: the N efficiency of most domestic products was higher than that of chemical fertilizer and of the imported organic fertilizers. These results suggest that regulations on organic fertilizer production, based on internationally accepted standards, are needed to support organic farming in Vietnam.
Keywords: inorganic N, mineralization, N efficiency, so-called organic fertilizers, Vietnam's market
156 An Efficient Hardware/Software Workflow for Multi-Cores Simulink Applications
Authors: Asma Rebaya, Kaouther Gasmi, Imen Amari, Salem Hasnaoui
Abstract:
Over the last years, applications such as telecommunications, signal processing, and digital communication with advanced features (multi-antenna, equalization, etc.) have witnessed a rapid evolution accompanied by an increase in user requirements in terms of latency, computational power, and so on. To satisfy these requirements, the use of hardware/software systems is a common solution, where the hardware is composed of multiple cores and the software is represented by models of computation, for instance synchronous data flow (SDF) graphs. Moreover, most embedded-system designers use Simulink for modeling. The issue is how to simplify the C code generation, for a multi-core platform, of an application modeled in Simulink. To overcome this problem, we propose a workflow allowing an automatic transformation from the Simulink model to the SDF graph and providing an efficient schedule that optimizes the number of cores and minimizes latency. This workflow starts from a Simulink application and a hardware architecture described in the IP-XACT language. Based on the synchronous and hierarchical behavior of both models, the Simulink block diagram is automatically transformed into an SDF graph. Once this process is successfully achieved, the scheduler calculates the optimal number of cores needed by minimizing the maximum density of the whole application. Then, a core is chosen to execute a specific graph task in a specific order, and subsequently compatible C code is generated. In order to realize this proposal, we extend Preesm, a rapid prototyping tool, to take the Simulink model as input and to support the optimal schedule. Afterward, we compared our results to those of Preesm, using a simple illustrative application. The comparison shows that our results strictly dominate the Preesm results in terms of number of cores and latency: if Preesm needs m processors and latency L, our workflow needs m' ≤ m processors and latency L' < L.
Keywords: hardware/software system, latency, modeling, multi-cores platform, scheduler, SDF graph, Simulink model, workflow
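A minimal sketch of one core SDF-graph computation that any such scheduler relies on: solving the balance equations for the repetition vector, here on a toy three-actor graph (not the paper's Simulink-derived graphs):

```python
from fractions import Fraction
from math import lcm

# Edges: (producer, consumer, tokens_produced, tokens_consumed).
edges = [("A", "B", 2, 3), ("B", "C", 1, 2)]
actors = ["A", "B", "C"]

# Solve q[producer] * produced == q[consumer] * consumed for every edge.
q = {actors[0]: Fraction(1)}
changed = True
while changed:
    changed = False
    for src, dst, p, c in edges:
        if src in q and dst not in q:
            q[dst] = q[src] * p / c; changed = True
        elif dst in q and src not in q:
            q[src] = q[dst] * c / p; changed = True

denom = lcm(*(f.denominator for f in q.values()))
repetitions = {a: int(f * denom) for a, f in q.items()}
print(repetitions)   # {'A': 3, 'B': 2, 'C': 1}
```

The repetition vector gives each actor's firing count per schedule iteration, which is the basic quantity a multi-core scheduler distributes across cores.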
155 150 KVA Multifunction Laboratory Test Unit Based on Power-Frequency Converter
Authors: Bartosz Kedra, Robert Malkowski
Abstract:
This paper provides a description and presentation of a laboratory test unit built on a 150 kVA power-frequency converter and the Simulink Real-Time platform. The assumptions determining which load and generator types may be simulated using the discussed device are presented, as well as the control algorithm structure. As the laboratory setup contains a transformer with a thyristor-controlled tap changer, a wider scope of setup capabilities is presented. Information about the communication interface used, the data maintenance and storage solution, and the Simulink Real-Time features used is provided, together with a list and description of all measurements. The potential for modifications of the laboratory setup is evaluated. For the purposes of rapid control prototyping, a dedicated environment, Simulink Real-Time, was used; the load model Functional Unit Controller is therefore based on a PC with I/O cards and Simulink Real-Time software. Simulink Real-Time was used to create real-time applications directly from Simulink models. In the next step, the applications were loaded on a target computer connected to physical devices, which provided the opportunity to perform hardware-in-the-loop (HIL) tests as well as the mentioned rapid control prototyping process. With Simulink Real-Time, Simulink models were extended with I/O card driver blocks that made possible the automatic generation of real-time applications and the performance of interactive or automated runs on a dedicated target computer equipped with a real-time kernel, a multicore CPU, and I/O cards. The results of the performed laboratory tests are presented. Different load configurations are described, and experimental results are given, including the simulation of underfrequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of groups of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule.
Keywords: MATLAB, power converter, Simulink Real-Time, thyristor-controlled tap changer
154 An Advanced Automated Brain Tumor Diagnostics Approach
Authors: Berkan Ural, Arif Eser, Sinan Apaydin
Abstract:
Medical image processing has generally become a challenging task nowadays. Indeed, the processing of brain MRI images is one of the more difficult parts of this area. This study proposes a well-defined hybrid approach consisting of tumor detection, extraction, and analysis steps. The approach is mainly built around a computer-aided diagnostics system for identifying and detecting tumor formation in any region of the brain; the system is used for early prediction of brain tumors using advanced image processing and probabilistic neural network methods. Advanced noise removal functions and image processing methods such as automatic segmentation and morphological operations are used to detect the brain tumor boundaries and to obtain the important feature parameters of the tumor region. All stages of the approach are implemented in MATLAB. First, the tumor is detected and the tumor area is contoured with a colored circle by the computer-aided diagnostics program. Then, the tumor is segmented, and morphological processes are applied to increase the visibility of the tumor area. As this process continues, the tumor area and important shape-based features are also calculated. Finally, using the probabilistic neural network method and advanced classification steps, the tumor area and the type of the tumor are obtained. A future aim of this study is to detect the severity of lesions through brain tumor classes, achieved through advanced multi-classification and neural network stages, and to create a user-friendly environment using a GUI in MATLAB. In the experimental part of the study, 100 images were used to train the diagnostics system and 100 out-of-sample images were used to test and check the overall results. The preliminary results demonstrate high classification accuracy for the neural network structure. These results also motivate us to extend this framework to detect and localize tumors in other organs.
Keywords: image processing algorithms, magnetic resonance imaging, neural network, pattern recognition
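The approach itself is implemented in MATLAB; as a language-neutral illustration of the thresholding and morphological clean-up stage, here is a toy scikit-image equivalent on a synthetic image (all values and parameters are assumptions):

```python
import numpy as np
from skimage import filters, measure, morphology

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (128, 128))                 # noisy background
rr, cc = np.ogrid[:128, :128]
img[(rr - 64) ** 2 + (cc - 80) ** 2 < 15 ** 2] += 0.5   # bright "lesion"

mask = img > filters.threshold_otsu(img)                     # segmentation
mask = morphology.binary_opening(mask, morphology.disk(2))   # remove specks
mask = morphology.binary_closing(mask, morphology.disk(2))   # fill small gaps

for region in measure.regionprops(measure.label(mask)):
    # Shape-based features of each candidate tumor region.
    print(region.area, region.eccentricity, region.centroid)
```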
153 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique
Authors: Ferdinando Montemari, Antonio Vitale, Nicola Genito, Giovanni Cuciniello
Abstract:
The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such vehicles. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases, such as the conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions. The latter are introduced to best fit the actual rotor behavior and balance the differences existing between the helicopter and the tilt-rotor during flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA tilt-rotor, generated using a high-fidelity simulation model implemented in the FlightLab environment. The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.
Keywords: flapping dynamics, flight dynamics, system identification, tilt-rotor modeling and simulation
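A toy sketch of time-domain identification of one such input-output transfer function: a least-squares ARX fit on synthetic data. The model orders and coefficients are invented, and no ERICA flight data are used:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=400)                   # autopilot reference input
y = np.zeros(400)
for k in range(2, 400):                    # "true" 2nd-order ARX system
    y[k] = (1.5 * y[k-1] - 0.7 * y[k-2]
            + 0.5 * u[k-1] + 0.2 * u[k-2] + rng.normal(0, 0.05))

# Regression: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)   # estimates roughly [1.5, -0.7, 0.5, 0.2]
```

Repeating such fits at several conversion angles and interpolating the coefficients is one way to obtain the scheduled transfer functions the abstract describes.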
152 A Normalized Non-Stationary Wavelet Based Analysis Approach for a Computer Assisted Classification of Laryngoscopic High-Speed Video Recordings
Authors: Mona K. Fehling, Jakob Unger, Dietmar J. Hecker, Bernhard Schick, Joerg Lohscheller
Abstract:
Voice disorders originate from disturbances of the vibration patterns of the two vocal folds located within the human larynx. Consequently, the visual examination of vocal fold vibrations is an integral part of the clinical diagnostic process. For an objective analysis of the vocal fold vibration patterns, the two-dimensional vocal fold dynamics are captured during sustained phonation using an endoscopic high-speed camera. In this work, we present an approach allowing a fully automatic analysis of the high-speed video data, including a computerized classification of healthy and pathological voices. The approach is based on a wavelet-based analysis of so-called phonovibrograms (PVG), which are extracted from the high-speed videos and comprise the entire two-dimensional vibration pattern of each vocal fold individually. Using a principal component analysis (PCA) strategy, a low-dimensional feature set is computed from each phonovibrogram. From the PCA space, clinically relevant measures can be derived that objectively quantify vibration abnormalities. In the first part of the work, it is shown that, using a machine learning approach, the derived measures are suitable for distinguishing automatically between healthy and pathological voices. Within the approach, the formation of the PCA space, and consequently the extracted quantitative measures, depend on the clinical data used to compute the principal components. Therefore, in the second part of the work, we propose a strategy to achieve a normalization of the PCA space by registering it to a coordinate system using a set of synthetically generated vibration patterns. The results show that, owing to the normalization step, potential ambiguity of the parameter space can be eliminated. The normalization further allows a direct comparison of research results based on PCA spaces obtained from different clinical subjects.
Keywords: wavelet-based analysis, multiscale product, normalization, computer-assisted classification, high-speed laryngoscopy, vocal fold analysis, phonovibrogram
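A toy sketch of the PCA-plus-classifier stage, with synthetic stand-ins for flattened phonovibrogram features; the classifier choice is an assumption, since the abstract does not name one:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical flattened PVG features: 30 healthy + 30 pathological voices.
healthy = rng.normal(0.0, 1.0, (30, 2000))
pathological = rng.normal(0.4, 1.2, (30, 2000))
X = np.vstack([healthy, pathological])
y = np.array([0] * 30 + [1] * 30)

# Low-dimensional PCA feature set followed by a classifier.
clf = make_pipeline(PCA(n_components=10), SVC())
print(cross_val_score(clf, X, y, cv=5).mean())
```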
Procedia PDF Downloads 266
151 Smart Automated Furrow Irrigation: A Preliminary Evaluation
Authors: Jasim Uddin, Rod Smith, Malcolm Gillies
Abstract:
Surface irrigation is the most popular irrigation method all over the world. However, two issues concern irrigators: low efficiency and heavy labour requirements, both made more pressing by increasing water and labour scarcity in recent years. To address these issues, a smart automated furrow irrigation system that can be operated from digital devices such as a smartphone, iPad, or computer was conceptualised, and a preliminary evaluation was conducted in this study. The smart automated system is an integration of commercially available software and hardware. It includes the real-time surface irrigation optimisation software SISCO and Rubicon Water's surface irrigation automation hardware and software. The automated system consists of an automatic water delivery system with 300 mm flexible pipes attached to both sides of a remotely controlled valve to operate the irrigation; a water level sensor to obtain the real-time inflow rate from the measured head in the channel; advance sensors to measure the advance time to particular points of the irrigated field; and a solar-powered telemetry system, including a base station, to link all the field sensors with the main server. On the basis of these field data, the software (SISCO) optimises the ongoing irrigation, determines the optimum cut-off time for the particular irrigation, and sends this information to the control valve so that the irrigation is stopped at the computed cut-off time. The preliminary evaluation shows that the automated surface irrigation worked reasonably well without manual intervention. The evaluation of farmer-managed irrigation events shows the potential to save a significant amount of water and labour. Substantial economic and social benefits are expected in rural industries from adopting this system. The future outcome of this work would be a fully tested commercial adaptive real-time furrow irrigation system able to compete with the pressurised alternatives of centre pivot or lateral move machines on capital cost and water and labour savings, but without the massive energy costs.
Keywords: furrow irrigation, smart automation, infiltration, SISCO, real-time irrigation, adaptive control
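The cut-off computation at the heart of such a system can be illustrated with a toy calculation: fit a power-law advance curve to the advance-sensor readings and close the valve when the wetting front is predicted to reach a set fraction of the field length. This is a deliberately simplified stand-in for SISCO's actual real-time optimisation; the power-law model, the 90% rule, and all numbers are assumptions for illustration only.

```python
import numpy as np

def fit_advance_curve(times, distances):
    """Fit the classic power-law advance function x = p * t**r
    by linear regression in log-log space."""
    r, ln_p = np.polyfit(np.log(times), np.log(distances), 1)
    return np.exp(ln_p), r

def cutoff_time(p, r, field_length, fraction=0.9):
    """Time at which the advance front reaches `fraction` of the
    field length -- a simple rule-of-thumb cut-off criterion."""
    return (fraction * field_length / p) ** (1.0 / r)

# Advance-sensor readings: minutes since inflow start -> metres advanced.
t_obs = np.array([10.0, 25.0, 45.0])
x_obs = np.array([80.0, 160.0, 240.0])

p, r = fit_advance_curve(t_obs, x_obs)
print(f"close valve at t = {cutoff_time(p, r, field_length=400.0):.1f} min")
```

In a deployed system, this computed cut-off time would be pushed over the telemetry link to the remotely controlled valve rather than printed.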
Procedia PDF Downloads 453
150 Investigating Visual Statistical Learning during Aging Using the Eye-Tracking Method
Authors: Zahra Kazemi Saleh, Bénédicte Poulin-Charronnat, Annie Vinter
Abstract:
This study examines the effects of aging on visual statistical learning, using eye-tracking techniques to investigate this cognitive phenomenon. Visual statistical learning is a fundamental brain function that enables the automatic and implicit recognition, processing, and internalization of environmental patterns over time. Some previous research has suggested the robustness of this learning mechanism throughout the aging process, underscoring its importance in the context of education and rehabilitation for the elderly. The study included three distinct groups of participants: 21 young adults (mean age 19.73), 20 young-old adults (mean age 67.22), and 17 old-old adults (mean age 79.34). Participants were exposed to a series of 12 arbitrary black shapes organized into 6 pairs, each with a different spatial configuration and orientation (horizontal, vertical, or oblique). These pairs were not explicitly revealed to the participants, who were instructed to passively observe 144 grids presented sequentially on the screen for a total duration of 7 min. In the subsequent test phase, participants performed a two-alternative forced-choice task in which they had to identify the most familiar pair in 48 trials, each consisting of a base pair and a non-base pair. Behavioral analysis using t-tests revealed notable findings. The mean score of the first group was significantly above chance, indicating the presence of visual statistical learning. Similarly, the second group also performed significantly above chance, confirming the persistence of visual statistical learning in young-old adults. Conversely, the third group, consisting of old-old adults, showed a mean score that was not significantly above chance. This lack of statistical learning in the old-old group suggests a decline in this cognitive ability with age. Preliminary eye-tracking results showed a decrease in the number and duration of fixations during the exposure phase for all groups. The main difference was that older participants fixated empty cells of the grid more often than younger participants, likely due to a decline in the ability to ignore irrelevant information, resulting in decreased statistical learning performance.
Keywords: aging, eye tracking, implicit learning, visual statistical learning
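The behavioral test reported here reduces to a one-sample t-test of each group's mean two-alternative forced-choice accuracy against the 0.5 chance level. A minimal sketch of that comparison follows; the per-participant accuracy scores are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

CHANCE = 0.5  # two-alternative forced choice

# Hypothetical per-participant accuracies (proportion correct over
# the 48 test trials); illustrative values only.
groups = {
    "young adults":     np.array([0.65, 0.71, 0.58, 0.62, 0.69, 0.60]),
    "young-old adults": np.array([0.60, 0.57, 0.63, 0.55, 0.66, 0.59]),
    "old-old adults":   np.array([0.52, 0.48, 0.55, 0.50, 0.47, 0.53]),
}

for name, scores in groups.items():
    # One-sample t-test of mean accuracy against chance level.
    t, p = stats.ttest_1samp(scores, CHANCE)
    print(f"{name}: mean = {scores.mean():.3f}, t = {t:.2f}, p = {p:.4f}")
```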
Procedia PDF Downloads 78
149 Evaluation of Alpha-Glucosidase Inhibitory Effect of Two Plants from Brazilian Cerrado
Authors: N. A. P. Camaforte, P. M. P. Vareda, L. L. Saldanha, A. L. Dokkedal, J. M. Rezende-Neto, M. R. Senger, F. P. Silva-Jr, J. R. Bosqueiro
Abstract:
Diabetes mellitus is a disease characterized by deficiency of insulin secretion and/or action, which results in hyperglycemia. Nowadays, acarbose is a medicine used by diabetic patients to inhibit alpha-glucosidases, thereby decreasing post-feeding glycaemia, but it has low effectiveness and many side effects. Medicinal plants have been used for the treatment of many diseases, including diabetes, and their action occurs through the modulation of insulin-dependent processes, pancreas regeneration, or inhibition of glucose absorption by the intestine. Previous studies in our laboratory showed that treatment with two crude extracts of plants from the Brazilian cerrado was able to decrease fasting blood glucose and improve glucose tolerance in streptozotocin-diabetic mice. Because of this, and given the importance of the search for new alternatives to decrease hyperglycemia, we decided to evaluate the inhibitory action of two plants from the Brazilian cerrado: B.H. and Myrcia bella. The enzymatic assay was performed in a final volume of 50 µL using pancreatic α-amylase and maltase together with their commercial substrates. The inhibition potency (IC50) was determined by incubating eight different concentrations of each extract with the enzymes for 5 minutes at 37 °C, after which the substrate was added to start the reaction. The glucosidase assay was evaluated by measuring the quantity of p-nitrophenol at 405 nm in a 384-well automatic plate reader. The in vitro assay with the extracts of B.H. and M. bella showed an IC50 of 28.04 µg/mL and 16.93 µg/mL for α-amylase, and 43.01 µg/mL and 17 µg/mL for maltase, respectively. The M. bella extract thus showed a higher inhibitory activity towards these enzymes than the B.H. extract. Compared with acarbose (IC50 of 36 µg/mL for α-amylase and 9 µg/mL for maltase), the crude extracts showed a higher inhibition of α-amylase but were less effective against maltase. In conclusion, the crude extracts of B.H. and M. bella showed a potent inhibitory effect against α-amylase and yielded promising results towards the possible development of new medicines to treat diabetes with fewer or even no side effects.
Keywords: alpha-glucosidases, diabetes mellitus, glycaemia, medicinal plants
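IC50 values of the kind reported here are typically obtained by fitting a dose-response model to the multi-concentration inhibition data. The sketch below fits a simple two-parameter Hill model with SciPy; the eight concentrations and residual activities are invented for illustration and are not the assay data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, hill_slope):
    """Fractional enzyme activity remaining at a given inhibitor
    concentration (two-parameter dose-response model); activity
    drops to 0.5 exactly at conc == ic50."""
    return 1.0 / (1.0 + (conc / ic50) ** hill_slope)

# Hypothetical assay: eight extract concentrations (ug/mL) and the
# measured residual alpha-amylase activity (fraction of control).
conc = np.array([1, 2, 5, 10, 20, 50, 100, 200], dtype=float)
activity = np.array([0.97, 0.93, 0.84, 0.70, 0.52, 0.30, 0.17, 0.09])

(ic50, slope), _ = curve_fit(hill, conc, activity, p0=[20.0, 1.0])
print(f"IC50 = {ic50:.1f} ug/mL (Hill slope {slope:.2f})")
```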
Procedia PDF Downloads 238
148 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform
Authors: Khadija Refouh
Abstract:
Culture-bound expressions have been a bottleneck for natural language processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvements in translation quality, outperforming previous traditional translation systems in many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this development, there remain serious challenges facing NMT when translating culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What bad weather! It rains cats and dogs.” as “يا له من طقس سيء! تمطر القطط والكلاب” in the target language Arabic, which is an inaccurate literal translation. The translation of the same sentence into the target language French was “Quel mauvais temps! Il pleut des cordes.”, where the Google Translate application used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards a better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to that of the Google NMT using the Bilingual Evaluation Understudy score (BLEU). BLEU is an algorithm for evaluating the quality of text that has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactical, lexical, and semantic features using Halliday's functional theory.
Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms
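The BLEU comparison described above can be reproduced in a few lines with NLTK's implementation of sentence-level BLEU. The tokenised sentences below reuse the paper's rain idiom; treating a literal versus an idiomatic machine output against a single human reference is a simplification, and the smoothing choice is an assumption needed for such short segments.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

# Human reference translation of the opaque idiom, versus two
# hypothetical machine outputs (literal vs. idiomatic).
reference = ["il", "pleut", "des", "cordes"]
literal_mt = ["il", "pleut", "des", "chats", "et", "des", "chiens"]
idiomatic_mt = ["il", "pleut", "des", "cordes"]

for name, hyp in [("baseline", literal_mt), ("custom", idiomatic_mt)]:
    # BLEU takes a list of references per hypothesis.
    score = sentence_bleu([reference], hyp, smoothing_function=smooth)
    print(f"{name}: BLEU = {score:.3f}")
```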
Procedia PDF Downloads 151
147 A Qualitative Study Exploring Factors Influencing the Uptake of and Engagement with Health and Wellbeing Smartphone Apps
Authors: D. Szinay, O. Perski, A. Jones, T. Chadborn, J. Brown, F. Naughton
Abstract:
Background: The uptake of health and wellbeing smartphone apps is largely influenced by popularity indicators (e.g., rankings) rather than evidence-based content. Rapid disengagement is common. This study aims to explore how and why potential users 1) select and 2) engage with such apps, and 3) how increased engagement could be promoted. Methods: Semi-structured interviews and a think-aloud approach were used to allow participants to verbalise their thoughts whilst searching for a health or wellbeing app online, followed by a guided search in the UK National Health Service (NHS) 'Apps Library' and Public Health England's (PHE) 'One You' website. Recruitment took place between June and August 2019. Adults interested in using an app for behaviour change were recruited through social media. Data were analysed using the framework approach. The analysis was both inductive and deductive, with the coding framework informed by the Theoretical Domains Framework. The results were further mapped onto the COM-B (Capability, Opportunity, Motivation - Behaviour) model. The study protocol is registered on the Open Science Framework (https://osf.io/jrkd3/). Results: The following targets were identified as playing a key role in increasing the uptake of and engagement with health and wellbeing apps: 1) psychological capability (e.g., reduced cognitive load); 2) physical opportunity (e.g., low financial cost); 3) social opportunity (e.g., embedded social media); 4) automatic motivation (e.g., positive feedback). Participants believed that the promotion of evidence-based apps on NHS-related websites could be enhanced through active promotion on social media, adverts on the internet, and in general practitioner practices. Future implications: These results can inform the development of interventions aiming to promote the uptake of and engagement with evidence-based health and wellbeing apps, a priority within the UK NHS Long Term Plan ('digital first'). The targets identified across the COM-B domains could help organisations that provide platforms for such apps to increase impact through better selection of apps.
Keywords: behaviour change, COM-B model, digital health, mhealth
Procedia PDF Downloads 168