Search results for: multi-objective genetic algorithm
387 Pharmacokinetic Modeling of Valsartan in Dog following a Single Oral Administration
Authors: In-Hwan Baek
Abstract:
Valsartan is a potent and highly selective antagonist of the angiotensin II type 1 receptor and is widely used for the treatment of hypertension. The aim of this study was to investigate the pharmacokinetic properties of valsartan in dogs following oral administration of a single dose using quantitative modeling approaches. Forty beagle dogs were randomly divided into two groups. Group A (n=20) was administered a single oral dose of valsartan 80 mg (Diovan® 80 mg), and group B (n=20) was administered a single oral dose of valsartan 160 mg (Diovan® 160 mg) in the morning after an overnight fast. Blood samples were collected into heparinized tubes before and at 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12 and 24 h following oral administration. The plasma concentrations of valsartan were determined using LC-MS/MS. Non-compartmental pharmacokinetic analyses were performed using WinNonlin Standard Edition software, and modeling approaches were performed using maximum-likelihood estimation via the expectation maximization (MLEM) algorithm with sampling using ADAPT 5 software. After a single dose of valsartan 80 mg, the mean value of maximum concentration (Cmax) was 2.68 ± 1.17 μg/mL at 1.83 ± 1.27 h. The area under the plasma concentration-versus-time curve from time zero to the last measurable concentration (AUC24h) was 13.21 ± 6.88 μg·h/mL. After dosing with valsartan 160 mg, the mean Cmax was 4.13 ± 1.49 μg/mL at 1.80 ± 1.53 h, and the AUC24h was 26.02 ± 12.07 μg·h/mL. The Cmax and AUC values increased in proportion to the increment in valsartan dose, while the pharmacokinetic parameters of elimination rate constant, half-life, apparent total clearance, and apparent volume of distribution were not significantly different between the doses. Valsartan pharmacokinetics fit a one-compartment model with first-order absorption and elimination following a single dose of valsartan 80 mg and 160 mg. In addition, high inter-individual variability was identified in the absorption rate constant. In conclusion, valsartan displays dose-dependent pharmacokinetics in dogs, and subsequent quantitative modeling approaches provided detailed pharmacokinetic information on valsartan. The current findings provide useful information in dogs that will aid future development of improved formulations or fixed-dose combinations.
Keywords: dose-dependent, modeling, pharmacokinetics, valsartan
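The one-compartment model with first-order absorption and elimination described above has a standard closed-form concentration profile. The sketch below simulates such a profile and recovers Cmax, Tmax and AUC24h numerically; all parameter values are illustrative placeholders, not the fitted estimates from this study.

```python
import numpy as np

def one_compartment_oral(t, dose, F, ka, ke, V):
    """Plasma concentration for a one-compartment model with
    first-order absorption (ka) and elimination (ke), ka != ke."""
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Illustrative values only -- not the fitted dog parameters from the study
t = np.linspace(0, 24, 97)                                   # hours
conc = one_compartment_oral(t, dose=80_000, F=0.25, ka=1.2, ke=0.35, V=6_000)  # µg, 1/h, mL

cmax = conc.max()
tmax = t[conc.argmax()]
auc = np.sum((conc[1:] + conc[:-1]) * np.diff(t)) / 2.0      # trapezoidal AUC0-24h
print(f"Cmax = {cmax:.2f} µg/mL at Tmax = {tmax:.2f} h, AUC24h = {auc:.2f} µg·h/mL")
```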
Procedia PDF Downloads 297
386 Vitamin B9 Separation by Synergic Pertraction
Authors: Blaga Alexandra Cristina, Kloetzer Lenuta, Bompa Amalia Stela, Galaction Anca Irina, Cascaval Dan
Abstract:
Vitamin B9 is an important member of vitamins B group, being a growth factor, important for making genetic material as DNA and RNA, red blood cells, for building muscle tissues, especially during periods of infancy, adolescence and pregnancy. Its production by biosynthesis is based on the high metabolic potential of mutant Bacillus subtilis, due to a superior biodisponibility compared to that obtained by chemical pathways. Pertraction, defined as the extraction and transport through liquid membranes consists in the transfer of a solute between two aqueous phases of different pH-values, phases that are separated by a solvent layer of various sizes. The pertraction efficiency and selectivity could be significantly enhanced by adding a carrier in the liquid membrane, such as organophosphoric compounds, long chain amines or crown-ethers etc., the separation process being called facilitated pertraction. The aim of the work is to determine the impact of the presence of two extractants/carriers in the bulk liquid membrane, i.e. di(2-ethylhexyl) phosphoric acid (D2EHPA) and lauryltrialkylmetilamine (Amberlite LA2) on the transport kinetics of vitamin B9. The experiments have been carried out using two pertraction equipments for a free liquid membrane or bulk liquid membrane. One pertraction cell consists on a U-shaped glass pipe (used for the dichloromethane membrane) and the second one is an H-shaped glass pipe (used for h-heptane), having 45 mm inner diameter of the total volume of 450 mL, the volume of each compartment being of 150 mL. The aqueous solutions are independently mixed by means of double blade stirrers with 6 mm diameter and 3 mm height, having the rotation speed of 500 rpm. In order to reach high diffusional rates through the solvent layer, the organic phase has been mixed with a similar stirrer, at a similar rotation speed (500 rpm). The area of mass transfer surface, both for extraction and for reextraction, was of 1.59x10-³ m2. The study on facilitated pertraction with the mixture of two carriers, namely D2EHPA and Amberlite LA-2, dissolved in two solvents with different polarities: n-heptane and dichloromethane, indicated the possibility to obtain the synergic effect. The synergism has been analyzed by considering the vitamin initial and final mass flows, as well as the permeability factors through liquid membrane. The synergic effect has been observed at low D2EHPA concentrations and high Amberlite LA-2 concentrations, being more important for the low-polar solvent (n-heptane). The results suggest that the mechanism of synergic pertraction consists on the reaction between the organophosphoric carrier and vitamin B9 at the interface between the feed and membrane phases, while the aminic carrier enhances the hydrophobicity of this compound by solvation. However, the formation of this complex reduced the reextraction rate and, consequently, affects the synergism related to the final mass flows and permeability factor. For describing the influences of carriers concentrations on the synergistic coefficients, some equations have been proposed by taking into account the vitamin mass flows or permeability factors, with an average deviations between 4.85% and 10.73%.Keywords: pertraction, synergism, vitamin B9, Amberlite LA-2, di(2-ethylhexyl) phosphoric acid
Procedia PDF Downloads 275
385 Untangling the Greek Seafood Market: Authentication of Crustacean Products Using DNA-Barcoding Methodologies
Authors: Z. Giagkazoglou, D. Loukovitis, C. Gubili, A. Imsiridou
Abstract:
Along with the increase in human population, demand for seafood has increased. Despite the strict labeling regulations that exist for most marketed species in the European Union, seafood substitution remains a persistent global issue. Food fraud occurs when food products are traded in a false or misleading way. Mislabeling occurs when one species is substituted and traded under the name of another, and it can be intentional or unintentional. Crustaceans are one of the most regularly consumed seafood in Greece. Shrimps, prawns, lobsters, crayfish, and crabs are considered a delicacy and can be encountered in a variety of market presentations (fresh, frozen, pre-cooked, peeled, etc.). With most of the external traits removed, products as such are susceptible to species substitution. DNA barcoding has proven to be the most accurate method for the detection of fraudulent seafood products. To our best knowledge, the DNA barcoding methodology is used for the first time in Greece, in order to investigate the labeling practices for crustacean products available in the market. A total of 100 tissue samples were collected from various retailers and markets across four Greek cities. In an effort to cover the highest range of products possible, different market presentations were targeted (fresh, frozen and cooked). Genomic DNA was extracted using the DNeasy Blood & Tissue Kit, according to the manufacturer's instructions. The mitochondrial gene selected as the target region of the analysis was the cytochrome c oxidase subunit I (COI). PCR products were purified and sequenced using an ABI 3500 Genetic Analyzer. Sequences were manually checked and edited using BioEdit software and compared against the ones available in GenBank and BOLD databases. Statistical analyses were conducted in R and PAST software. For most samples, COI amplification was successful, and species-level identification was possible. The preliminary results estimate moderate mislabeling rates (25%) in the identified samples. Mislabeling was most commonly detected in fresh products, with 50% of the samples in this category labeled incorrectly. Overall, the mislabeling rates detected by our study probably relate to some degree of unintentional misidentification, and lack of knowledge surrounding the legal designations by both retailers and consumers. For some species of crustaceans (i.e. Squila mantis) the mislabeling appears to be also affected by the local labeling practices. Across Greece, S. mantis is sold in the market under two common names, but only one is recognized by the country's legislation, and therefore any mislabeling is probably not profit-motivated. However, the substitution of the speckled shrimp (Metapenaus monoceros) for the distinct, giant river prawn (Macrobranchium rosenbergii), is a clear example of deliberate fraudulent substitution, aiming for profit. To our best knowledge, no scientific study investigating substitution and mislabeling rates in crustaceans has been conducted in Greece. For a better understanding of Greece's seafood market, similar DNA barcoding studies in other regions with increased touristic importance (e.g., the Greek islands) should be conducted. Regardless, the expansion of the list of species-specific designations for crustaceans in the country is advised.Keywords: COI gene, food fraud, labelling control, molecular identification
Procedia PDF Downloads 67
384 Advancing Agriculture through Technology: An Abstract of Research Findings
Authors: Eugene Aninagyei-Bonsu
Abstract:
Introduction: Agriculture has been a cornerstone of human civilization, ensuring food security and livelihoods for billions of people worldwide. In recent decades, rapid advancements in technology have revolutionized the agricultural sector, offering innovative solutions to enhance productivity, sustainability, and efficiency. This abstract summarizes key findings from a research study that explores the impacts of technology in modern agriculture and its implications for future food production systems. Methodologies: The research study employed a mixed-methods approach, combining quantitative data analysis with qualitative interviews and surveys to gain a comprehensive understanding of the role of technology in agriculture. Data was collected from various stakeholders, including farmers, agricultural technicians, and industry experts, to capture diverse perspectives on the adoption and utilization of agricultural technologies. The study also utilized case studies and literature reviews to contextualize the findings within the broader agricultural landscape. Major Findings: The research findings reveal that technology plays a pivotal role in transforming traditional farming practices and driving innovation in agriculture. Advanced technologies such as precision agriculture, drone technology, genetic engineering, and smart irrigation systems have significantly improved crop yields, reduced environmental impact, and optimized resource utilization. Farmers who have embraced these technologies have reported increased productivity, enhanced profitability, and improved resilience to environmental challenges. Furthermore, the study highlights the importance of accessible and affordable technology solutions for smallholder farmers in developing countries. Mobile applications, sensor technologies, and digital platforms have enabled small-scale farmers to access market information, weather forecasts, and agricultural best practices, empowering them to make informed decisions and improve their livelihoods. The research emphasizes the need for targeted policies and investments to bridge the digital divide and promote equitable technology adoption in agriculture. Conclusion: In conclusion, this research underscores the transformative potential of technology in agriculture and its critical role in advancing sustainable food production systems. The findings suggest that harnessing technology can address key challenges facing the agricultural sector, including climate change, resource scarcity, and food insecurity. By embracing innovation and leveraging technology, farmers can enhance their productivity, profitability, and resilience in a rapidly evolving global food system. Moving forward, policymakers, researchers, and industry stakeholders must collaborate to facilitate the adoption of appropriate technologies, support capacity building, and promote sustainable agricultural practices for a more resilient and food-secure future.Keywords: technology development in modern agriculture, the influence of information technology access in agriculture, analyzing agricultural technology development, analyzing of the frontier technology of agriculture loT
Procedia PDF Downloads 35
383 Geometric Nonlinear Dynamic Analysis of Cylindrical Composite Sandwich Shells Subjected to Underwater Blast Load
Authors: Mustafa Taskin, Ozgur Demir, M. Mert Serveren
Abstract:
The precise study of the impact of underwater explosions on structures is of great importance in the design and engineering calculations of floating structures, especially those used for military purposes, as well as power generation facilities such as offshore platforms that can become a target in case of war. Considering that ship and submarine structures are mostly curved surfaces, it is extremely important and interesting to examine the destructive effects of underwater explosions on curvilinear surfaces. In this study, geometric nonlinear dynamic analysis of cylindrical composite sandwich shells subjected to instantaneous pressure load is performed. The instantaneous pressure load is defined as an underwater explosion, and the effects of the liquid medium are taken into account. There are equations in the literature for pressure due to underwater explosions, but these equations have been obtained for flat plates. For this reason, the instantaneous pressure load equations are arranged to be suitable for curvilinear structures before proceeding with the analyses. Fluid-solid interaction is defined by using Taylor's Plate Theory. The lower and upper layers of the cylindrical composite sandwich shell are modeled as composite laminates and the middle layer consists of a soft core. The geometric nonlinear dynamic equations of the shell are obtained by Hamilton's principle, taking into account the von Kármán theory of large displacements. Then, the time-dependent geometric nonlinear equations of motion are solved with the help of the generalized differential quadrature method (GDQM), and the dynamic behavior of cylindrical composite sandwich shells exposed to underwater explosion is investigated. An algorithm that can work parametrically for the solution has been developed within the scope of the study.
Keywords: cylindrical composite sandwich shells, generalized differential quadrature method, geometric nonlinear dynamic analysis, underwater explosion
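The generalized differential quadrature method (GDQM) mentioned above approximates spatial derivatives as weighted sums of the function values at grid points. The following is a minimal, generic sketch of the first-derivative GDQ weighting coefficients (Shu's explicit formula on a Chebyshev-Gauss-Lobatto grid); it is not the authors' shell formulation, only an illustration of the underlying discretization.

```python
import numpy as np

def gdq_first_derivative_weights(x):
    """First-derivative GDQ weighting coefficients via Shu's explicit formula."""
    n = len(x)
    # M(x_i) = prod_{j != i} (x_i - x_j)
    M = np.array([np.prod([x[i] - x[j] for j in range(n) if j != i]) for i in range(n)])
    c = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                c[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        c[i, i] = -np.sum(c[i, :])          # diagonal closes each row to zero sum
    return c

# Chebyshev-Gauss-Lobatto points on [0, 1], a common GDQ grid choice
n = 9
x = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))
D1 = gdq_first_derivative_weights(x)

# Quick check: differentiate f(x) = x**2 exactly (derivative 2x)
print(np.allclose(D1 @ x**2, 2 * x))
```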
Procedia PDF Downloads 192
382 Exploring the Connectedness of Ad Hoc Mesh Networks in Rural Areas
Authors: Ibrahim Obeidat
Abstract:
Reaching a fully-connected network of mobile nodes in rural areas has received great attention among network researchers. This attention arose from the complexity and high cost of setting up the needed infrastructure for such networks, in addition to the low transmission range these nodes have. Terranet technology, as an example, employs an ad-hoc mesh network in which each node has a transmission range not exceeding one kilometer; this means that two nodes can communicate with each other if they are no more than one kilometer apart, otherwise a third party plays the role of the "relay". In Terranet, as an idea to reduce network setup cost, every node in the network is considered a router responsible for forwarding data between other nodes, which results in a decentralized collaborative environment. Most research on Terranet presents ideas for encouraging mobile nodes to become more cooperative by keeping their devices in the "ON" state as long as possible while accepting the role of relay (router). This research addresses the question of what percentage of nodes in an ad-hoc mesh network within rural areas should play the role of relay at every time slot, relative to the actual area coverage of the nodes, in order for the network to reach full connectivity. To the best of our knowledge, no research to date has addressed this issue. The research is carried out with an implementation that builds an adjacency matrix as an indicator of the connectivity between network members. This matrix is continually updated until each value in it gives the number of hops that must be followed to reach one node from another. After repeating the algorithm for different area sizes, different coverage percentages for each size, and different relay percentages, the extracted results show that for area coverage of less than 5%, 40% of the nodes need to be relays, whereas 10% is enough for areas with node coverage greater than 5%.
Keywords: ad-hoc mesh networks, network connectivity, mobile ad-hoc networks, Terranet, adjacency matrix, simulator, wireless sensor networks, peer-to-peer networks, vehicular ad hoc networks, relay
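The connectivity check described above can be sketched with a hop-count matrix derived from the adjacency matrix: one-hop links connect nodes within the one-kilometre range, and only nodes acting as relays may forward traffic. The code below is a simplified illustration with randomly placed nodes and an assumed relay fraction, not the authors' simulator.

```python
from collections import deque
import numpy as np

rng = np.random.default_rng(42)
n_nodes, area_km, tx_range_km, relay_fraction = 60, 10.0, 1.0, 0.4   # assumed values

pos = rng.uniform(0, area_km, size=(n_nodes, 2))          # random node positions
is_relay = rng.random(n_nodes) < relay_fraction           # nodes willing to forward traffic

dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
in_range = (dist <= tx_range_km) & ~np.eye(n_nodes, dtype=bool)      # one-hop adjacency

def hop_counts(src):
    """BFS hop counts from src; only relay nodes may act as intermediate hops."""
    hops = np.full(n_nodes, -1)
    hops[src] = 0
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u != src and not is_relay[u]:                   # non-relays receive but don't forward
            continue
        for v in np.flatnonzero(in_range[u]):
            if hops[v] == -1:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops

hop_matrix = np.array([hop_counts(s) for s in range(n_nodes)])
print("fully connected:", bool(np.all(hop_matrix >= 0)))
print("mean hops between reachable pairs:", hop_matrix[hop_matrix > 0].mean())
```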
Procedia PDF Downloads 282
381 Assessment of Cytogenetic Damage as a Function of Radiofrequency Electromagnetic Radiations Exposure Measured by Electric Field Strength: A Gender Based Study
Authors: Ramanpreet, Gursatej Gandhi
Abstract:
Background: Dependence on electromagnetic radiations involved in communication and information technologies has incredibly increased in the personal and professional world. Among the numerous radiations, sources are fixed site transmitters, mobile phone base stations, and power lines beside indoor devices like cordless phones, WiFi, Bluetooth, TV, radio, microwave ovens, etc. Rather there is the continuous emittance of radiofrequency radiations (RFR) even to those not using the devices from mobile phone base stations. The consistent and widespread usage of wireless devices has build-up electromagnetic fields everywhere. In fact, the radiofrequency electromagnetic field (RF-EMF) has insidiously become a part of the environment and like any contaminant may pose to be health-hazardous requiring assessment. Materials and Methods: In the present study, cytogenetic damage was assessed using the Buccal Micronucleus Cytome (BMCyt) assay as a function of radiation exposure after Institutional Ethics Committee clearance of the study and written voluntary informed consent from the participants. On a pre-designed questionnaire, general information lifestyle patterns (diet, physical activity, smoking, drinking, use of mobile phones, internet, Wi-Fi usage, etc.) genetic, reproductive (pedigrees) and medical histories were recorded. For this, 24 hour-personal exposimeter measurements (PEM) were recorded for unrelated 60 healthy adults (40 cases residing in the vicinity of mobile phone base stations since their installation and 20 controls residing in areas with no base stations). The personal exposimeter collects information from all the sources generating EMF (TETRA, GSM, UMTS, DECT, and WLAN) as total RF-EMF uplink and downlink. Findings: The cases (n=40; 23-90 years) and the controls (n=20; 19-65 years) matched for alcohol drinking, smoking habits, and mobile and cordless phone usage. The PEM in cases (149.28 ± 8.98 mV/m) revealed significantly higher (p=0.000) electric field strength compared to the recorded value (80.40 ± 0.30 mV/m) in controls. The GSM 900 uplink (p=0.000), GSM 1800 downlink (p=0.000),UMTS (both uplink; p=0.013 and downlink; p=0.001) and DECT (p=0.000) electric field strength were significantly elevated in the cases as compared to controls. The electric field strength in the cases was significantly from GSM1800 (52.26 ± 4.49mV/m) followed by GSM900 (45.69 ± 4.98mV/m), UMTS (25.03 ± 3.33mV/m), DECT (18.02 ± 2.14mV/m) and was least from WLAN (8.26 ± 2.35mV/m). The higher significantly (p=0.000) increased exposure to the cases was from GSM (97.96 ± 6.97mV/m) in comparison to UMTS, DECT, and WLAN. The frequencies of micronuclei (1.86X, p=0.007), nuclear buds (2.95X, p=0.002) and cell death parameter (condensed chromatin cells) were significantly (1.75X, p=0.007) elevated in cases compared to that in controls probably as a function of radiofrequency radiation exposure. Conclusion: In the absence of other exposure(s), any cytogenetic damage if unrepaired is a cause of concern as it can cause malignancy. Larger sample size with the clinical assessment will prove more insightful of such an effect.Keywords: Buccal micronucleus cytome assay, cytogenetic damage, electric field strength, personal exposimeter
Procedia PDF Downloads 158
380 Multi Biometric Personal Identification System Based on Hybrid Intelligence Method
Authors: Laheeb M. Ibrahim, Ibrahim A. Salih
Abstract:
Biometrics is a technology that has been widely used in many official and commercial identification applications. Increased security concerns in recent years (especially during the last decades) have essentially resulted in more attention being given to biometric-based verification techniques. Here, a novel fusion approach combining palmprint and dental traits is suggested. These authentication traits have been employed in a range of biometric applications and can identify a person both postmortem (PM) and antemortem (AM). Besides improving accuracy, the fusion of biometrics has several advantages, such as deterring spoofing activities and reducing enrolment failure. In this paper, unimodal biometric systems are first built for the palmprint and dental traits, with classification for each performed by an artificial neural network and by a hybrid technique that combines swarm intelligence and a neural network; an attempt is then made to combine the palmprint and dental biometrics. Principally, the fusion of palmprint and dental biometrics and their potential application as biometric identifiers have been explored. To address this issue, investigations have been carried out on the relative performance of several statistical data fusion techniques for integrating the information in both unimodal and multimodal biometrics. The results of the multimodal approach are also compared with each of the two single-trait authentication approaches. This paper studies the feature and decision fusion levels in multimodal biometrics. To determine the genuine acceptance rate (GAR), parallel decision-level fusion rules (AND, OR, and majority voting) have been used. The backpropagation method used for classification yielded GAR results of (92%, 99%, 97%) respectively, while the hybrid technique used for classification yielded GAR results of (95%, 99%, 98%) respectively. To determine the accuracy of the multibiometric system, feature-level fusion was also used with the same preceding classification methods; the results were (98%, 99%) respectively, while the GAR determined at the feature level with different methods came out at (98%).
Keywords: backpropagation neural network (BP ANN), multibiometric system, parallel system decision fusion, particle swarm optimization (PSO)
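The parallel decision-level fusion rules mentioned above (AND, OR, majority voting) can be sketched as follows. The accept/reject decisions here are synthetic placeholders, not outputs of the palmprint and dental classifiers used in the study.

```python
import numpy as np

def fuse_decisions(decisions, rule="majority"):
    """Fuse per-matcher binary accept(1)/reject(0) decisions.

    decisions: array of shape (n_attempts, n_matchers)
    """
    decisions = np.asarray(decisions)
    if rule == "AND":          # accept only if every matcher accepts
        return decisions.all(axis=1).astype(int)
    if rule == "OR":           # accept if any matcher accepts
        return decisions.any(axis=1).astype(int)
    if rule == "majority":     # accept if more than half of the matchers accept
        return (decisions.sum(axis=1) > decisions.shape[1] / 2).astype(int)
    raise ValueError(rule)

# Synthetic genuine attempts scored by three matchers with assumed per-matcher accept rates
rng = np.random.default_rng(0)
genuine = rng.random((1000, 3)) < [0.92, 0.95, 0.97]

for rule in ("AND", "OR", "majority"):
    gar = fuse_decisions(genuine, rule).mean()     # genuine acceptance rate (GAR)
    print(f"{rule:8s} GAR = {gar:.3f}")
```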
Procedia PDF Downloads 533
379 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have also proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features into text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making both by the eyewitness and the artist while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus increasing response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), additionally provided with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the 'CelebA' training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these performance metrics demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering.
Keywords: RNN, GAN, NLP, facial composition, criminal investigation
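The two evaluation metrics named above, SSIM and PSNR, can be computed with scikit-image. A minimal sketch comparing one generated sample against a ground-truth photograph is shown below; the file names are placeholders.

```python
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

# Placeholder paths -- substitute the generated sample and the ground-truth photo
generated = imread("generated_face.png", as_gray=True).astype(np.float64)
truth = imread("ground_truth_face.png", as_gray=True).astype(np.float64)
generated = resize(generated, truth.shape, anti_aliasing=True)   # align image sizes

data_range = truth.max() - truth.min()
ssim = structural_similarity(truth, generated, data_range=data_range)
psnr = peak_signal_noise_ratio(truth, generated, data_range=data_range)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.2f} dB")
```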
Procedia PDF Downloads 162
378 Effects of Cacao Agroforestry and Landscape Composition on Farm Biodiversity and Household Dietary Diversity
Authors: Marlene Yu Lilin Wätzold, Wisnu Harto Adiwijoyo, Meike Wollni
Abstract:
Land-use conversion from tropical forests to cash crop production in the form of monocultures has drastic consequences for biodiversity. Meanwhile, high dependence on cash crop production is often associated with a decrease in other food crop production, thereby affecting household dietary diversity. Additionally, deforestation rates have been found to reduce households’ dietary diversity, as forests often offer various food sources. Agroforestry systems are seen as a potential solution to improve local biodiversity as well as provide a range of provisioning ecosystem services, such as timber and other food crops. While a number of studies have analyzed the effects of agroforestry on biodiversity, as well as household livelihood indicators, little is understood between potential trade-offs or synergies between the two. This interdisciplinary study aims to fill this gap by assessing cacao agroforestry’s role in enhancing local bird diversity, as well as farm household dietary diversity. Additionally, we will take a landscape perspective and investigate in what ways the landscape composition, such as the proximity to forests and forest patches, are able to contribute to the local bird diversity, as well as households’ dietary diversity. Our study will take place in two agro-ecological zones in Ghana, based on household surveys of 500 cacao farm households. Using a subsample of 120 cacao plots, we will assess the degree of shade tree diversity and density using drone flights and a computer vision tree detection algorithm. Bird density and diversity will be assessed using sound recordings that will be kept in the cacao plots for 24 hours. Landscape compositions will be assessed via remote sensing images. The results of our study are of high importance as they will allow us to understand the effects of agroforestry and landscape composition in improving simultaneous ecosystem services.Keywords: agroforestry, biodiversity, landscape composition, nutrition
Procedia PDF Downloads 113
377 VeriFy: A Solution to Implement Autonomy Safely and According to the Rules
Authors: Michael Naderhirn, Marco Pavone
Abstract:
Problem statement, motivation, and aim of work: So far, the development of control algorithms was done by control engineers in a way that the controller would fit a specification by testing. When it comes to the certification of an autonomous car in highly complex scenarios, the challenge is much higher since such a controller must mathematically guarantee to implement the rules of the road while on the other side guarantee aspects like safety and real time executability. What if it becomes reality to solve this demanding problem by combining Formal Verification and System Theory? The aim of this work is to present a workflow to solve the above mentioned problem. Summary of the presented results / main outcomes: We show the usage of an English like language to transform the rules of the road into system specification for an autonomous car. The language based specifications are used to define system functions and interfaces. Based on that a formal model is developed which formally correctly models the specifications. On the other side, a mathematical model describing the systems dynamics is used to calculate the systems reachability set which is further used to determine the system input boundaries. Then a motion planning algorithm is applied inside the system boundaries to find an optimized trajectory in combination with the formal specification model while satisfying the specifications. The result is a control strategy which can be applied in real time independent of the scenario with a mathematical guarantee to satisfy a predefined specification. We demonstrate the applicability of the method in simulation driving scenarios and a potential certification. Originality, significance, and benefit: To the authors’ best knowledge, it is the first time that it is possible to show an automated workflow which combines a specification in an English like language and a mathematical model in a mathematical formal verified way to synthesizes a controller for potential real time applications like autonomous driving.Keywords: formal system verification, reachability, real time controller, hybrid system
Procedia PDF Downloads 241
376 Biochemical Effects of Low Dose Dimethyl Sulfoxide on HepG2 Liver Cancer Cell Line
Authors: Esra Sengul, R. G. Aktas, M. E. Sitar, H. Isan
Abstract:
Hepatocellular carcinoma (HCC) is a hepatocellular tumor commonly found on the surface of the chronic liver. HepG2 is the most commonly used cell type in HCC studies. The main proteins remaining in the blood serum after separation of plasma fibrinogen are albumin and globulin. The fact that the albumin showed hepatocellular damage and reflect the synthesis capacity of the liver was the main reason for our use. Alpha-Fetoprotein (AFP) is an albumin-like structural embryonic globulin found in the embryonic cortex, cord blood, and fetal liver. It has been used as a marker in the follow-up of tumor growth in various malign tumors and in the efficacy of surgical-medical treatments, so it is a good protein to look at with albumins. We have seen the morphological changes of dimethyl sulfoxide (DMSO) on HepG2 and decided to investigate its biochemical effects. We examined the effects of DMSO, which is used in cell cultures, on albumin, AFP and total protein at low doses. Material Method: Cell Culture: Medium was prepared in cell culture using Dulbecco's Modified Eagle Media (DMEM), Fetal Bovine Serum Dulbecco's (FBS), Phosphate Buffered Saline and trypsin maintained at -20 ° C. Fixation of Cells: HepG2 cells, which have been appropriately developed at the end of the first week, were fixed with acetone. We stored our cells in PBS at + 4 ° C until the fixation was completed. Area Calculation: The areas of the cells are calculated in the ImageJ (IJ). Microscope examination: The examination was performed with a Zeiss Inverted Microscope. Daytime photographs were taken at 40x, 100x 200x and 400x. Biochemical Tests: Protein (Total): Serum sample was analyzed by a spectrophotometric method in autoanalyzer. Albumin: Serum sample was analyzed by a spectrophotometric method in autoanalyzer. Alpha-fetoprotein: Serum sample was analyzed by ECLIA method. Results: When liver cancer cells were cultured in medium with 1% DMSO for 4 weeks, a significant difference was observed when compared with the control group. As a result, we have seen that DMSO can be used as an important agent in the treatment of liver cancer. Cell areas were reduced in the DMSO group compared to the control group and the confluency ratio increased. The ability to form spheroids was also significantly higher in the DMSO group. Alpha-fetoprotein was lower than the values of an ordinary liver cancer patient and the total protein amount increased to the reference range of the normal individual. Because the albumin sample was below the specimen value, the numerical results could not be obtained on biochemical examinations. We interpret all these results as making DMSO a caretaking aid. Since each one was not enough alone we used 3 parameters and the results were positive when we refer to the values of a normal healthy individual in parallel. We hope to extend the study further by adding new parameters and genetic analyzes, by increasing the number of samples, and by using DMSO as an adjunct agent in the treatment of liver cancer.Keywords: hepatocellular carcinoma, HepG2, dimethyl sulfoxide, cell culture, ELISA
Procedia PDF Downloads 135
375 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures
Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani
Abstract:
Our study is about elaborating a potential solution for a search engine that involves semantic technology to retrieve information and display it significantly. Semantic search engines are not used widely over the web as the majorities are still in Beta stage or under construction. Many problems face the current applications in semantic search, the major problem is to analyze and calculate the meaning of query in order to retrieve relevant information. Another problem is the ontology based index and its updates. Ranking results according to concept meaning and its relation with query is another challenge. In this paper, we are offering a light meta-engine (QESM) which uses Google search, and therefore Google’s index, with some adaptations to its returned results by adding multi-query expansion. The mission was to find a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. At the beginning, the engine finds synonyms of each query term entered by the user based on a lexical database. Then, query expansion is applied to generate different semantically analogous sentences. These are generated randomly by combining the found synonyms and the original query terms. Our model suggests the use of semantic similarity measures between two sentences. Practically, we used this method to calculate semantic similarity between each query and the description of each page’s content generated by Google. The generated sentences are sent to Google engine one by one, and ranked again all together with the adapted ranking method (QESM). Finally, our system will place Google pages with higher similarities on the top of the results. We have conducted experimentations with 6 different queries. We have observed that most ranked results with QESM were altered with Google’s original generated pages. With our experimented queries, QESM generates frequently better accuracy than Google. In some worst cases, it behaves like Google.Keywords: semantic search engine, Google indexing, query expansion, similarity measures
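A minimal sketch of the QESM-style re-ranking idea: expand the query by combining synonyms of each term, score each returned page description against every expanded query, and reorder pages by their best score. TF-IDF cosine similarity stands in for the semantic similarity measure, and the synonym table and page descriptions are hard-coded placeholders rather than a lexical database and live Google results.

```python
from itertools import product
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "cheap laptop repair"
synonyms = {"cheap": ["cheap", "inexpensive", "affordable"],   # placeholder lexical database
            "laptop": ["laptop", "notebook"],
            "repair": ["repair", "fix", "service"]}

# Query expansion: combine synonyms of each term into semantically analogous queries
expanded = [" ".join(words) for words in product(*synonyms.values())]

# Descriptions of the pages returned by the underlying engine (placeholders)
results = ["Affordable notebook servicing and screen fixes",
           "Buy a new gaming laptop at full price",
           "Inexpensive laptop repair shop near you"]

vec = TfidfVectorizer()
matrix = vec.fit_transform(expanded + results)
q_vecs, r_vecs = matrix[:len(expanded)], matrix[len(expanded):]

# Score each page by its best similarity to any expanded query, then re-rank
scores = cosine_similarity(r_vecs, q_vecs).max(axis=1)
for score, page in sorted(zip(scores, results), reverse=True):
    print(f"{score:.3f}  {page}")
```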
Procedia PDF Downloads 425
374 Agile Software Effort Estimation Using Regression Techniques
Authors: Mikiyas Adugna
Abstract:
Effort estimation is among the activities carried out in software development processes. An accurate model of estimation leads to project success. Agile effort estimation is a complex task because of the dynamic nature of software development. Researchers are still conducting studies on agile effort estimation to enhance prediction accuracy. For these reasons, we investigated and propose a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, setting the training set at 80% and the testing set at 20%. Following the train-test split, the two regression algorithms (Elastic Net and LASSO) are trained in two phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, the grid search technique (used to tune and select optimum parameters) and 5-fold cross-validation are applied to obtain the final trained model. Finally, the final trained model is evaluated using the testing set. The experimental work is applied to the agile story point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared models. Of the two proposed algorithms, LASSO regression achieved the better predictive performance, with PRED(8%) and PRED(25%) results of 100.0, an MMRE of 0.0491, an MMER of 0.0551, an MdMRE of 0.0593, an MdMER of 0.063, and an MSE of 0.0007. The results imply that the trained LASSO regression model is the most acceptable, with higher estimation performance than reported in the literature.
Keywords: agile software development, effort estimation, elastic net regression, LASSO
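A sketch of the pipeline described above (normalization, 80/20 split, default-parameter training, then grid search with 5-fold cross-validation) using scikit-learn. The feature matrix and story-point targets are synthetic placeholders standing in for the agile story point dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.metrics import mean_squared_error

# Placeholder data standing in for the agile story-point dataset
rng = np.random.default_rng(1)
X = rng.random((210, 6))                                    # issue/project features
y = X @ np.array([3.0, 1.5, 0.0, 2.0, 0.5, 0.0]) + rng.normal(0, 0.1, 210)

X = MinMaxScaler().fit_transform(X)                          # normalize the dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)

grids = {
    "LASSO": (Lasso(max_iter=10_000), {"alpha": [0.001, 0.01, 0.1, 1.0]}),
    "ElasticNet": (ElasticNet(max_iter=10_000),
                   {"alpha": [0.001, 0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]}),
}

for name, (model, grid) in grids.items():
    # Phase 1: default parameters
    default_mse = mean_squared_error(y_te, model.fit(X_tr, y_tr).predict(X_te))
    # Phase 2: grid search with 5-fold cross-validation
    search = GridSearchCV(model, grid, cv=5, scoring="neg_mean_squared_error").fit(X_tr, y_tr)
    tuned_mse = mean_squared_error(y_te, search.predict(X_te))
    print(f"{name}: default MSE = {default_mse:.4f}, tuned MSE = {tuned_mse:.4f}, "
          f"best params = {search.best_params_}")
```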
Procedia PDF Downloads 71
373 A Numerical Model for Simulation of Blood Flow in Vascular Networks
Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia
Abstract:
An accurate study of blood flow is associated with an accurate vascular pattern and geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast enhanced computed tomography (CT). Vessels smaller than 2mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray’s Laws. Implementation of morphometric data to reconstruct the branching pattern and applying Murray’s Laws to every vessel bifurcation simultaneously, lead to an accurate vascular tree reconstruction. The reconstruction algorithm generates full arterial tree topography down to the first capillary bifurcation. Geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity along with the diameter and length of every vessel segment is established and order numbers, according to the diameter-defined Strahler system, are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for 3 cardiac cycles is presented and compared to the clinical data.Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system
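Murray's law, applied here at every bifurcation of the reconstructed tree, states that the cube of the parent vessel radius equals the sum of the cubes of the daughter radii. A small sketch with illustrative radii and an assumed asymmetry ratio:

```python
def murray_daughters(parent_radius, asymmetry=1.0):
    """Split a parent vessel into two daughters obeying Murray's law:
    r_parent**3 = r_d1**3 + r_d2**3, with r_d2 = asymmetry * r_d1."""
    r_d1 = parent_radius / (1.0 + asymmetry**3) ** (1.0 / 3.0)
    return r_d1, asymmetry * r_d1

# Illustrative example only: a 2 mm parent vessel with a mildly asymmetric bifurcation
r1, r2 = murray_daughters(2.0, asymmetry=0.8)
print(f"daughters: {r1:.3f} mm and {r2:.3f} mm")
print(f"check: {r1**3 + r2**3:.3f} == {2.0**3:.3f}")
```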
Procedia PDF Downloads 272
372 Identification and Characterization of Small Peptides Encoded by Small Open Reading Frames using Mass Spectrometry and Bioinformatics
Authors: Su Mon Saw, Joe Rothnagel
Abstract:
Short open reading frames (sORFs) located in 5’UTR of mRNAs are known as uORFs. Characterization of uORF-encoded peptides (uPEPs) i.e., a subset of short open reading frame encoded peptides (sPEPs) and their translation regulation lead to understanding of causes of genetic disease, proteome complexity and development of treatments. Existence of uORFs within cellular proteome could be detected by LC-MS/MS. The ability of uORF to be translated into uPEP and achievement of uPEP identification will allow uPEP’s characterization, structures, functions, subcellular localization, evolutionary maintenance (conservation in human and other species) and abundance in cells. It is hypothesized that a subset of sORFs are translatable and that their encoded sPEPs are functional and are endogenously expressed contributing to the eukaryotic cellular proteome complexity. This project aimed to investigate whether sORFs encode functional peptides. Liquid chromatography-mass spectrometry (LC-MS) and bioinformatics were thus employed. Due to probable low abundance of sPEPs and small in sizes, the need for efficient peptide enrichment strategies for enriching small proteins and depleting the sub-proteome of large and abundant proteins is crucial for identifying sPEPs. Low molecular weight proteins were extracted using SDS-PAGE from Human Embryonic Kidney (HEK293) cells and Strong Cation Exchange Chromatography (SCX) from secreted HEK293 cells. Extracted proteins were digested by trypsin to peptides, which were detected by LC-MS/MS. The MS/MS data obtained was searched against Swiss-Prot using MASCOT version 2.4 to filter out known proteins, and all unmatched spectra were re-searched against human RefSeq database. ProteinPilot v5.0.1 was used to identify sPEPs by searching against human RefSeq, Vanderperre and Human Alternative Open Reading Frame (HaltORF) databases. Potential sPEPs were analyzed by bioinformatics. Since SDS PAGE electrophoresis could not separate proteins <20kDa, this could not identify sPEPs. All MASCOT-identified peptide fragments were parts of main open reading frame (mORF) by ORF Finder search and blastp search. No sPEP was detected and existence of sPEPs could not be identified in this study. 13 translated sORFs in HEK293 cells by mass spectrometry in previous studies were characterized by bioinformatics. Identified sPEPs from previous studies were <100 amino acids and <15 kDa. Bioinformatics results showed that sORFs are translated to sPEPs and contribute to proteome complexity. uPEP translated from uORF of SLC35A4 was strongly conserved in human and mouse while uPEP translated from uORF of MKKS was strongly conserved in human and Rhesus monkey. Cross-species conserved uORFs in association with protein translation strongly suggest evolutionary maintenance of coding sequence and indicate probable functional expression of peptides encoded within these uORFs. Translation of sORFs was confirmed by mass spectrometry and sPEPs were characterized with bioinformatics.Keywords: bioinformatics, HEK293 cells, liquid chromatography-mass spectrometry, ProteinPilot, Strong Cation Exchange Chromatography, SDS-PAGE, sPEPs
Procedia PDF Downloads 188
371 Computational Modeling of Load Limits of Carbon Fibre Composite Laminates Subjected to Low-Velocity Impact Utilizing Convolution-Based Fast Fourier Data Filtering Algorithms
Authors: Farhat Imtiaz, Umar Farooq
Abstract:
In this work, we developed a computational model to predict ply-level failure in impacted composite laminates. Data obtained from physical testing of flat- and round-nose impacts on 8-, 16-, and 24-ply laminates were considered. Routine inspections of the tested laminates were carried out to approximate the damage inflicted ply by ply. Plots consisting of load–time, load–deflection, and energy–time histories were drawn to approximate the inflicted damage. Unwanted data logged during the impact tests, due to restrictions of the testing and logging systems, were also filtered. Conventional filters (built-in, statistical, and numerical) reliably predicted load thresholds for relatively thin laminates such as eight- and sixteen-ply panels. However, relatively thick laminates, such as twenty-four-ply panels subjected to flat-nose impact, generated clipped data that can only be de-noised using oscillatory algorithms. The literature search reveals that modern oscillatory data filtering and extrapolation algorithms have scarcely been utilized. This investigation reports applications of filtering and extrapolation of the clipped data utilising the fast Fourier convolution algorithm to predict load thresholds. Some of the results were related to the impact-induced damage areas identified with ultrasonic C-scans and found to be in acceptable agreement. Based on consistent findings, applying modern data filtering and extrapolation algorithms to data logged by the existing machines has efficiently enhanced data interpretation without resorting to extra resources. The algorithms could be useful for impact-induced damage approximations of similar cases.
Keywords: fibre reinforced laminates, fast Fourier algorithms, mechanical testing, data filtering and extrapolation
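The filtering step can be illustrated with an FFT-based convolution: a noisy, clipped load-time trace is smoothed by convolving it with a Gaussian kernel computed via FFT. The sketch below uses a synthetic half-sine impact pulse, not the test data, and shows only the filtering step, not the extrapolation used to recover clipped peaks.

```python
import numpy as np
from scipy.signal import fftconvolve

# Synthetic impact load-time history: a half-sine pulse with noise and logger clipping
t = np.linspace(0.0, 8e-3, 2000)                         # 8 ms event
load = 12e3 * np.sin(np.pi * t / 8e-3)                   # N
noisy = load + np.random.default_rng(3).normal(0, 600, t.size)
clipped = np.clip(noisy, None, 10e3)                      # assumed logger saturation at 10 kN

# Gaussian smoothing kernel applied via FFT-based convolution
sigma = 25                                                 # kernel width in samples
k = np.arange(-4 * sigma, 4 * sigma + 1)
kernel = np.exp(-0.5 * (k / sigma) ** 2)
kernel /= kernel.sum()
smoothed = fftconvolve(clipped, kernel, mode="same")

print(f"raw peak {clipped.max()/1e3:.1f} kN, smoothed peak {smoothed.max()/1e3:.1f} kN")
```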
Procedia PDF Downloads 135
370 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
Semi-infinite programming (SIP) problems are part of non-classical optimization: problems in which the number of variables is finite and the number of constraints is infinite. Most algorithms for semi-infinite programming problems reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, at least one of the constraints or the objective function is nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of the individual investments in it. For example, we could invest M euros in N shares for a specified period. Let yi > 0 be the end-of-period return per unit of money invested in stock i (i = 1, ..., N). The logical goal here is to determine the amounts xi to be invested in stock i, i = 1, ..., N, such that we maximize the end-of-period value yᵀx, where x = (x1, ..., xN) and y = (y1, ..., yN). For us, the optimal portfolio means the best portfolio in terms of the risk-return trade-off, i.e., the portfolio that meets the investor's goals and risk tolerance. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain; thus we have a semi-infinite programming problem. We solve a semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying the multi-start technique in all iterations to search for the relevant constraint parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution and others, is based on the optimality criteria of quasi-optimal functions. As a result, we obtain a mathematical model and the optimal investment portfolio when yields are not known from the beginning. Finally, we apply this algorithm to a specific case of a Colombian bank.
Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
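The reduction of the semi-infinite problem to a finite one can be illustrated by sampling finitely many return scenarios and solving the resulting linear program; an outer-approximation scheme would repeatedly add the most violated constraint and re-solve. The sketch below performs a single discretized solve with SciPy on synthetic return scenarios, not the bank's data.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n_stocks, budget = 4, 1.0
# Finitely many sampled return scenarios y_s (per unit invested) approximating the
# infinite uncertainty set -- outer approximation would add scenarios iteratively
scenarios = 1.0 + rng.normal([0.08, 0.05, 0.12, 0.03], [0.15, 0.05, 0.25, 0.02], (40, n_stocks))

# Variables: x_1..x_n (allocations) and z (worst-case end-of-period value).
# Maximize z  s.t.  z <= y_s^T x for all sampled s,  sum(x) = budget,  x >= 0.
c = np.r_[np.zeros(n_stocks), -1.0]                     # linprog minimizes, so use -z
A_ub = np.c_[-scenarios, np.ones(len(scenarios))]       # row s encodes z - y_s^T x <= 0
b_ub = np.zeros(len(scenarios))
A_eq = np.r_[np.ones(n_stocks), 0.0].reshape(1, -1)
b_eq = [budget]
bounds = [(0, None)] * n_stocks + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, z = res.x[:n_stocks], res.x[-1]
print("allocation:", np.round(x, 3), "worst-case value:", round(z, 3))
```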
Procedia PDF Downloads 309
369 Determination of Crustal Structure and Moho Depth within the Jammu and Kashmir Region, Northwest Himalaya through Receiver Function
Authors: Shiv Jyoti Pandey, Shveta Puri, G. M. Bhat, Neha Raina
Abstract:
The Jammu and Kashmir (J&K) region of Northwest Himalaya has a long history of earthquake activity which falls within Seismic Zones IV and V. To know the crustal structure beneath this region, we utilized teleseismic receiver function method. This paper presents the results of the analyses of the teleseismic earthquake waves recorded by 10 seismic observatories installed in the vicinity of major thrusts and faults. The teleseismic waves at epicentral distance between 30o and 90o with moment magnitudes greater than or equal to 5.5 that contains large amount of information about the crust and upper mantle structure directly beneath a receiver has been used. The receiver function (RF) technique has been widely applied to investigate crustal structures using P-to-S converted (Ps) phases from velocity discontinuities. The arrival time of the Ps, PpPs and PpSs+ PsPs converted and reverberated phases from the Moho can be combined to constrain the mean crustal thickness and Vp/Vs ratio. Over 500 receiver functions from 10 broadband stations located in the Jammu & Kashmir region of Northwest Himalaya were analyzed. With the help of H-K stacking method, we determined the crustal thickness (H) and average crustal Vp/Vs ratio (K) in this region. We also used Neighbourhood algorithm technique to verify our results. The receiver function results for these stations show that the crustal thickness under Jammu & Kashmir ranges from 45.0 to 53.6 km with an average value of 50.01 km. The Vp/Vs ratio varies from 1.63 to 1.99 with an average value of 1.784 which corresponds to an average Poisson’s ratio of 0.266 with a range from 0.198 to 0.331. High Poisson’s ratios under some stations may be related to partial melting in the crust near the uppermost mantle. The crustal structure model developed from this study can be used to refine the velocity model used in the precise epicenter location in the region, thereby increasing the knowledge to understand current seismicity in the region.Keywords: H-K stacking, Poisson’s ratios, receiver function, teleseismic
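H-K stacking searches a grid of crustal thickness H and Vp/Vs ratio K, predicts the Ps, PpPs and PpSs+PsPs arrival times for each pair, and sums the receiver-function amplitudes at those times; the (H, K) pair maximizing the stack is taken as the estimate. The sketch below is schematic, using a synthetic receiver function and generic phase weights rather than the network's data.

```python
import numpy as np

def hk_stack(rfs, ray_params, dt, vp=6.3, h_grid=np.arange(30, 70, 0.5),
             k_grid=np.arange(1.6, 2.0, 0.01), weights=(0.6, 0.3, 0.1)):
    """Stack receiver-function amplitudes at predicted Ps, PpPs, PpSs+PsPs times
    over a grid of crustal thickness H (km) and Vp/Vs ratio K (flat-layer moveout)."""
    w1, w2, w3 = weights
    stack = np.zeros((len(h_grid), len(k_grid)))
    for rf, p in zip(rfs, ray_params):                   # p: ray parameter in s/km
        eta_p = np.sqrt(1.0 / vp**2 - p**2)
        for j, k in enumerate(k_grid):
            eta_s = np.sqrt((k / vp) ** 2 - p**2)        # 1/Vs^2 = (K/Vp)^2
            for i, h in enumerate(h_grid):
                times = (h * (eta_s - eta_p),            # Ps
                         h * (eta_s + eta_p),            # PpPs
                         2 * h * eta_s)                  # PpSs + PsPs
                amp = lambda t: rf[min(int(round(t / dt)), len(rf) - 1)]
                stack[i, j] += w1 * amp(times[0]) + w2 * amp(times[1]) - w3 * amp(times[2])
    i_best, j_best = np.unravel_index(stack.argmax(), stack.shape)
    return h_grid[i_best], k_grid[j_best], stack

# Synthetic single receiver function with a Ps-like pulse near 6 s (dt = 0.1 s)
dt, rf = 0.1, np.zeros(600)
rf[60] = 1.0
h, k, _ = hk_stack([rf], ray_params=[0.06], dt=dt)
print(f"best H = {h:.1f} km, Vp/Vs = {k:.2f}")
```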
Procedia PDF Downloads 248
368 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model
Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero
Abstract:
Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the material particle or the diffusion coefficient adjustment direction. However, the model is mathematically complex as it is composed of several partial differential equations (PDEs) such as Fick’s law of diffusion, the MacInnes and Ohm’s equations, among other phenomena. Thus, to efficiently use the model in a time-domain simulation environment, the selection of the discretization technique is of a pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational times. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being of the second order in time and is intrinsically stable, thus overcoming the disadvantages of the simpler Euler explicit method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, thus it is feasible for both short- and long-term tests. This last remark is particularly important as this discretization technique would allow the user to implement parameter estimation and optimization techniques such as system or genetic parameter identification methods using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, as other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide the guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the non-eligibility of the simple Euler method for longterm tests will be presented. Afterwards, the Crank-Nicolson and the Chebyshev discretization methods will be compared in terms of accuracy and computational times under a wide range of battery operating scenarios. These include both long-term simulations for aging tests, and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electrical vehicle breaking and acceleration.Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods
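A minimal Crank-Nicolson sketch for a 1D diffusion equation of the electrolyte type, ∂c/∂t = D ∂²c/∂x², with zero-flux boundaries. It illustrates the averaging of implicit and explicit halves and the unconditional stability discussed above; the parameter values are illustrative and it is not the full coupled DFN solver.

```python
import numpy as np

def crank_nicolson_diffusion(c0, D, dx, dt, n_steps):
    """Advance dc/dt = D d2c/dx2 with zero-flux (Neumann) boundaries."""
    n = len(c0)
    r = D * dt / (2.0 * dx**2)
    # Second-difference operator with reflective (zero-flux) boundary rows
    lap = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    lap[0, 1] = lap[-1, -2] = 2.0
    A = np.eye(n) - r * lap          # implicit half
    B = np.eye(n) + r * lap          # explicit half
    c = c0.copy()
    for _ in range(n_steps):
        c = np.linalg.solve(A, B @ c)
    return c

# Illustrative values only (not fitted DFN parameters)
x = np.linspace(0.0, 1e-4, 101)                                   # 100 µm domain, m
c0 = 1000.0 + 200.0 * np.exp(-((x - 5e-5) / 1e-5) ** 2)           # mol/m^3 initial bump
c_end = crank_nicolson_diffusion(c0, D=2.5e-10, dx=x[1] - x[0], dt=1.0, n_steps=600)
# Large time step relative to dx**2, yet the scheme stays stable (no oscillation blow-up)
print("peak relaxes toward the mean:", c_end.max() < c0.max())
```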
Procedia PDF Downloads 23
367 Genetic Polymorphism and Insilico Study Epitope Block 2 MSP1 Gene of Plasmodium falciparum Isolate Endemic Jayapura
Authors: Arsyam Mawardi, Sony Suhandono, Azzania Fibriani, Fifi Fitriyah Masduki
Abstract:
Malaria is an infectious disease caused by Plasmodium sp. This disease has a high prevalence in Indonesia, especially in Jayapura. The vaccine that is currently being developed has not been effective in overcoming malaria. This is due to the high polymorphism in the Plasmodium genome especially in areas that encode Plasmodium surface proteins. Merozoite Surface Protein 1 (MSP1) Plasmodium falciparum is a surface protein that plays a role in the invasion process in human erythrocytes through the interaction of Glycophorin A protein receptors and sialic acid in erythrocytes with Reticulocyte Binding Proteins (RBP) and Duffy Adhesion Protein (DAP) ligands in merozoites. MSP1 can be targeted to be a specific antigen and predicted epitope area which will be used for the development of diagnostic and malaria vaccine therapy. MSP1 consists of 17 blocks, each block is dimorphic, and has been marked as the K1 and MAD20 alleles. Exceptions only in block 2, because it has 3 alleles, among others K1, MAD20 and RO33. These polymorphisms cause allelic variations and implicate the severity of patients infected P. falciparum. In addition, polymorphism of MSP1 in Jayapura isolates has not been reported so it is interesting to be further identified and projected as a specific antigen. Therefore, in this study, we analyzed the allele polymorphism as well as detected the MSP1 epitope antigen candidate on block 2 P. falciparum. Clinical samples of selected malaria patients followed the consecutive sampling method, examining malaria parasites with blood preparations on glass objects observed through a microscope. Plasmodium DNA was isolated from the blood of malarial positive patients. The block 2 MSP1 gene was amplified using PCR method and cloned using the pGEM-T easy vector then transformed to TOP'10 E.coli. Positive colonies selection was performed with blue-white screening. The existence of target DNA was confirmed by PCR colonies and DNA sequencing methods. Furthermore, DNA sequence analysis was done through alignment and formation of a phylogenetic tree using MEGA 6 software and insilico analysis using IEDB software to predict epitope candidate for P. falciparum. A total of 15 patient samples have been isolated from Plasmodium DNA. PCR amplification results show the target gene size about ± 1049 bp. The results of MSP1 nucleotide alignment analysis reveal that block 2 MSP1 genes derived from the sample of malarial patients were distributed in four different allele family groups, K1 (7), MAD20 (1), RO33 (0) and MSP1_Jayapura (10) alleles. The most commonly appears of the detected allele is MSP1_Jayapura single allele. There was no significant association between sex variables, age, the density of parasitemia and alel variation (Mann Whitney, U > 0.05), while symptomatic signs have a significant difference as a trigger of detectable allele variation (U < 0.05). In this research, insilico study shows that there is a new epitope antigen candidate from the MSP1_Jayapura allele and it is predicted to be recognized by B cells with 17 amino acid lengths in the amino acid sequence 187 to 203.Keywords: epitope candidate, insilico analysis, MSP1 P. falciparum, polymorphism
Procedia PDF Downloads 180
366 Dynamic Control Theory: A Behavioral Modeling Approach to Demand Forecasting amongst Office Workers Engaged in a Competition on Energy Shifting
Authors: Akaash Tawade, Manan Khattar, Lucas Spangher, Costas J. Spanos
Abstract:
Many grids are increasing the share of renewable energy in their generation mix, which is making energy generation less controllable. Buildings, which consume nearly 33% of all energy, are a key target for demand response, i.e., mechanisms for demand to meet supply. Understanding the behavior of office workers is a start towards developing demand response for one sector of building technology. The literature notes that dynamic computational modeling can be predictive of individual action, especially given that occupant behavior is traditionally abstracted away from demand forecasting. Recent work founded on Social Cognitive Theory (SCT) has provided a promising conceptual basis for modeling behavior, personal states, and environment using control-theoretic principles. Here, an adapted linear dynamical system of latent states and exogenous inputs is proposed to simulate energy demand amongst office workers engaged in a social energy-shifting game. The energy-shifting competition is implemented in an office in Singapore that is connected to a minigrid of buildings with a consistent 'price signal.' This signal is translated into a 'points signal' by a reinforcement learning (RL) algorithm to influence participant energy use. The dynamic model operates at the intersection of the points signals, baseline energy consumption trends, and SCT behavioral inputs to simulate future outcomes. This study analyzes how the dynamic model trains an RL agent and, subsequently, how accurately load deferability can be simulated. The results offer a generalizable behavioral model for energy competitions that provides a framework for further research on transfer learning for RL and, more broadly, transactive control. Keywords: energy demand forecasting, social cognitive behavioral modeling, social game, transfer learning
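As a concrete reading of the modeling approach described above, the following is a minimal sketch of a discrete-time linear dynamical system with latent behavioral states driven by exogenous inputs such as the points signal and baseline consumption; the state-space matrices and input series are illustrative assumptions, not the fitted model from the study.

```python
# Minimal sketch (not the authors' fitted model): latent states x_t evolve as
# x_{t+1} = A x_t + B u_t and simulated demand is y_t = C x_t, where u_t holds
# exogenous inputs (points signal, baseline consumption). A, B, C are placeholders.
import numpy as np

A = np.array([[0.9, 0.05], [0.0, 0.8]])   # latent-state dynamics
B = np.array([[-0.3, 0.1], [0.2, 0.0]])   # effect of inputs on latent states
C = np.array([[1.0, 0.5]])                # map latent states to observed demand

def simulate(x0, inputs):
    x, demand = x0, []
    for u in inputs:                      # u = [points_signal, baseline_kwh]
        x = A @ x + B @ u
        demand.append((C @ x).item())
    return demand

hours = 24
inputs = np.column_stack([np.sin(np.linspace(0, np.pi, hours)),   # toy points signal
                          np.full(hours, 1.2)])                   # baseline kWh
print(simulate(np.zeros(2), inputs))
```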
Procedia PDF Downloads 107
365 Using Machine Learning to Classify Different Body Parts and Determine Healthiness
Authors: Zachary Pan
Abstract:
Our general mission is to solve the problem of classifying images into different body part types and deciding whether each of them is healthy or not. For now, however, we will determine healthiness for only one-sixth of the body parts, specifically the chest: we will detect pneumonia in X-ray scans of those chest images. Doctors can use this type of AI as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach is to split the problem into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Using the test set, we can obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. To increase this testing accuracy, we can also apply more complex algorithms to the models, like multiplicative weight update. For the second part of the problem, determining if the body part is healthy, we can have another dataset consisting of healthy and non-healthy images of the specific body part and once again split it into test and training sets. We then use another neural network to train on those training set images and use the testing set to estimate its accuracy. We do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, the neural network with multiplicative weight update, the neural network with the black box algorithm, and the convolutional neural network achieved 96.83, 97.33, 97.83, 96.67, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines whether the images are healthy or not is around 78.37 percent. Keywords: body part, healthcare, machine learning, neural networks
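To illustrate the classification stage, here is a minimal convolutional network sketch for body-part classification of grayscale scans; the input size, layer widths, number of classes, and training snippet are assumptions for demonstration and do not reproduce the models or datasets used in the study.

```python
# Illustrative sketch, not the study's network: a small CNN classifier for
# grayscale scans resized to 64x64 with NUM_CLASSES body-part categories.
import torch
import torch.nn as nn

NUM_CLASSES = 6  # assumed number of body-part classes for this sketch

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, NUM_CLASSES)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
dummy_batch = torch.randn(8, 1, 64, 64)           # 8 fake grayscale images
logits = model(dummy_batch)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, NUM_CLASSES, (8,)))
loss.backward()                                   # one illustrative training step
print(logits.shape, float(loss))
```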
Procedia PDF Downloads 103
364 Aeromagnetic Data Interpretation and Source Body Evaluation Using Standard Euler Deconvolution Technique in Obudu Area, Southeastern Nigeria
Authors: Chidiebere C. Agoha, Chukwuebuka N. Onwubuariri, Collins U. Amasike, Tochukwu I. Mgbeojedo, Joy O. Njoku, Lawson J. Osaki, Ifeyinwa J. Ofoh, Francis B. Akiang, Dominic N. Anuforo
Abstract:
In order to interpret the airborne magnetic data and evaluate the approximate location, depth, and geometry of the magnetic sources within the Obudu area using the standard Euler deconvolution method, very high-resolution aeromagnetic data over the area were acquired, processed digitally, and analyzed using Oasis Montaj 8.5 software. Data analysis and enhancement techniques, including reduction to the equator, horizontal derivative, first and second vertical derivatives, upward continuation, and regional-residual separation, were carried out for the purpose of detailed data interpretation. Standard Euler deconvolution for structural indices of 0, 1, 2, and 3 was also carried out, and the respective maps were obtained using the Euler deconvolution algorithm. Results show that the total magnetic intensity ranges from -122.9 nT to 147.0 nT, the regional intensity varies between -106.9 nT and 137.0 nT, while the residual intensity ranges between -51.5 nT and 44.9 nT, clearly indicating the masking effect of deep-seated structures over surface and shallow subsurface magnetic materials. Results also indicate that the positive residual anomalies have an NE-SW orientation, which coincides with the trend of major geologic structures in the area. Euler deconvolution for all the considered structural indices yields depths to magnetic sources ranging from the surface to more than 2000 m. Interpretation of the various structural indices revealed the locations and depths of the source bodies and the existence of geologic models including sills, dykes, pipes, and spherical structures. This area is characterized by intrusive and very shallow basement materials and represents an excellent prospect for solid mineral exploration and development. Keywords: Euler deconvolution, horizontal derivative, Obudu, structural indices
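For context on the method itself, the sketch below shows a least-squares solve of Euler's homogeneity equation, (x - x0) dT/dx + (y - y0) dT/dy + (z - z0) dT/dz = N(B - T), over a single data window for an assumed structural index; the field values and gradients here are synthetic placeholders, whereas in practice they come from the processed grid and its derivatives.

```python
# Minimal sketch of a standard Euler deconvolution solve over one data window.
# Unknowns are the source position (x0, y0, z0) and background level B.
import numpy as np

def euler_window_solve(x, y, z, T, dTdx, dTdy, dTdz, N):
    """Least-squares estimate of (x0, y0, z0, B) for structural index N."""
    G = np.column_stack([dTdx, dTdy, dTdz, N * np.ones_like(T)])
    d = x * dTdx + y * dTdy + z * dTdz + N * T
    sol, *_ = np.linalg.lstsq(G, d, rcond=None)
    return sol                                     # x0, y0, z0, B

# Synthetic window of 25 observation points at flight elevation z = 0.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 400, 5), np.linspace(0, 400, 5))
x, y = x.ravel(), y.ravel()
z = np.zeros_like(x)
T = rng.normal(size=x.size)                        # placeholder field values
dTdx, dTdy, dTdz = (rng.normal(size=x.size) for _ in range(3))
print(euler_window_solve(x, y, z, T, dTdx, dTdy, dTdz, N=1))  # N=1 ~ dyke/sill
```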
Procedia PDF Downloads 81
363 Against the Philosophical-Scientific Racial Project of Biologizing Race
Authors: Anthony F. Peressini
Abstract:
The concept of race has recently come prominently back into discussion in the context of medicine and medical science, along with renewed effort to biologize racial concepts. This paper argues that this renewed effort to biologize race by way of medicine and population genetics fails on its own terms, and, more importantly, that the philosophical project of biologizing race ought to be recognized for what it is—a retrograde racial project—and abandoned. There is clear agreement that standard racial categories and concepts cannot be grounded in the old way of racial naturalism, which understands race as a real, interest-independent biological/metaphysical category in which its members share "physical, moral, intellectual, and cultural characteristics." But equally clear is the very real and pervasive presence of racial concepts in individual and collective consciousness and behavior, and so it remains a pressing area in which to seek deeper understanding. Recent philosophical work has endeavored to reconcile these two observations by developing a "thin" conception of race, grounded in scientific concepts but without the moral and metaphysical content. Such "thin," science-based analyses take the "commonsense" or "folk" sense of race as it functions in contemporary society as the starting point for their philosophic-scientific projects to biologize racial concepts. A "philosophic-scientific analysis" is a special case of the cornerstone of analytic philosophy: a conceptual analysis. That is, a rendering of a concept into the more perspicuous concepts that constitute it. Thus a philosophic-scientific account of a concept is an attempt to work out an analysis of a concept that makes use of empirical science's insights to ground, legitimate, and explicate the target concept in terms of clearer concepts informed by empirical results. The focus in this paper is on three recent philosophic-scientific cases for retaining "race" that all share this general analytic schema but that make use of "medical necessity," population genetics, and human genetic clustering, respectively. After arguing that each of these three approaches suffers from internal difficulties, the paper considers the general analytic schema employed by such biologizations of race. While such endeavors are inevitably prefaced with the disclaimer that the theory to follow is non-essentialist and non-racialist, the case will be made that such efforts are not neutral scientific or philosophical projects but rather are what sociologists call a racial project, that is, one of many competing efforts that conjoin a representation of what race means to specific efforts to determine social and institutional arrangements of power, resources, authority, etc. Accordingly, philosophic-scientific biologizations of race, since they begin from and condition their analyses on "folk" conceptions, cannot pretend to be "prior to" other disciplinary insights, nor to transcend the social-political dynamics involved in formulating theories of race. As a result, such traditional philosophical efforts can be seen to be disciplinarily parochial and to address only a caricature of a large and important human problem, thereby further contributing to the unfortunate isolation of philosophical thinking about race from other disciplines. Keywords: population genetics, ontology of race, race-based medicine, racial formation theory, racial projects, racism, social construction
Procedia PDF Downloads 273
362 A General Form of Characteristics Method Applied on Minimum Length Nozzles Design
Authors: Merouane Salhi, Mohamed Roudane, Abdelkader Kirad
Abstract:
In this work, we present a new form of the method of characteristics, a technique for solving partial differential equations. Typically, it applies to first-order equations; the aim of the method is to reduce a partial differential equation to a family of ordinary differential equations along which the solution can be integrated from some initial data. The new form is developed under real-gas theory, because when the thermal and caloric imperfections of a gas increase, the specific heats and their ratio no longer remain constant and start to vary with the gas parameters. The gas no longer behaves as a perfect gas; its equation of state changes to that of a real gas. The presented characteristic equations remain valid regardless of the area or field of study. Here, the developed Prandtl-Meyer function is inserted into the mathematical system to obtain a new model in which the effect of the stagnation pressure is taken into account. In this case, the effects of molecular size and intermolecular attraction forces intervene to correct the equation of state, the thermodynamic parameters, and the value of the Prandtl-Meyer function. With the assumption that Berthelot's equation of state accounts for molecular size and intermolecular force effects, expressions are developed for analyzing the supersonic flow of a thermally and calorically imperfect gas. The supersonic parameters depend directly on the stagnation parameters of the combustion chamber. The resolution is carried out by the finite-difference method using a predictor-corrector algorithm. As a result, the developed mathematical model is used to design 2D minimum-length nozzles under the effect of the stagnation parameters of the fluid flow. A comparison of nozzle shapes and characteristics for air is made between the perfect gas (PG) and high-temperature models on the one hand and our real-gas results on the other. Keywords: numerical methods, nozzles design, real gas, stagnation parameters, supersonic expansion, the characteristics method
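Since the paper builds on the Prandtl-Meyer function, the sketch below evaluates the classical perfect-gas form of that function, i.e., the PG baseline against which the real-gas results are compared; it is not the corrected real-gas expression developed in the paper, and gamma = 1.4 assumes air.

```python
# Perfect-gas baseline only: nu(M) is the Prandtl-Meyer expansion angle used in
# minimum-length nozzle design via the method of characteristics.
import math

def prandtl_meyer(M, gamma=1.4):
    if M < 1.0:
        raise ValueError("Prandtl-Meyer function is defined for supersonic Mach numbers")
    a = (gamma + 1.0) / (gamma - 1.0)
    return (math.sqrt(a) * math.atan(math.sqrt((M * M - 1.0) / a))
            - math.atan(math.sqrt(M * M - 1.0)))

# For a design exit Mach number of 2.0, the maximum wall angle of a 2D
# minimum-length nozzle is nu(M_exit) / 2.
M_exit = 2.0
print(math.degrees(prandtl_meyer(M_exit)) / 2.0)   # ~13.2 degrees
```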
Procedia PDF Downloads 243
361 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photons Beams in Varian Linear Accelerator
Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain
Abstract:
Objective: The main purpose of this study is to compare the percent depth dose (PDD) and in-plane and cross-plane profiles of the Varian golden beam data to the measured data of 6 and 18 MV photons for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires an extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose-rate tables for open and wedged fields into the treatment planning system, enabling calculation of the monitor units (MUs) and dose distribution. Varian offers a generic set of beam data as reference data; however, it is not recommended for clinical use. In this study, we compared the generic beam data with the measured beam data to evaluate the reliability of the generic beam data for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields were measured for different field sizes and at different depths as per Varian's algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom with a Semiflex ion chamber and MEPHYSTO software. The openly available Varian golden beam data were compared with the measured data to evaluate the accuracy of the golden beam data for the commissioning of the Eclipse treatment planning system. Results: The deviation between the measured and golden beam data was at most 2%. In the PDDs, the deviation increases more at deeper depths than at shallower depths. Similarly, the profiles show the same trend of increasing deviation at larger field sizes and increasing depths. Conclusion: The study shows that the percentage deviation between the measured and golden beam data is within the acceptable tolerance and can therefore be used for the commissioning process; however, verification of a small subset of the acquired data against the golden beam data should be mandatory before clinical use. Keywords: percent depth dose, flatness, symmetry, golden beam data
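As a simple illustration of the verification step, the following sketch compares a measured PDD curve with the corresponding golden beam data point by point against a 2% tolerance; the two arrays are made-up values for demonstration, not measured or Varian data.

```python
# Illustrative check only (not the clinical QA procedure): point-by-point percent
# deviation between a measured PDD curve and the golden beam PDD, flagged at 2%.
import numpy as np

measured = np.array([100.0, 96.8, 86.5, 77.1, 68.4, 60.9])  # % dose vs depth (made up)
golden   = np.array([100.0, 97.2, 87.0, 77.9, 69.5, 62.1])  # % dose vs depth (made up)

deviation = 100.0 * (measured - golden) / golden
within_tolerance = np.all(np.abs(deviation) <= 2.0)
print(np.round(deviation, 2), within_tolerance)
```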
Procedia PDF Downloads 489
360 Comparison of Direction of Arrival Estimation Method for Drone Based on Phased Microphone Array
Authors: Jiwon Lee, Yeong-Ju Go, Jong-Soo Choi
Abstract:
Drones were first developed for military use and were used in World War 1, but recently they have been applied in a variety of fields. Several companies actively utilize drone technology to strengthen their services; in agriculture, drones are used for crop monitoring and sowing, and other people use drones for hobby activities such as photography. However, as the range of drone use expands rapidly, problems caused by drones, such as improper flying, privacy violations, and terrorism, are also increasing. As the need for monitoring and tracking of drones increases, research is progressing accordingly. A drone detection system estimates the position of the drone using the physical phenomena that occur when the drone flies. The drone detection systems being developed utilize many approaches, such as radar, infrared cameras, and acoustic detection. Among the various drone detection systems, the acoustic detection system is advantageous in that the microphone array is smaller, less expensive, and easier to operate than other systems. In this paper, the acoustic signal is acquired using a minimal number of microphones while the drone is flying, and the direction of the drone is estimated. When estimating the direction of arrival (DOA), there is a method of calculating the DOA based on the time difference of arrival (TDOA) and a method of calculating the DOA based on beamforming. The TDOA technique requires fewer microphones than the beamforming technique but is weak in noisy environments and can only estimate the DOA of a single source. The beamforming technique requires more microphones than the TDOA technique; however, it is robust to noise and can simultaneously estimate the DOAs of several drones. When estimating the DOA using acoustic signals emitted from the drone, it is impossible to measure the position of the drone, and only the direction can be estimated. To overcome this problem, in this work we show how to estimate the position of drones by arranging multiple microphone arrays. The microphone array used in the experiments consisted of four microphones in a tetrahedral arrangement. We simulated the performance of each DOA algorithm and demonstrated the simulation results through experiments. Keywords: acoustic sensing, direction of arrival, drone detection, microphone array
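To make the TDOA approach concrete, the following is a minimal sketch for a single two-microphone pair under a far-field assumption: the delay is estimated with GCC-PHAT and converted to an arrival angle via arcsin(c*tau/d). The sampling rate, microphone spacing, and synthetic broadband source are illustrative assumptions, not the tetrahedral-array processing used in the experiments.

```python
# Minimal sketch of TDOA-based DOA for one microphone pair (far-field assumption).
import numpy as np

def gcc_phat_delay(sig, ref, fs):
    """Delay of sig relative to ref in seconds (positive if sig arrives later)."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12                       # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    lags = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))
    return (np.argmax(np.abs(lags)) - n // 2) / fs

fs, d, c = 48000, 0.1, 343.0                     # Hz, mic spacing (m), speed of sound (m/s)
source = np.random.default_rng(0).standard_normal(fs)   # broadband stand-in for drone noise
mic1 = source
mic2 = np.roll(source, 7)                        # mic2 hears the source 7 samples later
tau = gcc_phat_delay(mic2, mic1, fs)             # estimated delay, ~7/fs seconds
angle = np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0)))
print(tau, angle)                                # ~1.46e-4 s, ~30 degrees
```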
Procedia PDF Downloads 160
359 A Computational Study of Very High Turbulent Flow and Heat Transfer Characteristics in Circular Duct with Hemispherical Inline Baffles
Authors: Dipak Sen, Rajdeep Ghosh
Abstract:
This paper presents a computational study of steady-state, three-dimensional, very high turbulent flow and heat transfer characteristics in a constant-surface-temperature circular duct fitted with 90° hemispherical inline baffles. The computations are based on the realizable k-ɛ model with the standard wall function, considering the finite volume method, and the SIMPLE algorithm has been implemented. Computations are carried out for Reynolds numbers, Re, ranging from 80000 to 120000, a Prandtl number, Pr, of 0.73, and pitch ratios, PR, of 1, 2, 3, 4, and 5 based on the hydraulic diameter of the channel, the hydrodynamic entry length, the thermal entry length, and the test section. Ansys Fluent 15.0 software has been used to solve the flow field. The study reveals that the circular pipe with baffles has a higher Nusselt number and friction factor than the smooth circular pipe without baffles. The maximum Nusselt number and friction factor are obtained for PR = 5 and PR = 1, respectively. The Nusselt number increases as the pitch ratio increases in the range of study; however, the friction factor decreases up to PR = 3, after which it becomes almost constant up to PR = 5. The thermal enhancement factor increases with increasing pitch ratio but decreases slightly with increasing Reynolds number in the range of study, becoming almost constant at higher Reynolds numbers. The computational results reveal that the optimum thermal enhancement factor of the 90° inline hemispherical baffle is about 1.23 for pitch ratio 5 at Reynolds number 120000. They also show that the optimum pitch ratio at which the baffles should be installed in such very high turbulent flows is 5. The results show that pitch ratio and Reynolds number play an important role in both fluid flow and heat transfer characteristics. Keywords: friction factor, heat transfer, turbulent flow, circular duct, baffle, pitch ratio
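For reference, the sketch below evaluates one common definition of the thermal enhancement factor at equal pumping power, TEF = (Nu/Nu0) / (f/f0)^(1/3), using the Dittus-Boelter and Blasius correlations as assumed smooth-pipe baselines; both the choice of correlations and the baffled-duct Nu and f values are illustrative assumptions rather than quantities taken from the paper.

```python
# Illustrative calculation only: thermal enhancement factor at equal pumping power.
def smooth_pipe_baselines(Re, Pr=0.73):
    Nu0 = 0.023 * Re**0.8 * Pr**0.4       # Dittus-Boelter correlation (heating)
    f0 = 0.316 * Re**-0.25                # Blasius friction factor
    return Nu0, f0

def thermal_enhancement_factor(Nu, f, Re, Pr=0.73):
    Nu0, f0 = smooth_pipe_baselines(Re, Pr)
    return (Nu / Nu0) / (f / f0) ** (1.0 / 3.0)

# Hypothetical baffled-duct results at Re = 120000.
print(thermal_enhancement_factor(Nu=430.0, f=0.055, Re=120000))   # ~1.24
```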
Procedia PDF Downloads 372
358 Embryonic Aneuploidy – Morphokinetic Behaviors as a Potential Diagnostic Biomarker
Authors: Banafsheh Nikmehr, Mohsen Bahrami, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Mallory Pitts, Tolga B. Mesen, Tamer M. Yalcinkaya
Abstract:
The number of people who receive in vitro fertilization (IVF) treatment has increased on a startling trajectory over the past two decades. Despite advances in this field, particularly the introduction of intracytoplasmic sperm injection (ICSI) and preimplantation genetic screening (PGS), IVF success rates remain low. A major factor contributing to IVF failure is embryonic aneuploidy (abnormal chromosome content), which often results in miscarriage and birth defects. Although PGS is often used as the standard diagnostic tool to identify aneuploid embryos, it is an invasive approach that could affect embryo development, and it remains inaccessible to many patients due to its high cost. As such, there is a clear need for a non-invasive, cost-effective approach to identify euploid embryos for single embryo transfer (SET). The reported differences between the morphokinetic behaviors of aneuploid and euploid embryos have shown promise in addressing this need. However, the current literature is inconclusive, and further research is urgently needed to translate current findings into clinical diagnostics. In this ongoing study, we found significant differences between the morphokinetic behaviors of euploid and aneuploid embryos, which provides important insights and reaffirms the promise of such behaviors for developing non-invasive methodologies. Methodology: A total of 242 embryos (euploid: 149, aneuploid: 93) from 74 patients who underwent IVF treatment in Carolinas Fertility Clinics in Winston-Salem, NC, were analyzed. All embryos were incubated in an EmbryoScope incubator. The patients were randomly selected from January 2019 to June 2021, with most patients having both euploid and aneuploid embryos. All embryos reached the blastocyst stage and had known PGS outcomes. The ploidy assessment was done by a third-party testing laboratory on day 5-7 embryo biopsies. The morphokinetic variables of each embryo were measured with the EmbryoViewer software (Unisense FertiliTech) on time-lapse images using 7 focal depths. We compared the times to pronuclei fading (tPNf), division to 2, 3, ..., 9 cells (t2, t3, ..., t9), start of embryo compaction (tSC), morula formation (tM), start of blastulation (tSB), blastocyst formation (tB), and blastocyst expansion (tEB), as well as the intervals between them (e.g., c23 = t3 - t2). We used a mixed regression method for our statistical analyses to account for the correlation between multiple embryos per patient. Major findings: The average age of the patients was 35.04 years. The average patient age associated with euploid and aneuploid embryos was not different (P = 0.6454). We found a significant difference in c45 = t5 - t4 (P = 0.0298). Our results indicated that this interval on average lasts significantly longer for aneuploid embryos: c45(aneuploid) = 11.93 h vs. c45(euploid) = 7.97 h. In a separate analysis limited to embryos from the same patients (patients = 47, total embryos = 200, euploid = 112, aneuploid = 88), we obtained the same results (P = 0.0316). The statistical power for this analysis exceeded 87%. No other variable was different between the two groups. Conclusion: Our results demonstrate the importance of morphokinetic variables as potential biomarkers that could aid in non-invasively characterizing euploid and aneuploid embryos. We seek to study a larger population of embryos and incorporate embryo quality in future studies. Keywords: IVF, embryo, euploidy, aneuploidy, morphokinetic
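As an illustration of the statistical approach described above, the sketch below fits a mixed-effects model comparing the c45 = t5 - t4 interval between euploid and aneuploid embryos with a random intercept per patient; the data frame holds made-up values, and the column names are assumptions about the layout rather than the study's actual dataset.

```python
# Illustrative sketch only (not the authors' analysis code): mixed regression of
# c45 on ploidy with patient as the grouping (random-intercept) factor.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "patient": ["P1", "P1", "P2", "P2", "P3", "P3", "P4", "P4"],
    "ploidy":  ["euploid", "aneuploid"] * 4,
    "c45":     [8.1, 12.3, 7.5, 11.0, 8.4, 13.1, 7.9, 10.6],   # hours (made up)
})

model = smf.mixedlm("c45 ~ ploidy", df, groups=df["patient"])
result = model.fit()
print(result.summary())        # the ploidy fixed effect estimates the c45 difference
```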
Procedia PDF Downloads 88