Search results for: recurrent artificial neural network
3471 Ceratocystis manginecans: Causal Agent of a Destructive Disease of Mangoes in Pakistan
Authors: Asma Rashid, Shazia Iram, Iftikhar Ahmad
Abstract:
Mango sudden death is an emerging problem in Pakistan: it is observed in almost all mango growing areas, with severity ranging from 2-5% in Punjab and 5-10% in Sindh. Symptoms on affected trees include bark splitting, discoloration of the vascular tissue, wilting, gummosis and, finally, rapid death. A total of n = 45 isolates were obtained from different mango growing areas of Punjab and Sindh. The pathogenicity of these fungal isolates was tested by artificial inoculation of different hosts (potato tubers, detached mango leaves, detached mango twigs and mango plants) under controlled conditions, and all proved pathogenic, with varying degrees of aggressiveness relative to the control. The findings showed that, of the four methods, potato tuber inoculation was the most suitable, as it fixes the inoculum at the target site. The increased fungal growth and spore numbers may be due to the soft tissue of potato tubers, through which Ceratocystis isolates can easily spread. Lesion area on potato tubers was in the range of 0.14-7.09 cm2, followed by detached mango twigs (0.09-0.48 cm2). All pathological results were highly significant at P < 0.05 by ANOVA; differences between isolates were non-significant, although the isolates had a positive effect on lesion area. Re-isolation of the respective fungi was achieved with 100 percent success, verifying Koch’s postulates. DNA of the fungal pathogens was extracted by the phenol-chloroform method. Amplification was performed with ITS, beta-tubulin, and Transcription Elongation Factor (EF1-a) gene primers; the amplicons were sequenced and compared against NCBI records, showing 99-100% similarity with Ceratocystis manginecans, which formed a strongly supported sub-clade in the phylogenetic tree.
The results of this work will help relate isolates to their regions and provide information about the pathogenicity level of isolates, which will be useful for developing management policies to reduce the losses caused by mango sudden death in orchards.
Keywords: artificial inoculation, mango, Ceratocystis manginecans, phylogenetic, screening
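As an illustration of the kind of analysis reported above, the sketch below runs a one-way ANOVA on hypothetical lesion-area data for three inoculation methods; the values are invented for illustration (drawn from the ranges quoted in the abstract) and are not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical lesion areas (cm^2) for three inoculation methods
potato = rng.uniform(0.14, 7.09, 15)   # potato tuber inoculation
twigs = rng.uniform(0.09, 0.48, 15)    # detached mango twigs
leaves = rng.uniform(0.05, 0.30, 15)   # detached mango leaves

# One-way ANOVA across the three methods
f_stat, p_value = f_oneway(potato, twigs, leaves)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("Differences among methods are significant at P < 0.05")
```

With clearly separated ranges like these, the test is expected to reject the null hypothesis of equal means.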
Procedia PDF Downloads 250
3470 Transcriptional Response of Honey Bee to Differential Nutritional Status and Nosema Infection
Authors: Farida Azzouz-Olden, Arthur G. Hunt, Gloria Degrandi-Hoffman
Abstract:
Bees are confronting several environmental challenges, including the intermingled effects of malnutrition and disease. Intuitively, pollen is the healthiest nutritional choice; however, commercial substitutes, such as BeePro and MegaBee, are widely used. Herein we examined how feeding natural and artificial diets shapes transcription in the abdomen of the honey bee, and how transcription shifts in combination with Nosema parasitism. Gene ontology enrichment revealed that, compared with a poor diet (carbohydrates (C)), bees fed pollen (P > C), BeePro (B > C), and MegaBee (M > C) showed a broad upregulation of metabolic processes, especially lipid metabolism; however, pollen feeding promoted more functions and superior proteolysis. The superiority of the pollen diet was also evident in the remarkable overexpression of vitellogenin in bees fed pollen instead of MegaBee or BeePro. Upregulation of bioprocesses under carbohydrate feeding compared to pollen (C > P) revealed a clearly poor nutritional status, uncovering stark expression changes that were slight or absent relative to BeePro (C > B) or MegaBee (C > M). Poor-diet feeding (C > P) induced starvation-response genes and the hippo signaling pathway, while repressing growth through different mechanisms. Carbohydrate feeding (C > P) also elicited ‘adult behavior’ and developmental processes, suggesting a transition to foraging. Finally, it altered the ‘circadian rhythm’, reflecting the role of this mechanism in the adaptation to nutritional stress in mammals. Nosema-infected bees fed pollen compared to carbohydrates (PN > CN) upheld certain bioprocesses of uninfected bees (P > C). Poor nutritional status was more apparent against pollen (CN > PN) than BeePro (CN > BN) or MegaBee (CN > MN). Nosema accentuated the effects of malnutrition, since more starvation-response genes and stress-response mechanisms were upregulated in CN > PN compared to C > P.
The bioprocess ‘macromolecular complex assembly’ was also enriched in CN > PN and involved genes associated with human HIV and/or influenza, thus providing potential candidates for bee-Nosema interactions. Finally, the enzyme Duox emerged as essential for gut defense in bees, as in Drosophila. These results provide evidence of the superior nutritional status of bees fed pollen instead of artificial substitutes in terms of overall health, even in the presence of a pathogen.
Keywords: honeybee, immunity, Nosema, nutrition, RNA-seq
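Gene ontology enrichment of the kind described above is conventionally backed by a hypergeometric (or Fisher) test. The sketch below shows that calculation on invented counts; the gene numbers are illustrative assumptions, not the study's data.

```python
from scipy.stats import hypergeom

# Hypothetical numbers: 12000 annotated genes in the background, 300 of
# them in a GO term such as "lipid metabolic process"; 400 genes
# upregulated in a contrast like P > C, 40 of which fall in that term.
M, n, N, k = 12000, 300, 400, 40

# P(X >= k): probability of drawing at least k term genes by chance
p_enrich = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value = {p_enrich:.3e}")
```

Observing 40 term genes where only about 10 (400 × 300 / 12000) are expected by chance yields a vanishingly small p-value, i.e. a strongly enriched term.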
Procedia PDF Downloads 159
3469 In Vitro Evaluation of an Artificial Venous Valve
Authors: Joon Hock Yeo, Munirah Ismail
Abstract:
Chronic venous insufficiency is a condition in which the venous wall or venous valves fail to operate properly, making it difficult for blood to return from the lower extremities to the heart. It affects many people worldwide. In the last decade, there have been many new and innovative designs of prosthetic venous valves to replace malfunctioning native venous valves; however, to the authors’ knowledge, there is thus far no successful prosthetic venous valve. In this project, we have developed a venous valve that operates under low pressure. While further testing is warranted, this unique valve could potentially alleviate problems associated with chronic venous insufficiency.
Keywords: prosthetic venous valve, bi-leaflet valve, chronic venous insufficiency, valve hemodynamics
Procedia PDF Downloads 200
3468 Electric Arc Furnaces as a Source of Voltage Fluctuations in the Power System
Authors: Zbigniew Olczykowski
Abstract:
The paper presents the impact of electric arc furnace operation on the power grid. The arc furnace is modeled under different power conditions of the steelworks, and the paper describes how to determine the increase in voltage fluctuations caused by arc furnaces working in parallel. Indicators characterizing power quality were recorded over several measurement cycles, simultaneously at three points of the grid with different short-circuit powers and different rated voltages; the measurements analyzed in this paper were conducted in the mains of a Polish steelworks. The power quality measurements comprised one-week cycles in accordance with EN 50160. Analyzing the results obtained simultaneously at the three grid points makes it possible to determine the actual propagation of the disturbances generated by the device. Based on the model studies and the measured power quality indices, the effect of a specific arc furnace on the mains is established. The minimum short-circuit power of the network, necessary to limit the voltage fluctuations generated by arc furnaces, is also estimated.
Keywords: arc furnaces, long-term flicker, measurement and modeling of power quality, voltage fluctuations
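A common way to estimate the combined flicker severity of arc furnaces working in parallel, as discussed above, is the general summation law of IEC 61000-3-7. Whether the paper uses exactly this formula is an assumption, and the Pst values below are purely illustrative.

```python
def combined_flicker(pst_values, alpha=3.0):
    """General summation law for flicker from several sources
    (IEC 61000-3-7): Pst = (sum Pst_i^alpha)^(1/alpha).
    alpha ~ 3 is often quoted for arc furnaces; it is chosen here
    as an illustrative assumption."""
    return sum(p ** alpha for p in pst_values) ** (1.0 / alpha)

# Two furnaces operating in parallel, each contributing Pst = 1.2
pst = combined_flicker([1.2, 1.2])
print(f"Combined Pst = {pst:.3f}")
```

Note that the combined value (1.2 × 2^(1/3) ≈ 1.51) is well below the arithmetic sum 2.4, reflecting the partial non-coincidence of the two sources.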
Procedia PDF Downloads 292
3467 Lessons Learned in Developing a Clinical Information System and Electronic Health Record (EHR) System That Meet the End User Needs and State of Qatar's Emerging Regulations
Authors: Darshani Premaratne, Afshin Kandampath Puthiyadath
Abstract:
The Government of Qatar is taking active steps to improve the quality of the healthcare industry in the State of Qatar. In this initiative, the development and market introduction of a Clinical Information System and Electronic Health Record (EHR) system proved to be a highly challenging process. The introduction of an EHR system to the Qatar healthcare industry was undertaken together with an organization specializing in EHR system development and with the blessing of the Health Ministry of Qatar. Initially, a market survey was carried out to understand the requirements. Secondly, the available government regulations, needs, and possible upcoming regulations were carefully studied before resources were deployed for software development. Sufficient flexibility was allowed to cater for changes in both the market and the regulations. As a first initiative, a system was developed that integrates a referral network, with referral clinic and laboratory systems, for all single-doctor (and small-scale) clinics. Bringing the isolated single-doctor clinics spread across the state into an integrated referral network, together with a referral hospital, needs a coherent steering force and a solid top-down framework. This paper discusses the lessons learned in developing the single-doctor referral network with an EHR system, obtaining the approval of the Health Ministry, and introducing it to the industry. It was concluded that development of this nature requires a continuous balance between market requirements and upcoming regulations. Accelerating development based on emerging needs, implementing based on end-user needs while complying with the regulations, diffusion and uptake of demand-driven and evidence-based products, tools, and strategies, and proper utilization of findings were all found to be paramount to the successful development of the end product.
Development of a full-scale Clinical Information System and EHR system is underway based on the lessons learned.
Keywords: clinical information system, electronic health record, state regulations, integrated referral network of clinics
Procedia PDF Downloads 364
3466 Performance Evaluation of Wideband Code Division Multiplication Network
Authors: Osama Abdallah Mohammed Enan, Amin Babiker A/Nabi Mustafa
Abstract:
The aim of this study is to evaluate and analyze different parameters of WCDMA (Wideband Code Division Multiple Access). The study also incorporates a brief yet thorough analysis of WCDMA’s components and internal architecture, and examines the different power controls: open-loop power control, closed (inner) loop power control, and outer-loop power control. The different handover techniques of WCDMA are also illustrated, including hard handover, inter-system handover, and soft/softer handover, and the duplexing techniques are described. The study then discusses the WCDMA parameters that drive the system’s QoS, which may help the operator design and develop an adequate network configuration. In addition, the study investigates parameters including bit energy per noise spectral density (Eb/No), noise rise, and Bit Error Rate (BER). Simulating these parameters in the MATLAB environment showed that, for a given Eb/No value, system capacity increases with the reuse factor; that noise rise decreases for lower data rates and lower interference levels; and that BER depends on the modulation technique used.
Keywords: duplexing, handover, loop power control, WCDMA
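As a minimal illustration of the Eb/No-BER relationship examined above, the sketch below evaluates the theoretical BER of coherent BPSK over an AWGN channel. The choice of BPSK is an assumption made for illustration; the study's MATLAB simulation may use other modulations.

```python
import math

def ber_bpsk(ebno_db):
    """Theoretical BER of coherent BPSK over AWGN:
    BER = Q(sqrt(2*Eb/No)) = 0.5 * erfc(sqrt(Eb/No))."""
    ebno = 10 ** (ebno_db / 10)   # convert dB to linear
    return 0.5 * math.erfc(math.sqrt(ebno))

for ebno_db in (0, 4, 8):
    print(f"Eb/No = {ebno_db} dB -> BER = {ber_bpsk(ebno_db):.3e}")
```

As expected, BER falls steeply as Eb/No grows, which is why link-budget analysis is usually framed in terms of the Eb/No required for a target BER.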
Procedia PDF Downloads 217
3465 Understanding the Basics of Information Security: An Act of Defense
Authors: Sharon Q. Yang, Robert J. Congleton
Abstract:
Information security is a broad concept that covers any issues and concerns about the proper access and use of information on the Internet, including measures and procedures to protect intellectual property and private data from illegal access and online theft; the act of hacking; and any defensive technologies that contest such cybercrimes. As more research and commercial activities are conducted online, cybercrimes have increased significantly, putting sensitive information at risk. Information security has become critically important for organizations and private citizens alike. Hackers scan for network vulnerabilities on the Internet and steal data whenever they can. Cybercrimes disrupt our daily life, cause financial losses, and instigate fear in the public. Since the start of the pandemic, most data-related cybercrimes have targeted financial or health information held by companies and organizations. Libraries, too, have a strong interest in understanding and adopting information security methods to protect their patron data and copyrighted materials. Yet according to information security professionals, higher education and cultural organizations, including their libraries, are the entities least prepared for cyberattacks. One recent example is Stevens Institute of Technology in New Jersey, US, which had its network hacked in 2020, with the hackers demanding a ransom; as a result, the network of the college was down for two months, causing serious financial loss. There are other cases where libraries, colleges, and universities have been targeted for data breaches. In order to build an effective defense, we need to understand the most common types of cybercrimes, including phishing, whaling, social engineering, distributed denial of service (DDoS) attacks, malware and ransomware, as well as hacker profiles.
Our research focuses on each hacking technique and the related defense measures, as well as the social background and motives of hackers. It shows that hacking techniques will continue to evolve as new applications housing information and data on the Internet continue to be developed. Some cybercrimes can be stopped with effective measures, while others present challenges. It is vital that people understand what they face and the consequences of being unprepared.
Keywords: cybercrimes, hacking technologies, higher education, information security, libraries
Procedia PDF Downloads 135
3464 Identification of Hedgerows in the Agricultural Landscapes of Mugada within Bartın Province, Turkey
Authors: Yeliz Sarı Nayim, B. Niyami Nayim
Abstract:
Biotopes such as forest areas rich in biodiversity, wetlands, hedgerows and woodlands play important ecological roles in agricultural landscapes. Of these semi-natural areas and features, hedgerows are the most common landscape elements. Their most significant features are that they serve as a barrier between agricultural lands, provide shelter, add aesthetic value to the landscape, and contribute significantly to wildlife and biodiversity. Hedgerows surrounding agricultural landscapes also provide an important habitat for pollinators, which are important for agricultural production. This study looks into the identification of hedgerows in agricultural lands in the Mugada rural area within Bartın province, Turkey. From field data and satellite images, it is clear that in this area, especially around rural settlements, large forest areas have been cleared for settlement and agriculture. A network of hedgerows is also apparent, which might play an important role in the otherwise open agricultural landscape. We found that these hedgerows serve as an ecological and biological corridor, linking forest patches of different sizes and creating a habitat network across the landscape; some examples of this will be presented. The overall conclusion from the study is that ecologically, biologically and aesthetically important hedge biotopes should be maintained in the long term in agricultural landscapes such as this. Some suggestions are given for how they could be managed sustainably into the future.
Keywords: agricultural biotopes, hedgerows, landscape ecology, Turkey
Procedia PDF Downloads 308
3463 The Correlation between Air Pollution and Tourette Syndrome
Authors: Mengnan Sun
Abstract:
The association between air pollution and Tourette Syndrome (TS) is unclear, although air pollution has been suspected of triggering TS. TS is a neurological disorder usually found among children. The number of TS patients has increased significantly in recent decades, making it important and urgent to examine the possible triggers or conditions associated with TS. In this study, the correlation between air pollution and three allergic diseases (asthma, allergic conjunctivitis (AC), and allergic rhinitis (AR)) is examined; a correlation between these allergic diseases and TS is then established. In this way, the study establishes a positive correlation between air pollution and TS. Measures the public can take to help TS patients are also analyzed at the end of the article, which hopes to raise people’s awareness of reducing air pollution for the good of TS patients and of people with other disorders associated with air pollution.
Keywords: air pollution, allergic diseases, climate change, Tourette Syndrome
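The correlation analysis described above can be illustrated with a Pearson coefficient on synthetic data. The numbers below are invented, with a positive link deliberately built in, and carry no epidemiological meaning.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical monthly series: a pollutant level and an allergic-disease
# incidence that partly tracks it (the 0.5 slope is an arbitrary choice)
pm25 = rng.uniform(10, 90, 48)
incidence = 0.5 * pm25 + rng.normal(0, 5, 48)

# Pearson correlation coefficient between the two series
r = np.corrcoef(pm25, incidence)[0, 1]
print(f"Pearson r = {r:.2f}")
```

A coefficient near +1 would indicate the kind of positive association the study argues for; real analyses would also need confounder control, which a raw correlation does not provide.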
Procedia PDF Downloads 65
3462 Human Identification and Detection of Suspicious Incidents Based on Outfit Colors: Image Processing Approach in CCTV Videos
Authors: Thilini M. Yatanwala
Abstract:
CCTV (Closed-Circuit Television) surveillance systems have been used in public places for decades, and a large amount of data is produced every moment. However, most CCTV data is stored in isolation, without integrity, so identifying the behavior of suspicious people, along with their location, has become strenuous. This research was conducted to acquire more accurate, reliable and timely information from CCTV video records. The implemented system can identify human subjects in public places based on outfit colors. Inter-process communication technologies were used to implement the CCTV camera network that tracks people on the premises. The research was conducted in three stages: in the first stage, human subjects were filtered from the other moving objects found in public places; in the second stage, people were uniquely identified based on their outfit colors; and in the third stage, an individual was continuously tracked across the CCTV network. A face detection algorithm was implemented using a cascade classifier, based on the trained model, to detect human subjects. A HAAR-feature-based two-dimensional convolution operator was introduced to identify features of the human face, such as the eye regions, nose region and bridge of the nose, based on the darkness and lightness of the facial area. In the second stage, the outfit colors of human subjects were analyzed by dividing the body area into upper left, upper right, lower left and lower right. The mean color, mode color and standard deviation of each area were extracted as crucial factors to uniquely identify a human subject using a histogram-based approach. The color-based measurements were written to XML files, and separate directories were maintained to store the XML files of each camera, organized by timestamp.
As the third stage of the approach, inter-process communication techniques were used to implement an acknowledgement-based CCTV camera network that continuously tracks individuals across a network of cameras. Real-time analysis of the XML files generated by each camera can determine the path of an individual and reconstruct the full activity sequence. Higher efficiency was achieved by sending and receiving acknowledgments only between adjacent cameras. Suspicious incidents, such as a person staying in a sensitive area for a long period or disappearing from camera coverage, can be detected with this approach. The system was tested on 150 people, with an accuracy of 82%; however, the approach was unable to produce the expected results in the presence of groups of people wearing similar outfits. The approach can be applied to any existing camera network without changing the physical arrangement of the CCTV cameras. Human identification and suspicious incident detection using outfit color analysis can achieve a higher level of accuracy, and the project will be continued by integrating motion and gait feature analysis techniques to derive more information from CCTV videos.
Keywords: CCTV surveillance, human detection and identification, image processing, inter-process communication, security, suspicious detection
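A minimal sketch of the quadrant-based outfit-color features described in the second stage might look as follows; the function name and the synthetic test image are illustrative, not the authors' implementation, and only the mean and standard deviation (not the mode color) are shown.

```python
import numpy as np

def quadrant_color_features(person_bgr):
    """Split a detected person's bounding box into four quadrants
    (upper/lower x left/right) and return (mean, std) per channel for
    each quadrant, as a simple outfit-color signature."""
    h, w, _ = person_bgr.shape
    hy, hx = h // 2, w // 2
    quads = [person_bgr[:hy, :hx], person_bgr[:hy, hx:],
             person_bgr[hy:, :hx], person_bgr[hy:, hx:]]
    return [(q.reshape(-1, 3).mean(axis=0), q.reshape(-1, 3).std(axis=0))
            for q in quads]

# Synthetic 'person': red top, blue trousers (BGR channel order)
img = np.zeros((100, 60, 3), dtype=np.uint8)
img[:50] = (0, 0, 255)   # upper body: pure red
img[50:] = (255, 0, 0)   # lower body: pure blue
features = quadrant_color_features(img)
print(features[0][0], features[2][0])  # upper-left vs lower-left means
```

On this synthetic input, the upper quadrants yield a pure-red mean and the lower quadrants a pure-blue mean, with zero standard deviation; real frames would of course show much noisier signatures.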
Procedia PDF Downloads 184
3461 Low-Cost Parking Lot Mapping and Localization for Home Zone Parking Pilot
Authors: Hongbo Zhang, Xinlu Tang, Jiangwei Li, Chi Yan
Abstract:
Home zone parking pilot (HPP) is a fast-growing segment of low-speed autonomous driving applications. It requires the car to automatically cruise around a parking lot and park itself, at a range of up to 100 meters, inside a recurrent home/office parking lot, which requires a precise parking lot mapping and localization solution. Although Lidar is ideal for SLAM, car OEMs favor a low-cost, fish-eye-camera-based visual SLAM approach. Recent approaches have employed segmentation models to extract semantic features and improve mapping accuracy, but these AI models are memory-unfriendly and computationally expensive, making them difficult to deploy on embedded ADAS systems. To address this issue, we propose a new method that utilizes object detection models to extract robust and accurate parking lot features. The proposed method reduces computational cost while maintaining high accuracy. Once combined with the vehicle’s wheel-pulse information, the system can construct maps and localize the vehicle in real time. This article discusses in detail (1) the fish-eye-based Around View Monitoring (AVM) with transparent chassis images as the inputs, (2) an Object Detection (OD) based feature point extraction algorithm that generates a point cloud, (3) a low-computation parking lot mapping algorithm, and (4) the real-time localization algorithm. Finally, we demonstrate experimental results with an embedded ADAS system installed on a real car in an underground parking lot.
Keywords: ADAS, home zone parking pilot, object detection, visual SLAM
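The wheel-pulse information mentioned above is typically turned into a pose estimate by dead reckoning. The sketch below shows a differential-drive-style odometry update; the constants, function name and pulse counts are illustrative assumptions, not the paper's algorithm.

```python
import math

def dead_reckon(pose, pulses_left, pulses_right,
                dist_per_pulse=0.002, track_width=1.6):
    """Update (x, y, heading) from wheel-encoder pulse counts.
    dist_per_pulse (m/pulse) and track_width (m) are illustrative."""
    d_left = pulses_left * dist_per_pulse
    d_right = pulses_right * dist_per_pulse
    d = (d_left + d_right) / 2.0          # distance of the body center
    x, y, theta = pose
    theta += (d_right - d_left) / track_width  # heading change
    return (x + d * math.cos(theta), y + d * math.sin(theta), theta)

pose = (0.0, 0.0, 0.0)
for _ in range(10):          # drive straight: equal pulse counts
    pose = dead_reckon(pose, 500, 500)
print(pose)
```

Such odometry drifts over time, which is exactly why it is fused with the visual map rather than used alone for localization.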
Procedia PDF Downloads 69
3460 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process
Abstract:
Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent, given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a performance index that reflects the current condition of water assets along with their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental or social impacts on a community. Including criticality in the performance index will make it a prioritizing tool for the optimal allocation of the available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of water pipelines were elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of these factors. Subsequently, fuzzy logic along with the Analytical Network Process (ANP) was utilized to calculate the weights of several criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate these weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index, from 0 to 1, that quantifies the severity of the consequence of failure of each pipeline.
A novel contribution of this approach is that it accounts for both the interdependency between criteria factors and the inherent uncertainty in calculating criticality. The practical value of the study is represented by the automated tool, built in Excel and MATLAB, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where high efficiency in the use of materials and time is required.
Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process
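The final MAUT aggregation step can be sketched as an additive utility model, assuming ANP-derived weights and attribute values already normalized to utilities in [0, 1]. All numbers below are invented for illustration; the study's actual factor set and weights come from its expert survey.

```python
def criticality_index(attributes, weights):
    """Additive MAUT aggregation: attributes are utilities normalized
    to [0, 1]; weights sum to 1. Returns a 0-1 criticality score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * attributes[k] for k in weights)

# Hypothetical ANP-derived weights and one pipeline's normalized attributes
weights = {"economic": 0.4, "social": 0.35, "environmental": 0.25}
pipe = {"economic": 0.8, "social": 0.6, "environmental": 0.3}
print(f"criticality = {criticality_index(pipe, weights):.3f}")
```

Computing this score for every main in the network, then sorting, yields the prioritized rehabilitation list the abstract describes.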
Procedia PDF Downloads 148
3459 A Cloud-Based Federated Identity Management in Europe
Authors: Jesus Carretero, Mario Vasile, Guillermo Izquierdo, Javier Garcia-Blas
Abstract:
Currently, there is a so-called ‘identity crisis’ in cybersecurity, caused by the substantial security, privacy and usability shortcomings of existing systems for identity management. Federated Identity Management (FIM) could be a solution to this crisis, as it is a method that facilitates the management of identity processes and policies among collaborating entities without enforcing a global consistency, which is difficult to achieve when there are legacy ID systems. To cope with this problem, the Connecting Europe Facility (CEF) initiative proposed in 2014 a federated solution in anticipation of the adoption of Regulation (EU) N°910/2014, the so-called eIDAS Regulation. At present, a network of eIDAS Nodes is being deployed at the European level so that every citizen recognized by a member state is also recognized within the trust network at the European level, enabling the consumption of services in other member states that until now were not available, or whose granting was tedious. This is a very ambitious approach, since it enables cross-border authentication of member state citizens without the need to unify the authentication method (eID scheme) of the member state in question. However, this federation is currently managed by member states and initially applies only to citizens and public organizations. The goal of this paper is to present the results of a European project, named eID@Cloud, that focuses on the integration of eID in 5 cloud platforms belonging to authentication service providers of different EU member states, to act as Service Providers (SP) for private entities. We propose an initiative based on a private eID scheme for both natural and legal persons. The methodology followed in the eID@Cloud project is that each Identity Provider (IdP) is subscribed to an eIDAS Node Connector, which requests authentication and is in turn subscribed to an eIDAS Node Proxy Service, which issues authentication assertions.
To cope with high loads, load balancing is supported in the eIDAS Node. The eID@Cloud project is still ongoing, but we already have some important outcomes. First, we have deployed the federated identity nodes and tested them from the security and performance points of view. The pilot prototype has shown the feasibility of deploying this kind of system, ensuring good performance thanks to the replication of the eIDAS nodes and the load-balancing mechanism. Second, our solution avoids the propagation of identity data outside the native domain of the user or entity being identified, which avoids problems well known in cybersecurity, such as network interception and man-in-the-middle attacks. Last, but not least, this system allows any country or collectivity to connect easily, providing incremental development of the network and avoiding difficult political negotiations to agree on a single authentication format (which would be a major stopper).
Keywords: cybersecurity, identity federation, trust, user authentication
Procedia PDF Downloads 168
3458 Analysis of Histogram Asymmetry for Waste Recognition
Authors: Janusz Bobulski, Kamila Pasternak
Abstract:
Despite many years of effort and research, the problem of waste management is still current: so far, no fully effective waste management system has been developed, although many programs and projects improve the percentage of waste recycled every year. In these efforts, it is worth using modern computer vision techniques supported by artificial intelligence. In the article, we present a method of identifying plastic waste based on an analysis of the asymmetry of the histogram of the image containing the waste. The method is simple but effective (94% accuracy), which allows it to be implemented on devices with low computing power, in particular on microcomputers. Such devices can be used both at home and in waste sorting plants.
Keywords: waste management, environmental protection, image processing, computer vision
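One simple measure of histogram asymmetry is the skewness of the grey-level histogram; the sketch below computes it on a synthetic image. This is an illustrative reading of the idea, not the authors' exact algorithm, and the image data is invented.

```python
import numpy as np

def histogram_asymmetry(gray):
    """Skewness of the grey-level histogram: a simple scalar measure
    of histogram asymmetry (0 for a symmetric histogram)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    levels = np.arange(256)
    p = hist / hist.sum()                          # normalized histogram
    mean = (levels * p).sum()
    std = np.sqrt(((levels - mean) ** 2 * p).sum())
    return (((levels - mean) ** 3 * p).sum()) / std ** 3

# Synthetic bright image with a small dark patch -> left-skewed histogram
rng = np.random.default_rng(0)
img = np.clip(rng.normal(200, 10, (64, 64)), 0, 255).astype(np.uint8)
img[:4, :4] = 10
print(f"skewness = {histogram_asymmetry(img):.2f}")
```

A strongly negative value flags the dark tail; a classifier could then threshold such a feature to separate waste categories, in the spirit of the method above.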
Procedia PDF Downloads 123
3457 Assignment of Legal Personality to Robots: A Premature Meditation
Authors: Solomon Okorley
Abstract:
With the emergence of artificial intelligence, a proposition that has been made with increasing conviction is the need to assign legal personhood to robots. A major problem that arises when dealing with robots is the issue of liability: whom does one hold liable when a robot causes harm? The suggestion to assign legal personality to robots has been made to aid in the assignment of liability. This paper contends that it is premature to assign legal personhood to robots. The paper employs the doctrinal and comparative research methodology. It first discusses the various theories that underpin the granting of legal personhood to juridical persons, to ascertain whether these theories can support the proposition to assign legal personhood to robots. These theories include the fiction theory, aggregate theory, realist theory, and organism theory. Except for the aggregate theory, the fiction, realist and organism theories provide a good foundation for the proposal that legal personhood be assigned to robots. The paper then considers whether robots should be assigned legal personhood from a jurisprudential approach. Legal positivists assert that no metaphysical presuppositions are needed to determine who could be a legal person: the sole deciding factor is engagement in legal relations, and this prerequisite could be fulfilled by robots. However, rationalists, religionists and naturalists assert that the satisfaction of metaphysical criteria is the basis of legal personality, and since robots do not possess this feature, they cannot be assigned legal personhood. These differing perspectives show that the jurisprudential school of thought to which one belongs influences the decision whether to assign legal personhood to robots. The paper makes arguments for and against assigning legal personhood to robots.
Assigning legal personhood to robots is said to be necessary for the assignment of liability; and since robots are independent in their operation, they should be assigned legal personhood. However, it is argued that their degree of autonomy is insufficient: robots do not understand legal obligations, they do not have a will of their own, and the purported autonomy that they possess is an ‘imputed autonomy’. A crucial question to be asked is ‘whether it is desirable to confer legal personhood on robots’ and not ‘whether legal personhood should be assigned to robots’. This is due to the subjective nature of the responses to such a question, as well as the peculiarities of individual countries in responding to it. The main argument in support of assigning legal personhood to robots is to aid in assigning liability. However, it is argued that conferring legal personhood on robots is not the only way to deal with liability issues. Since any of the stakeholders involved with the robot system can be held liable for an accident, it is not desirable to assign legal personhood to robots. It is forecasted that in the epoch of strong artificial intelligence, granting robots legal personhood will be plausible; however, in the current era, it is premature.Keywords: autonomy, legal personhood, premature, jurisprudential
Procedia PDF Downloads 733456 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks
Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz
Abstract:
Small cell deployment in 5G networks is a promising technology to enhance capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn result in an increase in signalling overhead. To guarantee service continuity, minimize unnecessary handovers, and reduce signalling overhead in heterogeneous networks, it is essential to properly model the handover decision problem. In this paper, we model the handover decision with a Multiple Attribute Decision Making (MADM) method, specifically the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and propose a hybrid TOPSIS method to control handover in heterogeneous networks. The proposed method adopts a hybrid weighting, which is a combination of entropy and standard deviation. A hybrid weighting control parameter is introduced to balance the impact of the standard deviation and entropy weighting on the network selection process and the overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, compared to the existing methods.Keywords: handover, HetNets, interference, MADM, small cells, TOPSIS, weight
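As a sketch of the general approach (not the authors' exact formulation), a TOPSIS ranking with a hybrid entropy/standard-deviation weighting controlled by a balance parameter `alpha` can be written compactly; the attribute names in the usage note are illustrative:

```python
import numpy as np

def hybrid_topsis(X, benefit, alpha=0.5):
    """Rank alternatives (rows of X) over attributes (columns).
    benefit[j] is True if attribute j is to be maximized.
    alpha balances entropy weights against standard-deviation weights."""
    X = np.asarray(X, dtype=float)
    # vector normalization of the decision matrix
    R = X / np.linalg.norm(X, axis=0)
    # entropy-based objective weights
    P = X / X.sum(axis=0)
    E = -np.sum(P * np.log(P + 1e-12), axis=0) / np.log(len(X))
    w_ent = (1.0 - E) / np.sum(1.0 - E)
    # standard-deviation-based weights
    s = X.std(axis=0)
    w_std = s / s.sum()
    # hybrid weighting with control parameter alpha
    w = alpha * w_ent + (1.0 - alpha) * w_std
    V = R * w
    # ideal and anti-ideal solutions, respecting benefit/cost direction
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # relative closeness to the ideal
```

In a handover setting, each candidate cell would be a row and attributes such as received signal power (benefit) or interference level (cost) would be columns; the cell with the highest closeness score is selected as the target.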
Procedia PDF Downloads 1523455 Pattern of Cybercrime Among Adolescents: An Exploratory Study
Authors: Mohamamd Shahjahan
Abstract:
Background: Cybercrime is a common phenomenon at present in both developed and developing countries. The young generation, especially adolescents, now use the internet frequently, and they frequently commit cybercrime in Bangladesh. Objective: In this regard, the present study on the pattern of cybercrime among young people in Bangladesh has been conducted. Methods and tools: This study was a cross-sectional study, descriptive in nature. A non-probability accidental sampling technique was applied to select the sample because of the non-finite population, and the sample size was 167. A printed semi-structured questionnaire was used to collect data. Results: The study shows that adolescents mainly commit hacking (94.6%), pornography (88.6%), software piracy (85%), cyber theft (82.6%), credit card fraud (81.4%), cyber defamation (75.6%), sweetheart swindling (social network) (65.9%), etc. as cybercrime. According to the findings, the major causes of cybercrime among the respondents in Bangladesh were weak laws (88.0%), defective socialization (81.4%), peer group influence (80.2%), easy accessibility to the internet (74.3%), corruption (62.9%), unemployment (58.7%), and poverty (24.6%). It is evident from the study that 91.0% of respondents used password crackers as a technique of cyber criminality. About 76.6%, 72.5%, 71.9%, 68.3% and 60.5% of respondents used key loggers, network sniffers, exploits, vulnerability scanners and port scanners, respectively. Conclusion: The study concluded that the pattern of cybercrime is changing frequently and increasing dramatically. Finally, it recommends that public-private partnership and execution of existing laws can control this crime.Keywords: cybercrime, adolescents, pattern, internet
Procedia PDF Downloads 823454 An Integrated Approach to the Carbonate Reservoir Modeling: Case Study of the Eastern Siberia Field
Authors: Yana Snegireva
Abstract:
Carbonate reservoirs are known for their heterogeneity, resulting from various geological processes such as diagenesis and fracturing. These complexities may pose great challenges in understanding fluid flow behavior and predicting the production performance of naturally fractured reservoirs. The investigation of carbonate reservoirs is crucial, as many petroleum reservoirs are naturally fractured, and their study can be difficult due to the complexity of their fracture networks. This can lead to geological uncertainties, which are important for global petroleum reserves. The key challenges in carbonate reservoir modeling include the accurate representation of fractures and their connectivity, as well as capturing the impact of fractures on fluid flow and production. Traditional reservoir modeling techniques often oversimplify fracture networks, leading to inaccurate predictions. Therefore, there is a need for a modern approach that can capture the complexities of carbonate reservoirs and provide reliable predictions for effective reservoir management and production optimization. The modern approach to carbonate reservoir modeling involves the utilization of a hybrid fracture modeling approach, combining the discrete fracture network (DFN) method and an implicit fracture network, which offers enhanced accuracy and reliability in characterizing complex fracture systems within these reservoirs. This study focuses on the application of the hybrid method in the Nepsko-Botuobinskaya anticline of the Eastern Siberia field, aiming to prove the appropriateness of this method in these geological conditions. The DFN method is adopted to model the fracture network within the carbonate reservoir. This method considers fractures as discrete entities, capturing their geometry, orientation, and connectivity. But the method has a significant disadvantage, since the number of fractures in the field can be very high.
Due to limitations in the amount of main memory, it is very difficult to represent these fractures explicitly. By integrating data from image logs (formation micro imager), core data, and fracture density logs, a discrete fracture network (DFN) model can be constructed to represent fracture characteristics for hydraulically relevant fractures. The results obtained from the DFN modeling approaches provide valuable insights into the East Siberia field's carbonate reservoir behavior. The DFN model accurately captures the fracture system, allowing for a better understanding of fluid flow pathways, connectivity, and potential production zones. The analysis of simulation results enables the identification of zones of increased fracturing and optimization opportunities for reservoir development with the potential application of enhanced oil recovery techniques, which were considered in further simulations on the dual porosity and dual permeability models. This approach considers fractures as separate, interconnected flow paths within the reservoir matrix, allowing for the characterization of dual-porosity media. The case study of the East Siberia field demonstrates the effectiveness of the hybrid model method in accurately representing fracture systems and predicting reservoir behavior. The findings from this study contribute to improved reservoir management and production optimization in carbonate reservoirs with the use of enhanced and improved oil recovery methods.Keywords: carbonate reservoir, discrete fracture network, fracture modeling, dual porosity, enhanced oil recovery, implicit fracture model, hybrid fracture model
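To illustrate the kind of object a DFN model manipulates, a stochastic 2-D realization can be generated in a few lines: random fracture centres, power-law lengths, and two preferred orientation sets. All distribution parameters below are hypothetical placeholders; in a real study they would be conditioned on image-log and core fracture data as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_dfn(n, domain=1000.0, l_min=10.0, a=1.5):
    """One 2-D DFN realization: n fracture segments with uniform centres,
    Pareto-distributed lengths L = l_min * U**(-1/a), and two strike sets
    (30 and 120 degrees, illustrative) with small angular scatter."""
    centres = rng.uniform(0.0, domain, size=(n, 2))
    u = rng.uniform(1e-9, 1.0, size=n)
    lengths = l_min * u ** (-1.0 / a)          # heavy-tailed length law
    strike = np.where(rng.integers(0, 2, size=n) == 0, 30.0, 120.0)
    theta = np.deg2rad(strike) + rng.normal(0.0, 0.05, size=n)
    half = (lengths / 2.0)[:, None] * np.c_[np.cos(theta), np.sin(theta)]
    return centres - half, centres + half, lengths  # segment endpoints

p1, p2, lengths = sample_dfn(500)
```

Connectivity analysis (segment intersections) and upscaling to the dual-porosity/dual-permeability grid properties used in the simulations would follow from such a realization.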
Procedia PDF Downloads 773453 Methodology: A Review in Modelling and Predictability of Embankment in Soft Ground
Authors: Bhim Kumar Dahal
Abstract:
Transportation network development in developing countries is proceeding at a rapid pace. The majority of the network belongs to railways and expressways, which pass through diverse topography, landforms and geological conditions despite the avoidance principle during route selection. Construction of such networks demands many low to high embankments, which require improvement of the foundation soil. This paper focuses on the various advanced ground improvement techniques used to improve soft soil, the modelling approaches, and their predictability for embankment construction. The ground improvement techniques can be broadly classified into three groups, i.e. the densification group, the drainage and consolidation group, and the reinforcement group, which are discussed with some case studies. Various methods have been used in modelling the embankments, from simple 1-dimensional to complex 3-dimensional models using a variety of constitutive models. However, the reliability of the predictions is not found to improve systematically with the level of sophistication, and sometimes the predictions deviate by more than 60% from the monitored value despite using the same level of sophistication. This deviation is found to be mainly due to the selection of the constitutive model, assumptions made during different stages, deviations in the selection of model parameters, and simplification during physical modelling of the ground condition. This deviation can be reduced by using optimization processes, optimization tools and sensitivity analysis of the model parameters, which guide the selection of appropriate model parameters.Keywords: cement, improvement, physical properties, strength
Procedia PDF Downloads 1763452 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Nowadays, wetlands are the subject of contradictory debates opposing scientific, political and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves; depending on their age, composition and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced since the beginning of the 1990s a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies and management experiments. The geographical and physical characteristics of the wetlands of the central region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Thus, wetlands are no exception to this rule, even if delimiting a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments seems a difficult exercise.
However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European database Corine Land Cover, which allows quantifying and characterizing for each place the characteristic wetland types. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (1 hectare). To address this limitation, it is important to note that these wetlands generally represent spatially complex features; the use of very high spatial resolution images (>3 m) is necessary to map both small and large areas. Moreover, with the recent evolution of artificial intelligence (AI), deep learning methods for satellite image processing have shown much better performance compared to traditional processing based only on pixel structures. Our research work is based on spectral and textural analysis of VHR images (SPOT and IRC orthoimages) using two object-oriented approaches, the nearest neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetland water body edges, mountain wetlands, river edges and brackish marshes), with a kappa index higher than 85%.Keywords: land development, GIS, sand dunes, segmentation, remote sensing
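On synthetic data, comparing the two object-oriented classifiers reduces to a few lines with scikit-learn. The feature values below are invented stand-ins for the spectral/textural attributes of segmented image objects, so only the workflow, not the numbers, reflects the study:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
n = 300
# invented spectral/textural features for wetland vs. non-wetland objects
wet = rng.normal([0.2, 0.6, 0.4], 0.08, size=(n, 3))
dry = rng.normal([0.5, 0.3, 0.2], 0.08, size=(n, 3))
X = np.vstack([wet, dry])
y = np.r_[np.ones(n), np.zeros(n)]
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# the two object-oriented classifiers compared in the study
knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
svm = SVC(kernel="rbf").fit(Xtr, ytr)
kappa_knn = cohen_kappa_score(yte, knn.predict(Xte))
kappa_svm = cohen_kappa_score(yte, svm.predict(Xte))
```

Evaluating both with the kappa index, as the abstract does, makes the comparison robust to class imbalance between wetland and non-wetland objects.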
Procedia PDF Downloads 753451 Carbon Capture and Storage by Continuous Production of CO₂ Hydrates Using a Network Mixing Technology
Authors: João Costa, Francisco Albuquerque, Ricardo J. Santos, Madalena M. Dias, José Carlos B. Lopes, Marcelo Costa
Abstract:
Nowadays, it is well recognized that carbon dioxide emissions, together with other greenhouse gases, are responsible for the dramatic climate changes that have been occurring over the past decades. Gas hydrates are currently seen as a promising and disruptive set of materials that can be used as a basis for developing new technologies for CO₂ capture and storage. Their potential as a clean and safe pathway for CCS is tremendous, since it requires only water and gas to be mixed under favorable temperatures and moderately high pressures. However, the hydrate formation process is highly exothermic; it releases about 2 MJ per kilogram of CO₂, and it only occurs in a narrow window of operational temperatures (0 - 10 °C) and pressures (15 to 40 bar). Efficient continuous hydrate production in a specific temperature range therefore necessitates high heat transfer rates in the mixing process. Past technologies often struggled to meet this requirement, resulting in low productivity or extended mixing/contact times due to inadequate heat transfer rates, which consistently posed a limitation. Consequently, there is a need for more effective continuous hydrate production technologies in industrial applications. In this work, a network mixing continuous production technology has been shown to be viable for producing CO₂ hydrates. The structured mixer used throughout this work consists of a network of unit cells comprising mixing chambers interconnected by transport channels. These mixing features result in enhanced heat and mass transfer rates and a high interfacial surface area. The mixer capacity emerges from the fact that, under proper hydrodynamic conditions, the flow inside the mixing chambers becomes fully chaotic with self-sustained oscillations, inducing intense local laminar mixing. The device presents specific heat transfer rates ranging from 10⁷ to 10⁸ W⋅m⁻³⋅K⁻¹.
A laboratory scale pilot installation was built using a device capable of continuously capturing 1 kg⋅h⁻¹ of CO₂, in an aqueous slurry of up to 20% in mass. The strong mixing intensity has proven to be sufficient to enhance dissolution and initiate hydrate crystallization without the need for external seeding mechanisms and to achieve, at the device outlet, conversions of 99% in CO₂. CO₂ dissolution experiments revealed that the overall liquid mass transfer coefficient is orders of magnitude larger than in similar devices with the same purpose, ranging from 1 000 to 12 000 h⁻¹. The present technology has shown itself to be capable of continuously producing CO₂ hydrates. Furthermore, the modular characteristics of the technology, where scalability is straightforward, underline the potential development of a modular hydrate-based CO₂ capture process for large-scale applications.Keywords: network, mixing, hydrates, continuous process, carbon dioxide
Procedia PDF Downloads 543450 Transforming Breast Density Measurement with Artificial Intelligence: Population-Level Insights from BreastScreen NSW
Authors: Douglas Dunn, Richard Walton, Matthew Warner-Smith, Chirag Mistry, Kan Ren, David Roder
Abstract:
Introduction: Breast density is a risk factor for breast cancer, both because increased fibroglandular tissue can harbor malignancy and because it masks lesions on mammography. Therefore, breast density measurement is useful for risk stratification at both the individual and population level. This study investigates the performance of Lunit INSIGHT MMG for automated breast density measurement. We analyze the reliability of Lunit compared to breast radiologists, explore density variations across the BreastScreen NSW population, and examine the impact of breast implants on density measurements. Methods: 15,518 mammograms were utilized for a comparative analysis of intra- and inter-reader reliability between Lunit INSIGHT MMG and breast radiologists. Subsequently, Lunit was used to evaluate 624,113 mammograms for investigation of density variations according to age and birth country, providing insights into diverse population subgroups. Finally, we compared breast density in 4,047 clients with implants to clients without implants, controlling for age and birth country. Results: The weighted kappa coefficient for inter-reader agreement between Lunit and breast radiologists was 0.72 (95%CI 0.71-0.73). The highest breast densities were seen in women with a North-East Asian background, whilst those of Aboriginal background had the lowest density. Across all backgrounds, density was shown to decrease with age, though at different rates according to country of birth. Clients with implants had higher density relative to the age-matched no-implant strata. Conclusion: Lunit INSIGHT MMG demonstrates reasonable inter- and intra-observer reliability for automated breast density measurement. This study is significantly larger than any previous study assessing breast density, owing to the ability to process large volumes of data using AI. As a result, it provides valuable insights into population-level density variations.
Our findings highlight the influence of age, birth country, and breast implants on density, emphasizing the need for personalized risk assessment and screening approaches. The large-scale and diverse nature of this study enhances the generalisability of our results, offering valuable information for breast cancer screening programs internationally.Keywords: breast cancer, screening, breast density, artificial intelligence, mammography
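The agreement statistic reported above, a weighted kappa, penalizes disagreements by how far apart the ordinal density categories are. With scikit-learn it is a single call; the BI-RADS-style labels below are invented for illustration, and linear weights are one common choice (the abstract does not state which weighting was used):

```python
from sklearn.metrics import cohen_kappa_score

# invented ordinal density categories (a = fatty ... d = extremely dense)
radiologist = ["a", "b", "b", "c", "d", "c", "b", "a", "c", "d"]
ai_reader   = ["a", "b", "c", "c", "d", "b", "b", "a", "c", "d"]

# linear weights: an a-vs-b disagreement costs less than a-vs-d
kappa = cohen_kappa_score(radiologist, ai_reader, weights="linear")
```

Perfect agreement yields kappa = 1, chance-level agreement yields 0, so a value such as the study's 0.72 indicates substantial agreement between the AI and the radiologists.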
Procedia PDF Downloads 173449 Economics of Open and Distance Education in the University of Ibadan, Nigeria
Authors: Babatunde Kasim Oladele
Abstract:
One of the major objectives of the Nigerian national policy on education is the provision of equal educational opportunities to all citizens at different levels of education. With regard to higher education, an aspect of the policy encourages distance learning to be organized and delivered by tertiary institutions in Nigeria. This study therefore determines how much of the Government's resources are committed, how the resources are utilized, and what alternative sources of funding are available for this system of education. The study investigated the trends in recurrent costs between 2004/2005 and 2013/2014 at the University of Ibadan Distance Learning Centre (DLC). A descriptive survey research design was employed, with a questionnaire as the instrument for data collection. The population of the study comprised 280 current distance learning students, 70 academic staff and 50 administrative staff. Only 354 questionnaires were correctly filled and returned. Data collected were coded and analyzed; frequencies, ratios, averages and percentages were used to answer all the research questions. The study revealed that salaries and allowances of academic and non-academic staff represent the most important variable influencing the cost of education: about 55% of resources were allocated to this item alone. The study also indicates that costs rise every year with increases in enrolment, representing a situation of diseconomies of scale. This study recommends that universities that operate distance learning programs should strive to explore other internally generated revenue options to boost their revenue.
The University of Ibadan, being the premier university in Nigeria, should be given foreign aid and home support, both financially and materially, to enable it to run a formidable distance education program that would measure up, in planning and implementation, to those of developed nations.Keywords: open education, distance education, University of Ibadan, Nigeria, cost of education
Procedia PDF Downloads 1803448 Integrating Inference, Simulation and Deduction in Molecular Domain Analysis and Synthesis with Peculiar Attention to Drug Discovery
Authors: Diego Liberati
Abstract:
Standard molecular modeling is traditionally done through the Schroedinger equation, with the help of powerful tools that manage it atom by atom, often needing high-performance computing. Here, a full portfolio of new tools is offered, conjugating statistical inference in the so-called eXplainable Artificial Intelligence framework (in the form of machine learning of understandable rules) with the more traditional modeling, simulation and control theory of mixed dynamic-logic hybrid processes. The portfolio is quite general-purpose, even though it is exemplified here on a popular set of chemical physics problems.Keywords: understandable rules ML, k-means, PCA, PieceWise Affine Auto Regression with eXogenous input
Procedia PDF Downloads 333447 A Machine Learning Approach for Classification of Directional Valve Leakage in the Hydraulic Final Test
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Due to increasing cost pressure in global markets, artificial intelligence is becoming a technology that is decisive for competition. Predictive quality enables machinery and plant manufacturers to ensure product quality by using data-driven forecasts via machine learning models as a decision-making basis for test results. The use of cross-process Bosch production data along the value chain of hydraulic valves is a promising approach to classifying the quality characteristics of workpieces.Keywords: predictive quality, hydraulics, machine learning, classification, supervised learning
Procedia PDF Downloads 2343446 The Soft and Hard Palate Cleft’s Impact on the Auditory Tube Function
Authors: Fedor Semenov
Abstract:
One of the most widespread malformations of the facial bones, the congenital palatoschisis, has a significant impact on drainage and ventilation of the middle ear through the incorrect work of the soft palate muscles, which results in recurrent middle ear inflammation and subsequently leads to hearing dysfunction. The purpose of this research is to evaluate the auditory tube function and hearing condition before operative treatment (uranoplasty) and 3 and 12 months after it. 42 patients aged from 6 months to 17 years who had a soft and hard palate cleft and a type B or C tympanogram were included in the study. The examination included otoscopy, pure tone audiometry (for the 11 patients older than 8 years) and tympanometry. According to the otoscopy results, all the patients were divided into two groups: those who had a retracted eardrum and those who had a normal one. The results of pure tone audiometry showed six patients with an air-bone gap of more than 10 dB and five with normal audiograms. According to the results of this research, uranoplasty had strongly positive effects on the auditory tube function: normalization of the eardrum appearance upon otoscopy was observed in 64% of children with a retracted eardrum three months after surgery and in 85% twelve months after. The number of patients with an A-type tympanogram rose to 25 out of 41 children at 3 months and to 35 out of 41 at twelve months after the operation. While before the operative treatment six patients older than 8 years had had an air-bone gap of more than 10 dB, only two of them still had it at 12 months, and the others' audiograms were normal. To sum up, uranoplasty made a significant contribution to the restoration of auditory tube functioning. Some patients had signs of auditory dysfunction even after the operative treatment; that group of children needs further treatment by an otorhinolaryngologist.Keywords: auditory tube dysfunction, palatoschisis, uranoplasty, otitis
Procedia PDF Downloads 123445 Cross Attention Fusion for Dual-Stream Speech Emotion Recognition
Authors: Shaode Yu, Jiajian Meng, Bing Zhu, Hang Yu, Qiurui Sun
Abstract:
Speech emotion recognition (SER) aims to recognize human subjective emotions through in-depth analysis of audio data. How to comprehensively extract emotional information from speech audio, and how to effectively fuse the extracted features, remain challenging. This paper presents a dual-stream SER framework that embraces both full training and transfer learning of different networks for thorough feature encoding. Besides, a plug-and-play cross-attention fusion (CAF) module is implemented for valid integration of the dual-stream encoder outputs. The effectiveness of the proposed CAF module is compared to three other fusion modules (feature summation, feature concatenation, and feature-wise linear modulation) on two databases (RAVDESS and IEMOCAP) using different dual-stream encoders (full-training networks, DPCNN or TextRCNN; transfer-learning networks, HuBERT or Wav2Vec2). Experimental results suggest that the CAF module can effectively reconcile conflicts between features from different encoders and outperforms the other three feature fusion modules on the SER task. In the future, the plug-and-play CAF module can be extended to multi-branch feature fusion, and the dual-stream SER framework can be widened to multi-stream data representation to improve recognition performance and generalization capacity.Keywords: speech emotion recognition, cross-attention fusion, dual-stream, pre-trained
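The abstract does not detail the CAF module beyond "cross-attention", so as a bare numpy sketch of the underlying mechanism: each stream's features attend over the other stream's sequence via scaled dot-product attention, and the two attended summaries are concatenated. The projection matrices here are random placeholders standing in for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(query_stream, kv_stream, Wq, Wk, Wv):
    """Queries from one encoder attend over the other encoder's sequence."""
    Q, K, V = query_stream @ Wq, kv_stream @ Wk, kv_stream @ Wv
    att = softmax(Q @ K.T / np.sqrt(K.shape[1]))  # rows sum to 1
    return att @ V

d = 16
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

a = rng.normal(size=(20, d))  # e.g. fully trained CNN encoder features
b = rng.normal(size=(35, d))  # e.g. pre-trained transformer features
# fuse both attention directions, mean-pooled over time
fused = np.concatenate([cross_attend(a, b, Wq, Wk, Wv).mean(axis=0),
                        cross_attend(b, a, Wq, Wk, Wv).mean(axis=0)])
```

Because the attended output has the query stream's length regardless of the other stream, this kind of fusion tolerates encoders with different sequence lengths, which is what makes it plug-and-play across encoder pairs.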
Procedia PDF Downloads 813444 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters with these sub-clusters containing different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithm-based methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles both simultaneously for the binary classification problem. Removing between-class and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters.
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid, increasing the accuracy of the classifier. In this study, a neural network is used as the classifier, since it minimizes the total error, and removing between-class and within-class imbalance simultaneously helps the classifier give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling various problem domains, such as credit scoring, customer churn prediction and financial distress prediction, that typically involve imbalanced data sets.Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
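The oversampling step of such a method, once sub-cluster labels for the minority class are available (the paper obtains them by model-based clustering; here they are assumed given), can be sketched as SMOTE-style interpolation inside each sub-cluster up to a common target size:

```python
import numpy as np

rng = np.random.default_rng(1)

def oversample_subclusters(X_min, labels, target):
    """Bring every minority sub-cluster up to `target` examples by
    interpolating between random pairs drawn from the same sub-cluster,
    so small sub-clusters are no longer dominated by large ones."""
    parts = [X_min]
    for c in np.unique(labels):
        Xc = X_min[labels == c]
        need = target - len(Xc)
        if need <= 0 or len(Xc) < 2:
            continue
        i = rng.integers(0, len(Xc), size=need)
        j = rng.integers(0, len(Xc), size=need)
        t = rng.random(size=(need, 1))
        parts.append(Xc[i] + t * (Xc[j] - Xc[i]))  # points on chords
    return np.vstack(parts)
```

The paper additionally sizes the target per sub-cluster from its complexity and screens synthetic points with the Lowner-John ellipsoid; both refinements are omitted from this sketch.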
Procedia PDF Downloads 4183443 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To engage in this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal sizes. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds up under significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared, which minimizes crystal overlap during SEM image acquisition and guarantees a lower measurement error without greater data-handling effort. All in all, the method is a substantial time saver with high measurement value, considering that it can measure hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
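The per-crystal measurement step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `measure_crystals`, the pixel size, and the toy mask are assumptions, and connected-component labeling stands in for the object delimitation algorithm.

```python
# Hypothetical sketch: given a binary segmentation mask (e.g., a thresholded
# U-net output), label each non-overlapping crystal and measure it.
import numpy as np
from scipy import ndimage

def measure_crystals(mask: np.ndarray, pixel_size_um: float = 1.0):
    """Label connected components in a binary mask and return
    per-crystal area and bounding-box side lengths (illustrative only)."""
    labels, n = ndimage.label(mask)
    crystals = []
    for i in range(1, n + 1):
        region = labels == i
        area = region.sum() * pixel_size_um ** 2
        ys, xs = np.nonzero(region)
        height = (ys.max() - ys.min() + 1) * pixel_size_um
        width = (xs.max() - xs.min() + 1) * pixel_size_um
        crystals.append({"label": i, "area": area,
                         "height": height, "width": width})
    return crystals

# toy mask with two separated "crystals"
mask = np.zeros((8, 8), dtype=bool)
mask[1:4, 1:4] = True   # 3x3 crystal, area 9
mask[5:7, 5:8] = True   # 2x3 crystal, area 6
print(measure_crystals(mask))
```

From such a list of per-crystal records, the frequency distributions of area and perimeter mentioned in the abstract follow directly with a histogram.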
Procedia PDF Downloads 162
3442 Elucidation of Dynamics of Murine Double Minute 2 Shed Light on the Anti-cancer Drug Development
Authors: Nigar Kantarci Carsibasi
Abstract:
Coarse-grained elastic network models, namely the Gaussian network model (GNM) and the anisotropic network model (ANM), are utilized to investigate the fluctuation dynamics of Murine Double Minute 2 (MDM2), the native inhibitor of p53. Conformational dynamics of MDM2 are elucidated in the unbound, p53-bound, and non-peptide small-molecule-inhibitor-bound forms. The aim is to gain insight into the alterations brought to the global dynamics of MDM2 by the native peptide inhibitor p53 and by two small-molecule inhibitors (HDM201 and NVP-CGM097) that are in clinical trials for cancer. MDM2 undergoes significant conformational changes upon inhibitor binding, providing evidence of an induced-fit mechanism. The small-molecule inhibitors examined in this work exhibit fluctuation dynamics and characteristic mode shapes similar to those of p53 when complexed with MDM2, which could inform the design of novel small-molecule inhibitors for cancer therapy. The results showed that residues Phe 19, Trp 23, and Leu 26 reside in the minima of the slowest modes of p53, consistent with the accepted three-finger binding model. Pro 27 displays the most significant hinge present in p53 and emerges as another functionally important residue. Three distinct regions are identified in MDM2 for which significant conformational changes are observed upon binding. Regions I (residues 50-77) and III (residues 90-105) correspond to the binding interface of MDM2, including α2, L2, and α4, which are stabilized during complex formation. Region II (residues 77-90) exhibits a large-amplitude motion and is highly flexible both in the absence and in the presence of p53 or other inhibitors. MDM2 exhibits a scattered profile in the fastest modes of motion, while binding of p53 and the inhibitors puts restraints on the MDM2 domains, clearly distinguishing the kinetically hot regions.
Mode shape analysis revealed that the α4 domain controls the size of the cleft, keeping the cleft narrow in unbound MDM2 and open in the bound states for proper penetration and binding of p53 and the inhibitors, which points to an induced-fit mechanism of p53 binding. p53 interacts with α2 and α4 in a synchronized manner. Collective modes are shifted upon inhibitor binding, i.e., the second-mode characteristic motion of the MDM2-p53 complex is observed in the first mode of apo MDM2; however, apo and bound MDM2 exhibit similar features in the softest modes, pointing to pre-existing modes that facilitate ligand binding. Although much higher-amplitude motions are attained in the presence of the non-peptide small-molecule inhibitors than with p53, they remain closely similar; hence, NVP-CGM097 and HDM201 succeed in mimicking the p53 behavior well. Elucidating how drug candidates alter the global and conformational dynamics of MDM2 would shed light on the rational design of novel anticancer drugs.
Keywords: cancer, drug design, elastic network model, MDM2
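The GNM analysis the abstract relies on reduces to an eigendecomposition of a residue-contact (Kirchhoff) matrix: slow modes expose flexible regions such as Region II, while fast modes flag kinetically hot residues. A minimal sketch, assuming Cα coordinates and a conventional 7 Å contact cutoff (the function name and the toy 10-residue chain are illustrative, not from the paper):

```python
# Minimal Gaussian network model (GNM) sketch over Cα coordinates.
import numpy as np

def gnm_modes(coords: np.ndarray, cutoff: float = 7.0):
    """Build the Kirchhoff (connectivity) matrix for residues within
    `cutoff` Angstroms and return its eigenvalues/eigenvectors.
    The first eigenvalue is ~0 (rigid-body mode) and is skipped
    when analyzing fluctuations."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)   # -1 for each contact pair
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # degree on diagonal
    vals, vecs = np.linalg.eigh(kirchhoff)    # ascending eigenvalues
    return vals, vecs

# toy chain of 10 residues spaced 3.8 Angstroms apart
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(10)])
vals, vecs = gnm_modes(coords)
# per-residue squared amplitude of the slowest internal mode (mode index 1);
# minima of this profile mark hinge-like, functionally constrained residues
slow_mode = vecs[:, 1] ** 2
```

In practice one would run this on Cα coordinates parsed from the apo and ligand-bound MDM2 structures and compare the slow-mode profiles, which is the kind of comparison the abstract reports.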
Procedia PDF Downloads 131