Search results for: parking monitoring system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19185

10545 An Assessment of Different Blade Tip Timing (BTT) Algorithms Using an Experimentally Validated Finite Element Model Simulator

Authors: Mohamed Mohamed, Philip Bonello, Peter Russhard

Abstract:

Blade Tip Timing (BTT) is a technology concerned with the estimation of both the frequency and amplitude of rotating blades. A BTT system comprises two main parts: (a) the arrival time measurement system, and (b) the analysis algorithms. Simulators play an important role in the development of the analysis algorithms since they generate blade tip displacement data from simulated blade vibration under controlled conditions. This enables an assessment of the performance of the different algorithms with respect to their ability to accurately reproduce the original simulated vibration. Such an assessment is usually not possible with real engine data since there is no practical alternative to BTT for blade vibration measurement. Most simulators used in the literature are based on a simple spring-mass-damper model to determine the vibration. In this work, a more realistic, experimentally validated simulator based on the Finite Element (FE) model of a bladed disc (blisk) is first presented. It is then used to generate the data needed for the assessment of different BTT algorithms. The FE model is validated using a hammer test and, for the mode shapes, two FireWire cameras. A number of autoregressive methods, fitting methods, and state-of-the-art inverse methods (i.e., the Russhard method) are compared. All methods are compared under both synchronous and asynchronous excitation, with both single and simultaneous frequencies. The study assesses the applicability of each method to different vibration conditions, quantities of sampled data, and testing facilities, according to its performance and efficiency under these conditions.
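The simple spring-mass-damper simulators mentioned above turn a simulated blade vibration into once-per-revolution arrival-time offsets at a casing probe. The following sketch illustrates that sampling principle only; all parameter values are hypothetical and are not taken from the paper's rig or its FE model:

```python
import math

# Hypothetical single-degree-of-freedom blade-vibration simulator: the blade
# tip vibrates sinusoidally while the rotor spins at constant speed, and a
# single casing probe records the tip's arrival once per revolution.
ROTOR_SPEED_HZ = 100.0   # rotor rotation frequency (assumed)
BLADE_FREQ_HZ = 347.0    # blade vibration frequency (assumed, asynchronous)
AMPLITUDE_M = 0.5e-3     # tip vibration amplitude (assumed)
TIP_RADIUS_M = 0.25      # blade tip radius (assumed)

def tip_displacement(t):
    """Tangential tip displacement of the vibrating blade at time t."""
    return AMPLITUDE_M * math.sin(2 * math.pi * BLADE_FREQ_HZ * t)

def probe_arrival_offsets(n_revs):
    """Arrival-time offsets seen by one probe, one sample per revolution.

    A non-vibrating blade would arrive exactly once per revolution;
    vibration shifts each arrival by displacement / tip speed."""
    tip_speed = 2 * math.pi * ROTOR_SPEED_HZ * TIP_RADIUS_M
    offsets = []
    for k in range(n_revs):
        t_expected = k / ROTOR_SPEED_HZ  # nominal arrival time, revolution k
        offsets.append(tip_displacement(t_expected) / tip_speed)
    return offsets

offsets = probe_arrival_offsets(8)
```

BTT analysis algorithms then face the inverse problem: recovering the (here assumed) frequency and amplitude from these heavily under-sampled offsets.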

Keywords: blade tip timing, blisk, finite element, vibration measurement

Procedia PDF Downloads 291
10544 Antimicrobial Evaluation of Polyphenon 60 and Ciprofloxacin Loaded Nano Emulsion against Uropathogenic Escherichia coli Bacteria and Its in vivo Analysis

Authors: Atinderpal Kaur, Shweta Dang

Abstract:

Our aim is to develop a nanoemulsion-based delivery system containing polyphenon 60 (P60) and ciprofloxacin (Cipro) for intravaginal delivery to treat urinary tract infection. In the present study, P60 and Cipro were loaded into a single nanoemulsion (NE) system via an ultrasonication technique and characterized for particle size, in vitro release, and antibacterial efficacy against uropathogenic Escherichia coli. To determine in vivo pharmacokinetic parameters and the intravaginal transport of the NE, a gamma scintigraphy and biodistribution study was conducted by radiolabelling the NE with technetium pertechnetate (99mTc). The preliminary antibacterial investigation showed synergy between the two compounds, with a FIC index of 0.42. The developed formulation showed a zeta potential of +55.3 mV and a particle size of 151.7 nm, with a PDI of 0.196. The in vitro release of P60 at the end of 7 hours was 94.8 ± 0.9%, whereas that of Cipro was 75.1 ± 0.15% in simulated vaginal media. The MBC was identified, and the findings demonstrated that in both ESBL (Extended Spectrum β-lactamase) and MBL (Metallo β-lactamase) cultures, the P60+Cipro NE inhibited the growth of all isolates at 2 mg/ml dilutions. The percentage of radiolabelled drug per gram was 3.50 ± 0.26 in the kidney and 3.81 ± 0.30 in the urinary bladder at 3 h. From these findings, it was concluded that the developed P60+Cipro NE was transported efficiently to the target organs and had a long duration of action and high biocompatibility via intravaginal administration as compared to oral administration.
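The synergy claim above rests on the fractional inhibitory concentration (FIC) index, which is computed from checkerboard MICs of each drug alone and in combination. A minimal sketch of that calculation (the MIC values below are hypothetical, not the study's data):

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Fractional inhibitory concentration index for a two-drug combination.

    FIC = MIC_A(in combination)/MIC_A(alone)
        + MIC_B(in combination)/MIC_B(alone).
    By the usual convention, FIC <= 0.5 indicates synergy."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Hypothetical checkerboard MICs (mg/ml), for illustration only.
fic = fic_index(mic_a_combo=0.25, mic_a_alone=1.0,
                mic_b_combo=0.5, mic_b_alone=4.0)
synergy = fic <= 0.5
```

With these illustrative inputs the index is 0.375, which would be read as synergy under the same convention that classifies the reported 0.42.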

Keywords: ciprofloxacin, gamma scintigraphy, intravaginal drug delivery, Polyphenon 60

Procedia PDF Downloads 302
10543 Pressure-Controlled Dynamic Equations of the PFC Model: A Mathematical Formulation

Authors: Jatupon Em-Udom, Nirand Pisutha-Arnond

Abstract:

The phase-field-crystal (PFC) approach is a density-functional-type material model with atomic resolution on a diffusive timescale. Spatially, the model incorporates the periodic nature of crystal lattices and can naturally exhibit elasticity, plasticity, and crystal defects such as grain boundaries and dislocations. Temporally, the model operates on a diffusive timescale, which bypasses the need to resolve prohibitively small atomic-vibration time steps. The PFC model has been used to study many material phenomena such as grain growth, elastic and plastic deformations, and solid-solid phase transformations. In this study, the pressure-controlled dynamic equations for the PFC model were developed to simulate a single-component system under externally applied pressure; these coupled equations are important for studies of deformable systems such as those under constant pressure. The formulation is based on non-equilibrium thermodynamics and the thermodynamics of crystalline solids. To obtain the equations, the entropy variation around the equilibrium point was derived. The resulting driving forces and fluxes around equilibrium were then obtained and rewritten as conventional thermodynamic quantities. These dynamic equations differ from recently proposed ones; the equations in this study should provide more rigorous descriptions of the system dynamics under externally applied pressure.

Keywords: driving forces and fluxes, evolution equation, non-equilibrium thermodynamics, Onsager’s reciprocal relation, phase field crystal model, thermodynamics of single-component solid

Procedia PDF Downloads 288
10542 A Conceptual Analysis of Right of Taxpayers to Claim Refund in Nigeria

Authors: Hafsat Iyabo Sa'adu

Abstract:

A salient feature of Nigerian tax law is the right of the taxpayer to demand a refund where excess tax is paid. Section 23 of the Federal Inland Revenue Service (Establishment) Act, 2007 vests the Federal Inland Revenue Service with the power to make tax refunds and to set guidelines and requirements for the refund process from time to time. In addition, Section 61 of the Act empowers the Service to issue information circulars to acquaint stakeholders with the policy on the refund process. A circular was issued to that effect, correcting the earlier position that such excess could only be paid to the claimant/taxpayer after the annual audit of the Service. It is striking, however, that no such circular exists under the states' laws; hence, there are inconsistencies in the tax-paying system in Nigeria. This study therefore sets out to examine the concept of tax refund in Nigeria. To achieve this objective, a doctrinal study was undertaken in which both federal and state laws were consulted, along with journals and textbooks. The research revealed that the law should be specific as to the time frame within which a refund must be made. It further revealed that it is essential for the legal framework of the tax system to recognize excess payment as a debt due from the state. This would provide a foundational framework for the relationship between taxpayers and the Federal Inland Revenue Service and promote effective tax administration in all the states of the federation. Several recommendations were made, especially the legislative passage of a ‘Refund Circular Bill’ at the state level pursuant to the Federal Inland Revenue Service (Establishment) Act, 2007.

Keywords: claim, Nigeria, refund, right

Procedia PDF Downloads 103
10541 Formation of an Empire in the 21st Century: Theoretical Approach in International Relations and a Worldview of the New World Order

Authors: Rami Georg Johann

Abstract:

Against the background of current geopolitical constellations, the author examines various empire models, which are discussed and compared with one another with regard to their stability and functioning. The focus is on the fifth concept as a possible new world order in the 21st century. All empires to be designed are conceptualised on the basis of one, two, three, four, and five worlds, each world made up of a different constellation of states and related coalitions. All systems are discussed in detail. The one-world system, the “Western Empire,” is presented as a possible solution for a new world order in the 21st century (the fifth concept). The term “Western” in “Western Empire” describes the Western concept formed after World War II, which was the result of two horrible world wars in the 20th century. With this in mind, the fifth concept forms a stable empire system, the “Western Empire,” through political measures tied to two issues. This world order thus provides significantly higher long-term stability than all other empire models (comprising five, four, three, or two worlds); confrontations and threats of war are reduced to a minimum. The two issues mentioned are “merger” and “competition.” These are the main differences in forming an empire compared to all empires and realms in the history of mankind, and the fifth concept of this theory, the “Western Empire,” explicitly acts as a counter-model. The Western Empire (fifth concept) is formed by the merger of world powers without war, creating a world order without competition. This merged entity secures long-term peace, stability, democratic values, freedom, human rights, equality, and justice in the new world order.

Keywords: empire formation, theory of international relations, Western Empire, world order

Procedia PDF Downloads 126
10540 Bioinspired Green Synthesis of Magnetite Nanoparticles Using Room-Temperature Co-Precipitation: A Study of the Effect of Amine Additives on Particle Morphology in Fluidic Systems

Authors: Laura Norfolk, Georgina Zimbitas, Jan Sefcik, Sarah Staniland

Abstract:

Magnetite nanoparticles (MNPs) have been an area of increasing research interest due to their extensive applications in industry, such as carbon capture, water purification, and, crucially, the biomedical industry. The use of MNPs in the biomedical industry is rising, with studies on their use as magnetic resonance imaging contrast agents, drug delivery systems, and hyperthermic cancer treatments becoming prevalent in the nanomaterials research community. Particles used for biomedical purposes must meet stringent criteria: they must have consistent shape and size. Variation in particle morphology can drastically alter the effective surface area of the material, making it difficult to correctly dose particles that are not homogeneous. Particles of defined shape, such as octahedra and cubes, have been shown to outperform irregularly shaped particles in some applications, motivating the synthesis of particles of defined shape. In nature, highly homogeneous MNPs are found within magnetotactic bacteria, unique bacteria capable of producing magnetite nanoparticles internally under ambient conditions. Biomineralisation proteins control the properties of the MNPs, enhancing their homogeneity. One of these proteins, Mms6, has been successfully isolated and used in vitro as an additive in room-temperature co-precipitation (RTCP) reactions to produce particles of defined, monodispersed size and morphology. When considering future industrial scale-up, it is crucial to consider the cost and feasibility of an additive: an additive that is not readily available or easily synthesized at a competitive price will not be sustainable. As such, the additives selected for this research are inspired by the functional groups of biomineralisation proteins but are cost-effective, environmentally friendly, and compatible with scale-up.
Diethylenetriamine (DETA), triethylenetetramine (TETA), tetraethylenepentamine (TEPA), and pentaethylenehexamine (PEHA) have been successfully used in RTCP to modulate the properties of the particles synthesized, leading to the formation of octahedral nanoparticles with no use of organic solvents, heating, or toxic precursors. By extending this principle to a fluidic system, ongoing research will reveal whether the amine additives can also exert morphological control in an environment suited to higher particle yield. Two fluidic systems have been employed: a peristaltic turbulent-flow mixing system suitable for the rapid production of MNPs, and a macrofluidic system for the synthesis of tailored nanomaterials under a laminar flow regime. In initial results, the presence of the amine additives in the turbulent-flow system appears to offer morphological control similar to that observed under RTCP conditions, with higher proportions of octahedral particles formed. This proof of concept may pave the way to green synthesis of tailored MNPs on an industrial scale. Mms6 and the amine additives have also been used in the macrofluidic system, where Mms6 allows magnetite to be synthesized at unfavourable ferric ratios but no longer influences particle size. This suggests that while this synthetic technique still benefits from additives, the faster reaction timescale may prevent the additives from fully influencing the particles formed. The amine additives have been tested at various concentrations, the results of which are discussed in this paper.

Keywords: bioinspired, green synthesis, fluidic, magnetite, morphological control, scale-up

Procedia PDF Downloads 103
10539 The Human Rights Code: Fundamental Rights as the Basis of Human-Robot Coexistence

Authors: Gergely G. Karacsony

Abstract:

Fundamental rights are the result of thousands of years of progress in legislation, adjudication, and legal practice. They serve as the framework for the peaceful cohabitation of people, protecting the individual from abuse by the government or violation by other people. Artificial intelligence, however, is a development of the very recent past and one of the most important prospects for the future. Artificial intelligence is now capable of communicating and performing actions the same way as humans; such acts are sometimes impossible to tell apart from actions performed by flesh-and-blood people. In a world where human-robot interactions are more and more common, a new framework for peaceful cohabitation must be found. Artificial intelligence, able to take part in almost any interaction where personal presence is not necessary without being recognized as a non-human actor, is now able to break the law, violate people's rights, and disturb social peace in many other ways. Therefore, a code of peaceful coexistence must be found or created. We should consider whether human rights can serve as the code of ethical and rightful conduct in the new era of coexistence between artificial intelligence and humans. In this paper, we examine the applicability of fundamental rights to human-robot interactions as well as to actions performed by artificial intelligence without any human interaction. Robot ethics was a topic of discussion and debate in philosophy, ethics, computing, legal science, and science-fiction writing long before the first functional artificial intelligence was introduced. Legal science and legislation have approached artificial intelligence from different angles, regulating different areas (e.g. data protection, telecommunications, copyright issues), but they are only chipping away at the mountain of legal issues concerning robotics.
For a widely acceptable and permanent solution, a more general set of rules would be preferable to the detailed regulation of specific issues. We argue that human rights, as recognized worldwide, can be adapted to serve as a guideline and a common basis for the coexistence of robots and humans. This solution has many virtues: people need not adjust to a completely unknown set of standards, the system has withstood the trials of time, legislation is easier, and the actions of non-human entities are more easily adjudicated within their own framework. In this paper, we examine the system of fundamental rights (as defined in the most widely accepted sources, the 1966 UN human rights covenants) and try to adapt each individual right to the actions of artificial intelligence actors; in each case we examine the possible effects of such an approach on the legal system and on society, and finally we also examine its effect on the IT industry.

Keywords: human rights, robot ethics, artificial intelligence and law, human-robot interaction

Procedia PDF Downloads 226
10538 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a lung disease that creates congestion in the chest; severe congestion can lead to loss of life. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. Early prediction and classification of such lung diseases help to reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system using a deep learning approach. The proposed CAD system takes as input raw computerized tomography (CT) scans of the patient's chest and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract features automatically from each pre-processed CT image, yielding an effective 1D feature vector per input image. The output of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. Simulation outcomes on a publicly available dataset demonstrate the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
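The Min-Max step mentioned above rescales each extracted feature vector to the [0, 1] range before classification. A minimal, library-free sketch of that step (the feature values are hypothetical, and the paper's actual pipeline may apply it per feature across the dataset rather than per vector):

```python
def min_max_normalize(features):
    """Rescale a feature vector to the [0, 1] range.

    x' = (x - min) / (max - min); a constant vector maps to all
    zeros to avoid division by zero."""
    lo, hi = min(features), max(features)
    if hi == lo:
        return [0.0 for _ in features]
    return [(x - lo) / (hi - lo) for x in features]

# Hypothetical CNN feature values, for illustration only.
normalized = min_max_normalize([3.2, -1.0, 7.4, 0.0])
```

After this step every feature lies in [0, 1], so no single large-magnitude feature dominates the downstream classifiers.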

Keywords: CT scan, COVID-19, deep learning, image processing, lung disease classification

Procedia PDF Downloads 127
10537 Effects of Sacubitril and Valsartan on Gut Microbiome

Authors: Wei-Ju Huang, Hung-Pin Hsu

Abstract:

[Background] In congestive heart failure (CHF), controlling the body's water-retention mechanisms to prevent excessive fluid retention has always been a principle of clinical treatment. Early control of the sympathetic nervous system or the Renin-Angiotensin-Aldosterone System (RAAS), or strengthening of Atrial Natriuretic Peptide (ANP), was the focus. In the RAAS, related hormones such as angiotensin, or enzymes in the pathway, can be targeted with corresponding inhibitors (such as ACE inhibitors) to reduce water content. [Aim] In recent years, clinical studies have indicated that combining different mechanisms yields better control. For example, recent studies showed that ENTRESTO, a combination of sacubitril and valsartan, is an effective new drug for CHF. Sacubitril is a prodrug; after activation, it inhibits neprilysin and thereby reduces the breakdown of natriuretic peptides (ANP). Valsartan is an angiotensin receptor blocker (ARB); together, the combination acts as an angiotensin receptor-neprilysin inhibitor (ARNI), and using both to treat heart failure at the same time has an excellent curative effect. [Materials and Methods] Among the side effects of this drug, coughing and a few cases of diarrhea have been observed; however, the effect of the drug on the patient's intestinal tract has not been confirmed. On the other hand, studies have indicated that ANP supplementation can improve CHF and increase the inhibitory effect on cancer cells. Therefore, the purpose of this study is to use a specialized microbial detection method to determine whether the oral drugs affect microorganisms. The experiments used Nissui Compact Dry plates to observe different types of microorganisms.
After the drug is dissolved in water, it is plated in a petri dish, and the presence of different microorganisms is detected through different reactions to confirm whether the drug is toxic to gut microbes. [Results and Discussion] The experimental results show that, among the effects of sacubitril and valsartan on the basic microbial flora of the human body, low doses had no significant effect on Escherichia coli or intestinal bacteria. When sacubitril or valsartan at a high concentration of 3 mg/ml is used alone, or the two drugs are combined at high concentration, there is a significant inhibitory effect on Escherichia coli. For intestinal bacteria, however, high-concentration sacubitril has a more significant inhibitory effect, while high-concentration valsartan has a less significant one; the inhibitory effect of the two drugs combined on intestinal bacteria is also less significant. [Conclusion] The results of this study can serve as a further reference for the possible side effects of the clinical use of sacubitril and valsartan on the intestinal tract of patients.

Keywords: sacubitril, valsartan, entresto, congestive heart failure (CHF)

Procedia PDF Downloads 55
10536 India’s Energy System Transition, Survival of the Greenest

Authors: B. Sudhakara Reddy

Abstract:

The transition to a clean and green energy system is an economic and social transformation that is exciting as well as challenging. The world today faces a formidable challenge in transforming its economy from one driven primarily by fossil fuels, which are non-renewable and a major source of global pollution, to one that can function effectively on renewable energy sources and high levels of energy efficiency. In the present study, a green economy (GE) scenario is developed for India using a bottom-up approach. The results show that the penetration of renewable energy resources will reduce total primary energy demand by 23% under the GE scenario. Improvements in energy efficiency (e.g., in the household, industrial, and commercial sectors) will reduce demand by 318 MTOE. The volume of energy-related CO2 emissions declines to 2,218 Mt in 2030 from 3,440 Mt under the BAU scenario, and per capita emissions fall by about 35% (from 2.22 to 1.45) under the GE scenario. The reduced fossil fuel demand and focus on clean energy lower the energy intensity to 0.21 (TOE/US$ of GDP) and the carbon intensity to 0.42 (ton/US$ of GDP) under the GE scenario. The total import bill (coal and oil) will amount to US$ 334 billion by 2030 (at 2010/11 prices) under BAU, but under the GE scenario it would be US$ 194.2 billion, a saving of about US$ 140 billion. Building a green energy economy can also serve another purpose: to develop new ‘pathways out of poverty’ by creating more than 10 million jobs and thus raise the standard of living of low-income people. The differences between the baseline and green energy scenarios are not so much the consequence of the diffusion of various technologies as the result of the active roles of different actors and the drivers that become dominant.
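The headline savings figures quoted above follow directly from the scenario numbers; a quick arithmetic cross-check using only the values given in the abstract:

```python
# Figures quoted in the abstract (2030, at 2010/11 prices).
import_bill_bau = 334.0   # US$ billion, BAU scenario
import_bill_ge = 194.2    # US$ billion, GE scenario
savings = import_bill_bau - import_bill_ge  # quoted as "about US$ 140 billion"

per_capita_bau = 2.22     # per capita emissions, BAU
per_capita_ge = 1.45      # per capita emissions, GE scenario
reduction_pct = 100 * (1 - per_capita_ge / per_capita_bau)  # quoted as "about 35%"
```

Both quoted figures are consistent: the import-bill saving comes to US$ 139.8 billion, and the per capita emissions drop is roughly 34.7%.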

Keywords: emissions, green energy, fossil fuels, green jobs, renewables, scenario

Procedia PDF Downloads 519
10535 BFDD-S: Big Data Framework to Detect and Mitigate DDoS Attack in SDN Network

Authors: Amirreza Fazely Hamedani, Muzzamil Aziz, Philipp Wieder, Ramin Yahyapour

Abstract:

Software-defined networking has in recent years attracted many network designers as a successor to traditional networking. Unlike traditional networks, where the control and data planes reside together in a single device in the network infrastructure such as a switch or router, the two planes are kept separate in software-defined networks (SDNs). All critical decisions about packet routing are made on the network controller, and the data-plane devices forward packets based on these decisions. This type of network is vulnerable to DDoS attacks, which degrade the overall functioning and performance of the network by continuously injecting fake flows into it. This places a substantial burden on the controller and ultimately leads to the inaccessibility of the controller and the denial of network service to legitimate users. Thus, protecting this novel network architecture against denial-of-service attacks is essential. In the world of cybersecurity, attacks and new threats emerge every day, so it is essential to have tools capable of managing and analyzing all this new information to detect possible attacks in real time. These tools should provide a comprehensive solution to automatically detect, predict, and prevent abnormalities in the network. Big data encompasses a wide range of studies, but it mainly refers to the massive amounts of structured and unstructured data that organizations deal with on a regular basis. It concerns not only the volume of data but also how data-driven information can be used to enhance decision-making, security, and the overall efficiency of a business. This paper presents an intelligent big data framework as a solution to the illegitimate traffic burden placed on SDN networks by numerous DDoS attacks.
The framework entails an efficient defence and monitoring mechanism against DDoS attacks, employing state-of-the-art machine learning techniques.
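One common building block for the kind of real-time DDoS detection described above is the entropy of destination addresses within a traffic window: a flood concentrates traffic on few targets and drives the entropy down. The following is a minimal illustrative sketch, not the paper's actual framework; the threshold and addresses are hypothetical:

```python
import math
from collections import Counter

def dst_entropy(packets):
    """Shannon entropy (bits) of destination addresses in a traffic window."""
    counts = Counter(packets)
    total = len(packets)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_ddos(packets, threshold_bits=1.0):
    """Flag a window whose destination entropy falls below a (hypothetical) threshold."""
    return dst_entropy(packets) < threshold_bits

# Normal-looking window: traffic spread over many destinations.
normal = [f"10.0.0.{i}" for i in range(16)]
# Attack-looking window: almost all traffic aimed at one victim.
attack = ["10.0.0.1"] * 30 + ["10.0.0.2", "10.0.0.3"]
```

In a streaming deployment, such per-window features would be computed over Kafka/Spark micro-batches and fed to a trained classifier rather than a fixed threshold.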

Keywords: apache spark, apache kafka, big data, DDoS attack, machine learning, SDN network

Procedia PDF Downloads 153
10534 The Effective Use of the Network in the Distributed Storage

Authors: Mamouni Mohammed Dhiya Eddine

Abstract:

This work studies the exploitation of high-speed cluster networks for distributed storage. Parallel applications running on clusters require both high-performance communication between nodes and efficient access to the storage system. Many studies on network technologies have led to dedicated cluster architectures with very fast communication between computing nodes. Efficient distributed storage in clusters has essentially been achieved by adding parallelization mechanisms so that the server(s) can sustain an increased workload. In this work, we propose to improve the performance of distributed storage systems in clusters by efficiently using the underlying high-performance network to access distant storage systems. The main question we address is: do high-speed cluster networks fit the requirements of transparent, efficient, and high-performance access to remote storage? We show that storage requirements are very different from those of parallel computation. High-speed cluster networks were designed to optimize communication between the nodes of a parallel application; we study their use in a very different context, storage in clusters, where client-server models are generally used to access remote storage (for instance, NFS, PVFS, or LUSTRE). Our experimental study, based on the GM programming interface of MYRINET high-speed networks for distributed storage, raised several interesting problems. Firstly, the specific memory usage in the storage-access system layers does not easily fit the traditional memory model of high-speed networks. Secondly, the client-server models used for distributed storage have specific requirements for message control and event processing that existing interfaces do not handle. We propose several solutions to the communication control problems at the filesystem level and show that a modification of the network programming interface is required.
Data transfer issues also require an adaptation of the operating system. We detail several proposals for network programming interfaces that make them easier to use in the context of distributed storage. The integration of flexible data-transfer processing in the new programming interface MYRINET/MX is finally presented. Performance evaluations show that its use in the context of storage, as well as other types of applications, is easy and efficient.

Keywords: distributed storage, remote file access, cluster, high-speed network, MYRINET, zero-copy, memory registration, communication control, event notification, application programming interface

Procedia PDF Downloads 202
10533 Application of Sustainable Agriculture Based on LEISA in Landscape Design of Integrated Farming

Authors: Eduwin Eko Franjaya, Andi Gunawan, Wahju Qamara Mugnisjah

Abstract:

Sustainable agriculture in the form of integrated farming, with its LEISA (Low External Input Sustainable Agriculture) concept, has had a positive impact on agricultural development and on improvement of the surrounding environment. However, most small farmers in Indonesia do not know how to apply the concept or how to combine agricultural commodities on a site effectively and efficiently. This research aims to promote integrated farming (agro-fisheries, etc.) to farmers by designing an agricultural landscape as an integrated farming landscape that serves as a medium of education for them. The method used is closely related to the design process in landscape architecture. The first step is an inventory of the existing conditions on the research site. The second step is analysis. The third step is concept development, consisting of the base concept, the design concept, and the developed concept. The base concept used in this research is sustainable agriculture with LEISA. The design concept relates to the activities on the site. The developed concept covers space, circulation, vegetation and commodities, the production system, etc. The fourth and final step is planning and design, which produces a site plan of integrated farming based on LEISA. The result of this research is the site plan with its explanation, including the energy flow of the integrated farming system on the site and the production calendar of the integrated farming commodities, offering opportunities for education and agri-tourism. The research thus provides a practical way to promote integrated farming and a medium for farmers to learn about and develop it.

Keywords: integrated farming, LEISA, planning and design, site plan

Procedia PDF Downloads 486
10532 Treatment of Non-Small Cell Lung Cancer (NSCLC) With Activating Mutations Considering ctDNA Fluctuations

Authors: Moiseenko F. V., Volkov N. M., Zhabina A. S., Stepanova E. O., Kirillov A. V., Myslik A. V., Artemieva E. V., Agranov I. R., Oganesyan A. P., Egorenkov V. V., Abduloeva N. H., Aleksakhina S. Yu., Ivantsov A. O., Kuligina E. S., Imyanitov E. N., Moiseyenko V. M.

Abstract:

Analysis of ctDNA in patients with NSCLC is an emerging biomarker. Multiple research efforts at quantitative, or at least qualitative, analysis before and during the first periods of treatment with TKIs have shown the prognostic value of ctDNA clearance. Still, these important results have not been incorporated into clinical standards. We evaluated the role of ctDNA in EGFR-mutated NSCLC receiving first-line TKI therapy. First, we analyzed sequential plasma samples from 30 patients, collected before intake of the first tablet (at baseline) and at 6, 12, 24, 36, and 48 hours after this starting point. The EGFR-M+ allele was measured by ddPCR. We then added sequential qualitative analysis of ctDNA with the cobas® EGFR Mutation Test v2 in 99 NSCLC patients before the first dose, after 2 and 4 months of treatment, and on progression. The early-response analysis showed a decline of the EGFR-M+ level in plasma within the first 48 hours of treatment in 11 subjects, all of whom showed an objective tumor response. Ten patients showed either an elevation of the EGFR-M+ plasma concentration (n = 5) or a stable level of circulating EGFR-M+ after the start of therapy (n = 5); only 3 of these patients achieved an objective response (p = 0.026 compared to the former group). A rapid decline of the plasma EGFR-M+ DNA concentration also predicted longer PFS (13.7 vs. 11.4 months, p = 0.030). Long-term ctDNA monitoring showed clinically significant heterogeneity of EGFR-mutated NSCLC treated with first-line TKIs in terms of progression-free and overall survival. Patients without detectable ctDNA at baseline (N = 32) had the best prognosis (PFS: 24.07 [16.8-31.3] and OS: 56.2 [21.8-90.7] months). Those who achieved clearance after two months of TKI therapy (N = 42) had indistinguishably good PFS (19.0 [13.7-24.2]). Individuals who retained ctDNA after 2 months (N = 25) had the worst prognosis (PFS: 10.3 [7.0-13.5], p = 0.000).
Of these 25 patients, 9 did not achieve ctDNA clearance by 4 months, with no statistically significant difference in PFS from those without clearance at 2 months. The prognostic heterogeneity of EGFR-mutated NSCLC should be taken into consideration when planning further clinical trials and optimizing patient outcomes.

Keywords: NSCLC, EGFR, targeted therapy, ctDNA, prognosis

Procedia PDF Downloads 36
10531 A Conceptual Model of the 'Driver – Highly Automated Vehicle' System

Authors: V. A. Dubovsky, V. V. Savchenko, A. A. Baryskevich

Abstract:

The current trend in the automotive industry towards automated vehicles is creating new challenges related to human factors. This occurs because the driver is increasingly relieved of the need to be constantly involved in driving the vehicle, which can negatively impact his/her situation awareness when manual control is required and degrade driving skills and abilities. These new problems need to be studied in order to ensure road safety during the transition towards self-driving vehicles. For this purpose, it is important to develop an appropriate conceptual model of the interaction between the driver and the automated vehicle, which could serve as a theoretical basis for the development of mathematical and simulation models to explore different aspects of driver behaviour in different road situations. Well-known driver behaviour models describe the impact of different stages of the driver's cognitive process on driving performance but do not describe how the driver controls and adjusts his/her actions. A more complete description of the driver's cognitive process, including the evaluation of the results of his/her actions, will make it possible to model various aspects of the human factor in different road situations more accurately. This paper presents a conceptual model of the 'driver – highly automated vehicle' system based on P.K. Anokhin's theory of functional systems, a theoretical framework for describing internal processes in purposeful living systems based on such notions as the goal and the desired and actual results of purposeful activity. A central feature of the proposed model is a dynamic coupling mechanism between the driver's decision to perform a particular action and changes in road conditions due to the driver's actions. This mechanism is based on the stage-by-stage evaluation of the deviations of the actual values of the parameters of the driver's action results from the expected values.
The overall functional structure of the highly automated vehicle in the proposed model includes a driver/vehicle/environment state analyzer to coordinate the interaction between driver and vehicle. The proposed conceptual model can be used as a framework to investigate different aspects of human factors in transitions between automated and manual driving, both for future improvements in driving safety and for understanding how the driver-vehicle interface should be designed for comfort and safety. A major finding of this study is the demonstration that the theory of functional systems is promising and has the potential to describe the interaction of the driver with the vehicle and the environment.

Keywords: automated vehicle, driver behavior, human factors, human-machine system

Procedia PDF Downloads 123
10530 Identification, Isolation and Characterization of Unknown Degradation Products of Cefprozil Monohydrate by HPTLC

Authors: Vandana T. Gawande, Kailash G. Bothara, Chandani O. Satija

Abstract:

The present research work aimed to determine the stability of cefprozil monohydrate (CEFZ) under the stress degradation conditions recommended by the International Conference on Harmonization (ICH) guideline Q1A(R2). Forced degradation studies were carried out under hydrolytic, oxidative, photolytic and thermal stress conditions. The drug was found susceptible to degradation under all stress conditions. Separation was carried out using a High Performance Thin Layer Chromatography (HPTLC) system. Aluminum plates pre-coated with silica gel 60F254 were used as the stationary phase. The mobile phase consisted of ethyl acetate: acetone: methanol: water: glacial acetic acid (7.5:2.5:2.5:1.5:0.5 v/v). Densitometric analysis was carried out at 280 nm. The system was found to give a compact spot for cefprozil monohydrate (Rf 0.45). Linear regression analysis showed a good linear relationship in the concentration range 200-5000 ng/band for cefprozil monohydrate. Percent recovery for the drug was found to be in the range of 98.78-101.24%. The method was found to be reproducible, with a percent relative standard deviation (%RSD) for intra- and inter-day precision of < 1.5% over the said concentration range. The method was validated for precision, accuracy, specificity and robustness, and has been successfully applied to the analysis of the drug in tablet dosage form. Three unknown degradation products formed under various stress conditions were isolated by preparative HPTLC and characterized by mass spectrometric studies.

Keywords: cefprozil monohydrate, degradation products, HPTLC, stress study, stability indicating method

Procedia PDF Downloads 287
10529 Calculational-Experimental Approach of Radiation Damage Parameters on VVER Equipment Evaluation

Authors: Pavel Borodkin, Nikolay Khrennikov, Azamat Gazetdinov

Abstract:

The problem of ensuring the integrity of VVER-type reactor equipment is now highly topical in connection with justification of the safety of NPP units and the extension of their service life to 60 years and more. First of all, this concerns old units with VVER-440 and VVER-1000 reactors. The justification of VVER equipment integrity depends on the reliability of the estimation of the degree of equipment damage. One of the mandatory requirements providing the reliability of such estimation, and also the evaluation of VVER equipment lifetime, is the monitoring of equipment radiation loading parameters. In this connection, there is a problem of justification of such normative parameters, used for estimation of pressure vessel metal embrittlement, as the fluence and fluence rate (FR) of fast neutrons above 0.5 MeV. From the point of view of regulatory practice, a comparison of displacement per atom (DPA) and fast neutron fluence (FNF) above 0.5 MeV is of practical concern. In accordance with the Russian regulatory rules, neutron fluence F(E > 0.5 MeV) is the radiation exposure parameter used in steel embrittlement prediction under neutron irradiation. However, the DPA parameter is a more physically legitimate measure of neutron damage of Fe-based materials. If the DPA distribution in reactor structures is more conservative than the neutron fluence, this should attract the attention of the regulatory authority. The purpose of this work was to show which radiation load parameters (fluence, DPA) on all VVER equipment should be under control, and to give reasonable estimations of such parameters over the volume of all equipment. The second task was to give a conservative estimation of each parameter, including its uncertainty. Results of recently obtained investigations make it possible to test the conservatism of calculational predictions and, as shown in the paper, the combination of ex-vessel measured data with calculated ones allows assessment of unpredicted uncertainties that result from the specific unique features of individual equipment of a VVER reactor. Some results of these calculational-experimental investigations are presented in this paper.

Keywords: equipment integrity, fluence, displacement per atom, nuclear power plant, neutron activation measurements, neutron transport calculations

Procedia PDF Downloads 143
10528 Algorithms of ABS-Plastic Extrusion

Authors: Dmitrii Starikov, Evgeny Rybakov, Denis Zhuravlev

Abstract:

Filament is an essential material for 3D printers, and its production is a technological process that requires the application of different control algorithms. Possible algorithms for maintaining a set diameter of the plastic filament are proposed and described in the article. The results of the research were validated on an existing filament production unit.
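The abstract names a PID algorithm but does not give the control law or the line model. As a minimal sketch of how a PID loop could hold a set filament diameter by adjusting puller speed, here is an illustrative simulation; the gains, the 1.75 mm setpoint and the toy first-order plant are all assumptions, not values from the article:

```python
class PID:
    """Textbook PID controller; gains and plant below are hypothetical."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(steps=600, dt=0.05):
    """Toy filament line: diameter settles toward (2.75 - puller_speed) mm."""
    diameter, speed = 2.2, 0.5          # start too thick, puller too slow
    pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=1.75)  # target 1.75 mm
    for _ in range(steps):
        # negative sign: raising the puller speed thins the filament
        speed += -pid.update(diameter, dt) * dt
        diameter += (2.75 - speed - diameter) * dt
    return diameter
```

The integral term removes the steady-state diameter offset that a pure proportional controller would leave.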

Keywords: ABS-plastic, automation, control system, extruder, filament, PID-algorithm

Procedia PDF Downloads 390
10527 Three-Dimensional Finite Element Analysis of Geogrid-Reinforced Piled Embankments on Soft Clay

Authors: Mahmoud Y. Shokry, Rami M. El-Sherbiny

Abstract:

This paper aims to highlight the role of some parameters that may have a noticeable impact on the numerical analysis/design of embankments. It presents the results of a three-dimensional (3-D) finite element analysis, using the software PLAXIS 3D, of a monitored earth embankment constructed on a soft clay formation stabilized by cast in-situ piles. A comparison between the predicted and the monitored responses is presented to assess the adequacy of the adopted numerical model. The model was then used in the targeted parametric study. Moreover, a comparison was performed between the results of the 3-D analyses and analytical solutions. This paper concluded that using mono pile caps decreased both the total and differential settlement and increased the efficiency of the piled embankment system. The study of geogrids revealed that they can contribute to decreasing the settlement and maximizing the part of the embankment load transferred to the piles. Moreover, it was found that increasing the stiffness of the geogrids provides higher tensile forces and hence has a more effective influence on the embankment load carried by the piles than using multiple layers of geogrid with low stiffness. The efficiency of the piled embankment system was also found to be greater for higher embankments than for low embankments. The comparison between the numerical 3-D model and the theoretical design methods revealed that many analytical solutions are more conservative and less accurate than 3-D finite element numerical models.

Keywords: efficiency, embankment, geogrids, soft clay

Procedia PDF Downloads 308
10526 Mapping Environmental Complexity: A Strategic Tool for Sustainable Development of Road Infrastructure in Santa Catarina, Brazil

Authors: Edinei Coser, Cátia Regina Silva de Carvalho Pinto, Kleber Isaac Silva de Souza

Abstract:

The road transportation system is an integral part of the Brazilian economy, so investing in this sector is paramount. Despite being a significant contributor to national and regional development, implementing road infrastructure brings about significant environmental changes, resulting in negative impacts that need to be mitigated through environmental licensing. However, by considering potential environmental impacts from a strategic perspective at an earlier stage, the sustainable development resulting from investments in this sector can be made more efficient. Therefore, this work aims to incorporate strategic environmental assessment into the road transportation system in the state of Santa Catarina using a tool that evaluates the entire territory. The tool analyzes 15 qualitative socio-environmental factors that may complicate environmental licensing and project implementation, using multi-criteria analysis based on the Analytic Hierarchy Process (AHP) and geographic information systems with Python, and produces a surface map of environmental cost for the state of Santa Catarina, Brazil. This map represents how environmental restrictions are spatially distributed across the territory and can be used by governments and decision-makers to assess potential areas for road implementation or paving; evaluate and propose road corridors; propose, promote, and evaluate risks for governmental programs and investments; set environmental management guidelines; and enhance contracting and environmental assessment processes.
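The AHP step mentioned above reduces, for each pairwise comparison matrix, to extracting the principal eigenvector as the factor weights and checking judgment consistency. A minimal sketch with a hypothetical 3-factor matrix (the paper weighs 15 factors; the judgments below are invented for illustration):

```python
import numpy as np

# Hypothetical 3-factor pairwise comparison matrix on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],   # factor 1 judged 3x factor 2, 5x factor 3
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

def ahp_weights(A, iters=100):
    """Priority weights: normalized principal eigenvector via power iteration."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

def consistency_ratio(A, w):
    """CR = CI / RI; CR < 0.1 is conventionally acceptable."""
    n = A.shape[0]
    lam = ((A @ w) / w).mean()           # estimate of the principal eigenvalue
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    return ci / ri

w = ahp_weights(A)  # e.g. roughly (0.65, 0.23, 0.12) for the matrix above
```

In a GIS workflow, the resulting weights would then multiply the rasterized factor layers to form the surface map of environmental cost.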

Keywords: environmental impact assessment, GIS, highways, multi-criteria analysis, strategic environmental assessment

Procedia PDF Downloads 35
10525 Building up Regional Innovation Systems (RIS) for Development: The Case Study of the State of Mexico, México

Authors: Jose Luis Solleiro, Rosario Castanon, Laura Elena Martinez

Abstract:

The State of Mexico is an administrative entity of Mexico and one of its most important territories due to its great economic and social impact on the whole country, especially since it contributes more than eight percent of the national Gross Domestic Product (GDP). The State of Mexico has a population of over seventeen million people and hosts very important business and productive industries such as automotive, chemicals, pharmaceuticals, and agri-food. In 2017, the State Development Plan (Plan Estatal de Desarrollo in Spanish), a policy document that governs the State's economic actions and integrates the bases for sectoral and regional programs to achieve regional development, raised innovation as a key aspect to boost the competitiveness and productivity of the State of Mexico. Therefore, in line with this proposal, in 2018 the Mexican Council for Science and Technology (COMECYT for its acronym in Spanish), an institution in charge of promoting public science and technology policies in the State of Mexico, took actions towards building up the State's innovation system. Hence, the main objective of this paper is to review and analyze the process of creating an RIS in the State of Mexico. We focus on the key elements of the process, the diverse actors involved in it, the activities carried out, and the identification of the challenges, findings, successes, and failures of the exercise. The methodology used to analyze the structure of the innovation system of the State of Mexico is based on two elements: a case study and a research-action approach. To address the main objective of the paper, the case study was based on semi-structured interviews with key actors who have participated in the process of launching the RIS of the State of Mexico. Additionally, we analyzed the information reports and other documents elaborated during the process of shaping the State's innovation system. Finally, the results obtained in the process were also examined. The relevance of this investigation rests fundamentally on two elements: 1) keeping a documentary record of the process of building an RIS in Mexico; and 2) carrying out the analysis of this case study recognizing the importance of knowledge extraction and dissemination, so that lessons on this matter may be useful for similar experiences in the future. We conclude that in Mexico, documentation and analysis efforts related to the formation of RIS and the interaction processes between innovation ecosystem actors are scarce, so documents like this one are of great importance, especially since the analysis generates a series of findings and recommendations for the building of RIS.

Keywords: regional innovation systems, innovation, development, competitiveness

Procedia PDF Downloads 101
10524 Determination of the Stability of Haloperidol Tablets and Phenytoin Capsules Stored in the Inpatient Dispensary System (Swisslog) by the Respective HPLC and Raman Spectroscopy Assay

Authors: Carol Yue-En Ong, Angelina Hui-Min Tan, Quan Liu, Paul Chi-Lui Ho

Abstract:

A public general hospital in Singapore has recently implemented an automated unit-dose machine, Swisslog, in their inpatient dispensary, with the objective of reducing human error and improving patient safety. However, a concern about stability arises, as tablets are removed from their original packaging (bottled loose tablets/capsules) and repackaged into individual clear plastic wrappers as unit doses in the system. Drugs that are light-sensitive or hygroscopic would be more susceptible to degradation, as the wrapper does not offer full protection. Hence, this study was carried out to assess the stability of haloperidol tablets and phenytoin capsules, which are light-sensitive and hygroscopic, respectively. Validated HPLC-UV assays were first established for quantification of these two compounds. The medications involved were placed in the Swisslog and sampled every week for one month. The collected data were analysed and showed no degradation over time. This study also explored an alternative approach for drug stability determination: Raman spectroscopy. The advantages of Raman spectroscopy are its high time efficiency and non-destructive nature. The results suggest that drug degradation can indeed be detected using Raman microscopy, but further research is needed to establish this approach for quantification or qualification of compounds. NanoRam®, a portable Raman spectroscope, was also used alongside Raman microscopy but was unsuccessful in detecting degradation in this study.

Keywords: drug stability, haloperidol, HPLC, phenytoin, Raman spectroscopy, Swisslog

Procedia PDF Downloads 321
10523 Optimization of Personnel Selection Problems via Unconstrained Geometric Programming

Authors: Vildan Kistik, Tuncay Can

Abstract:

From a business perspective, cost and profit are two key factors. The intent of most businesses is to minimize cost so as to maximize or stabilize profit, and thereby provide the greatest benefit to the business itself. However, the physical system is very complicated because of technological constructions, the rapid growth of competitive environments, and similar factors. In such a system it is not easy to maximize profits or minimize costs. Businesses must assess the competence and suitability of the personnel to be recruited, taking many criteria into consideration. Factors such as the level of education, experience, psychological and sociological position, and the human relationships existing in the field are just some of the important considerations in selecting staff for a firm. Personnel selection is a very important and costly process for businesses in today's competitive market. Although many mathematical methods have been developed for personnel selection, their use is unfortunately rarely encountered in real life. In this study, unlike other methods, an exponential programming model was established based on the probability of failure once the selected personnel begin work. With the necessary transformations, the problem was converted into an unconstrained geometric programming problem, and the personnel selection problem is approached with the geometric programming technique. Personnel selection scenarios for a classroom were established with the help of the normal distribution, and optimum solutions were obtained. In the most appropriate solutions, the personnel selection process for the classroom was achieved with minimum cost.
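The abstract does not reproduce its model, but the mechanics of unconstrained geometric programming can be shown on the simplest zero-degree-of-difficulty posynomial, where the dual weights are fixed by the normality and orthogonality conditions alone. The cost coefficients below are invented for illustration:

```python
import math

# Minimize g(t) = c1*t + c2/t, a posynomial with zero degree of difficulty.
# GP duality: weights d1, d2 satisfy d1 + d2 = 1 (normality) and
# d1*(+1) + d2*(-1) = 0 (orthogonality in the exponent of t),
# so d1 = d2 = 1/2, and min g = (c1/d1)^d1 * (c2/d2)^d2.

def gp_min(c1, c2):
    d1 = d2 = 0.5
    g_star = (c1 / d1) ** d1 * (c2 / d2) ** d2   # optimal objective value
    t_star = math.sqrt(c2 / c1)                  # recovered primal minimizer
    return t_star, g_star

t, g = gp_min(c1=2.0, c2=8.0)
# direct check: g(2) = 2*2 + 8/2 = 8
```

Note how the dual solves the problem without differentiating the objective; this is the appeal of the technique for cost models of the kind described.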

Keywords: geometric programming, personnel selection, non-linear programming, operations research

Procedia PDF Downloads 253
10522 Moved by Music: The Impact of Music on Fatigue, Arousal and Motivation During Conditioning for High to Elite Level Female Artistic Gymnasts

Authors: Chante J. De Klerk

Abstract:

The potential of music to facilitate superior performance during high to elite level gymnastics conditioning instigated this research. A team of seven gymnasts completed a fixed conditioning programme eight times, alternating the two variable conditions. Four sessions of each condition were conducted: without music (session 1), with music (session 2), without music (3), with music (4), without music (5), and so forth. Quantitative data were collected in both conditions through physiological monitoring of the gymnasts, and administration of the Situational Motivation Scale (SIMS). Statistical analysis of the physiological data made it possible to quantify the presence as well as the magnitude of the musical intervention’s impact on various aspects of the gymnasts' physiological functioning during conditioning. The SIMS questionnaire results were used to evaluate if their motivation towards conditioning was altered by the intervention. Thematic analysis of qualitative data collected through semi-structured interviews revealed themes reflecting the gymnasts’ sentiments towards the data collection process. Gymnast-specific descriptions and experiences of the team as a whole were integrated with the quantitative data to facilitate greater dimension in establishing the impact of the intervention. The results showed positive physiological, motivational, and emotional effects. In the presence of music, superior sympathetic nervous activation, and energy efficiency, with more economic breathing, dominated the physiological data. Fatigue and arousal levels (emotional and physiological) were also conducive to improved conditioning outcomes compared to conventional conditioning (without music). Greater levels of positive affect and motivation emerged in analysis of both the SIMS and interview data sets. Overall, the intervention was found to promote psychophysiological coherence during the physical activity. 
In conclusion, a strategically constructed musical intervention, designed to accompany a gymnastics conditioning session for high to elite level gymnasts, has ergogenic potential.

Keywords: arousal, fatigue, gymnastics conditioning, motivation, musical intervention, psychophysiological coherence

Procedia PDF Downloads 73
10521 Optimized Techniques for Reducing the Reactive Power Generation in Offshore Wind Farms in India

Authors: Pardhasaradhi Gudla, Imanual A.

Abstract:

The electrical power generated offshore needs to be transmitted to the grid, which is located onshore, using subsea cables. Long subsea cables produce reactive power, which should be compensated in order to limit transmission losses, optimize the transmission capacity, and keep the grid voltage within safe operational limits. The installation cost of a wind farm includes the structure design cost and the electrical system cost. India has targeted 175 GW of renewable energy capacity by 2022, including offshore wind power generation. Because sea depths are greater in India, the installation cost will be higher than in European countries, where offshore wind energy is already being generated successfully. So innovations are required to reduce offshore wind power project costs. This paper presents optimized techniques to reduce the installation cost of an offshore wind farm with respect to the electrical transmission system. It provides techniques for increasing the current-carrying capacity of the subsea cable by decreasing its reactive power generation (capacitance effect). Many methods for reactive power compensation in wind power plants are already in use, and the main reason reactive power compensation is needed is the capacitance effect of the subsea cable. Thus, if we diminish the cable capacitance, the requirement for reactive power compensation will be reduced or optimized by avoiding the intermediate substation at the midpoint of the transmission network.
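The capacitance effect discussed above can be quantified with the standard three-phase charging-power relation Qc = ωCV² (V line-to-line). A back-of-envelope sketch with illustrative cable parameters, not values from the paper:

```python
import math

# Charging (reactive) power of an AC subsea export cable; values are
# illustrative, not from the paper.
def charging_power_mvar(voltage_kv, cap_uf_per_km, length_km, freq_hz=50.0):
    """Qc = 2*pi*f * C_total * V_LL^2 over the whole cable length, in Mvar."""
    omega = 2 * math.pi * freq_hz
    c_total = cap_uf_per_km * 1e-6 * length_km   # farads
    v = voltage_kv * 1e3                         # volts, line-to-line
    return omega * c_total * v * v / 1e6         # Mvar

# e.g. a 220 kV, 0.2 uF/km, 80 km cable generates roughly 240 Mvar
q = charging_power_mvar(220, 0.2, 80)
```

Since Qc scales linearly with the per-kilometre capacitance, halving the cable capacitance halves the compensation requirement, which is the lever the paper targets.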

Keywords: offshore wind power, optimized techniques, power system, sub sea cable

Procedia PDF Downloads 168
10520 Fixed Point Iteration of a Damped and Unforced Duffing's Equation

Authors: Paschal A. Ochang, Emmanuel C. Oji

Abstract:

The Duffing equation is a second-order system that is very important because such systems are fundamental to the behaviour of higher-order systems and have applications in almost all fields of science and engineering. In the biological area, it is useful in plant stem dependence and natural frequency and in modelling Brain Crash Analysis (BCA). In engineering, it is useful in the study of damping in indoor construction and traffic lights, and to the meteorologist it is used in the prediction of weather conditions. However, most problems that occur in real life are non-linear in nature and may not have analytical solutions except approximations or simulations, so trying to find an exact explicit solution may in general be complicated and sometimes impossible. We therefore aim to find out whether it is possible to obtain an analytical fixed point of the non-linear ordinary differential equation using a fixed point analytical method. We started by exposing the scope of the Duffing equation and other related work on it. With a major focus on fixed points and fixed point iterative schemes, we tried different iterative schemes on the Duffing equation. We identified that one can only find the fixed points of a damped Duffing equation, and not of an undamped Duffing equation, because the cubic nonlinearity term is the determining factor in the Duffing equation. We finally arrived at results identifying the stability of an equation that is damped, forced and second order in nature. Generally, in this research, we approximate the solution of the Duffing equation by converting it to a system of first-order ordinary differential equations and using a fixed point iterative approach. This approach shows that for different versions of the (damped) Duffing equation we find fixed points; therefore the order of computations and the running time of applied software in all fields using the Duffing equation will be reduced.
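As a rough illustration of the conversion described, a damped, unforced Duffing equation x'' + d·x' + a·x + b·x³ = 0 becomes the first-order system y₁' = y₂, y₂' = -d·y₂ - a·y₁ - b·y₁³, whose fixed points satisfy y₂ = 0 and a·y₁ + b·y₁³ = 0. The sketch below iterates the explicit-Euler map of that system to a fixed point; the coefficients are illustrative, not the authors' scheme or parameters:

```python
# Damped, unforced Duffing equation rewritten as a first-order system and
# driven to its fixed point by iterating the Euler map (illustrative values).
def duffing_fixed_point(x0, v0, a=-1.0, b=1.0, d=0.3, h=0.01, steps=20000):
    x, v = x0, v0
    for _ in range(steps):
        # simultaneous update of (y1, y2) = (x, v)
        x, v = x + h * v, v + h * (-d * v - a * x - b * x**3)
    return x, v

# With a = -1, b = 1 the fixed points are x in {0, +1, -1}; starting inside
# the right potential well, the damped trajectory settles on (+1, 0).
x_star, v_star = duffing_fixed_point(0.5, 0.0)
```

Note that without damping (d = 0) the same iteration orbits the fixed point instead of converging to it, which mirrors the abstract's observation about the damped versus undamped cases.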

Keywords: damping, Duffing's equation, fixed point analysis, second order differential, stability analysis

Procedia PDF Downloads 266
10519 Development of Star Image Simulator for Star Tracker Algorithm Validation

Authors: Zoubida Mahi

Abstract:

A successful satellite mission in space requires a reliable attitude and orbit control system to command, control and position the satellite in appropriate orbits. Several sensors are used for attitude control, such as magnetic sensors, earth sensors, horizon sensors, gyroscopes, and solar sensors. The star tracker is the most accurate of these sensors and is able to offer high-accuracy attitude control without the need for prior attitude information. There are mainly three approaches in star sensor research: digital simulation, hardware-in-the-loop simulation, and field tests of star observation. In the digital simulation approach, all of the processes are done in software, including star image simulation. Hence, it is necessary to develop star image simulation software that can simulate real space environments and various star sensor configurations. In this paper, we present a new stellar image simulation tool used to test and validate star sensor algorithms; the developed tool allows simulation of stellar images with several types of noise, such as background noise, Gaussian noise, Poisson noise, and multiplicative noise, and with several scenarios that occur in space, such as the presence of the moon, optical system problems, illumination, and false objects. On the other hand, we present in this paper a new star extraction algorithm based on a new centroid calculation method. We compared our algorithm with other star extraction algorithms from the literature, and the results obtained show the star extraction capability of the proposed algorithm.
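A minimal sketch of two of the ingredients described, a noisy star image and an intensity-weighted centroid, is shown below; the patch size, PSF width and noise levels are invented, and the paper's own centroid method is not reproduced here:

```python
import numpy as np

# Illustrative star-image patch: Gaussian PSF star plus Poisson (photon),
# Gaussian (read-out) and uniform background noise components.
rng = np.random.default_rng(0)

def star_patch(size=32, cx=15.3, cy=16.7, sigma=1.5, peak=2000.0):
    y, x = np.mgrid[0:size, 0:size]
    img = peak * np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))
    img = rng.poisson(img).astype(float)     # photon (Poisson) noise
    img += rng.normal(0.0, 5.0, img.shape)   # read-out (Gaussian) noise
    img += 10.0                              # uniform background level
    return img

def centroid(img, threshold=30.0):
    """Background-suppressed, intensity-weighted centroid (x, y)."""
    w = np.clip(img - threshold, 0.0, None)  # zero out sub-threshold pixels
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (w * x).sum() / w.sum(), (w * y).sum() / w.sum()

img = star_patch()
cx_est, cy_est = centroid(img)  # close to the true (15.3, 16.7) sub-pixel center
```

Sub-pixel centroid accuracy of this kind is what ultimately sets the attitude accuracy of a star tracker, which is why the noise models matter for algorithm validation.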

Keywords: star tracker, star simulation, star detection, centroid, noise, scenario

Procedia PDF Downloads 70
10518 Development, Evaluation and Scale-Up of a Mental Health Care Plan (MHCP) in Nepal

Authors: Nagendra P. Luitel, Mark J. D. Jordans

Abstract:

Globally, there is a significant gap between the number of individuals in need of mental health care and those who actually receive treatment. Evidence is accumulating that mental health services can be delivered effectively by primary health care workers through community-based programs and task-sharing approaches. Changing the role of specialist mental health workers from service delivery to building the clinical capacity of primary health care (PHC) workers could help in reducing the treatment gap in low- and middle-income countries (LMICs). We developed a comprehensive mental health care plan in 2012 and evaluated its feasibility and effectiveness over the past three years. Initially, a mixed-methods formative study was conducted for the development of the mental health care plan (MHCP). Routine monitoring and evaluation data, including client flow and reports of satisfaction, were obtained from beneficiaries (n=135) during the pilot-testing phase. A repeated community survey (N=2040), a facility detection survey (N=4704) and a cohort study (N=576) were conducted for the evaluation of the MHCP. The resulting MHCP consists of twelve packages divided over the community, health facility, and healthcare organization platforms. Detection of mental health problems increased significantly after introducing the MHCP. Service implementation data support the real-life applicability of the MHCP, with reasonable treatment uptake. Currently, the MHCP has been implemented in the entire Chitwan district, where over 1400 people (438 people with depression, 406 people with psychosis, 181 people with epilepsy, 360 people with alcohol use disorder and 51 others) have received mental health services from trained health workers. Key barriers were identified and addressed, namely dissatisfaction with privacy, perceived burden among health workers, high drop-out rates, and maintaining the supply of medicines. The results indicate that the involvement of PHC workers in the detection and management of mental health problems is an effective strategy to minimize the treatment gap in mental health care in Nepal.

Keywords: mental health, Nepal, primary care, treatment gap

Procedia PDF Downloads 279
10517 The State of Oral Health after COVID-19 Lockdown: A Systematic Review

Authors: Faeze Omid, Morteza Banakar

Abstract:

Background: The COVID-19 pandemic has had a significant impact on global health and healthcare systems, including oral health. The lockdown measures implemented in many countries have led to changes in oral health behaviors, access to dental care, and the delivery of dental services. However, the extent of these changes and their effects on oral health outcomes remain unclear. This systematic review aims to synthesize the available evidence on the state of oral health after the COVID-19 lockdown. Methods: We conducted a systematic search of electronic databases (PubMed, Embase, Scopus, and Web of Science) and grey literature sources for studies reporting on oral health outcomes after the COVID-19 lockdown. We included studies published in English between January 2020 and March 2023. Two reviewers independently screened the titles, abstracts, and full texts of potentially relevant articles and extracted data from included studies. We used a narrative synthesis approach to summarize the findings. Results: Our search identified 23 studies from 12 countries, including cross-sectional surveys, cohort studies, and case reports. The studies reported on changes in oral health behaviors, access to dental care, and the prevalence and severity of dental conditions after the COVID-19 lockdown. Overall, the evidence suggests that the lockdown measures had a negative impact on oral health outcomes, particularly among vulnerable populations. There were decreases in dental attendance, increases in dental anxiety and fear, and changes in oral hygiene practices. Furthermore, there were increases in the incidence and severity of dental conditions, such as dental caries and periodontal disease, and delays in the diagnosis and treatment of oral cancers.
Conclusion: The COVID-19 pandemic and associated lockdown measures have had significant effects on oral health outcomes, with negative impacts on oral health behaviors, access to care, and the prevalence and severity of dental conditions. These findings highlight the need for continued monitoring and interventions to address the long-term effects of the pandemic on oral health.

Keywords: COVID-19, oral health, systematic review, dental public health

Procedia PDF Downloads 53
10516 Design of an Acoustic Imaging Sensor Array for Mobile Robots

Authors: Dibyendu Roy, V. Ramu Reddy, Parijat Deshpande, Ranjan Dasgupta

Abstract:

Imaging of underwater objects is primarily conducted by acoustic imagery due to the severe attenuation of electromagnetic waves in water. Acoustic imagery underwater has a varied range of significant applications, such as side-scan sonar and mine-hunting sonar. It also finds utility in other domains, such as imaging of body tissues via ultrasonography and non-destructive testing of objects. In this paper, we explore the feasibility of using active acoustic imagery in air and simulate phased array beamforming techniques available in the literature for various array designs, in order to achieve a suitable acoustic sensor array design for a portable mobile robot that can detect the presence or absence of anomalous objects in a room. Multi-path reflection effects, especially in enclosed rooms, and environmental noise factors are currently not simulated and will be dealt with during the experimental phase. The related hardware is designed with the same feasibility criterion, namely that the developed system needs to be deployed on a portable mobile robot. There is a trade-off between image resolution and range on one hand and the array size, number of elements and imaging frequency on the other, which has to be iteratively simulated to achieve the desired acoustic sensor array design. The designed acoustic imaging array system is to be mounted on a portable mobile robot and targeted for use in surveillance missions for intruder alerts and for imaging objects during dark and smoky scenarios where conventional optics-based systems do not function well.
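The phased-array simulations mentioned above rest on narrowband delay-and-sum beamforming. A minimal sketch for a uniform linear array in air with an assumed 40 kHz carrier and half-wavelength spacing (illustrative values, not the paper's final design):

```python
import numpy as np

# Narrowband delay-and-sum beamformer for a uniform linear array (ULA) in air.
c = 343.0        # speed of sound in air, m/s
f = 40_000.0     # assumed 40 kHz ultrasonic carrier
lam = c / f
n_mics = 8
d = lam / 2      # half-wavelength spacing avoids grating lobes

def steering_vector(theta_rad):
    """Phase progression of a plane wave arriving from angle theta."""
    k = 2 * np.pi / lam
    pos = np.arange(n_mics) * d
    return np.exp(1j * k * pos * np.sin(theta_rad))

def beam_power(signal_theta, steer_theta):
    """Normalized array output power for a plane wave from signal_theta
    when the array is phase-aligned (steered) toward steer_theta."""
    a = steering_vector(signal_theta)
    w = steering_vector(steer_theta) / n_mics
    return abs(np.vdot(w, a)) ** 2

on = beam_power(np.deg2rad(20), np.deg2rad(20))    # steered at the source
off = beam_power(np.deg2rad(20), np.deg2rad(-40))  # steered well away
```

The gap between the on-target and off-target powers illustrates the resolution/array-size trade-off the abstract mentions: more elements narrow the main lobe but enlarge the hardware.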

Keywords: acoustic sensor array, acoustic imagery, anomaly detection, phased array beamforming

Procedia PDF Downloads 387