Search results for: singleton review spam detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7703

5603 The Use of Vasopressin in the Management of Severe Traumatic Brain Injury: A Narrative Review

Authors: Nicole Selvi Hill, Archchana Radhakrishnan

Abstract:

Introduction: Traumatic brain injury (TBI) is a leading cause of mortality among trauma patients. In the management of TBI, the main principle is avoiding cerebral ischemia, as this is a strong determinant of neurological outcomes. The use of vasoactive drugs, such as vasopressin, has an important role in maintaining cerebral perfusion pressure to prevent secondary brain injury. Current guidelines do not suggest a preferred vasoactive drug to administer in the management of TBI, and there is a paucity of information on the therapeutic potential of vasopressin following TBI. Vasopressin (AVP) is also an endogenous antidiuretic hormone, and pathways mediated by AVP play a large role in the underlying pathological processes of TBI. This creates an overlap of discussion regarding the therapeutic potential of vasopressin following TBI. Currently, its popularity lies in vasodilatory and cardiogenic shock in the intensive care setting, with increasing support for its use in haemorrhagic and septic shock. Methodology: This is a narrative review based on a literature search. An electronic search was conducted via PubMed, Cochrane, EMBASE, and Google Scholar. The aim was to identify clinical studies of the therapeutic administration of vasopressin in severe traumatic brain injury. The primary aim was to assess the neurological outcome of patients. The secondary aim was to assess surrogate markers of cerebral perfusion, such as cerebral perfusion pressure, cerebral oxygenation, and cerebral blood flow. Results: Eight papers were included in the final analysis. Three were animal studies; five were human studies, comprising three case reports, one retrospective review of data, and one randomised controlled trial. All animal studies demonstrated the benefits of vasopressors in TBI management. One animal study showed the superiority of vasopressin in reducing intracranial pressure and increasing cerebral oxygenation over a catecholaminergic vasopressor, phenylephrine. All three human case reports were supportive of vasopressin as a rescue therapy in catecholamine-resistant hypotension. The retrospective review found that vasopressin did not increase cerebral oedema in TBI patients compared to catecholaminergic vasopressors and demonstrated a significant reduction in hyperosmolar therapy requirements in patients who received vasopressin. The randomised controlled trial showed no significant differences in primary and secondary outcomes between TBI patients receiving vasopressin and those receiving catecholaminergic vasopressors. Apart from the randomised controlled trial, the included studies are of low-level evidence. Conclusion: Studies favour vasopressin within certain parameters of cerebral function compared to control groups. However, the neurological outcomes of the patient groups are not known, and animal study results are difficult to extrapolate to humans. Given the weaknesses of the evidence, it cannot be said with certainty whether vasopressin's benefits exceed those of other vasoactive drugs. Larger, standardised, and rigorous randomised controlled trials are required to improve knowledge in this field.

Keywords: catecholamines, cerebral perfusion pressure, traumatic brain injury, vasopressin, vasopressors

Procedia PDF Downloads 58
5602 The State of Oral Health after COVID-19 Lockdown: A Systematic Review

Authors: Faeze Omid, Morteza Banakar

Abstract:

Background: The COVID-19 pandemic has had a significant impact on global health and healthcare systems, including oral health. The lockdown measures implemented in many countries have led to changes in oral health behaviors, access to dental care, and the delivery of dental services. However, the extent of these changes and their effects on oral health outcomes remain unclear. This systematic review aims to synthesize the available evidence on the state of oral health after the COVID-19 lockdown. Methods: We conducted a systematic search of electronic databases (PubMed, Embase, Scopus, and Web of Science) and grey literature sources for studies reporting on oral health outcomes after the COVID-19 lockdown. We included studies published in English between January 2020 and March 2023. Two reviewers independently screened the titles, abstracts, and full texts of potentially relevant articles and extracted data from included studies. We used a narrative synthesis approach to summarize the findings. Results: Our search identified 23 studies from 12 countries, including cross-sectional surveys, cohort studies, and case reports. The studies reported on changes in oral health behaviors, access to dental care, and the prevalence and severity of dental conditions after the COVID-19 lockdown. Overall, the evidence suggests that the lockdown measures had a negative impact on oral health outcomes, particularly among vulnerable populations. There were decreases in dental attendance, increases in dental anxiety and fear, and changes in oral hygiene practices. Furthermore, there were increases in the incidence and severity of dental conditions, such as dental caries and periodontal disease, and delays in the diagnosis and treatment of oral cancers. 
Conclusion: The COVID-19 pandemic and associated lockdown measures have had significant effects on oral health outcomes, with negative impacts on oral health behaviors, access to care, and the prevalence and severity of dental conditions. These findings highlight the need for continued monitoring and interventions to address the long-term effects of the pandemic on oral health.

Keywords: COVID-19, oral health, systematic review, dental public health

Procedia PDF Downloads 59
5601 Modern Proteomics and the Application of Machine Learning Analyses in Proteomic Studies of Chronic Kidney Disease of Unknown Etiology

Authors: Dulanjali Ranasinghe, Isuru Supasan, Kaushalya Premachandra, Ranjan Dissanayake, Ajith Rajapaksha, Eustace Fernando

Abstract:

Proteomics studies of organisms are considered significantly more information-rich than their genomic counterparts, because the proteome represents the expressed state of all proteins of an organism at a given time. In modern top-down and bottom-up proteomics workflows, the primary analysis methods are gel-based methods, such as two-dimensional (2D) electrophoresis, and mass spectrometry-based methods. Machine learning (ML) and artificial intelligence (AI) have been used increasingly in modern biological data analyses. In particular, the fields of genomics, DNA sequencing, and bioinformatics have seen growing use of ML and AI techniques in recent years. The use of these techniques in proteomics studies is only now beginning to materialise. Although there is a wealth of information in the scientific literature pertaining to proteomics workflows, no comprehensive review addresses the combined use of proteomics and machine learning. The objective of this review is to provide a comprehensive outlook on the application of machine learning to established proteomics workflows in order to extract more meaningful information that could be useful in a plethora of applications, such as medicine, agriculture, and biotechnology.

Keywords: proteomics, machine learning, gel-based proteomics, mass spectrometry

Procedia PDF Downloads 137
5600 Comparison of Urban Regeneration Strategies in Asia and the Development of Neighbourhood Regeneration in Malaysia

Authors: Wan Jiun Tin

Abstract:

Neighborhood regeneration has gained popularity even though market-led urban redevelopment remains the main strategy in most Asian countries. An area-based approach to neighborhood regeneration focused on people, place, and system, which covers the main aspects of sustainability, should be studied as part of the solution. An advantage of neighborhood regeneration is that projects can be implemented on a small scale without depending fully on financial support from the government and main stakeholders. This enables the improvement and upgrading of living conditions to continue even during economic downturns. In addition, no specific areas need to be singled out for development, as the entire nation shares a similar opportunity to upgrade and improve its neighborhoods. This is important for narrowing urban income disparities. The objective of this paper is to review and summarize urban regeneration in developed countries, with a focus on Korea, Singapore, and Hong Kong. The aim is to determine the direction of sustainable urban regeneration in Malaysia for post-Vision 2020 through the introduction of neighborhood regeneration. This study was conducted via literature review and observations in the selected countries. In conclusion, neighborhood regeneration should be one of the approaches to sustainable urban regeneration in Malaysia. A few criteria have been identified and are recommended for adaptation in Malaysia.

Keywords: area-based regeneration, public participation, sustainable urban regeneration, urban redevelopment

Procedia PDF Downloads 258
5599 A Review of Test Protocols for Assessing Coating Performance of Water Ballast Tank Coatings

Authors: Emmanuel A. Oriaifo, Noel Perera, Alan Guy, Pak S. Leung, Kian T. Tan

Abstract:

Concerns have been raised about corrosion and effective coating protection of double hull tankers and bulk carriers in service, especially in water ballast tanks (WBTs). Test protocols/methodologies, specifically those incorporated in the International Maritime Organisation (IMO) Performance Standard for Protective Coatings for Dedicated Sea Water Ballast Tanks (PSPC), are used to assess and evaluate the performance of coatings for type approval prior to their application in WBTs. However, some type-approved coatings may be applied as very thick films to less than ideally prepared steel substrates in the WBT. As such films experience hygrothermal cycling from operating and environmental conditions, they become embrittled, which may ultimately result in cracking. This embrittlement of the coatings is identified as an undesirable feature in the PSPC but is not addressed in the test protocols within it. There is therefore renewed industrial research aimed at understanding this issue in order to eliminate cracking and achieve the intended coating lifespan of 15 years in good condition. This paper critically reviews the test protocols currently used for assessing and evaluating coating performance, particularly those in the IMO PSPC.

Keywords: corrosion test, hygrothermal cycling, coating test protocols, water ballast tanks

Procedia PDF Downloads 417
5598 Exploration and Exploitation within Operations

Authors: D. Gåsvaer, L. Stålberg, A. Fundin, M. Jackson, P. Johansson

Abstract:

Exploration and exploitation capabilities are both important within Operations: as means for improvement when managed separately, and for establishing dynamic improvement capabilities when combined in balance. However, it is unclear what exploration and exploitation capabilities imply for improvement and development work in an operations context. In order to better understand how to develop exploration and exploitation capabilities within operations, the main characteristics of these constructs need to be identified and further understood. Thus, the objective of this research is to increase the understanding of exploitation and exploration characteristics, to concretize what they translate to within the context of improvement and development work in an operations unit, and to identify practical challenges. A literature review and a case study are presented. In the literature review, different interpretations of exploration and exploitation are portrayed, key characteristics are identified, and a deepened understanding of exploration and exploitation characteristics is described. The case in the study is an operations unit, and the aim is to explore to what extent and in what ways exploration and exploitation activities are part of the improvement structures and processes. The contribution includes an identification of key characteristics of exploitation and exploration, as well as an interpretation of the constructs. Further, some practical challenges are identified. For instance, exploration activities tend to be given low priority, both in daily work and in the manufacturing strategy. Also, the overall understanding of the concepts of exploitation and exploration (or any similar aspect of dynamic improvement capabilities) is very low.

Keywords: exploitation, exploration, improvement, lean production, manufacturing

Procedia PDF Downloads 472
5597 Detection of Defects in CFRP by Ultrasonic IR Thermographic Method

Authors: W. Swiderski

Abstract:

This paper introduces a diagnostic technique that makes it possible to examine the internal structure of fibre-reinforced composite materials used in different applications. The main cause of damage in structures made of these materials is the changing distribution of load over a construction's lifetime. The resulting defects are complicated, involving discontinuities in the reinforcing fibres, binder cracks, and loss of adhesion between fibres and binder. Defects in composite materials are usually more complicated than in metals. At present, infrared thermography is the most effective method for non-destructive testing of composites. One IR thermography method used in non-destructive evaluation is vibrothermography. Vibrothermography is not a new non-destructive method, but the novelty in this test is the use of ultrasonic waves for thermal stimulation of the material. In this paper, both modelling and experimental results illustrating the advantages and limitations of ultrasonic IR thermography in inspecting composite materials will be presented. The ThermoSon computer program, which computes 3D dynamic temperature distributions in anisotropic layered solids with subsurface defects subject to ultrasonic stimulation, was used to optimise heating parameters in the detection of subsurface defects in composite materials. The program allows for the analysis of transient heat conduction and ultrasonic wave propagation phenomena in solids. The experiments at MIAT were performed with a FLIR SC 7600 IR camera. Ultrasonic stimulation was performed at frequencies from 15 kHz to 30 kHz with a maximum power of up to 2 kW.

Keywords: composite material, ultrasonic, infrared thermography, non-destructive testing

Procedia PDF Downloads 285
5596 Short Review on Models to Estimate the Risk in the Financial Area

Authors: Tiberiu Socaciu, Tudor Colomeischi, Eugenia Iancu

Abstract:

Business failure affects, in various proportions, shareholders, managers, lenders (banks), suppliers, customers, the financial community, government, and society as a whole. In an era of telecommunications networks and interdependent markets, the effect of a company's failure is felt almost instantly. Effectively managing risk exposure thus requires sophisticated support systems, backed by analytical tools to measure, monitor, manage, and control the operational risks that may arise. Bankruptcy is a phenomenon that managers want to avoid at every stage of the life of the company they lead. In our analysis, given the nature of the economic models reviewed (Altman, Conan-Holder, etc.), estimating a company's bankruptcy risk corresponds to some extent to tracing its own business cycle. Various bankruptcy prediction models take into account direct and indirect aspects such as market position, company growth trend, competition structure, customer characteristics and retention, organization and distribution, location, etc. From the perspective of our research, we review the economic models known in theory and practice for estimating bankruptcy risk; such models are based on indicators drawn from major accounting firms.
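
As an illustration of the kind of statistical model reviewed here, the following is a minimal sketch of the classic Altman (1968) Z-score for public manufacturing firms; the function name and the figures in the usage example are hypothetical, and other reviewed models (Conan-Holder, national variants) follow the same pattern with different ratios and coefficients:

```python
def altman_z(working_capital, retained_earnings, ebit, market_equity,
             sales, total_assets, total_liabilities):
    """Classic Altman (1968) Z-score. Conventionally, Z < 1.81 suggests
    distress and Z > 2.99 a safe zone, with a grey area in between."""
    x1 = working_capital / total_assets      # liquidity
    x2 = retained_earnings / total_assets    # cumulative profitability
    x3 = ebit / total_assets                 # operating efficiency
    x4 = market_equity / total_liabilities   # leverage
    x5 = sales / total_assets                # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Hypothetical firm: assets 100, liabilities 50 (all in the same currency unit)
z = altman_z(10.0, 20.0, 10.0, 50.0, 100.0, 100.0, 50.0)  # → 2.33, grey area
```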

Keywords: Anglo-Saxon models, continental models, national models, statistical models

Procedia PDF Downloads 388
5595 An Autonomous Passive Acoustic System for Detection, Tracking and Classification of Motorboats in Portofino Sea

Authors: A. Casale, J. Alessi, C. N. Bianchi, G. Bozzini, M. Brunoldi, V. Cappanera, P. Corvisiero, G. Fanciulli, D. Grosso, N. Magnoli, A. Mandich, C. Melchiorre, C. Morri, P. Povero, N. Stasi, M. Taiuti, G. Viano, M. Wurtz

Abstract:

This work describes a real-time algorithm for detecting, tracking and classifying single motorboats, developed using acoustic data recorded by a hydrophone array within the framework of the EU LIFE+ project ARION (LIFE09NAT/IT/000190). The project aims to improve the conservation status of bottlenose dolphins through real-time simultaneous monitoring of their population and surface ship traffic. A Passive Acoustic Monitoring (PAM) system is installed on two autonomous permanent marine buoys, located close to the boundaries of the Marine Protected Area (MPA) of Portofino (Ligurian Sea, Italy). Detecting surface ships is also a necessity in many other sensitive areas, such as wind farms, oil platforms, and harbours. A PAM system could be an effective alternative to the usual monitoring systems, such as radar or active sonar, for localizing unauthorized ship presence or illegal activities, with the advantage of not revealing its own presence. Each ARION buoy consists of a particular type of structure, named meda elastica (elastic beacon), composed of a main pole, about 30 meters in length and emerging for 7 meters, anchored to a 30-ton mooring at 90 m depth by an anti-twist steel wire. Each buoy is equipped with a floating element and a hydrophone tetrahedron array, whose raw data are sent via a Wi-Fi bridge to a ground station where real-time analysis is performed. The bottlenose dolphin detection algorithm and the ship monitoring algorithm operate in parallel and in real time. Three modules were developed and commissioned for ship monitoring. The first is the detection algorithm, based on Time Difference Of Arrival (TDOA) measurements, i.e., the evaluation of the angular direction of the target with respect to each buoy and triangulation to obtain the target position. The second is the tracking algorithm, based on a Kalman filter, i.e., the estimation of the real course and speed of the target through a predictor filter. Finally, the classification algorithm is based on the DEMON method, i.e., the extraction of the acoustic signature of single vessels. The following results were obtained: the detection algorithm succeeded in evaluating the bearing angle with respect to each buoy and the position of the target, with an uncertainty of 2 degrees and a maximum range of 2.5 km. The tracking algorithm succeeded in reconstructing the real vessel courses and estimating the speed to within 20% of the Automatic Identification System (AIS) signals. The classification algorithm succeeded in isolating the acoustic signature of single vessels, demonstrating its temporal stability and the consistency of the results from both buoys. As a reference, the results were compared with the Hilbert transform of single-channel signals. The algorithm for tracking multiple targets is ready to be developed, thanks to the modularity of the single-ship algorithm: the classification module will enumerate and identify all targets present in the study area; for each of them, the detection module and the tracking module will be applied to monitor their course.
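
The detection module combines per-buoy bearing estimates with triangulation; a minimal 2D sketch of the bearing-line intersection is given below. It illustrates the geometry only, not the ARION implementation, and the buoy positions and bearings in the example are hypothetical:

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect two bearing lines (angles in radians from the +x axis)
    cast from known buoy positions p1 and p2 to estimate the target fix."""
    # Unit direction vectors of the two bearing lines
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 via 2x2 determinants
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("bearing lines are parallel; no unique fix")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two buoys 1 km apart both sight a hypothetical target at (500, 500)
pos = triangulate((0.0, 0.0), math.atan2(500, 500),
                  (1000.0, 0.0), math.atan2(500, -500))  # → ~(500.0, 500.0)
```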

Keywords: acoustic-noise, bottlenose-dolphin, hydrophone, motorboat

Procedia PDF Downloads 155
5594 Video Object Segmentation for Automatic Image Annotation of Ethernet Connectors with Environment Mapping and 3D Projection

Authors: Marrone Silverio Melo Dantas, Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner, Djamel Fawzi Hadj Sadok

Abstract:

The creation of a dataset is time-consuming and often discourages researchers from pursuing their goals. To overcome this problem, we present and discuss two solutions adopted for the automation of this process. Both optimize valuable user time and resources and support video object segmentation with object tracking and 3D projection. In our scenario, we acquire images from a moving robotic arm and, for each approach, generate distinct annotated datasets. We evaluated the precision of the annotations by comparing them with a manually annotated dataset, as well as their efficiency in the context of detection and classification problems. For detection support, we used YOLO and obtained, for the projection dataset, F1-Score, accuracy, and mAP values of 0.846, 0.924, and 0.875, respectively. Concerning the tracking dataset, we achieved an F1-Score of 0.861 and an accuracy of 0.932, whereas mAP reached 0.894. In order to evaluate the quality of the annotated images used for classification problems, we employed deep learning architectures, adopting accuracy and F1-Score as metrics for VGG, DenseNet, MobileNet, Inception, and ResNet. The VGG architecture outperformed the others for both the projection and tracking datasets. For the projection dataset, it reached an accuracy and F1-Score of 0.997 and 0.993, respectively; for the tracking dataset, it achieved an accuracy of 0.991 and an F1-Score of 0.981.
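
For reference, the F1-Score reported above is the harmonic mean of precision and recall; a minimal sketch of the computation from raw detection counts (the counts in the example are hypothetical, not taken from the paper):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, computed from true
    positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 8 correct detections, 2 spurious, 2 missed
score = f1_score(8, 2, 2)  # → 0.8
```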

Keywords: RJ45, automatic annotation, object tracking, 3D projection

Procedia PDF Downloads 147
5593 Determination of Marbofloxacin in Pig Plasma Using LC-MS/MS and Its Application to the Pharmacokinetic Studies

Authors: Jeong Woo Kang, MiYoung Baek, Ki-Suk Kim, Kwang-Jick Lee, ByungJae So

Abstract:

Introduction: A fast, easy, and sensitive detection method was developed and validated by liquid chromatography tandem mass spectrometry for the determination of marbofloxacin in pig plasma, which was further applied to study the pharmacokinetics of marbofloxacin. Materials and Methods: The plasma sample (500 μL) was mixed with 1.5 mL of 0.1% formic acid in MeCN to precipitate plasma proteins. After shaking for 20 min, the mixture was centrifuged at 5,000 × g for 30 min and dried under a nitrogen flow at 50 °C. A 500 μL aliquot of the sample was injected into the LC-MS/MS system. Chromatographic analysis was carried out with a mobile phase gradient consisting of 0.1% formic acid in D.W. (A) and 0.1% formic acid in MeCN (B) on a C18 reverse phase column. Mass spectrometry was performed in positive ion mode with multiple reaction monitoring (MRM). Results and Conclusions: The method validation was performed in the sample matrix. Good linearity (R2 > 0.999) was observed, and the average recoveries of marbofloxacin were 87-92% at levels of 10-100 ng g-1. The coefficient of variation (CV) for the described method was less than 10% over the range of concentrations studied. The limits of detection (LOD) and quantification (LOQ) were 2 and 5 ng g-1, respectively. This method has also been applied successfully to pharmacokinetic analysis of marbofloxacin after intravenous (IV), intramuscular (IM), and oral (PO) administration. The mean peak plasma concentration (Cmax) was 2,597 ng g-1 at 0.25 h, 2,587 ng g-1 at 0.44 h, and 2,355 ng g-1 at 1.58 h for IV, IM, and PO, respectively. The area under the plasma concentration-time curve (AUC0-t) was 24.8, 29.0, and 25.2 h μg/mL for IV, IM, and PO, respectively. The elimination half-life (T1/2) was 8.6, 13.1, and 9.5 h for IV, IM, and PO, respectively. Bioavailability (F) of marbofloxacin in pigs was 117% and 101% for IM and PO, respectively. Based on these results, marbofloxacin presents no pharmacokinetic obstacles to the development of oral formulations such as tablets and capsules.
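
AUC(0-t) figures such as those above are conventionally computed with the linear trapezoidal rule over the concentration-time profile, and T1/2 from the terminal elimination rate constant; a minimal sketch under those standard assumptions (not the authors' actual software, and the sample profile is hypothetical):

```python
import math

def auc_trapezoid(times, concs):
    """AUC(0-t) by the linear trapezoidal rule over (time, concentration) pairs."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(zip(times, concs),
                                             zip(times[1:], concs[1:])))

def half_life(k_el):
    """Terminal elimination half-life from the elimination rate constant k_el."""
    return math.log(2) / k_el

# Hypothetical profile: concentration rises to 10 at t=1 h, back to 0 at t=2 h
auc = auc_trapezoid([0.0, 1.0, 2.0], [0.0, 10.0, 0.0])  # → 10.0
```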

Keywords: marbofloxacin, LC-MS/MS, pharmacokinetics, chromatographic

Procedia PDF Downloads 529
5592 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among its possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one solution to this problem. Removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes considerable time fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects. For a given decision table, there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, the hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input. The output of the algorithm is the superreduct, which is a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table. The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct; ii) the additional first stage may be unnecessary if the core is empty. But for systems focused on fast computation of the reduct, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called the 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC. The execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing in hardware.
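
The singleton-detector condition (an input word with exactly one set bit) and its role in core calculation can be modelled in software as follows; this is a behavioural sketch of the combinational logic, not the FPGA design itself, and the attribute bit-masks in the example are illustrative:

```python
def is_singleton(word):
    """True if the bit-word has exactly one set bit; this is the condition
    the hardware 'singleton detector' checks combinationally."""
    return word != 0 and (word & (word - 1)) == 0

def core_from_discernibility(entries):
    """Union of attributes from singleton discernibility-matrix entries.
    A singleton entry means that attribute alone distinguishes a pair of
    objects, so it is indispensable and belongs to the core."""
    core = 0
    for entry in entries:
        if is_singleton(entry):
            core |= entry
    return core

# Entries as attribute bit-masks; 0b001 and 0b100 are singletons
core = core_from_discernibility([0b001, 0b110, 0b100])  # → 0b101
```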

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 205
5591 Persistent Ribosomal In-Frame Mis-Translation of Stop Codons as Amino Acids in Multiple Open Reading Frames of a Human Long Non-Coding RNA

Authors: Leonard Lipovich, Pattaraporn Thepsuwan, Anton-Scott Goustin, Juan Cai, Donghong Ju, James B. Brown

Abstract:

Two-thirds of human genes do not encode any known proteins. Aside from long non-coding RNA (lncRNA) genes with recently-discovered functions, the ~40,000 non-protein-coding human genes remain poorly understood, and a role for their transcripts as de-facto unconventional messenger RNAs has not been formally excluded. Ribosome profiling (Riboseq) predicts translational potential, but without independent evidence of proteins from lncRNA open reading frames (ORFs), ribosome binding of lncRNAs does not prove translation. Previously, we mass-spectrometrically documented translation of specific lncRNAs in human K562 and GM12878 cells. We now examined lncRNA translation in human MCF7 cells, integrating strand-specific Illumina RNAseq, Riboseq, and deep mass spectrometry in biological quadruplicates performed at two core facilities (BGI, China; City of Hope, USA). We excluded known-protein matches. UCSC Genome Browser-assisted manual annotation of imperfect (tryptic-digest-peptides)-to-(lncRNA-three-frame-translations) alignments revealed three peptides hypothetically explicable by 'stop-to-nonstop' in-frame replacement of stop codons by amino acids in two ORFs of the lncRNA MMP24-AS1. To search for this phenomenon genomewide, we designed and implemented a novel pipeline, matching tryptic-digest spectra to wildcard-instead-of-stop versions of repeat-masked, six-frame, whole-genome translations. Along with singleton putative stop-to-nonstop events affecting four other lncRNAs, we identified 24 additional peptides with stop-to-nonstop in-frame substitutions from multiple positive-strand MMP24-AS1 ORFs. Only UAG and UGA, never UAA, stop codons were impacted. All MMP24-AS1-matching spectra met the same significance thresholds as high-confidence known-protein signatures. Targeted resequencing of MMP24-AS1 genomic DNA and cDNA from the same samples did not reveal any mutations, polymorphisms, or sequencing-detectable RNA editing. 
This unprecedented apparent gene-specific violation of the genetic code highlights the importance of matching peptides to whole-genome, not known-genes-only, ORFs in mass-spectrometry workflows, and suggests a new mechanism enhancing the combinatorial complexity of the proteome. Funding: NIH Director’s New Innovator Award 1DP2-CA196375 to LL.
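
The wildcard-matching step described above can be sketched as follows: translate all six reading frames of a genomic sequence, emitting the wildcard 'X' in place of each stop codon so that search engines can match peptides spanning a putatively mis-translated stop. This is an illustrative reimplementation using the standard codon table, not the authors' pipeline:

```python
from itertools import product

BASES = "TCAG"
# Standard genetic code in TCAG codon order; '*' marks stop codons
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON = {"".join(c): AMINO[i] for i, c in enumerate(product(BASES, repeat=3))}
COMP = str.maketrans("ACGT", "TGCA")

def six_frame_wildcard(dna):
    """Translate all six reading frames of `dna` (uppercase A/C/G/T),
    replacing each stop codon with the wildcard 'X'."""
    frames = []
    for seq in (dna, dna.translate(COMP)[::-1]):  # forward, then reverse complement
        for offset in range(3):
            codons = (seq[i:i + 3] for i in range(offset, len(seq) - 2, 3))
            frames.append("".join("X" if CODON[c] == "*" else CODON[c]
                                  for c in codons))
    return frames

frames = six_frame_wildcard("ATGTAA")  # frame 0 reads ATG TAA → "MX"
```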

Keywords: genetic code, lncRNA, long non-coding RNA, mass spectrometry, proteogenomics, ribo-seq, ribosome, RNAseq

Procedia PDF Downloads 214
5590 Space Debris Mitigation: Solutions from the Dark Skies of the Remote Australian Outback Using a Proposed Network of Mobile Astronomical Observatories

Authors: Muhammad Akbar Hussain, Muhammad Mehdi Hussain, Waqar Haider

Abstract:

There are tens of thousands of undetected and uncatalogued pieces of space debris in Low Earth Orbit (LEO). Not only are they difficult to detect and track, their sheer number endangers active satellites and humans in orbit around Earth. With more governments and private companies harnessing Earth's orbit for communication, research, and military purposes, there is an ever-increasing need not only to detect and catalogue these pieces of space debris but also to take measures to remove them and clean up the space around Earth. Current optical and radar-based Space Situational Awareness initiatives are useful mostly for detecting and cataloguing larger pieces of debris, mainly for avoidance measures. Pieces smaller than 10 cm lie in a relatively dark zone, yet they are deadly and capable of destroying satellites and human missions. A network of mobile observatories, connected to each other in real time and working in unison as a single instrument, may be able to detect small pieces of debris and achieve effective triangulation, helping to create a comprehensive database of their trajectories and parameters to the highest level of precision. This data may enable ground-based laser systems to help deorbit individual debris. Such a network of observatories can join current efforts in the detection and removal of space debris in Earth's orbit.

Keywords: space debris, low earth orbit, mobile observatories, triangulation, seamless operability

Procedia PDF Downloads 148
5589 A Review of Current Knowledge on Assessment of Precast Structures Using Fragility Curves

Authors: E. Akpinar, A. Erol, M.F. Cakir

Abstract:

Precast reinforced concrete (RC) structures are excellent alternatives for the construction industry all over the globe, thanks to their rapid erection phase, easy mounting process, better quality, and reasonable prices. Such structures are particularly popular for industrial buildings. Given the economic importance of such industrial buildings, as well as the significance of safety, performance assessment and structural risk analysis are as important for them as for every other type of structure. Fragility curves are powerful tools for damage projection and assessment for any sort of building, including precast structures. In this study, a comparative review of current knowledge on fragility analysis of industrial precast RC structures is presented, and findings from previous studies are compiled. The effects of different structural variables, parameters, and building geometries, as well as soil conditions, on the fragility analysis of precast structures are reviewed. The aim was to briefly present the information in the literature about the procedure of damage probability prediction, including fragility curves, for such industrial facilities. It is found that determining the aforementioned structural parameters, as well as selecting the analysis procedure, is critically important for damage prediction of industrial precast RC structures using fragility curves.
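The damage-probability prediction discussed above is commonly expressed as a lognormal fragility curve; a minimal sketch, assuming the standard lognormal-CDF form (the median capacity and dispersion values below are illustrative, not taken from the reviewed studies):

```python
from math import erf, log, sqrt

def fragility(im, theta, beta):
    """P(damage state reached or exceeded | intensity measure im), modelled
    as a lognormal CDF with median capacity theta and log-standard-deviation
    (dispersion) beta."""
    return 0.5 * (1.0 + erf((log(im) - log(theta)) / (beta * sqrt(2.0))))

# Illustrative precast-frame parameters: median capacity 0.4 g PGA,
# dispersion 0.6. At the median intensity the exceedance probability
# is 0.5 by construction.
p_at_median = fragility(0.4, theta=0.4, beta=0.6)
p_at_double = fragility(0.8, theta=0.4, beta=0.6)
```

Plotting `fragility` over a range of intensity measures yields the familiar S-shaped curve used for risk analysis of such industrial buildings.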

Keywords: damage prediction, fragility curve, industrial buildings, precast reinforced concrete structures

Procedia PDF Downloads 177
5588 An Exploratory Study of Reliability of Ranking vs. Rating in Peer Assessment

Authors: Yang Song, Yifan Guo, Edward F. Gehringer

Abstract:

Fifty years of research has found great potential for peer assessment as a pedagogical approach. With peer assessment, not only do students receive more copious assessments; they also learn to become assessors. In recent decades, more educational peer assessments have been facilitated by online systems. Those systems are designed differently to suit different class settings and student groups, but they basically fall into two categories: rating-based and ranking-based. Rating-based systems ask assessors to rate the artifacts one by one, following review rubrics. Ranking-based systems allow assessors to review a set of artifacts and rank each of them. Though each category has various systems and a large number of users, there is no comprehensive comparison of which design leads to higher reliability. In this paper, we designed algorithms to evaluate assessors' reliability based on their rating/ranking against the global ranks of the artifacts they have reviewed. These algorithms are suitable for data from both rating-based and ranking-based peer assessment systems. The experiments were based on more than 15,000 peer assessments from multiple peer assessment systems. We found that assessors in ranking-based peer assessments are at least 10% more reliable than assessors in rating-based peer assessments. Further analysis also demonstrated that assessors in ranking-based assessments tend to assess the more differentiable artifacts correctly, while there is no such pattern for rating-based assessors.

Keywords: peer assessment, peer rating, peer ranking, reliability

Procedia PDF Downloads 421
5587 A Systematic Review on Development of a Cost Estimation Framework: A Case Study of Nigeria

Authors: Babatunde Dosumu, Obuks Ejohwomu, Akilu Yunusa-Kaltungo

Abstract:

Cost estimation in construction is often difficult, particularly when dealing with risks and uncertainties, which are inevitable and peculiar to developing countries like Nigeria. Direct consequences of these are major deviations in cost, duration, and quality. The fundamental aim of this study is to develop a framework for assessing the impacts of risk on cost estimation, which in turn causes variability between the contract sum and the final account. This is very important, as initial estimates given to clients should reflect a certain magnitude of consistency and accuracy, upon which the client builds other planning-related activities, and should also enhance the capabilities of construction industry professionals by enabling better prediction of the final account from the contract sum. To achieve this, a systematic literature review was conducted with cost variability and construction projects as the search string within three databases: Scopus, Web of Science, and EBSCO (Business Source Premier); the results were further analyzed and gaps in knowledge and research identified. From the extensive review, it was found that the number of factors causing deviation between final accounts and contract sums ranged between 1 and 45. Besides, it was discovered that a cost estimation framework similar to the Building Cost Information Service (BCIS) is unavailable in Nigeria, which is a major reason why initial estimates are very often inconsistent, leading to project delay, abandonment, or termination at the expense of the huge sums of money invested. It was concluded that the development of a cost estimation framework, adjudged an important tool in risk shedding rather than risk sharing in project risk management, would be a panacea to the cost estimation problems leading to cost variability in the Nigerian construction industry by the time this ongoing Ph.D. research is completed.
It was recommended that practitioners in the construction industry should always account for risk in order to facilitate the rapid development of the construction industry in Nigeria, and that stakeholders in both the private and public sectors should gain a more in-depth understanding of the estimation effectiveness and efficiency to be adopted.

Keywords: cost variability, construction projects, future studies, Nigeria

Procedia PDF Downloads 180
5586 Understanding the Complex Relationship Between Economic Independency and Intimate Partner Violence by Applying a Socio-Ecological Analysis Framework

Authors: Suzanne Bouma

Abstract:

In the Netherlands, the assumed causal relationship between employment, economic independence, and individual freedom of choice has been extended to the approach to intimate partner violence (IPV). In the interest of combating IPV, it is crucial to investigate this relationship further. Based on a literature review, this article shows that the relationship between economic independence and IPV is highly complex. To unravel this complexity, a socio-ecological analysis framework has been applied. First, it is a layered relationship, in which employment does not necessarily lead to economic independence, which can be explained by social inequalities. Second, the relationship is bidirectional: women do not by definition have access to their own financial resources, owing to tactics of financial control by the intimate partner. This reveals the coexistence of IPV and economic abuse, and the extent to which an intimate relationship affects the scope for individual choice. Third, there is a paradoxical relationship in which employment is both a protective and a risk factor for IPV. This, in turn, cannot be separated from traditional norms about masculinity and femininity, in which men occupy a position of power and derive status from being the breadwinner. These findings imply that not only the approach to IPV but also labor market policy requires a gender-sensitive approach.

Keywords: intimate partner violence, economic independence, literature review, socio-ecological analysis framework

Procedia PDF Downloads 216
5585 Success Measurement in Corporate Venturing: Integrating Three Decades of Research

Authors: Maurice Steinhoff, Lucas Costantino, Dominik Kanbach

Abstract:

Measurement approaches to corporate venturing (CV) success are highly diverse in the extant literature. Furthermore, these approaches rarely build on each other, making it difficult to derive comparable conclusions about CV outcomes. Employing a systematic literature review of three decades of research, the objective of this study is to provide transparency and structure in the broad field of CV research. The paper examines 28 studies in detail, resulting in two main contributions to the research field. First, three structural dimensions of measurement approaches are derived from the studies in the sample, namely, “level of analysis” (parent, program, and venture levels), “measurement perspective” (objective, subjective, and mixed measurement), and “locus of opportunity” (internal, external, and general CV activities). Second, an integrated overview of nine unique clusters structures the different measurement approaches. These clusters encapsulate the measurement approaches but also make visible their heterogeneity, as well as specific measurement items. Thereby, the study contributes to CV research by revealing and reconciling the variety of CV success-measurement approaches. The study also provides relevant insights for practitioners by making the various approaches to measuring the success of CV activities transparent and presenting a list of 114 concrete and distinct measurement items.

Keywords: corporate venturing, measurement items, success measurement, structured literature review

Procedia PDF Downloads 161
5584 A Review on Parametric Optimization of Casting Processes Using Optimization Techniques

Authors: Bhrugesh Radadiya, Jaydeep Shah

Abstract:

In the Indian foundry industry, there is a need for defect-free castings with minimum production cost and short lead times. Casting defects are a major issue in foundry shops, increasing the rejection rate of castings and the wastage of materials. Various groups of parameters influence the casting process: mold-machine-related, green-sand-related, cast-metal-related, mold-related, and shake-out-related parameters. Of these, the mold-related parameters have the greatest influence on casting defects in the sand casting process. This paper reviews castings produced by foundries in which shrinkage and blow holes were analyzed as the major defects; it was identified that mold-related parameters such as mold temperature, pouring temperature, and runner size were not properly set in the sand casting process. These parameters were optimized using different optimization techniques, such as the Taguchi method, response surface methodology, the genetic algorithm, and the teaching-learning-based optimization algorithm. It is concluded that the teaching-learning-based optimization algorithm gives better results than the other optimization techniques.
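A hedged sketch of the teaching-learning-based optimization (TLBO) algorithm the review favours, minimising a generic cost function. The sphere function here stands in for an actual casting-defect cost model, which the abstract does not specify, and all parameter values are illustrative:

```python
import random

def tlbo(f, bounds, pop_size=20, iters=100, seed=0):
    """Minimise f over box bounds [(lo, hi), ...] with TLBO: a teacher
    phase pulling learners toward the best solution, then a learner
    phase of pairwise interactions, both with greedy acceptance."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        # Teacher phase: move each learner toward the current best,
        # away from the scaled population mean.
        teacher = pop[min(range(pop_size), key=lambda i: fit[i])]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            tf = rng.choice((1, 2))  # teaching factor
            cand = clip([pop[i][d] + rng.random() * (teacher[d] - tf * mean[d])
                         for d in range(dim)])
            fc = f(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
        # Learner phase: learn from a randomly chosen peer.
        for i in range(pop_size):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1 if fit[i] < fit[j] else -1
            cand = clip([pop[i][d] + sign * rng.random() * (pop[i][d] - pop[j][d])
                         for d in range(dim)])
            fc = f(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy usage: a 2-D sphere function in place of a real mold-parameter cost.
x_best, f_best = tlbo(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 2)
```

A practical appeal of TLBO, relative to the genetic algorithm, is that beyond population size and iteration count it has no algorithm-specific tuning parameters.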

Keywords: casting defects, genetic algorithm, parametric optimization, Taguchi method, TLBO algorithm

Procedia PDF Downloads 716
5583 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction

Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini

Abstract:

Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited due to the difficulty of measuring it. Recovering the fECG from signals acquired non-invasively by electrodes placed on the maternal abdomen is a challenging task, because abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and of finding the linear combinations of preprocessed abdominal signals that maximize this fQI (quality index optimization, QIO). It aims at improving on the most commonly adopted methods for fECG extraction, which are usually based on estimating and canceling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; mECG extraction and maternal QRS detection; mECG component approximation and canceling by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG, estimated by principal component analysis (PCA), from the abdominal signals and applying independent component analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute abdominal measurements with fetal QRS annotations from dataset A of the PhysioNet/Computing in Cardiology Challenge 2013.
The QIO-based and ICA-based methods were compared on two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so the comparison between the two methods can only be qualitative. On the annotated database ADdb, the QIO method provided the performance indexes Sens=0.9988, PPA=0.9991, F1=0.9989, overcoming the ICA-based one, which provided Sens=0.9966, PPA=0.9972, F1=0.9969. The comparison on NIdb was performed by defining an index of quality for the fetal RR series; this index was higher for the QIO-based method than for the ICA-based one in 35 of the 55 records of the NIdb. The QIO-based method gave very high performance on both databases. These results foreshadow the application of the algorithm, in a fully unsupervised way, in wearable devices for self-monitoring of fetal health.
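The performance indexes quoted above follow the standard beat-detection definitions; a minimal sketch, where the TP/FP/FN counts are illustrative and not those of the Challenge datasets:

```python
def qrs_scores(tp, fp, fn):
    """Sensitivity, positive predictive accuracy, and their harmonic mean F1,
    from matched (TP), spurious (FP), and missed (FN) fetal QRS detections."""
    sens = tp / (tp + fn)              # fraction of true beats detected
    ppa = tp / (tp + fp)               # fraction of detections that are real
    f1 = 2 * sens * ppa / (sens + ppa)
    return sens, ppa, f1

# Illustrative counts: 998 matched beats, 2 spurious detections, 1 missed beat.
sens, ppa, f1 = qrs_scores(998, 2, 1)
```

A detection counts as matched (TP) when it falls within a fixed tolerance window of an annotated fetal beat, which is how such challenge scores are typically tallied.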

Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable

Procedia PDF Downloads 267
5582 A Review on Thermal Conductivity of Bio-Based Carbon Nanotubes

Authors: Gloria A. Adewumi, Andrew C. Eloka-Eboka, Freddie L. Inambao

Abstract:

Bio-based carbon nanotubes (CNTs) have received considerable research attention due to their comparative advantages of high stability, ease of use, low toxicity, and overall environmental friendliness. Their high aspect ratio, high thermal conductivity, and high specific surface area present new potential for improvement in heat transfer applications. Phonons have been identified as being responsible for thermal conduction in carbon nanotubes. Understanding the mechanism of heat conduction in CNTs therefore involves investigating the differences between the various phonon modes and identifying which phonon modes play the dominant role. In this review, a number of different studies are referenced, and the role of the phonon relaxation rate, mainly controlled by boundary scattering and the three-phonon Umklapp scattering process, is investigated. Results show that the phonon modes are sensitive to a number of nanotube conditions, such as diameter, length, temperature, defects, and axial strain. At low temperatures (<100 K) the thermal conductivity increases with increasing temperature. A small nanotube size causes phonon quantization, which is evident in the thermal conductivity at low temperatures.
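As a hedged sketch of the standard phonon-transport treatment behind such reviews (the notation is generic, not taken from the cited studies), the boundary and three-phonon Umklapp channels combine through Matthiessen's rule, and the resulting relaxation time enters the kinetic expression for the lattice thermal conductivity:

```latex
\frac{1}{\tau(\omega)} \;=\; \frac{1}{\tau_{b}} + \frac{1}{\tau_{U}(\omega)},
\qquad
\frac{1}{\tau_{b}} = \frac{v}{L},
\qquad
\frac{1}{\tau_{U}(\omega)} \;\propto\; \omega^{2}\, T\, e^{-\Theta_{D}/(bT)},
\qquad
\kappa \;=\; \frac{1}{3}\int C(\omega)\, v^{2}\, \tau(\omega)\, \mathrm{d}\omega
```

Here $v$ is the phonon group velocity, $L$ the nanotube length (so boundary scattering dominates short tubes and low temperatures), $\Theta_{D}$ the Debye temperature, $b$ a constant of order unity, and $C(\omega)$ the spectral heat capacity. The exponential suppression of Umklapp scattering at low $T$ is consistent with the review's observation that conductivity rises with temperature below ~100 K.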

Keywords: carbon nanotubes, phonons, thermal conductivity, Umklapp process

Procedia PDF Downloads 343
5581 Pre-Operative Tool for Facial-Post-Surgical Estimation and Detection

Authors: Ayat E. Ali, Christeen R. Aziz, Merna A. Helmy, Mohammed M. Malek, Sherif H. El-Gohary

Abstract:

Goal: The purpose of the project was to predict the outcome of plastic surgery from patients' pre-operative images and to show this prediction on a screen, allowing comparison between the current appearance and the expected appearance after surgery. Methods: To this aim, we implemented software that uses data collected from the internet on facial skin diseases, skin burns, and pre- and post-operative images of plastic surgeries; the post-surgical prediction is made using the K-nearest neighbours (KNN) algorithm. We also designed and fabricated a smart mirror divided into two parts, a screen and a reflective mirror, so that the patient's pre- and post-operative appearances can be shown at the same time. Results: We worked on several skin conditions, including vitiligo, skin burns, and wrinkles. We classified the three degrees of burns using a KNN classifier with 60% accuracy, and we succeeded in segmenting the vitiligo-affected area. Our future work will include covering more skin diseases, classifying them, and predicting the post-surgical appearance, as well as going deeper into facial deformities and plastic surgeries such as nose reshaping and face slimming. Conclusion: Our project will give a prediction that relates strongly to the real post-surgical appearance and will reduce diagnostic disagreement among doctors. Significance: The mirror may have broad societal appeal, as it will narrow the distance between patient satisfaction and medical standards.
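A minimal from-scratch sketch of the K-nearest-neighbour classification step described above. The two-dimensional burn descriptors and degree labels are invented for illustration; the paper's actual image features are not specified:

```python
from collections import Counter
from math import dist

def knn_predict(train, labels, x, k=3):
    """Label x by majority vote among its k nearest training points
    (Euclidean distance)."""
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Invented 2-D features (e.g. redness, texture score) for burn degrees 1-3.
train = [(0.2, 0.1), (0.3, 0.2), (0.6, 0.5), (0.7, 0.4), (0.9, 0.9), (0.8, 0.8)]
labels = [1, 1, 2, 2, 3, 3]
degree = knn_predict(train, labels, (0.85, 0.85))
```

In practice the training vectors would be features extracted from the burn images, and k would be tuned on held-out data; the 60% accuracy reported above suggests such feature and parameter choices matter considerably.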

Keywords: k-nearest neighbor (knn), face detection, vitiligo, bone deformity

Procedia PDF Downloads 148
5580 Design and Fabrication of ZSO Nanocomposite Thin Film Based NO2 Gas Sensor

Authors: Bal Chandra Yadav, Rakesh K. Sonker, Anjali Sharma, Punit Tyagi, Vinay Gupta, Monika Tomar

Abstract:

In the present study, ZnO-doped SnO2 (ZSO) thin films of various compositions were deposited on corning glass substrates by dropping two sols containing the composite precursors, with subsequent heat treatment. The sensor materials for the selective detection of nitrogen dioxide (NO2) were designed from the correlation between sensor composition and gas response. Available NO2 sensors operate at very high temperatures (150-800 °C) with low sensing responses (2-100), even at higher gas concentrations. Efforts are continuing toward NO2 gas sensors with an enhanced response and a reduced operating temperature, achieved by incorporating catalysts or dopants. Thus, in this work, a novel sensor structure based on a ZSO nanocomposite has been fabricated via a chemical route for the detection of NO2 gas. The structural, surface morphological, and optical properties of the prepared films were studied using X-ray diffraction (XRD), atomic force microscopy (AFM), transmission electron microscopy (TEM), and UV-visible spectroscopy. The effect of varying the film thickness from 230 nm to 644 nm was studied, and the ZSO thin film of thickness ~460 nm was found to exhibit the maximum gas sensing response, ~2.1×10³ toward 20 ppm NO2 gas at an operating temperature of 90 °C. The average response and recovery times of the sensor were 3.51 and 6.91 min, respectively. The selectivity of the sensor was checked by cross-exposure to CO, acetone, IPA, CH4, NH3, and CO2. Besides its high sensing response toward NO2, the prepared ZSO thin film was also found to be highly selective toward NO2 gas.

Keywords: ZSO nanocomposite thin film, ZnO tetrapod structure, NO2 gas sensor, sol-gel method

Procedia PDF Downloads 324
5579 The Evolution of Man through Cranial and Dental Remains: A Literature Review

Authors: Rishana Bilimoria

Abstract:

Darwin’s insightful anthropological theory of evolution drove mankind’s understanding of our existence in the natural world. Scientists consider the analysis of dental and craniofacial remains to be pivotal in uncovering facts about our evolutionary journey. The resilient mineral content of enamel and dentine allows cranial and dental remains to be preserved for millions of years, making them an excellent resource not only in anthropology but also in other fields of research, including forensic dentistry. This literature review approaches each ancestral species chronologically, reviewing Australopithecus, Paranthropus, Homo habilis, Homo rudolfensis, Homo erectus, Homo neanderthalensis, and finally Homo sapiens. Studies included in the review assess the features of cranio-dental remains that are of evolutionary importance, such as microstructure, microwear, morphology, and jaw biomechanics. The article discusses the plethora of analysis techniques employed to study dental remains, including carbon dating, dental topography, confocal imaging, DPI scanning, and light microscopy, in addition to microwear study and the analysis of features such as coronal and root morphology, mandibular corpus shape, craniofacial anatomy, and microstructure. Furthermore, results from these studies provide insight into the diet, lifestyle, and consequently the ecological surroundings of each species. Dental fossil evidence can be correlated with wider theories on pivotal global events to help contextualize each species in space and time. Examples include dietary adaptation during the period of global cooling that converted the landscape of Africa from forest to grassland; global migration ‘out of Africa’, demonstrated by enamel thickness variation; cranial vault variation over time, demonstrating accommodation of larger brain sizes; and dental wear patterns that can place the commencement of lithic technology in history.
Conclusions from this literature review show that dental evidence plays a major role in painting a phenotypic and well-rounded picture of the species of the Homo genus, in particular through analysis of coronal morphology, carbon dating, and dental wear analysis. With regard to analysis techniques, while studies would benefit from larger sample sizes, this may be unrealistic given the limitations on retrieving fossil data. The reliability of carbon dating cannot be denied; however, there is certainly scope for the use of more recent techniques, and further evidence of their success is required.

Keywords: cranio-facial, dental remains, evolution, hominids

Procedia PDF Downloads 149
5578 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection

Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément

Abstract:

The need for single-cell detection and analysis techniques has increased in recent decades because the heterogeneity of individual living cells increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for the sensitive detection of target analytes and for monitoring their variation mainly fall into two types. One is based on identifying molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example, the dielectrophoresis technique, microfluidic micropost chips, and the electrochemical (EC) approach. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, a new technique is based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only suppressing the false-positive signal arising from the downward pressure that all (including non-target) cells exert on the aptamers, but also stabilizing the aptamer in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, an LOD of 13 cells (with 5.4 μL of cell suspension) was estimated.
Building on this, the nano-supported cell technology using redox-labeled aptasensors has been pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area, and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and its potential for scaling to millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with an LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to the challenge of implementing them at a large scale. Here, the introduced nanopillar array technology, combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM), is perfectly suited for such implementation. By combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.

Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars

Procedia PDF Downloads 95
5577 Review of the Anatomy of the Middle Cerebral Artery and Its Anomalies

Authors: Karen Cilliers, Benedict John Page

Abstract:

The middle cerebral artery (MCA) is the most complex cerebral artery although few anomalies are found compared to the other cerebral arteries. The branches of the MCA cover a large part of each hemisphere, therefore it is exposed in various operations. Although the segments of the MCA are similarly described by most authors, there is some disagreement on the branching pattern of the MCA. The aim of this study was to review the available literature on the anatomy and variations of the MCA, and to compare this to a pilot study. For the pilot study, 20 hemispheres were perfused with coloured silicone and the MCA was dissected. According to the literature, the two most common branching configurations are the bifurcating and trifurcating patterns. In the pilot study, bifurcation was observed in 19 hemispheres, and in one hemisphere there was no branching (monofurcation). No trifurcation was observed. The most commonly duplicated branch was the anterior parietal artery in 30%, and most commonly absent was the common temporal artery in 65% and the temporal polar artery in 40%. Very few studies describe the origins of the branches of the MCA, therefore a detailed description is given. Middle cerebral artery variations that are occasionally reported in the literature include fenestration, and a duplicated or accessory MCA, although no variations were observed in the pilot study. Aneurysms can frequently be observed at the branching of cerebral vessels, therefore a thorough knowledge of the vascular anatomy is vital. Furthermore, knowledge of possible variations is important since variations can have serious clinical implications.

Keywords: anatomy, anomaly, description, middle cerebral artery, origin, variation

Procedia PDF Downloads 332
5576 Deep Learning-Based Automated Structure Deterioration Detection for Building Structures: A Technological Advancement for Ensuring Structural Integrity

Authors: Kavita Bodke

Abstract:

Structural health monitoring (SHM) is a growing field, necessitating the development of distinct methodologies to address its expanding scope effectively. In this study, we developed automatic structural damage identification covering three distinct threats to a building's structural integrity: the first is the presence of fractures within the structure, the second is dampness within the structure, and the third is corrosion within the structure. The study employs image classification techniques to discern between intact and impaired structures. The aim of this research is automatic damage detection that outputs the probability of each damage class being present in an image; based on these probabilities, we know which class is more probable, or affects the structure more, than the other classes. Photographs captured by a mobile camera serve as the input to the image classification system. Image classification was employed to perform both multi-class and multi-label classification, with the objective of categorizing structural data based on the presence of cracks, moisture, and corrosion. For multi-class image classification, our study employed three distinct methodologies: random forest, multilayer perceptron, and CNN. For multi-label image classification, the models employed were ResNet, Xception, and Inception.
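The distinction the abstract draws between multi-class and multi-label outputs can be sketched as follows (the logits are invented; the paper's trained networks are not reproduced here): a softmax head picks the single most likely damage class, while independent sigmoid heads report a per-damage probability for cracks, dampness, and corrosion, so several damages can be flagged in one image:

```python
from math import exp

DAMAGE = ("crack", "dampness", "corrosion")

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    e = [exp(z - m) for z in logits]
    s = sum(e)
    return [v / s for v in e]

def multiclass(logits):
    """Multi-class head: exactly one winning damage class per image."""
    p = softmax(logits)
    return DAMAGE[p.index(max(p))]

def multilabel(logits, threshold=0.5):
    """Multi-label head: every class whose sigmoid probability clears
    the threshold is reported as present."""
    probs = [1.0 / (1.0 + exp(-z)) for z in logits]
    return [d for d, p in zip(DAMAGE, probs) if p >= threshold]

logits = [2.0, -1.0, 0.5]      # invented network outputs for one image
single = multiclass(logits)    # the one dominant class
present = multilabel(logits)   # possibly several co-occurring damages
```

This is why the study needs both formulations: a wall can simultaneously be cracked and damp, which a multi-class head by construction cannot express.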

Keywords: SHM, CNN, deep learning, multi-class classification, multi-label classification

Procedia PDF Downloads 16
5575 RP-HPLC Method Development and Its Validation for Simultaneous Estimation of Metoprolol Succinate and Olmesartan Medoxomil Combination in Bulk and Tablet Dosage Form

Authors: S. Jain, R. Savalia, V. Saini

Abstract:

A simple, accurate, precise, sensitive, and specific RP-HPLC method was developed and validated for the simultaneous estimation of Metoprolol Succinate and Olmesartan Medoxomil in bulk and tablet dosage form. The RP-HPLC method showed adequate separation of Metoprolol Succinate and Olmesartan Medoxomil from their degradation products. The separation was achieved on a Phenomenex Luna ODS C18 column (250 mm × 4.6 mm i.d., 5 μm particle size) with an isocratic mixture of acetonitrile and 50 mM phosphate buffer (pH 4.0, adjusted with glacial acetic acid) in the ratio of 55:45 v/v. The mobile phase flow rate was 1.0 ml/min, the injection volume 20 μl, and the detection wavelength 225 nm. The retention times for Metoprolol Succinate and Olmesartan Medoxomil were 2.451±0.1 min and 6.167±0.1 min, respectively. The linearity of the proposed method was investigated in the ranges of 5-50 μg/ml and 2-20 μg/ml for Metoprolol Succinate and Olmesartan Medoxomil, respectively; the correlation coefficients were 0.999 and 0.9996, respectively. The limits of detection were 0.2847 μg/ml and 0.1251 μg/ml, and the limits of quantification were 0.8630 μg/ml and 0.3793 μg/ml, for Metoprolol Succinate and Olmesartan Medoxomil, respectively. The proposed method was validated as per ICH guidelines for linearity, accuracy, precision, specificity, and robustness for the estimation of Metoprolol Succinate and Olmesartan Medoxomil in a commercially available tablet dosage form, and the results were found to be satisfactory. Thus, the developed and validated stability-indicating method can be used successfully for marketed formulations.
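The reported LOQ/LOD ratios (≈3.03 for both analytes) match the standard ICH Q2 formulas LOD = 3.3σ/S and LOQ = 10σ/S; a small sketch, where the response standard deviation and calibration slope below are illustrative, not the study's actual calibration data:

```python
def lod_loq(sigma, slope):
    """ICH Q2(R1) detection and quantification limits from the standard
    deviation of the response (sigma) and the calibration-curve slope (S).
    Returned in the concentration units of the calibration (e.g. μg/ml)."""
    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    return lod, loq

# Illustrative calibration: response SD 0.86, slope 9.97 response-units per μg/ml.
lod, loq = lod_loq(0.86, 9.97)
```

Whatever σ and S are, the LOQ is always 10/3.3 ≈ 3.03 times the LOD under these formulas, which is exactly the ratio seen in the values reported above (0.8630/0.2847 and 0.3793/0.1251).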

Keywords: metoprolol succinate, olmesartan medoxomil, RP-HPLC method, validation, ICH

Procedia PDF Downloads 297
5574 The Effectiveness of Scalp Cooling Therapy on Reducing Chemotherapy Induced Alopecia: A Critical Literature Review

Authors: M. Krishna

Abstract:

The study was intended to identify whether scalp cooling therapy is effective in preventing chemotherapy-induced hair loss among cancer patients. A critical review of non-randomized controlled trials was conducted to investigate whether scalp cooling therapy prevents chemotherapy-induced alopecia. The review identified that scalp cooling therapy is effective in preventing chemotherapy-induced alopecia. Most patients receiving chemotherapy experience alopecia, and it is often perceived as the worst effect of chemotherapy. It may be severe and lead patients to withdraw from chemotherapy treatment. The body-image disturbance caused by alopecia can leave patients depressed and lead to declined immunity. With knowledge of the effectiveness of scalp cooling therapy in preventing chemotherapy-induced alopecia, patients undergoing chemotherapy will be less hesitant to undergo treatment. Patients are recommended to undergo scalp cooling therapy in every chemotherapy cycle, beginning 30 minutes before and continuing during chemotherapy; the suggested duration of scalp cooling is 45-90 minutes for an effective and positive outcome. This finding excludes other causes of alopecia such as menopause, therapeutic drugs, poor hair density, liver function problems, and drug regimens.

Keywords: alopecia, cancer, chemotherapy, scalp cooling therapy

Procedia PDF Downloads 194