Search results for: drying time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18186

13806 Application of Finite Volume Method for Numerical Simulation of Contaminant Transfer in a Two-Dimensional Reservoir

Authors: Atousa Ataieyan, Salvador A. Gomez-Lopera, Gennaro Sepede

Abstract:

Today, due to the growing urban population and, consequently, the increasing water demand in cities, the amount of contaminants entering water resources is rising, which can harm the quality of downstream water. Predicting the concentration of discharged pollutants at different times and distances within the area of interest is therefore highly important for carrying out preventive and control measures and for avoiding consumption of contaminated water. In this paper, the concentration distribution of an injected conservative pollutant in a square reservoir containing four symmetric blocks and three sources is simulated using the Finite Volume Method (FVM). For this purpose, after estimating the flow velocity, the classical Advection-Diffusion Equation (ADE) is discretized over the study domain with the Backward Time-Backward Space (BTBS) scheme. The discretized equations for each node are then derived according to the initial condition, the boundary conditions, and the point contaminant sources. Finally, with appropriate time and space steps, a computational code was set up in MATLAB, and the contaminant concentration was obtained at different times and distances. The simulation results show that the BTBS differencing scheme combined with FVM is an appropriate numerical approach for solving the partial differential equation of transport in the case of two-dimensional contaminant transfer in an advective-diffusive flow.
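
The BTBS/FVM discretization described above can be sketched in one dimension. The following Python fragment is an illustrative sketch with assumed grid size, velocity, diffusion coefficient, and simplified boundary conditions, not the paper's MATLAB code or its 2D domain; it assembles and solves the implicit tridiagonal system for a point source:

```python
import numpy as np

def btbs_advection_diffusion(nx=50, length=1.0, u=1.0, D=0.01,
                             dt=0.01, nt=20):
    """Implicit Backward Time-Backward Space (upwind) finite-volume
    discretization of the 1D advection-diffusion equation
        dC/dt + u*dC/dx = D*d2C/dx2.
    All parameter values here are assumptions for illustration."""
    dx = length / nx
    a = u * dt / dx        # Courant number (advection weight)
    d = D * dt / dx ** 2   # diffusion number
    # Implicit system A @ C_new = C_old (tridiagonal M-matrix)
    A = np.zeros((nx, nx))
    for i in range(nx):
        A[i, i] = 1 + a + 2 * d
        if i > 0:
            A[i, i - 1] = -(a + d)   # upwind + diffusive coupling
        if i < nx - 1:
            A[i, i + 1] = -d         # diffusive coupling only
    # Simplified zero-gradient boundaries at both ends
    A[0, 0] = 1 + a + d
    A[-1, -1] = 1 + a + d
    C = np.zeros(nx)
    C[2] = 1.0   # point contaminant source near the inlet
    for _ in range(nt):
        C = np.linalg.solve(A, C)
    return C
```

Because the implicit upwind matrix is an M-matrix, the scheme keeps concentrations non-negative while the plume advects downstream and spreads by diffusion.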

Keywords: BTBS differentiating scheme, contaminant concentration, finite volume, mass transfer, water pollution

Procedia PDF Downloads 124
13805 Open Source Knowledge Management Approach to Manage and Disseminate Distributed Content in a Global Enterprise

Authors: Rahul Thakur, Onkar Chandel

Abstract:

Red Hat is the world leader in providing open source software and solutions. A global enterprise like Red Hat faces unique challenges in connecting employees with content because of distributed offices, multiple teams spread across geographies, multiple languages, and different cultures. Employees of a global company create content that is distributed across departments, teams, regions, and countries. This makes finding the best content difficult, since owners keep iterating on existing content. When employees are unable to find content, they end up creating it once again, duplicating existing material and effort in the process. Employees may also fail to find relevant content and spend time reviewing obsolete, duplicate, or irrelevant content. On average, a person spends 15 minutes per day in failed searches, which can result in missed business opportunities, employee frustration, and substandard deliverables. The Red Hat Knowledge Management Office (KMO) applied an 'open source strategy' to solve these problems. Under the open source strategy, decisions are taken collectively, and common goals are accomplished with the help of communities. The objectives of this initiative were to save employees' time, provide them with authentic content, improve their content search experience, avoid duplicate content creation, provide context-based search, improve analytics, improve content management workflows, automate content classification, and automate content upload. This session describes the open source strategy, its applicability to content management, challenges, recommended solutions, and outcomes.

Keywords: content classification, content management, knowledge management, open source

Procedia PDF Downloads 197
13804 Examination of the Self-Expression Model with Reference to Luxury Watches with Particular Regard of the Buying-Reasons

Authors: Christopher Benedikt Jakob

Abstract:

Human beings have been intrigued by luxury watches for decades. It is striking that customers pay an enormous amount of money for specific wristwatch models and accept a yearly price increase, a behavior that increases the watches' desirability even more. Luxury watches are perceived as status symbols, and they are additionally accepted as a store of value without the disadvantage of currency fluctuations. With reference to buying reasons, the symbolic value of luxury watches is evidently more important than their functional value: nowadays, human beings do not need a wristwatch to read the time, since tablets, notebooks, smartphones, clocks in cars, and clocks in public places inform people of the current time. This is one reason for the trend of people no longer wearing wristwatches. Given these facts, this study intends to answer the questions of why people spend enormous amounts of money on luxury watches and why those watches are seen as status symbols, and it examines why the luxury watch industry records significant growth rates. The self-expression model is used as the methodology to identify reasons why human beings purchase specific luxury watches. This evaluative approach further discusses whether human beings are aware of their current self and their ideal self and how they express them. Furthermore, the research critically evaluates people's social self and ideal social self. One of the goals is to identify whether customers know why they like specific luxury watches and dislike others, although these have the same quality and cost comparable prices.

Keywords: luxury watch, brand awareness, buying-behaviour, consumer, self-expression

Procedia PDF Downloads 148
13803 Structural Insulated Panels

Authors: R. Padmini, G. V. Manoj Kumar

Abstract:

Structural insulated panels (SIPs) are a high-performance building system for residential and light commercial construction. The panels consist of an insulating foam core sandwiched between two structural facings, typically oriented strand board (OSB). SIPs are manufactured under factory-controlled conditions and can be fabricated to fit nearly any building design. The result is a building system that is extremely strong, energy efficient, and cost effective. Building with SIPs saves time, money, and labor: it generally costs about the same as wood-frame construction once the labor savings from shorter construction time and reduced job-site waste are factored in, and further savings are realized because smaller heating and cooling systems are required. SIPs are among the most airtight and well-insulated building systems available, making them an inherently green product. An airtight SIP building uses less energy to heat and cool, allows better control over indoor environmental conditions, and reduces construction waste. Green buildings use less energy, reducing carbon dioxide emissions and playing an important role in combating global climate change. Buildings also consume a tremendous amount of natural resources to construct and operate; constructing green buildings that use these resources more efficiently, while minimizing pollution that can harm renewable natural resources, is crucial to a sustainable future.

Keywords: high performance, factory-controlled conditions, wood frame, carbon dioxide emissions, natural resources

Procedia PDF Downloads 426
13802 A Human Centered Design of an Exoskeleton Using Multibody Simulation

Authors: Sebastian Kölbl, Thomas Reitmaier, Mathias Hartmann

Abstract:

Trial-and-error approaches to adapting wearable support structures to human physiology are time-consuming and elaborate. During preliminary design, however, the focus lies on understanding the interaction between the exoskeleton and the human body in terms of forces and moments, namely body mechanics. For the study at hand, a multibody simulation approach has been enhanced to evaluate actual forces and moments in a human dummy model with and without a digital mock-up of an active exoskeleton. To this end, different motion data have been gathered and processed to perform a musculoskeletal analysis: ground reaction forces, electromyography (EMG) data, and human motion data recorded with a marker-based motion capture system. Based on these experimental data, the response of the human dummy model has been calibrated. Subsequently, the scalable human dummy model, in conjunction with the motion data, is connected with the exoskeleton structure. The outputs of the human-machine interaction (HMI) simulation platform are, in particular, the resulting contact forces and human joint forces, which are compared with admissible values with regard to human physiology. The platform furthermore provides feedback for sizing the exoskeleton structure in terms of the resulting interface forces (stress justification) and the effect of its compliance. A stepwise approach for the setup and validation of the modeling strategy is presented, and the potential for a more time- and cost-effective development of wearable support structures is outlined.
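
The inverse-dynamics core of such a musculoskeletal analysis can be illustrated on a single-link segment. The Python sketch below uses an assumed segment mass, center-of-mass distance, inertia, and a constant exoskeleton assist torque; none of these values come from the study. It compares the joint torque the human must supply with and without support:

```python
import numpy as np

def joint_torque(theta, theta_ddot, m=4.0, l_c=0.25, I=0.12, g=9.81,
                 assist=0.0):
    """Inverse dynamics of a single rotating segment:
        tau = I*theta_ddot + m*g*l_c*sin(theta) - assist
    'assist' models an assumed exoskeleton support torque (N*m)."""
    gravity_term = m * g * l_c * np.sin(theta)
    return I * theta_ddot + gravity_term - assist

# Compare human joint load with and without assist over a lifting motion
t = np.linspace(0.0, 1.0, 50)
theta = np.pi / 2 * np.sin(np.pi * t / 2)            # 0 -> 90 degrees
theta_ddot = np.gradient(np.gradient(theta, t), t)   # numerical acceleration
tau_free = joint_torque(theta, theta_ddot)
tau_assisted = joint_torque(theta, theta_ddot, assist=3.0)
```

In the full multibody model the same balance is evaluated per joint, with muscle and contact forces closing the equations; here the assist term simply offsets the required human torque.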

Keywords: assistive devices, ergonomic design, inverse dynamics, inverse kinematics, multibody simulation

Procedia PDF Downloads 149
13801 Clinical Implication of Hyper-Intense Signal Thyroid Incidentaloma on Time of Flight Magnetic Resonance Angiography

Authors: Inseon Ryoo, Soo Chin Kim, Hyena Jung, Sangil Suh

Abstract:

Objectives: The purpose of this study is to evaluate the clinical significance of hyper-intense signal thyroid incidentalomas on time-of-flight magnetic resonance angiography (TOF-MRA) through a correlation study with ultrasound (US). Methods: We retrospectively reviewed 3,505 non-contrast TOF-MRA examinations performed at an institution between September 2014 and May 2017. Two radiologists, by consensus, correlated thyroid incidentalomas detected on TOF-MRA with US features from examinations obtained within a three-month interval of the MRA. Results: The prevalence of hyper-intense signal thyroid nodules incidentally detected on TOF-MRA was 1.2% (43/3,505). Of these patients, 35 (81.4%) underwent US examinations, and a total of 45 hyper-intense signal thyroid nodules were detected on US. Of these 45 nodules, 35 (72.9%) were categorized as benign (K-TIRADS category 2) on US. Fine needle aspiration was performed on 9 nodules according to the indications recommended by the Korean Society of Thyroid Radiology. All except one high-suspicion thyroid nodule were confirmed as benign (Bethesda 2) on cytologic exams; the one high-suspicion nodule showed a non-diagnostic result (Bethesda 1) on cytologic exam but collapsed after aspiration of thick colloid material. Conclusions: Our study showed that most hyper-intense signal thyroid nodules detected on TOF-MRA were benign. Therefore, if a hyper-intense signal incidentaloma is found on TOF-MRA, further evaluation, especially invasive biopsy, could be deferred unless the patient has other symptoms or clinical factors suggesting the need for further workup.

Keywords: incidentaloma, thyroid nodule, TOF MR angiography, ultrasound

Procedia PDF Downloads 154
13800 Timetabling for Interconnected LRT Lines: A Package Solution Based on a Real-world Case

Authors: Huazhen Lin, Ruihua Xu, Zhibin Jiang

Abstract:

In this real-world case, timetabling the LRT network as a whole is challenging for the operator: planners must manually create a timetable that avoids various route conflicts while satisfying a given interval and number of rolling stock, and the outcome is not satisfying. The operator therefore adopted a computerized timetabling tool, the Train Plan Maker (TPM), to cope with this problem. However, with the various constraints of the dual-line network, it is still difficult to find an adequate pairing of turnback time, interval, and rolling stock number, which requires extra manual intervention. Aiming at these problems, a one-off model for timetabling is presented in this paper to simplify the timetabling procedure. Before timetabling starts, the paper shows how the dual-line system, consisting of a ring and several branches, is reduced to a simpler structure. A non-linear programming model is then presented in two stages. In the first stage, the model sets a series of constraints to calculate a proper timing for coordinating the two lines by adjusting the turnback times at the termini. In the second stage, based on the result of the first, the model introduces a series of inequality constraints to avoid various route conflicts. With this model, an analysis is conducted to reveal the relation between the ratio of trains in different directions and the possible minimum interval, observing that the more imbalanced the ratio, the less feasible it is to provide frequent service under such strict constraints.
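
The flavor of the second-stage inequality constraints can be illustrated with a simple headway check at a shared junction. The Python sketch below uses assumed offsets, run times, and a minimum headway; it is not the paper's full non-linear program, only a conflict detector of the kind those constraints rule out:

```python
def junction_conflicts(offset_a, offset_b, interval, run_a, run_b,
                       headway, horizon=3600):
    """List arrival pairs from two lines at a shared junction that are
    closer together than 'headway' seconds.  Each line departs at its
    offset and then every 'interval' seconds; run_a/run_b are travel
    times from the terminus to the junction (all values assumed)."""
    arrivals = []
    for line, (offset, run) in {"A": (offset_a, run_a),
                                "B": (offset_b, run_b)}.items():
        t = offset + run
        while t < horizon:
            arrivals.append((t, line))
            t += interval
    arrivals.sort()
    # A conflict is a pair of consecutive arrivals from different lines
    # separated by less than the required headway.
    return [(t1, t2) for (t1, l1), (t2, l2) in zip(arrivals, arrivals[1:])
            if l1 != l2 and t2 - t1 < headway]
```

In the optimization model the departure offsets (via turnback times) are the decision variables, chosen so that this conflict list is empty for every shared junction.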

Keywords: light rail transit (LRT), non-linear programming, railway timetabling, timetable coordination

Procedia PDF Downloads 54
13799 Fast Return Path Planning for Agricultural Autonomous Terrestrial Robot in a Known Field

Authors: Carlo Cernicchiaro, Pedro D. Gaspar, Martim L. Aguiar

Abstract:

The agricultural sector is becoming more critical than ever in view of the expected overpopulation of the Earth. The introduction of robotic solutions in this field is an increasingly researched topic, aimed at making the most of the Earth's resources, avoiding the wear on the human body caused by harsh agricultural work, and opening the possibility of constant, careful processing 24 hours a day. This project was realized for a terrestrial autonomous robot designed to navigate in an orchard collecting fallen peaches below the trees. When the robot receives the low-battery signal, it has to return to the docking station, where it replaces its battery, and then return to the last work point and resume its routine. The robot follows a preset path through orchard tree rows of variable length, traversing it iteratively using the D* algorithm. On low battery, the D* algorithm is again used to determine the fastest return path to the docking station, as well as the path back from the docking station to the last work point. MATLAB simulations were performed to analyze the flexibility and adaptability of the developed algorithm. The simulation results show enormous potential for adaptability, particularly in view of the irregularity of the orchard field, which is not flat and changes over time due to fallen branches and other obstacles and constraints; the D* algorithm determines the best route in spite of the irregularity of the terrain. Moreover, this work presents a possible solution to improve the tracking of the initial points and reduce the time between movements.
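
The return-path computation can be illustrated with a grid search. The sketch below uses static A* as a simplified stand-in for D* (D* additionally repairs the path incrementally when obstacles change, which is why it suits a changing orchard); the grid and unit step costs are assumptions for illustration:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid (1 = obstacle,
    0 = free).  Returns the list of cells from start to goal, or None
    if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None
```

Here `start` would be the robot's last work point and `goal` the docking station; replanning after an obstacle change simply reruns the search, whereas D* reuses the previous search tree.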

Keywords: path planning, fastest return path, agricultural autonomous terrestrial robot, docking station

Procedia PDF Downloads 123
13798 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Cococynthis citrullus) and Coconut Fruit (Cocos nucifera)

Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude

Abstract:

Comparative analysis of the properties of melon seed, coconut fruit, and their oil yields was carried out in this work using standard AOAC analytical techniques. The results revealed moisture contents of 11.15% (melon) and 7.59% (coconut) and crude lipid contents of 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature, and solute:solvent ratio) showed a significant difference (p < 0.05) in yield between the samples, with melon seed flour giving a higher percentage range of oil yield (41.30-52.90%) than coconut (36.25-49.83%). Physical characterization of the extracted oils gave refractive indices of 1.487 (melon seed oil) and 1.361 (coconut oil) and viscosities of 0.008 (melon seed oil) and 0.002 (coconut oil). Chemical analysis showed acid values of 1.00 mg NaOH/g oil (melon oil) and 10.050 mg NaOH/g oil (coconut oil), saponification values of 187.00 mg KOH/g (melon oil) and 183.26 mg KOH/g (coconut oil), and iodine values of 75.00 mg I2/g (melon oil) and 81.00 mg I2/g (coconut oil). The standard statistical package Minitab version 16.0 was used for regression analysis, analysis of variance (ANOVA), and optimization of the leaching process. Both samples gave their highest oil yields, ≥ 52% (melon seed) and ≥ 48% (coconut), at the same optimal conditions: a solute:solvent ratio of 40 g/ml, a leaching time of 2 hours, and a leaching temperature of 50 °C. Both samples studied have the potential to yield oil, with melon seed giving the higher yield.
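
The regression step can be sketched as an ordinary least-squares fit of oil yield against the three leaching factors. The data points below are hypothetical placeholders within the reported yield range, not the paper's measurements; only the modeling workflow is illustrated:

```python
import numpy as np

# Hypothetical leaching runs (temperature degC, time h, solute:solvent
# g/ml) with assumed melon-oil yields (%); illustrative values only.
X = np.array([[40, 1, 20],
              [40, 2, 30],
              [45, 1, 30],
              [45, 2, 40],
              [50, 1, 20],
              [50, 2, 40]], dtype=float)
y = np.array([41.3, 44.0, 45.2, 48.5, 47.1, 52.9])

# First-order response model: yield = b0 + b1*T + b2*t + b3*ratio
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_yield(T, t, ratio):
    """Evaluate the fitted first-order model at a candidate setting."""
    return coef @ np.array([1.0, T, t, ratio])
```

Optimization then amounts to evaluating (or maximizing) `predicted_yield` over the feasible factor ranges, which is what the Minitab response optimizer automates.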

Keywords: coconut, melon, optimization, processing

Procedia PDF Downloads 426
13797 Peptide-Based Platform for Differentiation of Antigenic Variations within Influenza Virus Subtypes (Flutype)

Authors: Henry Memczak, Marc Hovestaedt, Bernhard Ay, Sandra Saenger, Thorsten Wolff, Frank F. Bier

Abstract:

Influenza viruses cause flu epidemics every year and serious pandemics at larger time intervals. The only cost-effective protection against influenza is vaccination. Due to rapid mutation, new subtypes continuously appear, which requires annual reimmunization. For a correct vaccination recommendation, the circulating influenza strains have to be detected promptly and exactly and characterized with respect to their antigenic properties. During the 2016/17 flu season, a wrong vaccination recommendation was given because of the large time interval between the identification of the relevant influenza vaccine strains and the outbreak of the flu epidemic the following winter. Because of such recurring vaccine mismatches, there is a great need to speed up the process chain from identifying the right vaccine strains to their administration. The monitoring of subtypes, as part of this process chain, is carried out by national reference laboratories within the WHO Global Influenza Surveillance and Response System (GISRS). To this end, thousands of viruses from patient samples (e.g., throat swabs) are isolated and analyzed each year. Currently, this analysis involves complex and time-intensive (several weeks) animal experiments to produce specific hyperimmune sera in ferrets, which are necessary for determining the antigen profiles of circulating virus strains. These tests are also difficult to standardize and reproduce, which restricts the significance of the results. To replace this test, a peptide-based assay for influenza virus subtyping from corresponding virus samples was developed. The viruses are differentiated by a set of specifically designed peptidic recognition molecules that interact differently with the different influenza virus subtypes. Subtype assignment is performed by pattern recognition guided by machine-learning algorithms, without any animal experiments.
Synthetic peptides are immobilized in multiplex format on various platforms (e.g., 96-well microtiter plates, microarrays). Afterwards, the viruses are incubated and analyzed, comparing different signaling mechanisms and a variety of assay conditions. Differentiation of a range of influenza subtypes, including H1N1, H3N2, and H5N1, as well as fine differentiation of single strains within these subtypes, is possible using the peptide-based subtyping platform. The platform could thereby replace the current antigenic characterization of influenza strains using ferret hyperimmune sera.
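
The pattern-recognition step can be sketched with a nearest-centroid classifier over peptide-binding signal patterns. The peptide signals, subtype labels, and classifier choice below are illustrative assumptions, not the platform's actual algorithm or data:

```python
import numpy as np

def train_centroids(signals, labels):
    """Average the peptide-binding signal pattern of each known subtype.
    'signals' is (n_samples, n_peptides); toy stand-in for the paper's
    machine-learning pattern recognition."""
    return {lab: signals[labels == lab].mean(axis=0)
            for lab in np.unique(labels)}

def classify(pattern, centroids):
    """Assign a new virus sample to the subtype whose mean binding
    pattern is closest in Euclidean distance."""
    return min(centroids, key=lambda lab:
               np.linalg.norm(pattern - centroids[lab]))

# Toy binding intensities for three peptides per sample (assumed data)
signals = np.array([[0.9, 0.1, 0.2], [0.8, 0.2, 0.1],   # H1N1-like
                    [0.1, 0.9, 0.3], [0.2, 0.8, 0.2]])  # H3N2-like
labels = np.array(["H1N1", "H1N1", "H3N2", "H3N2"])
centroids = train_centroids(signals, labels)
```

Fine differentiation within a subtype follows the same idea with more peptides per pattern and finer-grained strain labels.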

Keywords: antigenic characterization, influenza-binding peptides, influenza subtyping, influenza surveillance

Procedia PDF Downloads 138
13796 Novel Point of Care Test for Rapid Diagnosis of COVID-19 Using Recombinant Nanobodies against SARS-CoV-2 Spike1 (S1) Protein

Authors: Manal Kamel, Sara Maher, Hanan El Baz, Faten Salah, Omar Sayyouh, Zeinab Demerdash

Abstract:

In the recent COVID-19 pandemic, public health experts have emphasized testing, tracking infected people, and tracing their contacts as an effective strategy to reduce the spread of the virus. The development of rapid and sensitive diagnostic assays to replace reverse transcription polymerase chain reaction (RT-PCR) is mandatory. Our innovative test strip relies on nanoparticles conjugated to recombinant nanobodies against the SARS-CoV-2 spike protein (S1) and angiotensin-converting enzyme 2 (ACE2, which mediates the virus's entry into host cells) for rapid detection of the SARS-CoV-2 spike protein (S1) in saliva or sputum specimens. Comparative tests with RT-PCR will be held to estimate the significance of using COVID-19 nanobodies, for the first time, in the development of a lateral flow test strip. The developed LFIA detected SARS-CoV-2 S1 (3 ng of recombinant protein) in saliva specimens of COVID-19 patients, and no cross-reaction was detected with Middle East respiratory syndrome coronavirus (MERS-CoV) or SARS-CoV antigens. Our developed system revealed 96% sensitivity and 100% specificity for saliva samples, compared to 89% sensitivity and 100% specificity for nasopharyngeal swabs, providing a reliable alternative to the painful and uncomfortable nasopharyngeal swab process and the complex, time-consuming PCR test. An increase in testing compliance is to be expected.
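
The reported performance figures follow from the standard confusion-matrix definitions, as this sketch shows. The underlying counts are assumed so as to reproduce the stated 96%/100% saliva figures; the actual sample sizes are not given in the abstract:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP): the two
    figures used to compare the strip test against RT-PCR."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with 96% sensitivity, 100% specificity
sens, spec = sensitivity_specificity(tp=48, fn=2, tn=30, fp=0)
```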

Keywords: COVID 19, diagnosis, LFIA, nanobodies, ACE2

Procedia PDF Downloads 116
13795 Simulation of Lean Principles Impact in a Multi-Product Supply Chain

Authors: Matteo Rossini, Alberto Portioli Staudacher

Abstract:

Market competition is moving from the single firm to the whole supply chain because of increasing competition and a growing need for operational efficiency and customer orientation. Supply chain management allows companies to look beyond their organizational boundaries to develop and leverage the resources and capabilities of their supply chain partners, creating competitive advantages in the marketplace; because of this, SCM has acquired strategic importance. The lean approach is a management strategy that focuses on reducing every type of waste present in an organization, and it is becoming more and more popular among supply chain managers. However, application of the lean approach at the supply chain level is not widespread, and the impacts of lean principles in a supply chain context are not well studied: the literature contains only a few studies simulating lean performance in single-product supply chains. This research work studies the impacts of implementing lean principles along a supply chain. To achieve this, a simulation model of a three-echelon, multi-product supply chain has been built. A kanban system (with several priority policies) and various degrees of setup-time reduction are implemented in the lean-configured supply chain to apply the pull and lot-size reduction principles, respectively. To evaluate the benefits of the lean approach, the lean supply chain is compared with an EOQ-configured supply chain. The simulation results show that the kanban system and setup-time reduction improve inventory stock levels, and that logistics efforts depend on the degree of lean implementation. The paper concludes by describing the performance of the lean supply chain in different contexts.
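
The two compared inventory principles reduce to simple textbook formulas. The sketch below shows the EOQ of the benchmark configuration and a classic kanban-card count for the pull configuration; all parameter values are assumptions for illustration, not taken from the simulation study:

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Economic Order Quantity of the EOQ-configured benchmark chain:
        Q* = sqrt(2*D*S / H)
    with D demand per period, S cost per order, H holding cost/unit."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

def kanban_count(demand_rate, lead_time, container_size, safety=0.1):
    """Textbook kanban sizing for the pull-configured chain:
        N = ceil(d * L * (1 + safety) / c)
    Illustrates how setup-time reduction (shorter lead time L) lowers
    the number of cards, i.e., the circulating inventory."""
    return math.ceil(demand_rate * lead_time * (1 + safety)
                     / container_size)

order_qty = eoq(demand=1200, order_cost=50, holding_cost=2)
cards = kanban_count(demand_rate=100, lead_time=0.5, container_size=10)
```

Comparing the stock implied by `order_qty` with that implied by `cards` for the same demand is, in miniature, the comparison the simulation performs across echelons and products.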

Keywords: inventory policy, Kanban, lean supply chain, simulation study, supply chain management, planning

Procedia PDF Downloads 346
13794 Optimization of Gastro-Retentive Matrix Formulation and Its Gamma Scintigraphic Evaluation

Authors: Swapnila V. Shinde, Hemant P. Joshi, Sumit R. Dhas, Dhananjaysingh B. Rajput

Abstract:

The objective of the present study is to develop a hydrodynamically balanced system for atenolol, a β-blocker, as a single-unit floating tablet. Atenolol shows pH-dependent solubility, resulting in a bioavailability of 36%; thus, a site-specific oral controlled-release floating drug delivery system was developed. The formulation includes the novel use of a rate-controlling polymer, locust bean gum (LBG), in combination with HPMC K4M and the gas-generating agent sodium bicarbonate. The tablet was prepared by direct compression and evaluated for physico-mechanical properties. A statistical method was utilized to optimize the effect of the independent variables, namely the amounts of HPMC K4M and LBG, on three dependent responses: cumulative drug release, floating lag time, and floating time. Graphical and mathematical analysis of the results allowed the identification and quantification of the formulation variables influencing the selected responses. To study the gastrointestinal transit of the optimized gastro-retentive formulation, in vivo gamma scintigraphy was carried out in six healthy rabbits after radiolabeling the formulation with 99mTc. The transit profiles demonstrated that the dosage form was retained in the stomach for more than 5 hours. The study signifies the potential of the developed system for stomach-targeted delivery of atenolol with improved bioavailability.
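
The statistical optimization over the two factor levels can be sketched as a grid search over assumed response-surface models. The model forms and coefficients below are placeholders, not the study's fitted values; only the optimization pattern (trade a low floating lag time against high release) is illustrated:

```python
import numpy as np

# Assumed quadratic response models in coded factor levels
# x1 = HPMC K4M amount, x2 = LBG amount, each scaled to [-1, 1].
def lag_time(x1, x2):            # floating lag time, min (placeholder)
    return 5.0 - 1.2 * x1 - 0.8 * x2 + 0.5 * x1 * x2

def release(x1, x2):             # cumulative release, % (placeholder)
    return 80.0 + 6.0 * x1 + 4.0 * x2 - 3.0 * x1 ** 2

def best_formulation(n=21):
    """Grid-search the coded factor space for the setting that keeps
    lag time low while maximizing release (simple weighted score)."""
    levels = np.linspace(-1, 1, n)
    best, best_score = None, -np.inf
    for x1 in levels:
        for x2 in levels:
            score = release(x1, x2) - 10.0 * lag_time(x1, x2)
            if score > best_score:
                best, best_score = (x1, x2), score
    return best
```

A factorial-design package replaces the placeholder models with regressions fitted to the measured responses, but the selection step works the same way.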

Keywords: floating tablet, factorial design, gamma scintigraphy, antihypertensive model drug, HPMC, locust bean gum

Procedia PDF Downloads 268
13793 Mental Health Challenges, Internalizing and Externalizing Behavior Problems, and Academic Challenges among Adolescents from Broken Families

Authors: Fadzai Munyuki

Abstract:

Parental divorce is one of youth's most stressful life events and is associated with long-lasting emotional and behavioral problems. Over the last few decades, research has consistently found strong associations between divorce and adverse health effects in adolescents. Parental divorce has been hypothesized to lead to psychosocial development problems, mental health challenges, internalizing and externalizing behavior problems, and low academic performance among adolescents. This is supported by positive youth development theory, which states that the family setup has a major role to play in adolescent development and well-being. The focus of this research is therefore to test this hypothesized process model among adolescents in five provinces of Zimbabwe. A cross-sectional study will be conducted, employing 1,840 (n = 1840) adolescents aged 14 to 17. A stress questionnaire scale, the Child Behavior Checklist, and an academic concept scale will be used. Data analysis will be done using structural equation modeling. Past research has many limitations, including the lack of real-time studies, few cross-sectional studies, the lack of thorough and validated population measures, and a tendency to focus on a single variable in relation to parental divorce. This study therefore seeks to bridge the gap between past research and the current literature by using a validated population measure, a real-time study, and a combination of three latent variables.

Keywords: mental health, internalizing and externalizing behavior, divorce, academic achievements

Procedia PDF Downloads 60
13792 An Exploratory Research of Human Character Analysis Based on Smart Watch Data: Distinguish the Drinking State from Normal State

Authors: Lu Zhao, Yanrong Kang, Lili Guo, Yuan Long, Guidong Xing

Abstract:

Smart watches, as handy devices with rich functionality, have become one of the most popular wearable devices all over the world. Among their various functions, the most basic is health monitoring, and the monitoring data can provide effective evidence or clues for the detection of crime cases. For instance, step-counting data can help determine whether the watch wearer was still or moving during a given time period. There is, however, still little research on the analysis of human characteristics based on these data. The purpose of this research is to analyze health monitoring data to distinguish the drinking state from the normal state; the result may play a role in cases involving drinking, such as drunk driving. The experiment mainly focused on finding which figures in smart watch health monitoring data change with drinking and on quantifying the extent of the change. The chosen subjects were mostly in their 20s, and each had been wearing the same smart watch for a week. Each subject drank several times during the week and noted down the start and end times of the drinking. The researchers then extracted and analyzed the health monitoring data from the watch. According to the descriptive statistical analysis, the heart rate changes when drinking: the average heart rate is about 10% higher than normal, and the coefficient of variation is about 30% lower than in the normal state. Though more research needs to be carried out, this experiment and analysis suggest an application for data from smart watches.
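
The descriptive statistics used for the comparison reduce to the mean and the coefficient of variation, as in this sketch (the heart-rate series are toy values, not the experiment's data):

```python
import numpy as np

def drinking_indicators(heart_rate):
    """Mean heart rate and coefficient of variation (std/mean), the two
    descriptive statistics used to separate drinking from normal
    state."""
    hr = np.asarray(heart_rate, dtype=float)
    mean = hr.mean()
    cv = hr.std(ddof=1) / mean
    return mean, cv

# Toy minute-by-minute readings (bpm); illustrative values only
normal = [72, 75, 70, 78, 74, 69, 76]
drinking = [82, 84, 81, 83, 85, 82, 84]

mean_n, cv_n = drinking_indicators(normal)
mean_d, cv_d = drinking_indicators(drinking)
```

A drinking episode would then be flagged when the windowed mean rises and the coefficient of variation drops relative to the wearer's baseline, matching the pattern reported above.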

Keywords: character analysis, descriptive statistics analysis, drink state, heart rate, smart watch

Procedia PDF Downloads 156
13791 Frequency of Surgical Complications in Diabetic Patients after Kidney Transplantation

Authors: Hakan Duger, Alparslan Ersoy, Canan Ersoy

Abstract:

The improvement of surgical techniques in recent years has reduced the frequency of postoperative complications in kidney transplant recipients, and novel immunosuppressive agents have reduced rates of graft loss due to acute rejection to less than 1%. However, surgical complications may still lead to graft loss and morbidity in recipients, and because of potent immunosuppression, impaired wound healing and complications are frequent after transplantation. We compared the frequency of postoperative surgical complications in diabetic and non-diabetic patients after kidney transplantation. Materials and Methods: This retrospective study was conducted in consecutive patients (213 females, 285 males, median age 39 years) who underwent kidney transplant surgery at our center between December 2005 and October 2015. The patients were divided into two groups: diabetics (46 ± 10 years, 26 males, 16 females) and non-diabetics (39 ± 12 years, 259 males, 197 females). Characteristics of both groups were obtained from medical records. Results: We performed 225 living-donor and 273 deceased-donor transplantations. The renal replacement type was hemodialysis in 60.8%, peritoneal dialysis in 17.3%, and preemptive in 12%. The mean body mass index of the recipients was 24 ± 4.6 kg/m², donor age was 48.6 ± 14.3 years, cold ischemia time was 11.3 ± 6.1 hours, surgery time was 4.9 ± 1.2 hours, and recovery time was 54 ± 31 min. The mean hospitalization duration was 19.1 ± 13.5 days. The frequency of postoperative surgical complications was 43.8%, with no significant difference between the non-diabetic (43.5%) and diabetic (47.4%) groups (p=0.648). Postoperative surgical complications were lymphocele (24.6% vs. 23.7%), delayed wound healing (13.2% vs. 7.6%), hematoma (7.8% vs. 15.8%), urinary leak (4.6% vs. 5.3%), hemorrhage (5.1% vs. 0%), hydronephrosis (2.2% vs. 0%), renal artery thrombosis (1.5% vs. 0%), renal vein thrombosis (1% vs. 2.6%), urinoma (0.7% vs. 0%), urinary obstruction (0.5% vs. 0%), ureteral stenosis (0.5% vs. 0%), and ureteral reflux (0.2% vs. 0%) in the non-diabetic and diabetic groups, respectively (p > 0.05). Mean serum creatinine levels in non-diabetics and diabetics were 1.43 ± 0.81 and 1.61 ± 0.96 mg/dL at the 1st month (p=0.198). At the 6th month, the mean graft and patient survival times in patients with postoperative surgical complications were significantly lower than in those without (162.9 ± 3.4 vs. 175.6 ± 1.5 days, p=0.008, and 171 ± 2.9 vs. 176.1 ± 1.6 days, p=0.047, respectively). However, the patient survival durations of the non-diabetic (173 ± 27 days) and diabetic (177 ± 13 days) groups were comparable (p=0.396). Conclusion: We conclude that surgical complications such as lymphocele and delayed wound healing are common and that their frequency in diabetic recipients does not differ from that in non-diabetic recipients. All persons involved in the postoperative care of kidney transplant recipients should be aware of the potential surgical complications to allow rapid diagnosis and treatment.

Keywords: kidney transplantation, diabetes mellitus, surgery, complication

Procedia PDF Downloads 170
13790 Robotic Arm-Automated Spray Painting with One-Shot Object Detection and Region-Based Path Optimization

Authors: Iqraq Kamal, Akmal Razif, Sivadas Chandra Sekaran, Ahmad Syazwan Hisaburi

Abstract:

Painting plays a crucial role in the aerospace manufacturing industry, serving both protective and cosmetic purposes for components. However, the traditional manual painting method is time-consuming and labor-intensive, posing challenges for the sector in achieving higher efficiency. Additionally, current automated robot path planning has been a bottleneck for spray painting processes, as typical manual teaching methods are time-consuming, error-prone, and skill-dependent. It is therefore essential to develop automated tool path planning methods to replace manual ones, reducing costs and improving product quality. Focusing on flat panel painting in aerospace manufacturing, this study aims to address unreliable part identification caused by the high-mix, low-volume nature of the industry. The proposed solution uses a spray gun and a UR10 robotic arm with a vision system that applies one-shot object detection (OS2D) to identify parts accurately. The research also optimizes path planning by concentrating on the region of interest, specifically the identified part, rather than uniformly covering the entire painting tray.
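For illustration, the region-based idea can be sketched in a few lines: restrict a boustrophedon (back-and-forth) raster to the bounding box returned by the detector instead of rastering the whole tray. The function names, the units, and the axis-aligned bounding-box assumption below are illustrative, not the authors' implementation.

```python
# Sketch of region-based spray-path planning: instead of rastering the whole
# tray, generate a boustrophedon (back-and-forth) path only over the bounding
# box returned by the one-shot detector. Names and parameters are illustrative.

def boustrophedon_path(bbox, overlap_step):
    """bbox = (x_min, y_min, x_max, y_max) of the detected part, in mm.
    overlap_step = spacing between passes, set from the spray-fan width."""
    x_min, y_min, x_max, y_max = bbox
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints.append((x_min, y))
            waypoints.append((x_max, y))
        else:
            waypoints.append((x_max, y))
            waypoints.append((x_min, y))
        left_to_right = not left_to_right
        y += overlap_step
    return waypoints

# Example: a 200 mm x 100 mm detected part with 50 mm pass spacing.
path = boustrophedon_path((0, 0, 200, 100), 50)
```

With these illustrative numbers, the planner emits a six-waypoint path covering only the detected region rather than the full tray.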

Keywords: aerospace manufacturing, one-shot object detection, automated spray painting, vision-based path optimization, deep learning, automation, robotic arm

Procedia PDF Downloads 65
13789 The Methodology of Hand-Gesture Based Form Design in Digital Modeling

Authors: Sanghoon Shim, Jaehwan Jung, Sung-Ah Kim

Abstract:

As digital technology develops, studies are actively being conducted on Tangible User Interfaces (TUIs), which link the physical environment, through the human senses, with the virtual environment of the computer. In addition, computer-aided design techniques have advanced tremendously, enabling optimized decision-making through machine learning and the parallel comparison of alternatives. However, while a complex design that responds to user requirements or performance criteria can emerge from the designer's intuition, it is difficult to actualize such an emerging design by the designer's ability alone. Ancillary tools, such as Gaudí's sandbag models, can be instruments to reinforce and evolve these emerging ideas. With the advent of many commercial tools that support 3D objects, designers' intentions are easily reflected in their designs, but the degree to which they are reflected depends on the designer's proficiency with the design tools. This study implements an environment in which form can be shaped by the designer's fingers in the initial design phase of complex building design. A Leap Motion sensor is used to recognize the designer's hand motions, which are converted into digital information to realize an environment that can be linked in real time in virtual reality (VR). In addition, the implemented design can be linked in real time with Rhino™, a 3D authoring tool, and its plug-in Grasshopper™. As a result, it is possible to design tangibly using a TUI, which can serve as a tool for assisting designer intuition.
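A minimal sketch of the gesture-to-form mapping such a system might use: displace the control points of a digital surface toward a tracked fingertip with a Gaussian falloff. The grid representation, the falloff radius, and all names below are assumptions for illustration, not the authors' Leap Motion pipeline.

```python
import math

# Minimal sketch: push the control points of a flat grid toward a tracked
# fingertip with a Gaussian falloff, mimicking "sculpting" a surface by hand.
# The falloff radius and the fingertip encoding are illustrative assumptions.

def deform_grid(grid, fingertip, radius=2.0):
    """grid: dict {(i, j): height}; fingertip: (i, j, push_height)."""
    fi, fj, push = fingertip
    new_grid = {}
    for (i, j), h in grid.items():
        d2 = (i - fi) ** 2 + (j - fj) ** 2
        new_grid[(i, j)] = h + push * math.exp(-d2 / (2 * radius ** 2))
    return new_grid

# A flat 5x5 control grid "touched" at its center with a 10-unit push.
flat = {(i, j): 0.0 for i in range(5) for j in range(5)}
bumped = deform_grid(flat, (2, 2, 10.0))
```

Each frame of tracking data would re-run such a deformation, so the surface follows the hand in real time before being handed off to Rhino/Grasshopper.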

Keywords: design environment, digital modeling, hand gesture, TUI, virtual reality

Procedia PDF Downloads 357
13788 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from Open Data bases published by different governments, which requires integrating data from heterogeneous sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on Open Data is increasing accordingly. However, each government has its own procedures for publishing its data, which leads to a variety of data set formats, because there are no international standards specifying the formats of Open Data sets. Due to this variety, we must build a data integration process that is able to bring together all kinds of formats. Some software tools have been developed to support the integration process, e.g. Data Tamer and Data Wrangler. The problem with these tools is that they require a data scientist to take part in the integration process as a final step. In our case we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve an automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its Open Data bases on environmental indicators in real time. In the same way, other governments (such as Andalucia or Bilbao) have published Open Data sets relative to the environment. All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize analyses of the real-time data.
Once the integration task is done, all the data from any government share the same format, and the analysis process can be initiated in a computationally better way. So the tool presented in this work has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open our software tool, as a second approach, we also developed an implementation in the R language as a mature open source technology. R is a powerful open source programming language that allows us to process and analyze a huge amount of data with high performance, and R libraries such as Shiny are available for building a graphic interface. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides any developer with an official real-time integrated data set of environmental data in Spain, on which they can build their own applications.
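The per-government adapter idea described above can be sketched as a small mapping from each portal's field names to a common schema, so that adding a new portal only requires a new mapping entry rather than changes to the analysis code. All field names below are invented for illustration.

```python
# Sketch of the per-government adapter idea: each source gets a small mapping
# from its own field names to a common schema, so new portals can be added
# without touching the analysis code. All field names are illustrative.

COMMON_FIELDS = ("city", "timestamp", "temperature_c")

ADAPTERS = {
    "madrid": {"ciudad": "city", "fecha": "timestamp", "temp": "temperature_c"},
    "bilbao": {"hiria": "city", "data": "timestamp", "tenperatura": "temperature_c"},
}

def normalize(source, record):
    """Rename a raw record's fields into the common schema, dropping extras."""
    mapping = ADAPTERS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

rows = [
    normalize("madrid", {"ciudad": "Madrid", "fecha": "2015-06-01T10:00", "temp": "24.5"}),
    normalize("bilbao", {"hiria": "Bilbao", "data": "2015-06-01T10:00", "tenperatura": "19.2"}),
]
```

After this step every record, whatever its origin, exposes the same field names, which is what allows a single analytic interface to sit on top of all portals.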

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 300
13787 System Identification of Timber Masonry Walls Using Shaking Table Test

Authors: Timir Baran Roy, Luis Guerreiro, Ashutosh Bagchi

Abstract:

Dynamic studies are important for the design, repair, and rehabilitation of structures, and have played an important role in characterizing the behavior of structures such as bridges, dams, and high-rise buildings. There has been substantial development in this area over the last few decades, especially in the field of dynamic identification techniques for structural systems. Frequency Domain Decomposition (FDD) and Time Domain Decomposition are the most commonly used methods to identify modal parameters such as natural frequency, modal damping, and mode shape. The focus of the present research is to study the dynamic characteristics of typical timber masonry walls commonly used in Portugal. For that purpose, multi-storey structural prototypes of such walls were tested on a seismic shake table at the National Laboratory for Civil Engineering, Portugal (LNEC). Signal processing was performed on the output response, collected from the shaking table experiment on the prototype using accelerometers. In the present work, the output response was processed, with reference to the input, in two ways: FDD and Stochastic Subspace Identification (SSI). In order to estimate the values of the modal parameters, algorithms for FDD are formulated, and parametric functions for the SSI are computed. Finally, the estimated values from both methods are compared to measure the accuracy of the two techniques.
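As a single-channel illustration of the peak-picking step underlying FDD (the full method takes singular values of the cross-spectral density matrix of all channels), the sketch below estimates a natural frequency from a synthetic accelerometer-like record with a naive DFT. The signal and all parameters are illustrative assumptions, not the LNEC test data.

```python
import math

# Single-channel illustration of the peak-picking step behind FDD: compute a
# naive discrete Fourier transform of a synthetic record and take the
# frequency of the largest spectral peak as the natural-frequency estimate.
# The 5 Hz test signal and all parameters are illustrative assumptions.

def dominant_frequency(signal, fs, max_bin):
    n = len(signal)
    best_k, best_power = 0, -1.0
    for k in range(1, max_bin):          # skip the DC bin
        re = sum(x * math.cos(2 * math.pi * k * j / n) for j, x in enumerate(signal))
        im = sum(x * math.sin(2 * math.pi * k * j / n) for j, x in enumerate(signal))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fs / n               # bin index -> Hz

fs = 100.0                               # sampling rate, Hz
sig = [math.sin(2 * math.pi * 5.0 * t / fs) for t in range(1000)]
f_est = dominant_frequency(sig, fs, max_bin=100)
```

In practice the peak would be picked from the first singular value of the spectral matrix, and damping and mode shapes extracted around it; this sketch only shows the frequency-identification idea.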

Keywords: frequency domain decomposition (fdd), modal parameters, signal processing, stochastic subspace identification (ssi), time domain decomposition

Procedia PDF Downloads 253
13786 Stability Optimization of NaBH₄ via pH and H₂O:NaBH₄ Ratios for Large Scale Hydrogen Production

Authors: Parth Mehta, Vedasri Bai Khavala, Prabhu Rajagopal, Tiju Thomas

Abstract:

There is an increasing need for alternative clean fuels, and hydrogen (H₂) has long been considered a promising solution with a high calorific value (142 MJ/kg). However, the storage of H₂ and the expensive processes for its generation have hindered its usage. Sodium borohydride (NaBH₄) can potentially be used as an economically viable means of H₂ storage. Thus far, there have been attempts to optimize the half-life of NaBH₄ in aqueous media by stabilizing it with sodium hydroxide (NaOH) at various pH values. Other reports have shown that H₂ yield and reaction kinetics remained constant for all H₂O:NaBH₄ ratios above 30:1, without any acidic catalysts. Here we highlight the importance of pH and the H₂O:NaBH₄ ratio (80:1, 40:1, 20:1 and 10:1 by weight) for NaBH₄ stabilization (half-life reaction time at room temperature) and for minimizing corrosion of H₂ reactor components. It is interesting to observe that at any particular pH ≥ 10 (e.g., pH = 10, 11 and 12), the H₂O:NaBH₄ ratio does not have the expected linear dependence with stability. On the contrary, high stability was observed at the 10:1 H₂O:NaBH₄ ratio across all pH ≥ 10. When the H₂O:NaBH₄ ratio is increased from 10:1 to 20:1 and beyond (up to 80:1), constant stability (% degradation) is observed with respect to time. For practical usage (consumption within 6 hours of making the NaBH₄ solution), 15% degradation at pH 11 and an H₂O:NaBH₄ ratio of 10:1 is recommended. Increasing this ratio demands a higher NaOH concentration at the same pH, thus requiring a higher concentration or volume of acid (e.g., HCl) for H₂ generation. The reactions are done with tap water to render the results useful from an industrial standpoint. The observed stability regimes are rationalized based on the complexes NaBH₄ forms when solvated in water, which depend sensitively on both pH and the H₂O:NaBH₄ ratio.
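For context, the hydrolysis that releases the hydrogen is NaBH₄ + 2H₂O → NaBO₂ + 4H₂. A quick gravimetric sketch using textbook molar masses (not figures from the study) shows why NaBH₄ is attractive as a carrier:

```python
# Gravimetric sketch of the hydrolysis NaBH4 + 2 H2O -> NaBO2 + 4 H2.
# Molar masses are standard textbook values; the energy figure uses the
# 142 MJ/kg calorific value of H2 quoted above. Illustrative only.

M_NABH4 = 22.99 + 10.81 + 4 * 1.008   # g/mol, Na + B + 4 H
M_H2 = 2 * 1.008                      # g/mol

h2_per_kg = 4 * M_H2 / M_NABH4        # kg of H2 released per kg of NaBH4
energy_mj = 142 * h2_per_kg           # MJ of H2 energy per kg of NaBH4
```

Roughly 0.21 kg of H₂ (about 30 MJ) per kilogram of NaBH₄, which is why stabilizing the solution against premature degradation matters for large scale use.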

Keywords: hydrogen, sodium borohydride, stability optimization, H₂O:NaBH₄ ratio

Procedia PDF Downloads 103
13785 Team Teaching, Students' Perception, Challenges, and Remedies for Effective Implementation: A Case Study of the Department of Biology, Alvan Ikoku Federal College of Education, Owerri, Imo State, Nigeria

Authors: Daniel Ihemtuge Akim, Micheal O. Ikeanumba

Abstract:

This research focused on team teaching: students' perception, challenges, and remedies for effective implementation, using the Department of Biology, Alvan Ikoku Federal College of Education, Owerri, Imo State, Nigeria as a case study. It seeks to address the misconception held by students about the use of team teaching as a methodology for learning. Five purposes and five research questions guided this study. A descriptive survey design was used. Students of the Department of Biology enrolled in both the Bachelor's degree and the National Certificate in Education programmes at Alvan Ikoku Federal College of Education, Owerri, formed the population. A simple random sampling technique was used to select the sampled students, and 20% of all lecturers were selected, out of a total sample size of three hundred and forty (340). The instrument used for data collection was a structured 4-point Likert-scale questionnaire, and the analysis was made using the mean method. The results revealed that poor time management by lecturers, a lack of lecture venues, and limited manpower are some of the challenges hindering the effective implementation of team teaching. It was also observed that students perform better academically when a team teaching approach is used rather than a single-teacher approach. Finally, the recommendations made suggested that teachers involved in team teaching should coordinate their teaching strategies and work within the time frame to achieve the stated objectives.

Keywords: challenges, implementation, perception, team teaching

Procedia PDF Downloads 368
13784 An Analysis of Gamification in the Post-Secondary Classroom

Authors: F. Saccucci

Abstract:

Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and correlate to classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study is an extension of similar thought as well as of a previous study in which in-class testing, using a paired t-test, showed that gamification significantly improved students' understanding of subject material. Gamification in the classroom can range from high-end computer-simulated software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The paper-based game in this analysis was inexpensive, required low preparation time for the faculty member, and consumed approximately 20 minutes of classroom time. Data for the study was collected through in-class student feedback surveys and narrative from the faculty member moderating the game. Students were randomly assigned to groups of four. Qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with, and get to know other students. 2. Students enjoyed the gamification process, given the sense of fun and competition. 3. The post-assessment that followed the simulation game was not part of their grade calculation; it was therefore an opportunity to participate in a low-risk activity whereby students could self-assess their understanding of the subject material. 4. In the view of the students, content knowledge did increase after the gamification process.
These qualitative advantages contribute to the argument that gamification should be attempted in today's post-secondary classroom. The analysis also highlighted that eighty (80) percent of respondents believed twenty minutes devoted to the gamification process was appropriate; however, twenty (20) percent of respondents believed that, rather than scheduling the gamification process and its post-quiz in the last week, a review for the final exam might have been more useful. A follow-up study hopes to determine whether the scheduling of the gamification correlated with the percentage of students not wanting to engage in the process. It also hopes to determine at what incremental level of time invested classroom gamification produces no material incremental benefit to the student, and whether any correlation exists between respondents preferring not to have it at the end of the semester and students not believing the gamification process increased their curricular knowledge.
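The paired t-test mentioned above can be sketched with the standard library alone; the pre/post scores below are invented for illustration, not data from the study.

```python
import math
from statistics import mean, stdev

# Minimal sketch of the paired t-test used to compare student scores before
# and after a gamification exercise. The scores are made up for illustration.

def paired_t(pre, post):
    """t statistic for paired samples (n - 1 degrees of freedom)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

pre = [55, 60, 48, 70, 62, 58, 65, 50]
post = [66, 68, 59, 75, 70, 69, 72, 61]
t_stat = paired_t(pre, post)
```

A t statistic beyond the critical value for n − 1 degrees of freedom (about 2.36 at the 5% level for eight pairs) would indicate a significant improvement.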

Keywords: gamification, inexpensive, non-quantitative advantages, post-secondary

Procedia PDF Downloads 197
13783 Detection of Acrylamide Using Liquid Chromatography-Tandem Mass Spectrometry and Quantitative Risk Assessment in Selected Food from Saudi Market

Authors: Sarah A. Alotaibi, Mohammed A. Almutairi, Abdullah A. Alsayari, Adibah M. Almutairi, Somaiah K. Almubayedh

Abstract:

Concerns over the presence of acrylamide in food date back to 2002, when Swedish scientists reported that substantial amounts of acrylamide form in carbohydrate-rich foods cooked at high temperatures. Similar findings were reported by other researchers, prompting major international efforts to investigate dietary exposure and the subsequent health complications in order to manage this issue properly. In this work, we aim to determine the acrylamide level in different foods (coffee, potato chips, biscuits, and baby food) commonly consumed by the Saudi population. In a total of forty-three samples, acrylamide was detected in twenty-three samples at levels of 12.3 to 2850 µg/kg. Across the food groups, the highest concentration of acrylamide was found in coffee samples (<12.3-2850 μg/kg), followed by potato chips (655-1310 μg/kg), then biscuits (23.5-449 μg/kg), whereas the lowest acrylamide level was observed in baby food (<14.75-126 μg/kg). Most coffee, biscuit, and potato chip products contain a high amount of acrylamide and are also among the most commonly consumed products. Saudi adults had mean acrylamide exposures from coffee, potato, biscuit, and cereal of 0.07439, 0.04794, 0.01125, and 0.003371 µg/kg-b.w/day, respectively. For Saudi infants and children, the exposures from the same types of food were 0.1701, 0.1096, 0.02572, and 0.00771 µg/kg-b.w/day, respectively. Most groups have a percentile that exceeds the tolerable daily intake (TDI) cancer value (2.6 µg/kg-b.w/day). Overall, the MOE results show that the Saudi population is at high risk of acrylamide-related disease for all food types, with a chance of cancer risk in all age groups (all values ˂10,000). Furthermore, for non-cancer risks, acrylamide in all tested foods was within the safe limit (˃125), except for potato chips, which pose a risk of disease in the population.
With potato and coffee as raw materials, additional studies were conducted to assess factors affecting acrylamide formation in fried potato and roasted coffee, including temperature, cooking time, and additives. By systematically varying processing temperatures and times, acrylamide content was mitigated by lowering the temperature and decreasing the cooking time. Furthermore, the combined addition of chitosan and NaCl was shown to have a large impact on acrylamide formation.
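The MOE figures above follow the standard margin-of-exposure calculation, MOE = BMDL₁₀ / exposure. The sketch below assumes EFSA's commonly cited BMDL₁₀ of 170 µg/kg-b.w/day for neoplastic effects (the abstract does not state which reference point was used) and reuses the adult coffee exposure reported above.

```python
# Sketch of the margin-of-exposure (MOE) calculation behind the risk figures:
# MOE = BMDL10 / dietary exposure. The BMDL10 of 170 ug/kg-bw/day is the EFSA
# benchmark commonly used for acrylamide's neoplastic effects (an assumption
# here, not stated in the abstract); the exposure value is the adult coffee
# figure reported in the abstract.

BMDL10 = 170.0                 # ug/kg-b.w/day, assumed reference point
exposure_coffee = 0.07439      # ug/kg-b.w/day, Saudi adults (from the study)

moe = BMDL10 / exposure_coffee
risk_flag = moe < 10_000       # an MOE below 10,000 signals a health concern
```

With these inputs the MOE comes out in the low thousands, consistent with the abstract's conclusion that all values fall below the 10,000 threshold.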

Keywords: risk assessment, dietary exposure, MOE, acrylamide, hazard

Procedia PDF Downloads 39
13782 Risk Factors for Post-Induction Hypotension Among Elderly Patients Undergoing Elective Non-Cardiac Surgery Under General Anesthesia

Authors: Karuna Sutthibenjakul, Sunisa Chatmongkolchart

Abstract:

Background: Post-induction hypotension is common and occurs more often in elderly patients. We aimed to determine risk factors for hypotension after induction among elderly patients (aged 65 years and older) who underwent elective non-cardiac surgery under general anesthesia. Methods: This cohort study analyzed data from 580 patients treated between December 2017 and July 2018 at a tertiary university hospital in southern Thailand. Hypotension was defined as a decrease in mean arterial pressure of more than 30% from baseline within 20 minutes after induction, or the use of a vasopressor agent to treat low blood pressure. Intraoperative parameters were blood pressure and heart rate at T0, TEI, T5, T10, T15 and T20 (immediately after arrival at the operating room, immediately after intubation, and 5, 10, 15 and 20 minutes after intubation, respectively). Results: The median age was 72.5 (68, 78) years. The prevalence of post-induction hypotension was 64.8%; the highest prevalence (39.7%) occurred at 15 minutes after intubation. The risk of post-induction hypotension increased with diuretic use as preoperative medication (P-value=0.016), with hematocrit level (P-value=0.031), and with the degree of hypertension immediately after arrival at the operating room (P-value<0.001). Increasing fentanyl dosage during induction was associated with hypotension at intubation time (P-value<0.01) and at 5 minutes after intubation (P-value<0.001). There was no statistically significant association with increasing propofol dosage. Conclusion: The degree of hypertension immediately after arrival at the operating room and increasing fentanyl dosage were significant risk factors for post-induction hypotension in elderly patients.
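The study's hypotension criterion can be expressed as a small check. The MAP approximation (SBP + 2·DBP)/3 and the blood-pressure values below are illustrative, not patient data.

```python
# Sketch of the study's hypotension criterion: a reading qualifies if mean
# arterial pressure (MAP) falls more than 30% below the pre-induction
# baseline. MAP is approximated by the usual (SBP + 2*DBP) / 3 formula;
# the blood-pressure values are invented for illustration.

def mean_arterial_pressure(sbp, dbp):
    return (sbp + 2 * dbp) / 3

def is_hypotensive(baseline_map, current_map, threshold=0.30):
    return (baseline_map - current_map) / baseline_map > threshold

baseline = mean_arterial_pressure(140, 85)   # pre-induction reading
after = mean_arterial_pressure(90, 55)       # reading after induction
flag = is_hypotensive(baseline, after)       # ~35% drop -> flagged
```

The alternative trigger in the definition, vasopressor use, would simply be OR-ed with this check in an analysis pipeline.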

Keywords: risk factors, post-induction, hypotension, elderly

Procedia PDF Downloads 120
13781 A Surgical Correction and Innovative Splint for Swan Neck Deformity in Hypermobility Syndrome

Authors: Deepak Ganjiwale, Karthik Vishwanathan

Abstract:

Objective: Splinting is a major domain of the occupational therapy profession. The design of a splint depends upon the patient's needs and deformities. Swan neck deformity of the finger is not very common; it may occur secondary to various diseases. Conservative treatment of swan neck deformity is available using different static splints only. There are very few reports of surgical correction of swan neck deformity in benign hypermobility syndrome. Method: This case report describes the result of surgical intervention and hand splinting in a twenty-year-old lady with a past history of cardiovascular stroke and no residual neurological deficit. She presented with a correctable swan neck deformity and had failed to improve with static ring splints intended to correct the deformity. She was noted to have hyperlaxity (Ehlers-Danlos type), with a modified Beighton score of 5/9. She underwent volar plate plication of the proximal interphalangeal joint of the left ring finger along with hemitenodesis of the ulnar slip of the flexor digitorum superficialis (FDS) tendon, whereby the ulnar slip of FDS was passed through a small surgically created rent in the A2 pulley and sutured back to itself. Result: Postoperatively, the patient was referred to occupational therapy for splinting, with the instruction that the splint should function at times as a static splint and at times as a dynamic one, for positioning and correction of the finger. Conclusion: After occupational therapy intervention and splinting, the patient had full correction of the swan neck deformity with near-full flexion of the operated finger and is able to work independently.

Keywords: swan neck, finger, deformity, splint, hypermobility

Procedia PDF Downloads 242
13780 Chaotic Sequence Noise Reduction and Chaotic Recognition Rate Improvement Based on Improved Local Geometric Projection

Authors: Rubin Dan, Xingcai Wang, Ziyang Chen

Abstract:

A chaotic time series noise reduction method based on the fusion of the local projection method, wavelet transform, and particle swarm algorithm (referred to as the LW-PSO method) is proposed to address false recognition caused by noise when identifying chaotic time series. The method first uses phase space reconstruction to recover the original dynamical system characteristics and removes the noise subspace by selecting the neighborhood radius; it then uses the wavelet transform to remove the D1-D3 high-frequency components so as to retain as much signal information as possible, while least-squares optimization is performed by the particle swarm algorithm. The Lorenz system containing 30% Gaussian white noise is simulated and verified, and the phase space, SNR value, RMSE value, and K value of the 0-1 test before and after noise reduction are compared and analyzed for the Schreiber method, the local projection method, the wavelet transform method, and the LW-PSO method, showing that the LW-PSO method has a better noise reduction effect than the other three common methods. The method is also applied to a classical system to evaluate the noise reduction effect of the four methods and the identification of the original system, further verifying the superiority of the LW-PSO method. Finally, it is applied to the Chengdu rainfall chaotic sequence, and the results prove that the LW-PSO method can effectively reduce the noise and improve the chaos recognition rate.
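The 0-1 test used here for evaluation can be sketched compactly (Gottwald-Melbourne correlation variant): project the series onto (p, q) coordinates via a projection angle c, then correlate the mean-square displacement with time, so that K ≈ 1 indicates diffusive (chaotic or noisy) behaviour and K ≈ 0 regular behaviour. The signals and parameter choices below are illustrative, not the paper's data.

```python
import math, random

# Compact sketch of the Gottwald-Melbourne 0-1 test: project the series onto
# (p, q) via an angle c, then correlate the mean-square displacement M(n)
# with n. K ~ 1 means diffusive (chaotic/noisy), K ~ 0 means regular.
# Test signals and the set of c values are illustrative choices.

def k_statistic(series, c):
    n_len = len(series)
    p, q, ps, qs = [], [], 0.0, 0.0
    for j, x in enumerate(series, start=1):
        ps += x * math.cos(j * c)
        qs += x * math.sin(j * c)
        p.append(ps)
        q.append(qs)
    ncut = n_len // 10
    ns = list(range(1, ncut + 1))
    m = []
    for n in ns:
        s = sum((p[j + n] - p[j]) ** 2 + (q[j + n] - q[j]) ** 2
                for j in range(n_len - ncut))
        m.append(s / (n_len - ncut))
    # Pearson correlation of n with M(n)
    mn, mm = sum(ns) / ncut, sum(m) / ncut
    cov = sum((a - mn) * (b - mm) for a, b in zip(ns, m))
    var_n = sum((a - mn) ** 2 for a in ns)
    var_m = sum((b - mm) ** 2 for b in m)
    return cov / math.sqrt(var_n * var_m)

def zero_one_test(series):
    cs = [0.7, 1.1, 1.5, 1.9, 2.3]       # median over several angles
    ks = sorted(k_statistic(series, c) for c in cs)
    return ks[len(ks) // 2]

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(1000)]
periodic = [math.cos(2 * math.pi * j / 7) for j in range(1000)]
k_noise, k_periodic = zero_one_test(noise), zero_one_test(periodic)
```

Gaussian noise scores near 1 while a periodic signal scores near 0, which is exactly why a drop in K after denoising indicates that noise has been removed from an otherwise regular record.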

Keywords: Schreiber noise reduction, wavelet transform, particle swarm optimization, 0-1 test method, chaotic sequence denoising

Procedia PDF Downloads 181
13779 Environmental Potential of Biochar from Wood Biomass Thermochemical Conversion

Authors: Cora Bulmău

Abstract:

Soil polluted by hydrocarbon spills is a major global concern today. In response to this issue, our experimental study weighs an environmentally friendly option, the use of biochar, against a classical procedure, incineration of the contaminated soil. Biochar is the solid product obtained through the pyrolysis of biomass, with an additional use as an additive intended to improve soil quality. The positive effect of biochar addition to soil lies in its capacity to adsorb and contain petroleum products within its pores. Taking into consideration the capacity of biochar to interact with organic contaminants, the purpose of the present study was to establish experimentally the effects of adding wood-biomass-derived biochar to a soil contaminated with oil. The contaminated soil was amended with biochar (10%) produced by pyrolysis under different operational conditions of the thermochemical process. After 25 days, the concentration of petroleum hydrocarbons in the soil treated with biochar was measured. Soxhlet extraction was adopted as the analytical method to estimate the concentrations of total petroleum hydrocarbons (TPH) in the soil samples; this technique was applied to the contaminated soil as well as to the soils remediated by incineration or by biochar addition. Treatment of the soil using biochar obtained from pyrolysis of birch wood led to a considerable decrease in the concentrations of petroleum products. The incineration treatments conducted experimentally to clean up the same soil involved temperatures of about 600°C, 800°C and 1000°C and treatment times of 30 and 60 minutes. The experimental results revealed that the biochar method registered efficiency values up to those of all the incineration processes applied for the shortest time.

Keywords: biochar, biomass, remediation, soil, TPH

Procedia PDF Downloads 217
13778 An Extensible Software Infrastructure for Computer Aided Custom Monitoring of Patients in Smart Homes

Authors: Ritwik Dutta, Marylin Wolf

Abstract:

This paper describes the trade-offs and the design from scratch of a self-contained, easy-to-use health dashboard software system that provides customizable data tracking for patients in smart homes. The system is made up of different software modules and comprises a front-end and a back-end component. Built with HTML, CSS, and JavaScript, the front-end allows adding users, logging into the system, selecting metrics, and specifying health goals. The back-end consists of a NoSQL Mongo database, a Python script, and a SimpleHTTPServer written in Python. The database stores user profiles and health data in JSON format. The Python script uses the PyMongo driver library to query the database and displays formatted data as a daily snapshot of user health metrics against target goals. Any number of standard and custom metrics can be added to the system, and the corresponding health data can be fed automatically via sensor APIs, or manually as text or picture data files. A real-time METAR request API permits correlating weather data with patient health, and an advanced query system is implemented to allow trend analysis of selected health metrics over custom time intervals. Available on GitHub, the project is free to use for academic purposes of learning and experimenting, or for practical purposes by building on it.
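The daily-snapshot step can be sketched without a running Mongo instance by operating on documents in the same JSON shape the database would return; the schema and field names below are invented for illustration, not the project's actual ones.

```python
import json

# Sketch of the back-end's "daily snapshot" step: given a user document in a
# JSON shape like the one the Mongo database stores, compare each tracked
# metric against its goal. The schema and field names are illustrative.

profile = json.loads("""{
    "user": "alice",
    "goals": {"steps": 8000, "sleep_hours": 8},
    "today": {"steps": 9350, "sleep_hours": 6.5}
}""")

def snapshot(doc):
    lines = []
    for metric, goal in doc["goals"].items():
        value = doc["today"].get(metric, 0)
        status = "met" if value >= goal else "below goal"
        lines.append(f"{metric}: {value}/{goal} ({status})")
    return lines

report = snapshot(profile)
```

In the real system the `profile` document would come from a PyMongo `find_one` query instead of an inline string, but the formatting logic is the same.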

Keywords: flask, Java, JavaScript, health monitoring, long-term care, Mongo, Python, smart home, software engineering, webserver

Procedia PDF Downloads 374
13777 Development and Validation of High-Performance Liquid Chromatography Method for the Determination and Pharmacokinetic Study of Linagliptin in Rat Plasma

Authors: Hoda Mahgoub, Abeer Hanafy

Abstract:

Linagliptin (LNG) belongs to the dipeptidyl-peptidase-4 (DPP-4) inhibitor class. DPP-4 inhibitors represent a new therapeutic approach for the treatment of type 2 diabetes in adults. The aim of this work was to develop and validate an accurate and reproducible HPLC method for the determination of LNG in rat plasma with high sensitivity. The method involved separation of both LNG and pindolol (internal standard) at ambient temperature on a Zorbax Eclipse XDB C18 column with a mobile phase composed of 75% methanol : 25% formic acid 0.1% (pH 4.1) at a flow rate of 1.0 mL.min-1. UV detection was performed at 254nm. The method was validated in compliance with ICH guidelines and found to be linear in the range of 5–1000ng.mL-1. The limit of quantification (LOQ) was found to be 5ng.mL-1 based on 100µL of plasma. The variations for intra- and inter-assay precision were less than 10%, and the accuracy values ranged between 93.3% and 102.5%. The extraction recovery (R%) was more than 83%. The method involved a single extraction step on a very small plasma volume (100µL). The assay was successfully applied to an in-vivo pharmacokinetic study of LNG in rats administered a single oral dose of 10mg.kg-1 LNG. The maximum concentration (Cmax) was found to be 927.5 ± 23.9ng.mL-1. The area under the plasma concentration-time curve (AUC0-72) was 18285.02 ± 605.76h.ng.mL-1. In conclusion, the good accuracy and low LOQ of the bioanalytical HPLC method make it suitable for monitoring the full pharmacokinetic profile of LNG in rats. The main advantages of the method are its sensitivity, the small sample volume, the single-step extraction procedure and the short analysis time.
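An AUC such as the one reported above is conventionally obtained with the linear trapezoidal rule over the concentration-time profile; the sketch below uses invented values, not the rat data from the study.

```python
# Sketch of how AUC(0-t) is obtained from a concentration-time profile with
# the linear trapezoidal rule, the usual approach in non-compartmental
# pharmacokinetics. The time/concentration values are invented, not the
# rat data from the study.

def auc_trapezoid(times, concs):
    """Sum of trapezoid areas between successive sampling points."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))

times = [0, 0.5, 1, 2, 4, 8, 24, 48, 72]           # h
concs = [0, 400, 850, 920, 700, 450, 180, 60, 15]  # ng/mL
auc = auc_trapezoid(times, concs)                  # h.ng/mL
cmax = max(concs)                                  # read directly off the profile
```

Cmax and Tmax are read directly off the observed profile, while AUC(0-72) is this trapezoidal sum over the sampling interval.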

Keywords: HPLC, linagliptin, pharmacokinetic study, rat plasma

Procedia PDF Downloads 232