Search results for: signal classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3693

183 Web and Smart Phone-based Platform Combining Artificial Intelligence and Satellite Remote Sensing Data to Geoenable Villages for Crop Health Monitoring

Authors: Siddhartha Khare, Nitish Kr Boro, Omm Animesh Mishra

Abstract:

Recent food price hikes may signal the end of an era of predictably plentiful global grain crops, owing to climate change, population expansion, and dietary changes. Food consumption is expected to treble in 20 years, requiring enormous investment in production. Rainfall patterns and seasonal cycles have shifted over the past decade, altering climate and atmospheric conditions. India's tropical agriculture relies on evapotranspiration and the monsoons. In places with limited resources, global environmental change affects agricultural productivity and farmers' capacity to adjust to changing moisture patterns. Motivated by these difficulties, satellite remote sensing can be combined with near-surface imaging data (smartphones, UAVs, and PhenoCams) to enable phenological monitoring and fast evaluation of the field-level consequences of extreme weather events on smallholder agricultural output. To accomplish this, we must digitally map every community's agricultural boundaries and crop types. With improvements in satellite remote sensing technologies, a geo-referenced database can be created for rural Indian agricultural fields. Using AI, we can design digital agricultural solutions for individual farms. The main objective is to geo-enable each farm, along with its seasonal crop information, by combining artificial intelligence (AI) with satellite and near-surface data, and then to establish long-term crop monitoring through in-depth field analysis and scanning of fields with satellite-derived vegetation indices. We developed an AI-based algorithm to track time-lapse vegetation growth using PhenoCam or smartphone images. We also developed an Android application through which users can collect images of their fields. These images are sent to our local server, where further AI-based processing is performed.
We are creating digital boundaries for individual farms and connecting these farms with our smartphone application to collect information about farmers and their crops in each season. We extract satellite-based information for each farm from Google Earth Engine APIs and merge it with the crop data collected through our app, according to each farm's location, to create a database that reports crop quality by location.
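
As a sketch of the satellite-derived vegetation-index step, the snippet below computes NDVI from red and near-infrared reflectance. The band values are hypothetical placeholders; in the pipeline described above they would come from Google Earth Engine exports clipped to each farm's digital boundary.

```python
# Minimal sketch: NDVI from red and near-infrared reflectance bands.

def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    denom = nir + red
    return (nir - red) / denom if denom else 0.0

# Hypothetical per-date reflectance samples for one farm polygon
timeseries = [
    ("2023-06-01", 0.45, 0.12),  # (date, NIR, Red)
    ("2023-06-16", 0.52, 0.10),
    ("2023-07-01", 0.58, 0.08),
]

for date, nir, red in timeseries:
    print(date, round(ndvi(nir, red), 3))
```

A rising NDVI trend across dates like these is what the time-lapse growth algorithm would look for.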

Keywords: artificial intelligence, satellite remote sensing, crop monitoring, android and web application

Procedia PDF Downloads 100
182 In Silico Modeling of Drugs Milk/Plasma Ratio in Human Breast Milk Using Structures Descriptors

Authors: Navid Kaboudi, Ali Shayanfar

Abstract:

Introduction: Feeding infants with safe milk from the beginning of life is an important issue. Drugs taken by mothers can alter the composition of milk in ways that are not only unsuitable but also toxic for infants. Consuming permeable drugs during this sensitive period could cause serious side effects in the infant. Because of the ethical restrictions on drug testing in humans, especially in women during lactation, computational approaches based on structural parameters can be useful. The aim of this study is to develop mechanistic models to predict the milk/plasma (M/P) ratio of drugs during breastfeeding based on their structural descriptors. Methods: Two hundred and nine different chemicals with known M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value, following the Malone classification: (1) drugs with M/P > 1, considered high risk, and (2) drugs with M/P ≤ 1, considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software to assess penetration during breastfeeding. Four specific models, based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens, were then established for prediction; these descriptors can predict penetration with acceptable accuracy. For the remaining compounds of each model (N = 147, 158, 160, and 174 for models 1 to 4, respectively), binary logistic regression was performed with SPSS 21 to obtain a model predicting the penetration class of compounds. Only structural descriptors with p-value < 0.1 remained in the final model.
Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained to predict the penetration of drugs into human milk during breastfeeding. About 3-4% of milk consists of lipids, and the lipid content increases after parturition. Lipid-soluble drugs diffuse along with fats from plasma to the mammary glands, so lipophilicity plays a vital role in predicting the penetration class of drugs during lactation. The logistic regression models showed that compounds with a number of hydrogen bond acceptors, PSA, and TSA above 5, 90, and 25, respectively, are less permeable to milk because they are less soluble in the milk's fat fraction. Milk is slightly more acidic than plasma, so basic compounds tend to concentrate in milk relative to plasma, while acidic compounds may reach lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during lactation. The models can speed up the drug development process and save energy and costs. Milk/plasma ratio assessment of drugs otherwise requires multiple steps of animal testing, which raises its own ethical issues; QSAR modeling can help scientists reduce the amount of animal testing, and our models are suitable for that purpose.
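
The binary logistic-regression step can be sketched with scikit-learn in place of SPSS. The descriptor values, labels, and decision rule below are synthetic placeholders, not the study's data; they merely echo the reported trend that high HBA and PSA values reduce milk penetration.

```python
# Sketch: binary classification of milk penetration class from two
# structural descriptors (synthetic data, illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
hba = rng.integers(0, 12, n)    # number of hydrogen bond acceptors
psa = rng.uniform(10, 180, n)   # polar surface area

# Synthetic labeling rule echoing the reported trend:
# low HBA and low PSA -> more permeable (M/P > 1, class 1)
y = ((hba < 5) & (psa < 90)).astype(int)

X = np.column_stack([hba, psa])
model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", round(model.score(X, y), 3))
```

In the actual study, descriptors with p-value ≥ 0.1 would be dropped before fitting the final model.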

Keywords: logistic regression, breastfeeding, descriptors, penetration

Procedia PDF Downloads 71
181 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory

Authors: Xiaochen Mu

Abstract:

Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. 
However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders against the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This aligns well with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although it is not the only path, granting data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms, and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the "bundle of rights" theory, this paper establishes a specific three-level structure of data rights. This paper analyzes the cases Google v. Vidal-Hall, Halliday v. Creation Consumer Finance, Douglas v. Hello Ltd, Campbell v. MGN, and Imerman v. Tchenguiz. This paper concludes that recognizing property rights over personal data and protecting data within the framework of intellectual property will help to establish the tort of misuse of personal information.

Keywords: data protection, property rights, intellectual property, big data

Procedia PDF Downloads 39
180 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining

Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj

Abstract:

Small and medium enterprises (SMEs) play an important role in the economy of many countries. Considering the overall world economy, SMEs represent 95% of all businesses in the world, accounting for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators is primarily obtained via questionnaires, which are laborious and time-consuming, or is provided by financial institutions and is thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach to obtaining valuable insights in the business world. WM enables automatic, large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have frequently been studied to anticipate growth in sales volume for e-commerce platforms, their application to the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities.
In an initial step, we examine how structured and semi-structured Web data from governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company is used as ground truth (i.e. growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest, and Artificial Neural Network, are applied and compared, with the goal of optimizing prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.
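
The model-comparison step described above can be sketched with scikit-learn. The features here are synthetic stand-ins for web-mined indicators, and the classifier settings are illustrative defaults rather than the study's tuned configurations.

```python
# Sketch: comparing SVM, Random Forest, and a neural network by
# cross-validated accuracy on synthetic growth-labeled data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for web-mined SME features with growth labels
X, y = make_classification(n_samples=400, n_features=10, random_state=42)

models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=42),
    "ANN": MLPClassifier(max_iter=1000, random_state=42),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

With real web-mined features, the same loop would reveal which algorithm best exploits the additional indicators.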

Keywords: data mining, SME growth, success factors, web mining

Procedia PDF Downloads 267
179 Mapping the Suitable Sites for Food Grain Crops Using Geographical Information System (GIS) and Analytical Hierarchy Process (AHP)

Authors: Md. Monjurul Islam, Tofael Ahamed, Ryozo Noguchi

Abstract:

Progress continues in the fight against hunger, yet an unacceptably large number of people still lack the food they need for an active and healthy life. Bangladesh is one of the rising countries of South Asia, but many people there are still food insecure. In the last few years, Bangladesh has made significant achievements in food grain production, but food security from the national to the individual level remains a matter of major concern. Ensuring food security for all is one of the major challenges Bangladesh faces today, especially the production of rice in flood- and poverty-prone areas. The northern part of Bangladesh is more vulnerable than any other part. One of the best ways to ensure food security is to increase domestic production; to increase production, it is necessary to secure land and achieve optimum utilization of resources. One such measure is to identify vulnerable and potential areas using Land Suitability Assessment (LSA) to increase rice production in the poverty-prone areas. Therefore, the aim of the study was to identify sites suitable for producing rice, a food grain crop, in the poverty-prone areas of northern Bangladesh. Lack of knowledge of the best combination of factors that suit rice production has contributed to the low production. To fulfill the research objective, a multi-criteria analysis was performed and a crop-production suitability map was produced with the help of a Geographical Information System (GIS) and the Analytical Hierarchy Process (AHP). Primary and secondary data were collected from ground-truth information and relevant offices. The suitability levels for each factor were ranked following the FAO land suitability classification: Permanently Not Suitable (N2), Currently Not Suitable (N1), Marginally Suitable (S3), Moderately Suitable (S2), and Highly Suitable (S1).
The suitable sites were identified using spatial analysis and compared with recent raster imagery from Google Earth Pro® to validate the reliability of the suitability analysis. To produce a suitability map for rice farming using GIS and multi-criteria analysis, AHP was used to rank the relevant factors, and the resulting weights were used to create the suitability map with the weighted sum overlay tool in ArcGIS 10.3®. The suitability map for rice production in the study area was then formed. The weighted overlay showed that 22.74% (1337.02 km²) of the study area was highly suitable, 28.54% (1678.04 km²) moderately suitable, 14.86% (873.71 km²) marginally suitable, and 1.19% (69.97 km²) currently not suitable for rice farming. A further 32.67% (1920.87 km²) was permanently not suitable, being occupied by settlements, rivers, water bodies, and forests. This research provides local-level information that farmers could use to select suitable fields for rice production, and the approach can then be applied to other crops. It will also be helpful for field workers and policy planners serving the agricultural sector.
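
The AHP weighting step can be sketched as follows: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix, with a consistency check against Saaty's random index. The matrix values and the three criteria named in the comments are illustrative assumptions, not the study's actual judgments.

```python
# Sketch: deriving AHP criterion weights from a pairwise comparison matrix.
import numpy as np

# Example 3-criterion pairwise matrix (e.g. soil, rainfall, elevation)
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Principal eigenvector gives the criterion weights
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency ratio; RI = 0.58 for n = 3 (Saaty's random index table).
# A CR below 0.1 means the pairwise judgments are acceptably consistent.
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```

These weights would then feed the weighted sum overlay that produces the suitability raster.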

Keywords: AHP, GIS, spatial analysis, land suitability

Procedia PDF Downloads 241
178 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception

Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu

Abstract:

Opinion mining (OM) is one of the natural language processing (NLP) problems of determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. Data for OM is usually collected from various social media platforms. In an era where social media has considerable influence over companies' futures, it is worth understanding social media and acting accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become largely obsolete. The transfer learning paradigm commonly used in computer vision (CV) problems has lately begun to shape NLP approaches and language models (LM). This gave a sudden rise to the usage of pretrained language models (PTM), which contain language representations obtained by training on large datasets with self-supervised learning objectives. PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, named-entity recognition (NER), question answering (QA), and so forth. In this study, traditional and modern NLP approaches are evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT).
The MUSE model is multilingual, supporting 16 languages including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During training, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed their contribution to model performance to be insignificant, even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The multilingual MUSE model outperforms SVM but still performs worse than the monolingual BERT model.
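
The SVM-with-n-grams baseline can be sketched with scikit-learn. The toy English sentences below stand in for the private Turkish corpus; the texts and labels are invented for illustration.

```python
# Sketch: TF-IDF unigrams/bigrams fed to a linear SVM for polarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "great service, very happy", "terrible support, never again",
    "love this product", "awful experience", "fantastic quality",
    "worst purchase ever",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["very happy with the quality"]))
```

A PTM approach replaces the TF-IDF features with contextual embeddings and fine-tunes the encoder end to end, which is where the reported ~11% gain comes from.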

Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish

Procedia PDF Downloads 146
177 Variability Studies of Seyfert Galaxies Using Sloan Digital Sky Survey and Wide-Field Infrared Survey Explorer Observations

Authors: Ayesha Anjum, Arbaz Basha

Abstract:

Active Galactic Nuclei (AGN) are the actively accreting centers of galaxies that host supermassive black holes. AGN emit radiation at all wavelengths and also show variability across all wavelength bands. The analysis of flux variability tells us about the morphology of the emission region. Major classifications of AGN include (a) blazars, with featureless spectra, subclassified into BL Lacertae objects, flat-spectrum radio quasars (FSRQs), and others; (b) Seyferts, with prominent emission-line features, classified into broad-line and narrow-line Seyferts of Type 1 and Type 2; and (c) quasars and other types. The Sloan Digital Sky Survey (SDSS) is an optical survey based in New Mexico, USA, that has observed and classified billions of objects using automated photometric and spectroscopic methods. A sample of blazars was obtained from the third Fermi catalog. For variability analysis, we searched for light curves of these objects in the Wide-Field Infrared Survey Explorer (WISE) and Near-Earth Object WISE (NEOWISE) in two bands, W1 (3.4 microns) and W2 (4.6 microns), reducing the final sample to 256 objects. These objects are classified into 155 BL Lacs, 99 FSRQs, and 2 narrow-line Seyferts, namely PMN J0948+0022 and PKS 1502+036. Mid-infrared variability studies of these objects would be a contribution to the literature. With this motivation, the present work studies the final sample of 256 objects in general and the Seyferts in particular. Because the classification is automated, SDSS has misclassified some of these objects as quasars, galaxies, and stars; the reasons for the misclassification are explained in this work. The variability analysis of these objects uses the methods of flux amplitude variability and excess variance. The sample consists of observations in both W1 and W2 bands. PMN J0948+0022 is observed between MJD 57154.79 and 58810.57.
PKS 1502+036 is observed between MJD 57232.42 and 58517.11, amounting to a period of over six years. The data are divided into epochs spanning no more than 1.2 days. In all epochs, the sources are found to be variable in both W1 and W2 bands, confirming that they are variable at mid-infrared wavelengths on both long and short timescales. The sources were also examined for color variability: objects show either a bluer-when-brighter (BWB) or a redder-when-brighter (RWB) trend. A possible explanation for the BWB behavior of the present objects is that longer-wavelength radiation emitted by the source can be suppressed by the high-energy radiation from the central source. Another result is that the smallest radius of the emission region is about one light-day, since the epoch span used in this work is one day. The masses of the black holes at the centers of these sources are found to be less than or equal to 10⁸ solar masses.
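
The excess-variance statistic used in the variability analysis can be sketched as follows: the normalized excess variance subtracts the mean squared measurement error from the sample variance of the light curve and scales by the squared mean flux. The photometry values below are illustrative, not actual WISE measurements.

```python
# Sketch: normalized excess variance of a light curve,
# sigma_NXS^2 = (S^2 - <err^2>) / <x>^2.
import numpy as np

def normalized_excess_variance(flux: np.ndarray, err: np.ndarray) -> float:
    mean = flux.mean()
    s2 = flux.var(ddof=1)                      # sample variance of the flux
    return (s2 - np.mean(err**2)) / mean**2    # error-corrected, normalized

# Illustrative fluxes and uncertainties, not real W1/W2 photometry
flux = np.array([10.2, 10.8, 9.9, 11.1, 10.5])
err = np.array([0.1, 0.1, 0.12, 0.09, 0.1])
print(round(normalized_excess_variance(flux, err), 5))
```

A positive value indicates variability beyond what measurement noise alone would produce.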

Keywords: active galaxies, variability, Seyfert galaxies, SDSS, WISE

Procedia PDF Downloads 129
176 Oxalate Method for Assessing the Electrochemical Surface Area for Ni-Based Nanoelectrodes Used in Formaldehyde Sensing Applications

Authors: S. Trafela, X. Xu, K. Zuzek Rozman

Abstract:

In this study, we used an accurate and precise method to measure the electrochemically active surface area (Aecsa) of nickel electrodes. The calculated Aecsa is important for evaluating an electrocatalyst's activity in the electrochemical reactions of different organic compounds. The method involves the electrochemical formation of Ni(OH)₂ and NiOOH in the presence of adsorbed oxalate in alkaline media. The studies were carried out using cyclic voltammetry with polycrystalline nickel as a reference material, together with electrodeposited nickel nanowires and homogeneous and heterogeneous nickel films. From the cyclic voltammograms, the charge (Q) values for the formation of the Ni(OH)₂ and NiOOH surface oxides were calculated under various conditions. At sufficiently fast potential scan rates (200 mV s⁻¹), the adsorbed oxalate limits the growth of the surface hydroxides to a monolayer. Although the Ni(OH)₂/NiOOH oxidation peak overlaps with the oxygen evolution reaction, on the reverse scan the NiOOH/Ni(OH)₂ reduction peak is well separated from other electrochemical processes and can easily be integrated. The values of these integrals were used to correlate the experimentally measured charge density with the electrochemically active surface layer. The Aecsa values of the nickel nanowires and the homogeneous and heterogeneous nickel films were calculated to be Aecsa-NiNWs = 4.2066 ± 0.0472 cm², Aecsa-homNi = 1.7175 ± 0.0503 cm², and Aecsa-hetNi = 2.1862 ± 0.0154 cm². These results were then applied in electrochemical studies of formaldehyde oxidation: the nickel nanowires and the heterogeneous and homogeneous nickel films were used as simple and efficient sensors for formaldehyde detection. For this purpose, the electrodeposited nickel electrodes were modified in a 0.1 mol L⁻¹ KOH solution to obtain electrochemical activity towards formaldehyde.
The electrochemical behavior of formaldehyde oxidation in 0.1 mol L⁻¹ NaOH at the surface of the modified nickel nanowires and homogeneous and heterogeneous nickel films was investigated by electrochemical techniques such as cyclic voltammetry and chronoamperometry. From the effect of different formaldehyde concentrations (0.001 to 0.1 mol L⁻¹) on the current response, we derived the catalytic mechanism of formaldehyde oxidation and determined the detection limit and sensitivity of the nickel electrodes. The results indicated that nickel electrodes participate directly in the electrocatalytic oxidation of formaldehyde. In the overall reaction, formaldehyde in alkaline aqueous solution exists predominantly as CH₂(OH)O⁻, which is oxidized to CH₂(O)O⁻. Taking into account the determined Aecsa values, the sensitivities were calculated as 7 mA L mol⁻¹ cm⁻² for the nickel nanowires, 3.5 mA L mol⁻¹ cm⁻² for the heterogeneous nickel film, and 2 mA L mol⁻¹ cm⁻² for the homogeneous nickel film. The detection limit was 0.2 mM for the nickel nanowires, 0.5 mM for the porous Ni film, and 0.8 mM for the homogeneous Ni film. These results make nickel electrodes promising for further applications.
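
The conversion from integrated reduction-peak charge to Aecsa can be sketched as a division by a monolayer reference charge density. The reference value used below (257 µC cm⁻², a figure often assumed for Ni surface-oxide monolayers) and the example charge are assumptions for illustration, not numbers from the abstract.

```python
# Sketch: electrochemically active surface area from the integrated
# NiOOH -> Ni(OH)2 reduction peak charge.
Q_REF_UC_PER_CM2 = 257.0  # assumed monolayer charge density, uC cm^-2

def ecsa_cm2(peak_charge_uc: float) -> float:
    """Aecsa (cm^2) from the integrated reduction-peak charge (uC)."""
    return peak_charge_uc / Q_REF_UC_PER_CM2

# Illustrative charge value, not measured data
print(round(ecsa_cm2(1081.0), 3))
```

Normalizing the measured current by Aecsa computed this way is what turns raw currents into the area-corrected sensitivities reported above.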

Keywords: electrochemically active surface areas, nickel electrodes, formaldehyde, electrocatalytic oxidation

Procedia PDF Downloads 161
175 Fabrication of High-Aspect Ratio Vertical Silicon Nanowire Electrode Arrays for Brain-Machine Interfaces

Authors: Su Yin Chiam, Zhipeng Ding, Guang Yang, Danny Jian Hang Tng, Peiyi Song, Geok Ing Ng, Ken-Tye Yong, Qing Xin Zhang

Abstract:

Brain-machine interfaces (BMI) are a field rich in exploration opportunities, in which manipulation of neural activity is used to interconnect with a myriad of external devices. This research and its intensive development have expanded into areas ranging from medicine, gaming, and entertainment to safety and security. The technology has been extended to therapy for neurological disorders such as obsessive-compulsive disorder and Parkinson's disease by introducing current pulses into specific regions of the brain. Nonetheless, developing a BMI system for real-time observation, recording, and alteration of neural signals will require significant effort to overcome the obstacles to improving such a system without response delays. To date, the feature size of interface devices and the density of the electrode population remain limitations to achieving seamless BMI performance. Currently, BMI electrode diameters range from 10 to 100 microns. Hence, to enable precise monitoring at the single-cell level, smaller and denser nanoscale nanowire electrode arrays are vital. In this paper, we showcase the fabrication of high-aspect-ratio vertical silicon nanowire electrode arrays using microelectromechanical systems (MEMS) methods. Nanofabrication of the nanowire electrodes involves deep reactive ion etching, thermal oxide thinning, electron-beam lithography patterning, sputtering of metal targets, and bottom anti-reflection coating (BARC) etching. Metallization of the nanowire electrode tip is essential for optimizing the nanowire's electrical conductivity, and this step remains a challenge during fabrication. Metal electrodes were lithographically defined, yet these metal contacts have a size scale larger than the nanometer-scale building blocks, further limiting the potential advantages.
Therefore, we present an integrated contact solution that overcomes this size constraint through a self-aligned nickel silicidation process on the tips of the vertical silicon nanowire electrodes. A 4 × 4 array of vertical silicon nanowire electrodes with a diameter of 290 nm and a height of 3 µm has been successfully fabricated.

Keywords: brain-machine interfaces, microelectromechanical systems (MEMS), nanowire, nickel silicide

Procedia PDF Downloads 435
174 Understanding the Cause(s) of Social, Emotional and Behavioural Difficulties of Adolescents with ADHD and Its Implications for the Successful Implementation of Intervention(s)

Authors: Elisavet Kechagia

Abstract:

Due to the interplay of different genetic and environmental risk factors and its heterogeneous nature, the concept of attention deficit hyperactivity disorder (ADHD) has generated controversy and conflict, which have in turn been reflected in controversial arguments about its treatment. Taking into account recent, well-evidenced research suggesting that ADHD is a condition in which biopsychosocial factors are all woven together, the current paper explores the multiple risk factors that are likely to influence ADHD, with a particular focus on adolescents with ADHD who might experience comorbid social, emotional and behavioural disorders (SEBD). In the first section of this paper, the primary objective is to investigate the conflicting ideas regarding the definition, diagnosis and treatment of ADHD at an international level, and to critically examine and identify the limitations of the two most prevalent sets of diagnostic criteria that inform current diagnosis: the American Psychiatric Association's (APA) diagnostic scheme, DSM-V, and the World Health Organisation's (WHO) classification of diseases, ICD-10. Taking into consideration the findings of current longitudinal studies on ADHD's association with high rates of comorbid conditions and social dysfunction, in the second section the author investigates the transition points −physical, psychological and social− that students with ADHD might experience during early adolescence, as informed by neuroscience and developmental contextualism theory. The third section explores the different perspectives on ADHD as reflected in the self-reports of individuals with ADHD and the KENT project's findings on school staff's attitudes and practices.
In the last section, given the high rates of SEBD in adolescents with ADHD, it is examined how cognitive behavioural therapy (CBT), coupled with other interventions, could be effective in ameliorating anti-social behaviours and/or other emotional and behavioural difficulties of students with ADHD. The findings of a range of randomised controlled studies indicate that CBT might have positive outcomes for adolescents with multiple behavioural problems; hence, it is suggested that it be considered both in schools and in other community settings. Finally, taking into account the heterogeneous nature of ADHD, the different biopsychosocial and environmental risk factors at play during adolescence, and the discourse and practices concerning ADHD and SEBD, it is suggested how it might be possible to make sense of, and meaningful improvements to, the education of adolescents with ADHD within a multi-modal, multi-disciplinary, whole-school approach that addresses the multiple problems that not only students with ADHD but also their peers might experience. Further research based on larger-scale controlled studies, investigating the effectiveness of various interventions as well as the profiles of those students who have benefited from particular approaches and those who have not, will generate further evidence concerning the psychoeducation of adolescents with ADHD, allowing generalised conclusions to be drawn.

Keywords: adolescence, attention deficit hyperactivity disorder, cognitive behavioural therapy, comorbid social emotional behavioural disorders, treatment

Procedia PDF Downloads 319
173 Factors Associated with Hand Functional Disability in People with Rheumatoid Arthritis: A Systematic Review and Best-Evidence Synthesis

Authors: Hisham Arab Alkabeya, A. M. Hughes, J. Adams

Abstract:

Background: People with Rheumatoid Arthritis (RA) continue to experience problems with hand function despite new drug advances and targeted medical treatment. Consequently, it is important to identify the factors that influence the impact of RA disease on hand function. This systematic review identified observational studies that reported factors that influenced the impact of RA on hand function. Methods: The MEDLINE, EMBASE, CINAHL, AMED, PsycINFO, and Web of Science databases were searched from January 1990 up to March 2017. Full-text articles published in English that described factors related to hand functional disability in people with RA were selected following predetermined inclusion and exclusion criteria. Pertinent data were thoroughly extracted and documented using a pre-designed data extraction form by the lead author, and cross-checked by the review team for completeness and accuracy. Factors related to hand function were classified under the domains of the International Classification of Functioning, Disability, and Health (ICF) framework and health-related factors. Three reviewers independently assessed the methodological quality of the included articles using the quality of cross-sectional studies (AXIS) tool. Factors related to hand function that were investigated in two or more studies were explored using a best-evidence synthesis. Results: Twenty articles from 19 studies met the inclusion criteria from 1,271 citations; all presented cross-sectional data (five high-quality and 15 low-quality studies), resulting in at best limited evidence in the best-evidence synthesis. For the factors classified under the ICF domains, the best-evidence synthesis indicates that there was a range of body structure and function factors that were related to hand functional disability. However, key factors were hand strength, disease activity, and pain intensity. A low functional status (physical, emotional and social) level was found to be related to limited hand function.
For personal factors, there is limited evidence that gender is not related to hand function, whereas conflicting evidence was found regarding the relationship between age and hand function. In the domain of environmental factors, there was limited evidence that work activity was not related to hand function. Regarding health-related factors, there was limited evidence that the level of rheumatoid factor (RF) was not related to hand function. Finally, conflicting evidence was found regarding the relationship between hand function and disease duration and general health status. Conclusion: Studies focused on body structure and function factors, highlighting a lack of investigation into personal and environmental factors when considering the impact of RA on hand function. The level of evidence which exists was limited, but identified that modifiable factors such as grip or pinch strength, disease activity and pain are the most influential factors on hand function in people with RA. The review findings suggest that important personal and environmental factors that impact on hand function in people with RA are not yet considered or reported in clinical research. Well-designed longitudinal, preferably cohort, studies are now needed to better understand the causality between personal and environmental factors and hand functional disability in people with RA.

Keywords: factors, hand function, rheumatoid arthritis, systematic review

Procedia PDF Downloads 147
172 Impact Analysis of a School-Based Oral Health Program in Brazil

Authors: Fabio L. Vieira, Micaelle F. C. Lemos, Luciano C. Lemos, Rafaela S. Oliveira, Ian A. Cunha

Abstract:

Brazil has some challenges ahead related to population oral health, most of them associated with the need of expanding its promotion and prevention activities to the local level, offering equal access to services and promoting changes in the lifestyle of the population. The program implemented an oral health initiative in public schools in the city of Salvador, Bahia. The mission was to improve oral health among students in primary and secondary education, from 2 to 15 years old, using the school as a pathway to increase access to healthcare. The main actions consisted of a team's visit to the schools with educational sessions for dental cavity prevention and individual assessment. The program incorporated a clinical surveillance component through a dental evaluation of every student searching for dental disease and caries, standardization of the dentists’ team to reach uniform classification on the assessments, and the use of an online platform to register data directly from the schools. Sequentially, the students with caries were referred for free clinical treatment at the program’s Health Centre. The primary purpose of this study was to analyze the effects and outcomes of this school-based oral health program. The study sample was composed of data from a 3-year period (2015 to 2017) from 13 public schools in the suburbs of the city of Salvador, with a total of 9,278 assessments over this period. From the data collected, the prevalence of children with decay in permanent teeth was chosen as the most reliable indicator. The prevalence was calculated for each one of the 13 schools using the number of children with 1 or more dental caries in permanent teeth divided by the total number of students assessed at each school each year. Then the percentage change per year was calculated for each school.
Some schools presented a higher variation in the total number of assessments in one of the three years, so for these, the percentage change calculation was done using the two years with less variation. The results show that 10 of the 13 schools presented significant improvements in the indicator of caries in permanent teeth. Across the 13 schools, the mean percentage reduction in the number of students with caries was 26.8%, and the median was 32.2%. The highest percentage of improvement reached a decrease of 65.6% in the indicator. Three schools presented a rise in caries prevalence (8.9, 18.9 and 37.2% increase) that, on an initial analysis, seems to be explained by the students’ cohort rotation among other schools, as well as absenteeism from the treatment. In conclusion, the program shows a relevant impact on the reduction of caries in permanent teeth among students and the need for the continuity and expansion of this integrated healthcare approach. The significance of the articulation between the health and educational systems has also become evident, representing a fundamental approach to improve healthcare access for children, especially in scenarios such as those presented in Brazil.
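The indicator described above (per-school prevalence of caries in permanent teeth, and its year-on-year percentage change) can be sketched as a small calculation. This is an illustrative reconstruction, not the study's code, and the counts below are invented for demonstration.

```python
# Hypothetical sketch of the prevalence and percentage-change indicator
# described above; the school counts here are illustrative, not study data.

def prevalence(children_with_caries, total_assessed):
    """Share of assessed students with >= 1 carious permanent tooth."""
    return children_with_caries / total_assessed

def percent_change(prev_start, prev_end):
    """Relative change in prevalence between two years, as a percentage."""
    return (prev_end - prev_start) / prev_start * 100

# Illustrative two-year figures for one school
p_2015 = prevalence(120, 400)   # 0.30
p_2017 = prevalence(80, 400)    # 0.20
change = percent_change(p_2015, p_2017)
print(round(change, 1))  # negative value = reduction in prevalence
```

A negative percentage change corresponds to the improvements reported for 10 of the 13 schools; a positive one corresponds to the three schools whose prevalence rose.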

Keywords: primary care, public health, oral health, school-based oral health, data management

Procedia PDF Downloads 134
171 A Supply Chain Risk Management Model Based on Both Qualitative and Quantitative Approaches

Authors: Henry Lau, Dilupa Nakandala, Li Zhao

Abstract:

In today’s business environment, it is well recognized that risk is an important factor that needs to be taken into consideration before a decision is made. Studies indicate that both the number of risks faced by organizations and their potential consequences are growing. Supply chain risk management has become one of the major concerns for practitioners and researchers. Supply chain leaders and scholars are now focusing on the importance of managing supply chain risk. In order to meet the challenge of managing and mitigating supply chain risk (SCR), we must first identify the different dimensions of SCR and assess their relevant probability and severity. SCR has been classified in many different ways; there are no consistently accepted dimensions of SCRs, and several different classifications are reported in the literature. Basically, supply chain risks can be classified into two dimensions, namely disruption risk and operational risk. Disruption risks are those caused by events such as bankruptcy, natural disasters and terrorist attacks. Operational risks are related to supply and demand coordination and uncertainty, such as uncertain demand and uncertain supply. Disruption risks are rare but severe and hard to manage, while operational risks can be reduced through effective SCM activities. Other SCRs include supply risk, process risk, demand risk and technology risk. In fact, the disorganized classification of SCR has created confusion for SCR scholars. Moreover, practitioners need to identify and assess SCR. As such, it is important to have an overarching framework tying all these SCR dimensions together, for two reasons. First, it helps researchers use these terms for communication of ideas based on the same concept. Second, a shared understanding of the SCR dimensions will support researchers in focusing on the more important research objective: operationalization of SCR, which is very important for assessing SCR.
In general, the fresh food supply chain is subject to a certain level of risks, such as supply risk (low quality, delivery failure, hot weather, etc.) and demand risk (seasonal food imbalance, new competitors). Effective strategies to mitigate fresh food supply chain risk are required to enhance operations. Before implementing effective mitigation strategies, we need to identify the risk sources and evaluate the risk level. However, assessing supply chain risk is not an easy matter, and existing research mainly uses qualitative methods, such as the risk assessment matrix. To address the relevant issues, this paper aims to analyze the risk factors of the fresh food supply chain using an approach comprising both fuzzy logic and hierarchical holographic modeling techniques. This novel approach is able to take advantage of the benefits of both of these well-known techniques and at the same time offset their drawbacks in certain aspects. In order to develop this integrated approach, substantial research work is needed to effectively combine these two techniques in a seamless way. To validate the proposed integrated approach, a case study in a fresh food supply chain company was conducted to verify the feasibility of its functionality in a real environment.
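The fuzzy-logic side of risk assessment can be illustrated with a minimal sketch: fuzzify a risk's probability and severity with membership functions, combine them with simple rules, and defuzzify into a crisp score. The membership functions and rules below are invented for illustration; they are not the paper's model, which also incorporates hierarchical holographic modeling.

```python
# Minimal illustrative sketch of fuzzy risk scoring in the spirit of the
# approach described above. Membership functions and rules are assumptions
# made for demonstration, not the authors' actual model.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_risk(probability, severity):
    """Combine fuzzy memberships into a crisp risk score in [0, 1]."""
    low_p, high_p = tri(probability, -0.5, 0.0, 1.0), tri(probability, 0.0, 1.0, 1.5)
    low_s, high_s = tri(severity, -0.5, 0.0, 1.0), tri(severity, 0.0, 1.0, 1.5)
    # Rule firing strengths: min acts as a fuzzy AND
    high_risk = min(high_p, high_s)   # "if probability high AND severity high"
    low_risk = min(low_p, low_s)      # "if probability low AND severity low"
    # Defuzzify by a weighted average of the rule outputs (0 = low, 1 = high)
    total = high_risk + low_risk
    return high_risk / total if total else 0.5

print(fuzzy_risk(0.9, 0.8))  # close to 1: a high-probability, high-severity risk
```

A real implementation would use one rule base per risk dimension (supply, demand, process, etc.) and tune the membership functions with expert input.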

Keywords: fresh food supply chain, fuzzy logic, hierarchical holographic modelling, operationalization, supply chain risk

Procedia PDF Downloads 242
170 Reconceptualizing Evidence and Evidence Types for Digital Journalism Studies

Authors: Hai L. Tran

Abstract:

In the digital age, evidence-based reporting is touted as a best practice for seeking the truth and keeping the public well-informed. Journalists are expected to rely on evidence to demonstrate the validity of a factual statement and lend credence to an individual account. Evidence can be obtained from various sources, and due to a rich supply of evidence types available, the definition of this important concept varies semantically. To promote clarity and understanding, it is necessary to break down the various types of evidence and categorize them in a more coherent, systematic way. There is a wide array of devices that digital journalists deploy as proof to back up or refute a truth claim. Evidence can take various formats, including verbal and visual materials. Verbal evidence encompasses quotes, soundbites, talking heads, testimonies, voice recordings, anecdotes, and statistics communicated through written or spoken language. There are instances where evidence is simply non-verbal, such as when natural sounds are provided without any verbalized words. On the other hand, other language-free items exhibited in photos, video footage, data visualizations, infographics, and illustrations can serve as visual evidence. Moreover, there are different sources from which evidence can be cited. Supporting materials, such as public or leaked records and documents, data, research studies, surveys, polls, or reports compiled by governments, organizations, and other entities, are frequently included as informational evidence. Proof can also come from human sources via interviews, recorded conversations, public and private gatherings, or press conferences. Expert opinions, eye-witness insights, insider observations, and official statements are some of the common examples of testimonial evidence. Digital journalism studies tend to make broad references when comparing qualitative versus quantitative forms of evidence. 
Meanwhile, limited efforts are being undertaken to distinguish between sister terms, such as “data,” “statistical,” and “base-rate” on one side of the spectrum and “narrative,” “anecdotal,” and “exemplar” on the other. The present study seeks to develop an evidence taxonomy, which classifies evidence through the quantitative-qualitative juxtaposition and in a hierarchical order from broad to specific. According to this scheme, data, statistics, and base rate belong to the quantitative evidence group, whereas narrative, anecdote, and exemplar fall into the qualitative evidence group. Subsequently, the taxonomical classification arranges data versus narrative at the top of the hierarchy of types of evidence, followed by statistics versus anecdote and base rate versus exemplar. This research reiterates the central role of evidence in how journalists describe and explain social phenomena and issues. By defining the various types of evidence and delineating their logical connections, the taxonomy helps remove a significant degree of conceptual inconsistency, ambiguity, and confusion in digital journalism studies.
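The two-group, broad-to-specific hierarchy described above can be captured as a small data structure, which makes the paired ordering (data/narrative, statistics/anecdote, base rate/exemplar) explicit. This is only a sketch of the proposed scheme; the helper function and its return format are invented for illustration.

```python
# A sketch of the proposed evidence taxonomy as a simple data structure,
# following the quantitative-qualitative juxtaposition described above.
# The classify() helper and its output format are illustrative assumptions.

EVIDENCE_TAXONOMY = {
    "quantitative": ["data", "statistics", "base rate"],   # broad -> specific
    "qualitative":  ["narrative", "anecdote", "exemplar"], # broad -> specific
}

def classify(evidence_type):
    """Return (group, specificity rank) for a named evidence type;
    rank 0 is the broadest level of the hierarchy."""
    for group, types in EVIDENCE_TAXONOMY.items():
        if evidence_type in types:
            return group, types.index(evidence_type)
    raise ValueError(f"unknown evidence type: {evidence_type}")

print(classify("anecdote"))  # ('qualitative', 1): the mid level, paired with statistics
```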

Keywords: evidence, evidence forms, evidence types, taxonomy

Procedia PDF Downloads 67
169 Encephalon-An Implementation of a Handwritten Mathematical Expression Solver

Authors: Shreeyam, Ranjan Kumar Sah, Shivangi

Abstract:

Recognizing and solving handwritten mathematical expressions can be a challenging task, particularly the segmentation and classification of individual characters. This project proposes a solution that uses Convolutional Neural Networks (CNNs) and image processing techniques to accurately solve various types of equations, including arithmetic, quadratic, and trigonometric equations, as well as logical operations like logical AND, OR, NOT, NAND, XOR, and NOR. The proposed solution also provides a graphical solution, allowing users to visualize equations and their solutions. In addition to equation solving, the platform, called CNNCalc, offers a comprehensive learning experience for students. It provides educational content, a quiz platform, and a coding platform for practicing programming skills in different languages like C, Python, and Java. This all-in-one solution makes the learning process engaging and enjoyable for students. The proposed methodology includes horizontal compact projection analysis for segmentation and binarization, as well as connected component analysis and integrated connected component analysis for character classification. The compact projection algorithm compresses the horizontal projections to remove noise and obtain a clearer image, contributing to the accuracy of character segmentation. Experimental results demonstrate the effectiveness of the proposed solution in solving a wide range of mathematical equations. CNNCalc provides a powerful and user-friendly platform for solving equations, learning, and practicing programming skills. With its comprehensive features and accurate results, CNNCalc is poised to revolutionize the way students learn and solve mathematical equations. The platform utilizes a custom-designed Convolutional Neural Network (CNN) with image processing techniques to accurately recognize and classify symbols within handwritten equations.
The compact projection algorithm effectively removes noise from horizontal projections, leading to clearer images and improved character segmentation. Experimental results demonstrate the accuracy and effectiveness of the proposed solution in solving a wide range of equations, including arithmetic, quadratic, trigonometric, and logical operations. CNNCalc features a user-friendly interface with a graphical representation of the equations being solved, making it an interactive and engaging learning experience for users. The platform also includes tutorials, testing capabilities, and programming features in languages such as C, Python, and Java. Users can track their progress and work towards improving their skills.
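The segmentation step above builds on horizontal projection profiling: counting foreground pixels per row and splitting the image where the profile drops. The sketch below shows plain projection profiling only; the authors' compaction and noise-removal details are not reproduced, and the toy image is invented.

```python
import numpy as np

# Illustrative sketch of horizontal projection profiling, the basis of the
# compact projection segmentation step described above (the authors'
# compaction/noise-removal details are assumptions not reproduced here).

def horizontal_projection(binary_img):
    """Count foreground pixels per row of a binary (0/1) image."""
    return binary_img.sum(axis=1)

def find_text_rows(binary_img, threshold=1):
    """Return (start, end) row ranges whose projection meets the threshold."""
    proj = horizontal_projection(binary_img)
    rows, start = [], None
    for i, v in enumerate(proj):
        if v >= threshold and start is None:
            start = i
        elif v < threshold and start is not None:
            rows.append((start, i))
            start = None
    if start is not None:
        rows.append((start, len(proj)))
    return rows

# Toy image: two "text lines" separated by a blank row
img = np.array([[0, 1, 1, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 1, 0, 1]])
print(find_text_rows(img))  # [(0, 2), (3, 4)]
```

The same idea applied along columns within each detected row yields candidate character boxes, which a CNN can then classify.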

Keywords: AI, ML, handwritten equation solver, maths, computer, CNNCalc, convolutional neural networks

Procedia PDF Downloads 122
168 Temporal Changes Analysis (1960-2019) of a Greek Rural Landscape

Authors: Stamatia Nasiakou, Dimitrios Chouvardas, Michael Vrahnakis, Vassiliki Kleftoyanni

Abstract:

Recent research in the mountainous and semi-mountainous rural landscapes of Greece shows that they have changed significantly over the last 80 years. These changes take the form of structural modification of land cover/use patterns, the main characteristic being the extensive expansion of dense forests and shrubs at the expense of grasslands and extensive agricultural areas. The aim of this research was to study the 60-year changes (1960-2019) of land cover/use units in the rural landscape of Mouzaki (Karditsa Prefecture, central Greece). Relevant cartographic material, such as forest land use maps, digital maps (Corine Land Cover 2018), 1960 aerial photos from the Hellenic Military Geographical Service, and satellite imagery (Google Earth Pro 2014, 2016, 2017 and 2019), was collected and processed in order to study landscape evolution. ArcGIS v10.2.2 software was used to process the cartographic material and to produce several sets of data. The main products of the analysis were a digitized photo-mosaic of the 1960 aerial photographs, a digitized photo-mosaic of recent satellite images (2014, 2016, 2017 and 2019), and diagrams and maps of the temporal transformation of the rural landscape (1960-2019). Maps and diagrams were produced by applying photointerpretation techniques and a suitable land cover/use classification system to the two photo-mosaics. Demographic and socioeconomic inventory data were also collected, mainly from diachronic census reports of the Hellenic Statistical Authority and local sources. Data analysis of the temporal transformation of land cover/use units showed that changes are mainly located in the central and south-eastern part of the study area, which mainly includes the mountainous part of the landscape. The most significant change is the expansion of the dense forests that currently dominate the southern and eastern part of the landscape.
In conclusion, the produced diagrams and maps of the land cover/use evolution suggest that woody vegetation in the rural landscape of Mouzaki has significantly increased over the past 60 years at the expense of open areas, especially grasslands and agricultural areas. Demographic changes, land abandonment and the transformation of traditional farming practices (e.g. agroforestry) were recognized as the main causes of the landscape change. This study is part of a broader research project entitled “Perspective of Agroforestry in Thessaly region: A research on social, environmental and economic aspects to enhance farmer participation”. The project is funded by the General Secretariat for Research and Technology (GSRT) and the Hellenic Foundation for Research and Innovation (HFRI).
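A standard way to quantify the kind of land cover/use change described above is a transition matrix cross-tabulating each pixel's class at the two dates. The sketch below is a generic illustration; the class list and the tiny rasters are invented, not the study's data.

```python
import numpy as np

# Hypothetical sketch of a land cover/use transition matrix between two
# classified rasters (e.g. 1960 vs 2019); class codes and arrays are
# invented for illustration, not taken from the study.

CLASSES = ["grassland", "agriculture", "shrubland", "dense forest"]

def transition_matrix(lc_start, lc_end, n_classes):
    """Cross-tabulate per-pixel class transitions between two dates:
    entry [i, j] counts pixels that moved from class i to class j."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for a, b in zip(lc_start.ravel(), lc_end.ravel()):
        m[a, b] += 1
    return m

lc_1960 = np.array([[0, 0], [1, 2]])   # mostly grassland/agriculture
lc_2019 = np.array([[3, 0], [3, 3]])   # largely converted to dense forest
m = transition_matrix(lc_1960, lc_2019, len(CLASSES))
print(m[0, 3], "grassland pixels became dense forest")
```

Off-diagonal entries summarise change (e.g. grassland to dense forest), while the diagonal counts stable pixels.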

Keywords: agroforestry, forest expansion, land cover/use changes, mountainous and semi-mountainous areas

Procedia PDF Downloads 108
167 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger various cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined gene expression in media and neo-intima from plaques, as well as distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2,509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2,509 key genes and 102 DEGs with lipid-related genes from the GeneCards database.
The discriminative power of the six hub genes was assessed with a robust classifier, which achieved an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
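The AUC metric used above has a simple rank-based definition: the probability that a randomly chosen disease sample receives a higher classifier score than a randomly chosen control. The sketch below computes it directly; the labels and scores are invented, and this is not the study's classifier.

```python
# Sketch of evaluating a gene-signature classifier by ROC AUC, analogous to
# the AUC = 0.873 reported above; labels and scores are illustrative only.

def roc_auc(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability that a
    randomly chosen positive scores higher than a randomly chosen negative,
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]              # disease vs control
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]  # classifier output per sample
print(roc_auc(labels, scores))
```

An AUC of 0.5 means the signature is uninformative; values approaching 1 indicate near-perfect separation of the groups.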

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 66
166 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enables end users to use cloud services on a ‘pay per usage’ strategy. This technology is growing at a fast pace, and so is its security threat. Among the various services provided by the cloud is storage. In this service, security is a vital factor both for authenticating legitimate users and for the protection of information. This paper brings in efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor and multi-dimensional authentication system with multi-level security. User-behaviour-based biometrics provide unique identification with low intrusiveness and offer greater reliability than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here do not include a single trait but multiple ones, viz., iris and fingerprints. The coordinating stage of the authentication system is based on an ensemble Support Vector Machine (SVM): the weights of the base SVMs are assembled for the ensemble after each individual SVM is trained by the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template by a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing the recovery of the original text from the cipher text. This improves authentication performance: the proposed double cryptographic key scheme is capable of providing better user authentication and better security, distinguishing between genuine and fake users.
Thus, there are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation and 3) cryptographic key generation. The extraction of the feature and texture properties from the respective fingerprint and iris images is done initially. Finally, with the help of a fuzzy neural network and a symmetric cryptography algorithm, the double-key encryption technique has been developed. As the proposed approach is based on neural networks, the stored data cannot be decrypted by a hacker even if they have already been exfiltrated. The results prove that the authentication process is optimal and the stored information is secured.
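The ensemble decision step described above, where base SVM outputs are fused through assembled weights, can be sketched as a weighted vote over signed decision values. The weights and scores below are invented; in the paper they would come from AFSA-trained base SVMs, one per biometric trait.

```python
# Illustrative sketch of the weighted ensemble decision step: base classifier
# outputs are fused by weights (here fixed by hand; the paper tunes base SVMs
# with AFSA). All numbers below are assumptions for demonstration.

def ensemble_decision(base_scores, weights, threshold=0.0):
    """Weighted sum of base SVM decision values; sign gives the class."""
    fused = sum(w * s for w, s in zip(weights, base_scores))
    return ("genuine" if fused > threshold else "impostor"), fused

# Two base SVMs, e.g. one per biometric trait (fingerprint, iris)
scores = [0.7, -0.2]   # signed decision values from each base SVM
weights = [0.6, 0.4]   # ensemble weights (AFSA-optimised in the paper)
label, fused = ensemble_decision(scores, weights)
print(label, round(fused, 2))  # fused = 0.6*0.7 + 0.4*(-0.2) = 0.34
```

Here a confident fingerprint match outweighs a weak iris mismatch; with different weights the same scores could flip the decision, which is why the weight optimisation matters.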

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 259
165 Relationships of Plasma Lipids, Lipoproteins and Cardiovascular Outcomes with Climatic Variations: A Large 8-Year Period Brazilian Study

Authors: Vanessa H. S. Zago, Ana Maria H. de Avila, Paula P. Costa, Welington Corozolla, Liriam S. Teixeira, Eliana C. de Faria

Abstract:

Objectives: The outcome of cardiovascular disease is affected by the environment and climate. This study evaluated the possible relationships between climatic and environmental changes and the occurrence of biological rhythms in serum lipids and lipoproteins in a large population sample in the city of Campinas, State of Sao Paulo, Brazil. In addition, it determined the temporal variations of death due to atherosclerotic events in Campinas during the time window examined. Methods: A large 8-year retrospective study was carried out to evaluate the lipid profiles of individuals attended at the University of Campinas (Unicamp). The study population comprised 27,543 individuals of both sexes and of all ages. Normolipidemic and dyslipidemic individuals, classified according to the Brazilian guidelines on dyslipidemias, participated in the study. For the same period, the temperature, relative humidity and daily brightness records were obtained from the Centro de Pesquisas Meteorologicas e Climaticas Aplicadas a Agricultura/Unicamp, and frequencies of death due to atherosclerotic events in Campinas were acquired from the Brazilian official database DATASUS, according to the International Classification of Diseases. Statistical analyses were performed using both Cosinor and ARIMA temporal analysis methods. For cross-correlation analysis between climatic and lipid parameters, cross-correlation functions were used. Results: Preliminary results indicated that rhythmicity was significant for LDL-C and HDL-C in both normolipidemic and dyslipidemic subjects (n = 11,892 and 15,651, respectively; both measures increasing in winter and decreasing in summer). On the other hand, in dyslipidemic subjects triglycerides increased in summer and decreased in winter, in contrast to normolipidemic ones, in which triglycerides did not show rhythmicity.
The number of deaths due to atherosclerotic events showed significant rhythmicity, with maximum and minimum frequencies in winter and summer, respectively. Cross-correlation analyses showed that low humidity and temperature, higher thermal amplitude and dark cycles are associated with increased levels of LDL-C and HDL-C during winter. In contrast, TG showed moderate cross-correlations with temperature and minimum humidity in an inverse way: maximum temperature and humidity increased TG during the summer. Conclusions: This study showed a coincident rhythmicity between low temperatures and high concentrations of LDL-C and HDL-C and the number of deaths due to atherosclerotic cardiovascular events in individuals from the city of Campinas. The opposite behavior of cholesterol and TG suggests different physiological mechanisms in their metabolic modulation by changes in climate parameters. Thus, new analyses are underway to better elucidate these mechanisms, as well as variations in lipid concentrations in relation to climatic variations and their associations with atherosclerotic disease and death outcomes in Campinas.
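The Cosinor method named above fits a cosine of known period (here, one year) to a time series by linear least squares, yielding a MESOR (rhythm-adjusted mean), an amplitude, and an acrophase (timing of the peak). The sketch below is a minimal single-component version with a synthetic series; it is not the study's analysis pipeline.

```python
import numpy as np

# Minimal single-component cosinor sketch: fit y = M + A*cos(2*pi*t/P + phi)
# by linear least squares, as in the rhythmicity analysis described above.
# The synthetic monthly series below is illustrative, not study data.

def cosinor_fit(t, y, period):
    """Return (MESOR, amplitude, acrophase) of a cosinor fit at a fixed period."""
    w = 2 * np.pi * t / period
    # y = M + beta*cos(w) + gamma*sin(w) is linear in (M, beta, gamma)
    X = np.column_stack([np.ones_like(t), np.cos(w), np.sin(w)])
    mesor, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(beta, gamma)
    acrophase = np.arctan2(-gamma, beta)  # phase of the fitted cosine peak
    return mesor, amplitude, acrophase

# Synthetic LDL-C-like monthly series peaking in winter (12-month period)
t = np.arange(48, dtype=float)  # months
y = 120 + 10 * np.cos(2 * np.pi * t / 12) \
    + np.random.default_rng(0).normal(0, 1, 48)
mesor, amp, _ = cosinor_fit(t, y, period=12)
print(round(mesor), round(amp))  # roughly 120 and 10
```

Rhythmicity is typically declared significant when the fitted amplitude differs from zero by an F-test; the cross-correlation functions mentioned above would then relate such series to the climate records at various lags.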

Keywords: atherosclerosis, climatic variations, lipids and lipoproteins, associations

Procedia PDF Downloads 117
164 Impaired Transient Receptor Potential Vanilloid 4-Mediated Dilation of Mesenteric Arteries in Spontaneously Hypertensive Rats

Authors: Ammar Boudaka, Maryam Al-Suleimani, Hajar BaOmar, Intisar Al-Lawati, Fahad Zadjali

Abstract:

Background: Hypertension is increasingly becoming a matter of medical and public health importance. The maintenance of normal blood pressure requires a balance between cardiac output and total peripheral resistance. The endothelium, through the release of vasodilating factors, plays an important role in the control of total peripheral resistance and hence blood pressure homeostasis. Transient Receptor Potential Vanilloid type 4 (TRPV4) is a mechanosensitive non-selective cation channel that is expressed on the endothelium and contributes to endothelium-mediated vasodilation. So far, no data are available about the morphological and functional status of this channel in hypertensive cases. Objectives: This study aimed to investigate whether there is any difference in the morphological and functional features of TRPV4 in the mesenteric artery of normotensive and hypertensive rats. Methods: The functional features of TRPV4 in four experimental animal groups, young and adult Wistar-Kyoto rats (WKY-Y and WKY-A) and young and adult spontaneously hypertensive rats (SHR-Y and SHR-A), were studied by adding 5 µM 4αPDD (a TRPV4 agonist) to mesenteric arteries mounted in a four-chamber wire myograph and pre-contracted with 4 µM phenylephrine. The 4αPDD-induced response was investigated in the presence and absence of 1 µM HC067047 (a TRPV4 antagonist), 100 µM L-NAME (a nitric oxide synthase inhibitor), and endothelium. The morphological distribution of TRPV4 in the wall of rat mesenteric arteries was investigated by immunostaining. Real-time PCR was used in order to investigate the mRNA expression level of TRPV4 in the mesenteric arteries of the four groups. The collected data were expressed as mean ± S.E.M., with n equal to the number of animals used (one vessel was taken from each rat). To determine the level of significance, statistical comparisons were performed using Student’s t-test and considered significantly different at p<0.05.
Results: 4αPDD induced a relaxation response in the mesenteric arterial preparations (WKY-Y: 85.98% ± 4.18; n = 5) that was markedly inhibited by HC067047 (18.30% ± 2.86; n = 5; p<0.05), endothelium removal (19.93% ± 1.50; n = 5; p<0.05), and L-NAME (28.18% ± 3.09; n = 5; p<0.05). The 4αPDD-induced relaxation was significantly lower in SHR-Y than in WKY-Y (SHR-Y: 70.96% ± 3.65; n = 6, WKY-Y: 85.98% ± 4.18; n = 5-6; p<0.05). Moreover, the 4αPDD-induced response was significantly lower in WKY-A than in WKY-Y (WKY-A: 75.58% ± 1.30; n = 5, WKY-Y: 85.98% ± 4.18; n = 5; p<0.05). Immunostaining showed an immunofluorescent signal confined to the endothelial layer of the mesenteric arteries. The expression of TRPV4 mRNA in SHR-Y was significantly lower than in WKY-Y (SHR-Y: 0.67 RU ± 0.34; n = 4, WKY-Y: 2.34 RU ± 0.15; n = 4; p<0.05). Furthermore, TRPV4 mRNA expression in WKY-A was lower than in WKY-Y (WKY-A: 0.62 RU ± 0.37; n = 4, WKY-Y: 2.34 RU ± 0.15; n = 4; p<0.05). Conclusion: Stimulation of TRPV4, which is expressed on the endothelium of the rat mesenteric artery, triggers an endothelium-mediated relaxation response that markedly decreases with hypertension and with age, owing to downregulation of TRPV4 expression.

Keywords: hypertension, endothelium, mesenteric artery, TRPV4

Procedia PDF Downloads 313
163 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning

Authors: Ioanna Taouki, Marie Lallier, David Soto

Abstract:

Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative-forced-choice tasks (two linguistic: lexical decision task, visual attention span task, and one non-linguistic: emotion recognition task) including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how confidence ratings track accuracy in the task) relates to performance in general standardized tasks related to students' reading and general cognitive abilities using Spearman's and Bayesian correlation analysis. Second, we assessed whether or not young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across different task domains and evaluating cross-task covariance by applying a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability in this early stage is related to the longitudinal learning of children in a linguistic and a non-linguistic task. Notably, we did not observe any association between students’ reading skills and metacognitive processing in this early stage of reading acquisition. 
Some evidence consistent with domain-general metacognition was found, with significant positive correlations in metacognitive efficiency between the lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities and further stress the importance of creating educational programs that foster students' metacognitive ability as a tool for long-term learning. Further research is needed to understand whether such programs can enhance metacognitive ability as a transferable skill across distinct domains or whether individual domains should be targeted separately.
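The idea of confidence "tracking" accuracy can be illustrated with a simple non-parametric measure. The sketch below is illustrative only (the study itself fits a hierarchical Bayesian Signal Detection Theory model, not this statistic): it computes the area under the type-2 ROC, i.e. the probability that a randomly chosen correct trial receives higher confidence than an incorrect one. All trial data are invented.

```python
def type2_roc_area(correct, confidence):
    """Area under the type-2 ROC: the probability that a randomly chosen
    correct trial carries higher confidence than an incorrect one.
    0.5 = confidence does not track accuracy; 1.0 = perfect tracking."""
    hits = [c for ok, c in zip(correct, confidence) if ok]
    misses = [c for ok, c in zip(correct, confidence) if not ok]
    if not hits or not misses:
        return float("nan")
    # Each (hit, miss) pair counts 1 if the hit is more confident, 0.5 on ties
    wins = sum((h > m) + 0.5 * (h == m) for h in hits for m in misses)
    return wins / (len(hits) * len(misses))

# Toy trial-by-trial data: 1 = correct response, confidence on a 1-4 scale
correct = [1, 1, 0, 1, 0, 1, 0, 1]
confidence = [4, 2, 3, 4, 1, 3, 2, 4]
auroc = type2_roc_area(correct, confidence)
```

A value near 1 would indicate that a child's confidence reliably separates correct from incorrect trials, which is the intuition behind the metacognitive-efficiency measures compared across tasks in the study.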

Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition

Procedia PDF Downloads 150
162 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling

Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci

Abstract:

Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and land use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful to simulate the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, updated LC is an important parameter to be considered in atmospheric models, since it accounts for changes in the Earth's surface due to natural and anthropogenic actions and regulates the exchange of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: (i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and (ii) the CLC (Corine Land Cover) plus specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km, 5 km, and 1 km horizontal resolution).
In the 33-class LC approach, particular emphasis was placed on Portugal, given the higher spatial resolution of its LC data (100 m x 100 m) compared with the CLC data (5000 m x 5000 m). Regarding air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, particularly during spring/summer, and few research works relate this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, better model performance was achieved at rural stations: moderate correlation (0.4–0.7), BIAS (10–21 µg.m-3), and RMSE (20–30 µg.m-3), where higher average ozone concentrations were also estimated. Comparing both simulations, small differences grounded in the Leaf Area Index and air temperature values were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of LC in the exchange of atmospheric fluxes and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.
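The station-level evaluation metrics quoted above (correlation, BIAS, RMSE) follow from standard definitions. A minimal sketch, using invented hourly ozone values rather than the study's data:

```python
import math

def validation_stats(model, obs):
    """Mean bias, RMSE, and Pearson correlation of modelled vs. observed values."""
    n = len(model)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in model))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    r = cov / (sm * so)
    return bias, rmse, r

# Hypothetical modelled vs. observed ozone concentrations (µg/m3), one rural station
model = [62.0, 75.0, 88.0, 70.0, 55.0]
obs = [50.0, 64.0, 72.0, 61.0, 48.0]
bias, rmse, r = validation_stats(model, obs)
```

A positive bias, as in this toy case, would indicate systematic model overestimation of ozone relative to the monitoring network.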

Keywords: land use, spatial resolution, WRF-Chem, air quality assessment

Procedia PDF Downloads 157
161 Double Liposomes Based Dual Drug Delivery System for Effective Eradication of Helicobacter pylori

Authors: Yuvraj Singh Dangi, Brajesh Kumar Tiwari, Ashok Kumar Jain, Kamta Prasad Namdeo

Abstract:

The potential use of liposomes as drug carriers by i.v. injection is limited by their low stability in the blood stream: phospholipid exchange and transfer to lipoproteins, mainly HDL, destabilize and disintegrate liposomes with subsequent loss of content. To avoid the pain associated with injection and to obtain better patient compliance, various other dosage forms have been studied. The drawbacks of conventional liposomes (unilamellar and multilamellar), such as low entrapment efficiency, poor stability, and release of drug after a single breach in the external membrane, have led to a new type of liposomal system. The challenge has been successfully met in the form of Double Liposomes (DL). DL is a recently developed type of liposome, consisting of smaller liposomes enveloped in lipid bilayers. The outer lipid layer of DL can protect the inner liposomes against various enzymes; therefore, DL was thought to be more effective than ordinary liposomes. This concept was also supported by in vitro release characteristics, i.e., DL formation inhibited the release of drugs encapsulated in the inner liposomes. DL consists of several small liposomes encapsulated in large liposomes, i.e., multivesicular vesicles (MVV); therefore, DL should be distinguished from the ordinary classification of multilamellar vesicles (MLV), large unilamellar vesicles (LUV), and small unilamellar vesicles (SUV). However, for these liposomes, the volume of the inner phase is small and the loading volume of water-soluble drugs is low. In the present study, the potential of phosphatidylethanolamine (PE) lipid-anchored double liposomes (DL) to incorporate two drugs in a single system is exploited as a tool to augment the H. pylori eradication rate. Preparation of DL involves two steps: first, formation of primary (inner) liposomes containing one drug by the thin-film hydration method; then, addition of the suspension of inner liposomes to a thin film of lipid containing the other drug.
The formation of DL was confirmed by optical and transmission electron microscopy. DL-bacterial interaction was quantified in terms of percent growth inhibition (%GI) on the reference strain H. pylori ATCC 26695. To confirm the specific binding efficacy of DL to the H. pylori PE surface receptor, we performed an agglutination assay. Agglutination in the DL-treated H. pylori suspension suggested selectivity of DL towards the PE surface receptor of H. pylori. Monotherapy is generally not recommended for the treatment of H. pylori infection due to the danger of development of resistance and unacceptably low eradication rates. Therefore, combination therapy with amoxicillin trihydrate (AMOX) as the anti-H. pylori agent and ranitidine bismuth citrate (RBC) as the antisecretory agent was selected for the study, with the expectation that this dual-drug delivery approach will exert acceptable anti-H. pylori activity.
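The abstract does not spell out how %GI is computed, so the sketch below assumes the common definition based on optical-density readings of treated versus untreated cultures; the function name and all numbers are hypothetical.

```python
def percent_growth_inhibition(od_treated, od_control, od_blank=0.0):
    """%GI from optical-density readings: the fraction of bacterial growth
    suppressed relative to an untreated control culture, as a percentage."""
    growth_treated = od_treated - od_blank
    growth_control = od_control - od_blank
    return 100.0 * (1.0 - growth_treated / growth_control)

# Hypothetical OD600 readings for an H. pylori culture after DL treatment
gi = percent_growth_inhibition(od_treated=0.18, od_control=0.60, od_blank=0.05)
```

Under this definition, %GI of 0 means growth identical to the control and 100 means complete inhibition.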

Keywords: Helicobacter pylori, amoxicillin trihydrate, ranitidine bismuth citrate, phosphatidylethanolamine, multivesicular systems

Procedia PDF Downloads 207
160 A Multi-Perspective, Qualitative Study into Quality of Life for Elderly People Living at Home and the Challenges for Professional Services in the Netherlands

Authors: Hennie Boeije, Renate Verkaik, Joke Korevaar

Abstract:

In Dutch national policy, it is promoted that the elderly remain living at home longer, being admitted to a nursing home less often or only later in life. While living at home, it is important that they experience a good quality of life, and care providers in primary care support this. This study investigated what quality of life means for the elderly and which characteristics care should have to support living at home longer with quality of life. To explore this topic, a qualitative methodology was used. Four focus groups were conducted: two with elderly people who live at home and their family caregivers, one with district nurses employed in home-care services, and one with elderly care physicians working in primary care. In addition, individual interviews were conducted with general practitioners (GPs). In total, 32 participants took part in the study. The data were thematically analysed with MaxQDA software for qualitative analysis and reported. Quality of life is a multi-faceted term for the elderly. The essence of their description is that they can still undertake activities that matter to them. Good physical health, mental well-being, and social connections enable them to do this. Having control over their own lives is important to some. They are of the opinion that how they experience life and manage old age is related to their resilience and coping. GPs likewise define quality of life in terms of physical and mental health and social contacts; these are the three pillars. In addition, elderly care physicians mention security and safety, and district nurses add control over one's own life and meaningful daily activities. The professionals agree that with frail elderly people the balance is delicate, and a change in one of the three pillars can cause it to collapse like a house of cards. When discussing what support is needed, professionals agree on low-threshold access to care, prevention, and life course planning.
When care is provided in a timely manner, a worsening of the situation can be prevented. They agree that hospital care is often not needed, since most of the problems of the elderly have to do with care and security rather than with cure per se. GPs can consult elderly care physicians to lower their workload and to bring in specific knowledge. District nurses often signal changes in the situation of the elderly. According to them, the elderly predominantly need someone to watch over them and provide them with a feeling of security. Life course planning and advance care planning can contribute to uniform treatment in line with older adults' wishes. In conclusion, all stakeholders, including the elderly themselves, agree on what constitutes quality of life and on the quality of care needed to support it. A future challenge is to shape conditions for the right skill mix of professionals, for cooperation between the professions, and for breaking down differences in financing and supply. For the elderly, the challenge is preparing for aging.

Keywords: elderly living at home, quality of life, quality of care, professional cooperation, life course planning, advance care planning

Procedia PDF Downloads 128
159 Factors Affecting Early Antibiotic Delivery in Open Tibial Shaft Fractures

Authors: William Elnemer, Nauman Hussain, Samir Al-Ali, Henry Shu, Diane Ghanem, Babar Shafiq

Abstract:

Introduction: The incidence of infection in open tibial shaft injuries varies depending on the severity of the injury, with rates ranging from 1.8% for Gustilo-Anderson type I to 42.9% for type IIIB fractures. The timely administration of antibiotics upon presentation to the emergency department (ED) is an essential component of fracture management, and evidence indicates that prompt delivery of antibiotics is associated with improved outcomes. The objective of this study is to identify factors that contribute to expedient administration of antibiotics. Methods: This is a retrospective study of open tibial shaft fractures at an academic Level I trauma center. Current Procedural Terminology (CPT) codes identified all patients treated for open tibial shaft fractures between 2015 and 2021. Open fractures were identified by reviewing ED and provider notes, and ballistic fractures were considered open. Chart reviews were performed to extract demographics, fracture characteristics, postoperative outcomes, time to operating room, and time to antibiotic order and delivery. Univariate statistical analysis compared patients who received early antibiotics (EA), delivered within one hour of ED presentation, with those who received late antibiotics (LA), delivered beyond one hour of ED presentation. A multivariate analysis was performed to investigate patient, fracture, and transport/ED characteristics contributing to faster delivery of antibiotics. The multivariate analysis included the following predictors: ballistic fracture, activation of Delta Trauma, Gustilo-Anderson classification (Type III vs. Types I and II), AO-OTA classification (Type C vs. Types A and B), arrival between 7 am and 11 pm, and arrival via Emergency Medical Services (EMS) or walk-in. Results: Seventy ED patients with open tibial shaft fractures were identified. Of these, 39 patients (55.7%) received EA, while 31 patients (44.3%) received LA.
Univariate analysis showed that arrival via EMS rather than walk-in (97.4% vs. 74.2%, p = 0.01) and activation of Delta Trauma (89.7% vs. 51.6%, p < 0.001) were significantly more frequent in the EA group than in the LA group. Additionally, EA cases had significantly shorter intervals between antibiotic order and delivery than LA cases (0.02 hours vs. 0.35 hours, p = 0.007). No other significant differences were found in postoperative outcomes or fracture characteristics. Multivariate analysis showed that a Delta Trauma response, arrival via EMS, and presentation between 7 am and 11 pm were independent predictors of a shorter time to antibiotic administration (Odds Ratio = 11.9, 30.7, and 5.4; p = 0.001, 0.016, and 0.013, respectively). Discussion: Earlier antibiotic delivery is associated with arrival to the ED between 7 am and 11 pm, arrival via EMS, and a coordinated Delta Trauma activation. Our findings indicate that in cases where administering antibiotics is critical to achieving positive outcomes, it is advisable to employ a coordinated Delta Trauma response. Hospital personnel should be attentive to the rapid administration of antibiotics to patients with open fractures who arrive via walk-in or during late-night hours.
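As an illustration of how an association such as "EMS arrival vs. early antibiotics" can be quantified, the sketch below computes an unadjusted odds ratio with a Woolf 95% confidence interval from a 2x2 table. The counts are approximations reconstructed from the reported percentages (38/1 EMS/walk-in among 39 EA patients; 23/8 among 31 LA patients), and the result is deliberately not the paper's multivariate OR of 30.7, which adjusts for the other predictors.

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio and Woolf 95% CI for a 2x2 table:
    a = exposed & outcome, b = exposed & no outcome,
    c = unexposed & outcome, d = unexposed & no outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Approximate counts: EMS arrival (exposed) vs. walk-in, early antibiotics as outcome
or_, lo, hi = odds_ratio(a=38, b=23, c=1, d=8)
```

A confidence interval excluding 1 would indicate a statistically significant univariate association, which a multivariate model can then adjust for confounding.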

Keywords: antibiotics, emergency department, fracture management, open tibial shaft fractures, orthopaedic surgery, time to OR, trauma fractures

Procedia PDF Downloads 65
158 Diagnosis, Treatment, and Prognosis in Cutaneous Anaplastic Lymphoma Kinase-Positive Anaplastic Large Cell Lymphoma: A Narrative Review Apropos of a Case

Authors: Laura Gleason, Sahithi Talasila, Lauren Banner, Ladan Afifi, Neda Nikbakht

Abstract:

Primary cutaneous anaplastic large cell lymphoma (pcALCL) accounts for 9% of all cutaneous T-cell lymphomas. pcALCL is classically characterized as a solitary papulonodule that often enlarges, ulcerates, and can be locally destructive, but it typically follows an indolent course, with 5-year survival estimated at 90%. Distinguishing pcALCL from systemic ALCL (sALCL) is essential, as sALCL confers a poorer prognosis, with average 5-year survival of 40-50%. Although extremely rare, there have been several cases of ALK-positive ALCL diagnosed on skin biopsy without evidence of systemic involvement, which poses several challenges in the classification, prognostication, treatment, and follow-up of these patients. Objectives: We present a case of cutaneous ALK-positive ALCL without evidence of systemic involvement, together with a narrative review of the literature, to further characterize ALK-positive ALCL limited to the skin as a distinct variant with a unique presentation, history, and prognosis. A 30-year-old woman presented for evaluation of an erythematous-violaceous papule present on her right chest for two months. With the development of multifocal disease and persistent lymphadenopathy, a bone marrow biopsy and lymph node excisional biopsy were performed to assess for systemic disease. Both biopsies were unrevealing. The patient was counseled on pursuing systemic therapy consisting of Brentuximab, Cyclophosphamide, Doxorubicin, and Prednisone, given the concern for sALCL. Apropos of this patient, we searched the English literature for clinically evident cutaneous ALK-positive ALCL cases with and without systemic involvement. Risk factors, such as tumor location, number, size, ALK localization, ALK translocations, and recurrence, were evaluated in cases of cutaneous ALK-positive ALCL. The majority of patients with cutaneous ALK-positive ALCL did not progress to systemic disease.
The majority of adult cases that progressed to systemic disease had recurring skin lesions and cytoplasmic localization of ALK. ALK translocations did not influence disease progression. Mean time to disease progression was 16.7 months, and significant mortality (50%) was observed in those cases that progressed to systemic disease. Pediatric cases did not exhibit a trend similar to adult cases. In both the adult and pediatric groups, a subset of cutaneous-limited ALK-positive ALCL was treated with chemotherapy, and none of the cases treated with chemotherapy progressed to systemic disease. Apropos of an ALK-positive ALCL patient with clinically cutaneous-limited disease in the histologic presence of systemic markers, we discussed the literature data, highlighting the crucial issues in developing a clinical strategy to approach this rare subtype of ALCL. Physicians need to be aware of the overall spectrum of ALCL, including cutaneous-limited disease, systemic disease, disease with NPM-ALK translocation, disease with ALK and EMA positivity, and disease with skin recurrence.

Keywords: anaplastic large cell lymphoma, systemic, cutaneous, anaplastic lymphoma kinase, ALK, ALCL, sALCL, pcALCL, cALCL

Procedia PDF Downloads 83
157 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), delays in producing displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains was developed in this study to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit, where the images within a window form a basic unit. With this strategy, the RAM requirement is reduced to a single unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain retains temporarily coherent pixels that are present only in certain units rather than throughout the whole observation period. The chain supports real-time processing of continuous data, and the delay in creating displacement maps is shortened because processing need not wait for the entire dataset. The other chain measures deformation between discontinuous campaigns. Temporal averaging is carried out on the stack of images within a single campaign to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence.
The temporally averaged images are then processed by a dedicated interferometry procedure that integrates advanced interferometric SAR algorithms, such as robust coherence estimation, non-local filtering, and selection of partially coherent pixels. Experiments were conducted using both synthetic and real-world GBSAR data. Displacement time series at sub-millimetre level are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring across a wide range of scientific and practical applications.
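The unit-by-unit strategy that bounds RAM to one window of images can be sketched independently of the interferometric details. The generator below is illustrative only, not the actual SBAS chain: the "images" are stand-in values and the per-unit processing is a placeholder, but the memory-bounding pattern (hold at most one unit, emit a result per window, handle an endless stream) is the one described above.

```python
def process_in_units(image_stream, unit_size, process_unit):
    """Consume a (potentially endless) stream of acquisitions one window
    at a time, so memory holds at most `unit_size` images regardless of
    how long the continuous campaign runs."""
    unit = []
    for image in image_stream:
        unit.append(image)
        if len(unit) == unit_size:
            yield process_unit(unit)  # e.g. one displacement map per unit
            unit = []                 # release the window before the next one
    if unit:                          # trailing partial unit at shutdown
        yield process_unit(unit)

# Toy stand-in: "images" are scalars, the per-unit product is their mean
stream = iter(range(1, 11))  # 10 acquisitions arriving one by one
maps = list(process_in_units(stream, 4, lambda u: sum(u) / len(u)))
```

Because results are yielded as each window closes, displacement products become available during acquisition instead of after the full dataset, mirroring the real-time behaviour claimed for the first chain.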

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 161
156 Growth and Characterization of Cuprous Oxide (Cu2O) Nanorods by the Reactive Ion Beam Sputter Deposition (IBSD) Method

Authors: Assamen Ayalew Ejigu, Liang-Chiun Chao

Abstract:

In modern semiconductor and nanotechnology research, high-quality material synthesis, proper characterization, and production are major challenges. As cuprous oxide (Cu2O) is a promising semiconductor material for photovoltaic (PV) and other optoelectronic applications, this study aimed to grow and characterize high-quality Cu2O nanorods for improving the efficiency of thin film solar cells, among other potential applications. Well-structured Cu2O nanorods were successfully fabricated using the IBSD method, in which the Cu2O samples were grown on silicon substrates at a substrate temperature of 400°C in an IBSD chamber at a pressure of 4.5 x 10-5 torr, using copper as the target material. Argon and oxygen were used as the sputter and reactive gases, respectively. The Cu2O nanorods (NRs) were characterized in comparison with a Cu2O thin film (TF) deposited by the same method but with different Ar:O2 flow rates. With an Ar:O2 ratio of 9:1, single-phase polycrystalline Cu2O NRs with diameters of ~500 nm and lengths of ~4.5 µm were grown. On increasing the oxygen flow rate, a pure single-phase polycrystalline Cu2O thin film (TF) was obtained at an Ar:O2 ratio of 6:1. Field-emission scanning electron microscopy (FE-SEM) measurements showed that both samples have smooth morphologies. X-ray diffraction and Raman scattering measurements reveal the presence of single-phase Cu2O in both samples. The differences between the Raman scattering and photoluminescence (PL) bands of the two samples were also investigated, and the results showed differences in intensity, number of bands, and band positions. Raman characterization shows that the Cu2O NR sample has pronounced Raman band intensities and a higher number of Raman bands than the Cu2O TF, which shows only a single second-overtone Raman signal (217 cm-1).
Temperature-dependent photoluminescence (PL) measurements showed that the defect luminescence band centered at 720 nm (1.72 eV) is dominant for the Cu2O NRs, while the 640 nm (1.937 eV) band was the only PL band observed from the Cu2O TF. The difference in the optical and structural properties of the samples arises from the change in oxygen flow rate within the process window of the sample deposition. This provides a roadmap for further investigation of the electrical and other optical properties for the tunable fabrication of Cu2O nano/micro-structured samples, for improving the efficiency of thin film solar cells in addition to other potential applications. Finally, the novel morphology and excellent structural and optical properties show that the grown Cu2O NR sample is of sufficient quality for further research on nano/micro-structured semiconductor materials.
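The photon energies quoted for the PL bands follow from the standard wavelength-to-energy conversion E = hc/λ ≈ 1239.84 eV·nm / λ(nm), which can be checked directly against the reported values:

```python
def nm_to_ev(wavelength_nm):
    """Photon energy in eV from wavelength in nm: E = hc/lambda ~= 1239.84/lambda."""
    return 1239.84 / wavelength_nm

# The two PL bands reported for the Cu2O samples:
e_defect = nm_to_ev(720.0)  # defect band of the NRs, ~1.72 eV
e_film = nm_to_ev(640.0)    # band of the thin film, ~1.937 eV
```

Both computed energies match the values given in the abstract, confirming the band assignments are internally consistent.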

Keywords: defect levels, nanorods, photoluminescence, Raman modes

Procedia PDF Downloads 241
155 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings

Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey

Abstract:

Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in the safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, make the pharmacovigilance monitoring process tedious and time-consuming. Objective: To develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI from the sponsor's safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs, based on Hy's law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of the criteria were verified by comparing a manual review of all monthly cases with the system-generated monthly listings over six months. Results: On average, over the six-month period, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes, as opposed to the multiple hours exhausted by a cognitively laborious manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases, while heightening the vigilance of the drug safety department.
Additionally, the application of this algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). This algorithm also carries the potential for universal application, due to its product-agnostic data and keyword mining features. Plans for the tool include improving it into a fully automated application, thereby completely eliminating the manual screening process.
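The two screening constructs named in the Methods, the R-value pattern of injury and Hy's law, have standard definitions in regulatory DILI guidance. The sketch below is an illustration of those definitions applied to a hypothetical laboratory panel, not the sponsor's SQL/Spotfire implementation; all values and thresholds other than the standard ones are made up.

```python
def r_value(alt, alt_uln, alp, alp_uln):
    """Pattern-of-injury R-value: (ALT/ULN) / (ALP/ULN).
    Conventionally, R >= 5 suggests hepatocellular injury,
    R <= 2 cholestatic injury, and intermediate values mixed injury."""
    return (alt / alt_uln) / (alp / alp_uln)

def meets_hys_law(alt, alt_uln, tbil, tbil_uln):
    """Classic Hy's-law screen: ALT >= 3x its upper limit of normal (ULN)
    together with total bilirubin >= 2x its ULN."""
    return alt >= 3 * alt_uln and tbil >= 2 * tbil_uln

# Hypothetical case: ALT 210 U/L (ULN 40), ALP 100 U/L (ULN 120),
# total bilirubin 3.0 mg/dL (ULN 1.2)
r = r_value(210, 40, 100, 120)
flag = meets_hys_law(210, 40, 3.0, 1.2)
```

A case like this (R above 5 plus a Hy's-law hit) is exactly the kind the algorithm would retain in the monthly line listing for physician review.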

Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing

Procedia PDF Downloads 152
154 Cellular Targeting to Dual Gaseous Microenvironments by Polydimethylsiloxane Microchip

Authors: Samineh Barmaki, Ville Jokinen, Esko Kankuri

Abstract:

We report a microfluidic chip that can be used to modify the gaseous microenvironment of a cell culture in ambient atmospheric conditions. The aim of the study is to show the cellular response to nitric oxide (NO) under hypoxic (oxygen < 5%) conditions. Simultaneous targeting of hypoxia and nitric oxide will provide an opportunity for NO-based therapeutics. Studies on cellular responses to lowered oxygen concentration or to gaseous mediators are usually carried out in a specific macro environment, such as a hypoxia chamber, or with specific NO donor molecules that may have additional toxic effects. In our study, the chip consists of a microfluidic layer and a cell culture well, separated by a thin gas-permeable polydimethylsiloxane (PDMS) membrane. The main design goal is to separate the oxygen scavenger and NO donor solutions, which are often toxic, from the cell media. Two different types of gas exchangers, titled 'pool' and 'meander', were tested. We find that the pool design allows us to reach a higher level of oxygen depletion than the meander (24.32 ± 19.82% vs. -3.21 ± 8.81%). Our microchip design simplifies cell culture and makes it easy to adapt existing cell culture protocols. Our first application utilizes the chip to create hypoxic conditions on targeted areas of a cell culture. In this study, the oxygen scavenger sodium sulfite generates hypoxia, and its effect on human embryonic kidney (HEK-293) cells is examined. The PDMS membrane was coated with fibronectin before initiating cell cultures, and the cells were grown for 48 h on the chips before initiating the gas control experiments. The hypoxia experiments were performed by pumping O₂-depleted H₂O into the microfluidic channel at a flow rate of 0.5 ml/h. Image-iT® hypoxia reagent, which responds to the oxygen level, was mixed with the HEK-293 cells, and the fluorescent signal appeared on the stained cells after 6 h of pumping oxygen-depleted H₂O through the microfluidic channel in the pool area.
The exposure to different levels of O₂ can be controlled by varying the thickness of the PDMS membrane. Recently, we improved the design of the microfluidic chip so that it can control the microenvironment of two different gases at the same time, and the hypoxic response was also improved with the new design. The cells were grown on the thin PDMS membrane for 30 hours, and the oxygen scavenger was pumped into the microfluidic channel at a flow rate of 0.1 ml/h. We also show that sodium nitroprusside (SNP), a nitric oxide donor activated under light, can generate nitric oxide on top of the PDMS membrane when pumped through the channel. We aim to show the microenvironmental response of HEK-293 cells to both nitric oxide (by pumping SNP) and hypoxia (by pumping oxygen scavenger solution) in separate channels on one microfluidic chip.

Keywords: hypoxia, nitric oxide, microenvironment, microfluidic chip, sodium nitroprusside, SNP

Procedia PDF Downloads 134