Search results for: heading time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18209

13649 Comparison Conventional with Microwave-Assisted Drying Method on the Physicochemical Characteristics of Rice Bran Noodle

Authors: Chien-Chun Huang, Yi-U Chiou, Chiun-C.R. Wang

Abstract:

For longer shelf life of noodles, air drying is the traditional method of noodle preparation. Microwave drying has the specific advantage of rapid and uniform heating due to the penetration of microwaves into the body of the product. A microwave-assisted facility offers a quick and energy-saving method of food dehydration compared to the conventional air-drying method. Recently, numerous studies of the rheological characteristics of pasta or spaghetti have been carried out with microwave-assisted air driers, and many agricultural products have been dried successfully. However, there has been little research evaluating the physicochemical characteristics and cooking quality of microwave-assisted air-dried salted noodles. The purpose of this study was to compare the effects of conventional and microwave-assisted drying methods on the physicochemical properties and eating quality of rice bran noodles. Three microwave power levels, 0.5 kW, 0.75 kW and 1.0 kW, combined with 50 °C hot air, were applied to dehydrate the rice bran noodles. Rice bran was incorporated into the salted noodle formulation at three proportions ranging from 0 to 20%. The appearance, optimum cooking time, cooking yield and losses, texture profile analysis and sensory evaluation of the rice bran noodles were measured. The results indicated that the high-power (1.0 kW) microwave facility caused partial burning and a porous surface on the rice bran noodles, whereas no such defects appeared on the surface of noodles prepared with the low-power (0.5 kW) microwave facility. The optimum cooking time of the noodles decreased as higher microwave power was used or a higher proportion of rice bran was incorporated. Noodles with the higher proportion of rice bran (20%) or dried at higher microwave power showed higher color intensity and higher cooking losses than conventional air-dried noodles.
The firmness of cooked rice bran noodles decreased slightly when the noodles were dried by the high-power microwave-assisted method. The shearing force, tensile strength, elasticity and texture profile values of cooked rice noodles decreased as the proportion of rice bran increased. Sensory evaluation indicated that conventionally dried noodles achieved higher springiness, cohesiveness and acceptability than high-power (1.0 kW) microwave-assisted dried noodles. However, low-power (0.5 kW) microwave-assisted dried noodles showed sensory attributes and acceptability comparable to conventionally dried noodles. Moreover, the sensory attributes of firmness, springiness and cohesiveness decreased, while stickiness increased, as the rice bran proportion rose. These results suggest that incorporating a lower proportion of rice bran and using low-power microwave-assisted drying can produce noodles with a shorter cooking time and acceptable cooked quality compared to conventionally dried noodles.

Keywords: microwave-assisted drying method, physicochemical characteristics, rice bran noodles, sensory evaluation

Procedia PDF Downloads 482
13648 Temporal Profile of T2 MRI and 1H-MRS in the MDX Mouse Model of Duchenne Muscular Dystrophy

Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells

Abstract:

Duchenne muscular dystrophy (DMD) is an X-linked, lethal muscle-wasting disease for which there is currently no treatment that effectively prevents the muscle necrosis and progressive muscle loss. DMD is among the most common inherited diseases, affecting around 1 in 3500 live male births. MDX (X-linked muscular dystrophy) mice only partially recapitulate the human disease, displaying muscle weakness, muscle damage and edema during a so-called "critical period" in which these mice go through cycles of muscular degeneration and regeneration. Although the MDX mutant mouse has been extensively studied as a model for DMD, to date an extensive temporal, non-invasive imaging profile using magnetic resonance imaging (MRI) and 1H-magnetic resonance spectroscopy (1H-MRS) has not been performed. In addition, longitudinal imaging characterization has not been combined with attempts to exacerbate the progressive muscle damage by exercise. In this study, we employed an 11.7 T small-animal MRI scanner to characterize the MRI and MRS profile of MDX mice longitudinally over a 12-month period during which the mice were subjected to exercise. Male mutant MDX mice (n=15) and male wild-type mice (n=15) were subjected to a chronic exercise regime of treadmill walking (30 min/session) bi-weekly over the whole 12-month follow-up period. Mouse gastrocnemius and tibialis anterior muscles were profiled with baseline T2-MRI and 1H-MRS at 6 weeks of age. Imaging and spectroscopy were repeated at 3, 6, 9 and 12 months of age. Plasma creatine kinase (CK) measurements coincided with the T2-MRI and 1H-MRS time-points and were also taken after the "critical period" at 10 weeks of age. The results obtained from this study indicate that chronic exercise extends the dystrophic phenotype of MDX mice, as evidenced by T2-MRI and 1H-MRS.
T2-MRI revealed the extent and location of muscle damage in the gastrocnemius and tibialis anterior muscles as hyperintensities (lesions and edema) in exercised MDX mice over the follow-up period. The magnitude of the muscle damage remained stable over time in exercised mice. No evident fat infiltration or accumulation in the muscle tissues was seen at any time-point in exercised MDX mice. Creatine, choline and taurine levels evaluated by 1H-MRS from the same muscles were significantly decreased at each time-point. Extramyocellular (EMCL) and intramyocellular (IMCL) lipids did not change in exercised mice, supporting the findings on fat content from the anatomical T2-MRI scans. Creatine kinase levels were significantly higher in exercised MDX mice during the follow-up period and, importantly, remained stable over the whole follow-up period. Taken together, we have described here a longitudinal profile of muscle damage and muscle metabolic changes in MDX mice subjected to chronic exercise. The extent of muscle damage assessed by T2-MRI was stable through the follow-up period in the muscles examined. In addition, the metabolic profile, especially muscle creatine, choline and taurine levels, was sustained between time-points. The anatomical muscle damage evaluated by T2-MRI was supported by plasma CK levels, which remained stable over the follow-up period. These findings show that non-invasive imaging and spectroscopy can be used effectively to evaluate chronic muscle pathology. These techniques can also be used to evaluate the effect of various manipulations, such as the exercise applied here, on the phenotype of the mice. Many of the findings we present here are translatable to clinical disease, such as the decreased creatine, choline and taurine levels in muscle.
Imaging by T2-MRI and 1H-MRS also revealed that fat content and extramyocellular and intramyocellular lipids, respectively, are not changed in MDX mice, which is in contrast to the clinical manifestation of Duchenne muscular dystrophy. The findings show that non-invasive imaging can be used to characterize the phenotype of the MDX model and its translatability to clinical disease, and to study events that have traditionally not been examined, such as the sustained muscle damage caused by rigorous exercise after the "critical period". The ability of this model to display sustained damage beyond the spontaneous "critical period", and in turn to study drug effects on this extended phenotype, will increase the value of the MDX mouse model as a tool to study therapies and treatments aimed at DMD and associated diseases.

Keywords: 1H-MRS, MRI, muscular dystrophy, mouse model

Procedia PDF Downloads 357
13647 Developing a Cloud Intelligence-Based Energy Management Architecture Facilitated with Embedded Edge Analytics for Energy Conservation in Demand-Side Management

Authors: Yu-Hsiu Lin, Wen-Chun Lin, Yen-Chang Cheng, Chia-Ju Yeh, Yu-Chuan Chen, Tai-You Li

Abstract:

Demand-Side Management (DSM) has the potential to reduce electricity costs and the carbon emissions associated with electricity use in modern society. A home Energy Management System (EMS), commonly used by residential consumers in the down-stream sector of a smart grid to monitor, control, and optimize the energy efficiency of domestic appliances, provides computer-aided energy-audit functionality for residential DSM. Implementing fault detection and classification for the monitored, controlled, and optimized domestic appliances is one of the most important steps toward preventive maintenance, such as preventive maintenance of residential air conditioning and heating, in residential/industrial DSM. In this study, a cloud intelligence-based green EMS built on an Internet of Things (IoT) technology stack is developed for residential DSM. In the EMS, Arduino MEGA Ethernet communication-based smart sockets, each incorporating a Real-Time Clock chip that keeps track of the current time (synchronized via the Network Time Protocol) for timestamping, are designed and implemented to read load phenomena reflected in the sensed voltage and current signals. A Network-Attached Storage device, providing data access to a heterogeneous group of IoT clients via Hypertext Transfer Protocol (HTTP) methods, is configured to store the parsed sensor readings. Lastly, a desktop computer with a WAMP software bundle (the Microsoft® Windows operating system, Apache HTTP Server, the MySQL relational database management system, and the PHP programming language) serves as a data-science analytics engine providing a dynamic web app and RESTful web services for residential DSM with Artificial Intelligence (AI)/Computational Intelligence. An abstract computing machine, the Java Virtual Machine, enables the desktop computer to run Java programs, and a mash-up of Java, the R language, and Python is configured and well suited for the AI in this study.
With the ability to send real-time push notifications to IoT clients, the desktop computer implements Google-maintained Firebase Cloud Messaging to engage IoT clients across Android/iOS devices and provide a mobile notification service for residential/industrial DSM. In order to realize edge intelligence, in which edge devices avoid network latency and the always-on Internet connectivity required for Internet of Services, support secure access to data stores, and provide immediate analytical and real-time actionable insights at the edge of the network, we upgrade the designed and implemented smart sockets to embedded AI Arduino ones (called embedded AIduino). To realize edge analytics with the proposed embedded AIduino, an Arduino Ethernet shield (WizNet W5100) with a micro SD card connector is used. The SD library is included for reading parsed data from and writing parsed data to an SD card, and an Artificial Neural Network library for the Arduino MEGA, ArduinoANN, is imported for locally embedded AI. The embedded AIduino developed in this study can be extended to further applications in manufacturing-industry energy management and sustainable energy management; in the latter, rotating machinery diagnostics can, for example, identify energy loss due to gross misalignment and unbalance of rotating machines in power plants.
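As an illustration of the kind of load-signature analytics a smart socket can run on sensed voltage and current signals, the following Python sketch computes RMS values, real power and power factor from one sampled cycle. It is a minimal sketch of the edge-analytics logic, not the Arduino firmware itself; the sample waveforms and function name are our own illustrative choices.

```python
import math

def load_features(voltage, current):
    """Compute RMS voltage/current, real power and power factor
    from synchronously sampled waveforms (one value per sample)."""
    n = len(voltage)
    v_rms = math.sqrt(sum(v * v for v in voltage) / n)
    i_rms = math.sqrt(sum(i * i for i in current) / n)
    p_real = sum(v * i for v, i in zip(voltage, current)) / n
    s_apparent = v_rms * i_rms
    pf = p_real / s_apparent if s_apparent else 0.0
    return {"v_rms": v_rms, "i_rms": i_rms, "p": p_real, "pf": pf}

# Example: a purely resistive load sampled over one full cycle
samples = [math.sin(2 * math.pi * k / 64) for k in range(64)]
feats = load_features([230 * s for s in samples], [5 * s for s in samples])
```

For a resistive load the sketch yields a power factor of 1 and a real power of V_rms * I_rms; a fault-detection layer could then flag deviations of these features from an appliance's learned signature.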

Keywords: demand-side management, edge intelligence, energy management system, fault detection and classification

Procedia PDF Downloads 251
13646 An Assessment of Airport Collaborative Decision-Making System Using Predictive Maintenance

Authors: Faruk Aras, Melih Inal, Tansel Cinar

Abstract:

The coordination of airport staff, especially between the operations and maintenance departments, is important for airport operation, and improving this coordination increases the efficiency of all operations. A Collaborative Decision-Making (CDM) system therefore aims to improve the overall productivity of all operations by optimizing the use of resources and improving the predictability of actions. Increased productivity can be of major benefit to all airport operations and also increases cost-efficiency. This study explains how predictive maintenance using the IoT (Internet of Things), predictive operations, and statistical data such as Mean Time To Failure (MTTF) improve airport terminal operations and the utilization of airport terminal equipment in collaboration with a collaborative decision-making system/Airport Operation Control Center (AOCC). Data generated by the predictive maintenance methods are retrieved and analyzed by maintenance managers to predict when a problem is about to occur; with that information, maintenance can be scheduled when needed. As an example, an AOCC operator could assign a new gate knowing that all the equipment serving that gate, such as travellators, elevators, and escalators, is operational, provided the maintenance team works in collaboration with the AOCC, since the maintenance team is aware of the health of the equipment through predictive maintenance. Applying predictive maintenance methods based on analyzing the health of airport terminal equipment dramatically reduces the risk of downtime through on-time repairs. Calls can be classified as high priority, requiring urgent repair action; medium priority, requiring repair at the earliest opportunity; and low priority, allowing maintenance to be scheduled when convenient. In all cases, identifying potential problems early resulted in better allocation of airport terminal resources by the AOCC.
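The three-level call classification can be sketched as below, assuming a condition-monitoring anomaly score and an MTTF-based remaining-life estimate as inputs. The thresholds and names are illustrative placeholders, not values taken from the study.

```python
def priority(hours_run, mttf_hours, fault_score):
    """Classify a maintenance call for a piece of terminal equipment.

    hours_run:   operating hours accumulated so far
    mttf_hours:  Mean Time To Failure for this equipment class
    fault_score: 0..1 anomaly score from condition monitoring (assumed input)
    """
    remaining = max(mttf_hours - hours_run, 0)
    if fault_score > 0.8 or remaining == 0:
        return "high"      # urgent repair action
    if fault_score > 0.5 or remaining < 0.1 * mttf_hours:
        return "medium"    # repair at the earliest opportunity
    return "low"           # schedule maintenance when convenient
```

An AOCC dashboard could sort open calls by this priority so that gate assignments only use equipment whose calls are "low".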

Keywords: airport, predictive maintenance, collaborative decision-making system, Airport Operation Control Center (AOCC)

Procedia PDF Downloads 365
13645 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association

Authors: Jacky Liu

Abstract:

This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies of sports outcome prediction have considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic feature-engineering methodology, employing both time series and non-time-series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, the system offers an annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but also have practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets.
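The profit simulation behind a point-spread strategy can be sketched as follows. This is a generic flat-stake simulator at standard -110 odds, not the paper's actual betting model; the margin and line values used in any example are hypothetical.

```python
def simulate_spread_bets(pred_margins, true_margins, lines, stake=100.0):
    """Flat-stake point-spread betting at -110 odds.

    For each game, bet the home side when the predicted home margin
    beats the line, else the away side. A win returns stake * 100/110
    profit, a loss costs the stake, a push (exact line) returns 0.
    """
    bankroll = 0.0
    for pred, true, line in zip(pred_margins, true_margins, lines):
        pick_home = pred > line       # model says home covers the spread
        cover = true - line           # >0: home covered, <0: away covered
        if cover == 0:
            continue                  # push: stake returned
        won = (cover > 0) == pick_home
        bankroll += stake * (100 / 110) if won else -stake
    return bankroll
```

With a 52.4% hit rate the -110 vig makes this break even, which is why accuracy gains against the spread translate directly into profit.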

Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation

Procedia PDF Downloads 102
13644 Rethinking Urban Floodplain Management: The Case of Colombo, Sri Lanka

Authors: Malani Herath, Sohan Wijesekera, Jagath Munasingha

Abstract:

The impact of recent floods has become significant, and extraordinary flood events cause considerable damage to lives, properties and the environment, negatively affecting the development of the whole Colombo urban region. Even though the Colombo urban region experiences recurrent flood impacts, several spatial planning interventions have been made from time to time since the early 20th century. All past plans have adopted a traditional approach to flood management, using infrastructural measures to reduce the chance of flooding together with rigid planning regulations. The existing flood risk management practices are not well accepted by the local community, particularly the urban poor. Researchers have repeatedly reported differences between the flood risk estimations, priorities and concerns of experts and those of the local community. Risk-based decision making in flood management is not only a matter of technical facts; it depends significantly on how flood risk is viewed by the local community and individuals. Moreover, sustainable flood management is an integrated approach that relies on the joint actions of experts and the community. This indicates the need for further societal discussion on acceptable levels of flood risk indicators in order to prioritize and identify appropriate flood management measures in Colombo. The understanding and evaluation of flood risk by local people are important to integrate into the decision-making process. This research questions the gap between the level of flood risk acceptable to spatial planners and that acceptable to local communities in Colombo. A comprehensive literature review was conducted to prepare a framework for analyzing public perception in Colombo. This research identifies the factors that affect the variation of flood risk and its acceptable levels for both the local community and the planning authorities.
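The multi-criteria analysis named in the keywords can be illustrated with a simple weighted-sum score. The indicator names and weights below are hypothetical, intended only to show how planner and community weightings can score the same flood risk differently; they are not figures from the study.

```python
def mca_score(indicators, weights):
    """Weighted-sum multi-criteria score; indicators normalised to [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(indicators[k] * w for k, w in weights.items())

# Hypothetical risk indicators for one flood-prone site
site = {"depth": 0.8, "duration": 0.4, "livelihood_loss": 0.9}

# Hypothetical weightings: planners emphasise physical hazard,
# the community emphasises loss of livelihood
planner_w   = {"depth": 0.6, "duration": 0.3, "livelihood_loss": 0.1}
community_w = {"depth": 0.2, "duration": 0.2, "livelihood_loss": 0.6}
```

Comparing the two scores for the same site makes the perception gap between planners and residents explicit and quantifiable.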

Keywords: Colombo basin, public perception, urban flood risk, multi-criteria analysis

Procedia PDF Downloads 314
13643 Ajmer Dargah: Sustaining the Identity of a Religious Precinct

Authors: Vinod Chovvayil Panengal

Abstract:

The idea of secularism in India took a different direction after independence, when religion became a reason for a great divide in an otherwise harmonious society. Since then, religious spaces have become protected, more sacred, and no longer shared. However, there is a larger threat to the beliefs, rituals, and spirituality of these religions in the form of technology, tourism and globalization, which in a way weaken the importance of religion in our society over time. The importance of religion to a sense of place has been overlooked or diminished. Religion provides symbolic meaning to places, distinguishing certain physical environments from otherwise similar ones. The rapid transformation of urban spaces, eliminating territorial differences of sense, spirit and identity, has started creating urban centers that root out this genre of unique urban spaces from our cities. Indian cities, with strong identities created by rich and colorful overlays of culture through their evolution, have been threatened by this de-territorialization. This paper enquires into the relationship between the symbols of identity and the religiosity of a place, through spatial form, rituals and activity, and into how technology and the changing social structure can be accommodated within the bounds of that relationship. The subjects of this enquiry are Sufism and the Sufi city of Ajmer. Internal transformations in the ideologies of Islam and Sufism, and changes in the society surrounding it, triggered the phenomenon of de-territorialization. From this concern over the abated territory of Sufism inside the city derives the need to establish a symbiotic relationship between spiritual content and social life, through the manifestation of space, time and activity.
Redirecting transformation catalysts such as tourism and technology towards the improvement of physical and social conditions, the preservation of heritage, and the expansion of the notional idea of religion over the city will help to re-territorialize the city as a Sufi city.

Keywords: sense of place, religion, Islam, identity

Procedia PDF Downloads 273
13642 Psychological Predictors in Performance: An Exploratory Study of a Virtual Ultra-Marathon

Authors: Michael McTighe

Abstract:

Background: The COVID-19 pandemic caused the cancellation of many large-scale in-person sporting events, which led to an increase in the availability of virtual ultra-marathons. This study intended to assess how participation in virtual long-distance races relates to levels of physical activity over an extended period of time. Moreover, traditional ultra-marathons are known for being not only physically demanding but also mentally and emotionally challenging. A second component of this study was to assess how psychological constructs related to emotion regulation and mental toughness predict overall performance in the sport. Method: 83 virtual runners participating in a four-month 1000-kilometer race, with the option to exceed 1000 kilometers, completed a questionnaire exploring demographics, their performance, and their experience in the virtual race. Participants also completed the Difficulties in Emotion Regulation Scale (DERS) and the Sports Mental Toughness Questionnaire (SMTQ). Logistic regressions assessed these constructs' utility in predicting completion of the 1000-kilometer distance in the time allotted. Multiple regression was employed to predict the total distance traversed beyond 1000 kilometers during the four-month race. Result: Neither mental toughness nor emotion regulation was a significant predictor of completing the virtual race's basic 1000-kilometer finish. However, both variables included together were marginally significant predictors of total miles traversed over the entire event beyond 1000 km (p = .051). Additionally, participation in the event promoted an increase in healthy activity, with participants running and walking significantly more in the four months during the event than in the four months leading up to it. Discussion: This research intended to explore how psychological constructs relate to performance in a virtual type of endurance event, and how involvement in these types of events relates to levels of activity.
Higher levels of mental toughness and lower levels of difficulties in emotion regulation were associated with greater performance, and participation in the event promoted an increase in athletic involvement. Future psychological skill training aimed at improving emotion regulation and mental toughness may be used to enhance athletic performance in these sports, and future investigations of these events could explore how general participation may influence these constructs over time. Finally, these results suggest that participation in this logistically accessible and affordable type of sport can promote greater involvement in healthy activities related to running and walking.
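The logistic-regression step used to predict race completion can be sketched in a few lines; this is a stdlib-only gradient-descent implementation on toy data, not the study's actual model, predictors, or dataset.

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression.
    X: list of feature vectors (e.g. standardised SMTQ/DERS scores);
    y: 0/1 outcomes (e.g. finished the 1000 km distance or not)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))        # predicted probability
            err = p - yi                      # gradient of log-loss wrt z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 / (1 + math.exp(-z)) >= 0.5
```

In practice one would use a statistics package and report coefficients with p-values; the sketch only shows the shape of the prediction problem.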

Keywords: virtual races, emotion regulation, mental toughness, ultra-marathon, predictors in performance

Procedia PDF Downloads 94
13641 The Construction Technology of Dryer Silo Materials to Grains Made from Webbing Bamboo: A Drying Technology Solutions to Empowerment Farmers in Yogyakarta, Indonesia

Authors: Nursigit Bintoro, Abadi Barus, Catur Setyo Dedi Pamungkas

Abstract:

Indonesia is an agrarian country in which a large part of the population works as farmers. Among its popular agricultural commodities are paddy and corn. Production of paddy and corn has increased, but this has not been matched by the development of appropriate technology for farmers. The drying method farmers still apply relies on sunshine. Drying by this method has some drawbacks, such as differences in the moisture content of the corn grains, a drying time of around three days, and lower product quality. Moreover, drying with sunshine cannot be done when the rainy season arrives, and product dried in this season is of lower quality. One solution to the above problems is to build a dryer with simple technology: a silo dryer made from woven bamboo and wood. This technology can be applied by farmers' groups, and it is quite cheap to build. The experimental material used in this research was corn grain. The equipment used comprises a woven bamboo silo with a height of 3 meters and a capacity of up to 900 kg, a gas burner, a blower, a bucket elevator, thermocouples, and an Arduino Mega 2560 microcontroller. The system automatically records all temperature and relative humidity data. During drying, nine samples were taken every 30 minutes to measure moisture content with a moisture meter. By using this technology, farmers can save time, energy, and the cost of drying their agricultural products. In addition, the grains dried with this technology have good moisture content and a longer shelf life because the temperature during heating is controlled. This technology can therefore be applied widely, because the materials used to make the dryer are easy to find and cheap, and its construction is simple while giving good quality.
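The moisture-content bookkeeping behind the drying process can be sketched as follows, using the standard wet-basis formulas; the example masses and percentages are illustrative, not measurements from the study.

```python
def moisture_wet_basis(wet_mass, dry_mass):
    """Moisture content (% wet basis) from sample masses."""
    return 100 * (wet_mass - dry_mass) / wet_mass

def mass_at_target(initial_mass, mc_initial, mc_target):
    """Batch mass at which moisture falls from mc_initial to mc_target
    (% wet basis), assuming only water is removed (dry matter conserved)."""
    dry_matter = initial_mass * (1 - mc_initial / 100)
    return dry_matter / (1 - mc_target / 100)
```

For instance, a 900 kg batch at 25% moisture must be dried down to about 785 kg to reach 14% moisture, a common safe-storage level for corn; the operator can weigh (or sample) the batch against this target during the run.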

Keywords: grains, dryer, moisture content, appropriate technology

Procedia PDF Downloads 358
13640 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability

Authors: Akshay B. Pawar, Rohit Y. Parasnis

Abstract:

Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals than ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV and have compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval-based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in, or alternatives to, the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method) that has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method) and the tangent-intersection interval method (TI method), were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through digital filters.
After filtering, the following HRV parameters were computed from the PPG using each of the five methods and also from the ECG using the gold standard method: time-domain parameters (SDNN, pNN50 and RMSSD) and frequency-domain parameters (very-low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF) and total power (TP)). In addition, Poincaré plots were plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for determining most parameters and Poincaré plots are the P2D method (more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
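The time-domain parameters named above follow directly from the interbeat-interval series, whichever method extracted it. The sketch below implements the textbook definitions of SDNN, RMSSD and pNN50 (it is a generic illustration, not the authors' code, and the example intervals are made up):

```python
import statistics

def hrv_time_domain(rr_ms):
    """Time-domain HRV parameters from successive interbeat intervals (ms).

    SDNN:  standard deviation of all intervals
    RMSSD: root mean square of successive differences
    pNN50: % of successive differences exceeding 50 ms
    """
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    sdnn = statistics.stdev(rr_ms)
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    pnn50 = 100 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

# Illustrative 5-beat series (milliseconds)
res = hrv_time_domain([800, 810, 790, 850, 800])
```

Because RMSSD and pNN50 depend on beat-to-beat differences, small timing jitter in PPG-derived intervals inflates them, which is one way the pulse-based method overestimates short-term variability.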

Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot

Procedia PDF Downloads 324
13639 Evaluation of the Cytotoxicity and Cellular Uptake of a Cyclodextrin-Based Drug Delivery System for Cancer Therapy

Authors: Caroline Mendes, Mary McNamara, Orla Howe

Abstract:

Drug delivery systems are proposed for use in cancer treatment to specifically target cancer cells and deliver a therapeutic dose without affecting normal cells. For that purpose, the use of folate receptors (FRs) can be considered a key strategy, since they are commonly over-expressed in cancer cells. In this study, cyclodextrins (CDs) have been used as vehicles to target FRs and deliver the chemotherapeutic drug methotrexate (MTX). CDs have the ability to form inclusion complexes, in which molecules of suitable dimensions are included within their cavities. Here, β-CD has been modified using folic acid so as to specifically target the FR; thus, this drug delivery system consists of β-CD, folic acid and MTX (CDEnFA:MTX). Cellular uptake of folic acid is mediated with high affinity by folate receptors, while the cellular uptake of antifolates such as MTX is mediated with high affinity by the reduced folate carriers (RFCs). This study addresses the gene (mRNA) and protein expression levels of FRs and RFCs in the cancer cell lines CaCo-2, SKOV-3, HeLa, MCF-7 and A549 and the normal cell line BEAS-2B, quantified by real-time polymerase chain reaction (real-time PCR) and flow cytometry, respectively. From these, four cell lines with different levels of FRs were chosen for cytotoxicity assays of MTX and CDEnFA:MTX using the MTT assay. Real-time PCR and flow cytometry data demonstrated that all cell lines ubiquitously express moderate levels of RFC. These experiments also showed that FR protein levels are high in CaCo-2 cells, moderate in SKOV-3, HeLa and MCF-7 cells, and low in A549 and BEAS-2B cells. FRs are highly expressed in all the cancer cell lines analysed when compared to the normal cell line BEAS-2B. The cell lines CaCo-2, MCF-7, A549 and BEAS-2B were used in the cell viability assays.
Forty-eight hours of treatment with the free drug and the complex resulted in IC50 values of 93.9 µM ± 15.2 and 56.0 µM ± 4.0, respectively, for CaCo-2, 118.2 µM ± 16.8 and 97.8 µM ± 12.3 for MCF-7, 36.4 µM ± 6.9 and 75.0 µM ± 10.5 for A549, and 132.6 µM ± 16.1 and 288.1 µM ± 26.3 for BEAS-2B. These results demonstrate that free MTX is more toxic towards cell lines expressing low levels of FR, such as BEAS-2B. More importantly, the inclusion complex CDEnFA:MTX showed greater cytotoxicity than the free drug towards the high-FR-expressing CaCo-2 cells, indicating that it has the potential to target this receptor, enhancing the specificity and efficiency of the drug. Cell imaging by confocal microscopy has allowed visualisation of FR targeting in cancer cells, as well as identification of the internalisation pathway of the drug. Hence, the cellular uptake and internalisation process of this drug delivery system is being addressed.
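IC50 values such as those above are read off a dose-response curve from the MTT assay. A minimal linear-interpolation estimator is sketched below; the dose and viability figures in the example are illustrative, not the study's data, and real analyses usually fit a four-parameter logistic curve instead.

```python
def ic50_interp(doses, viability):
    """Estimate IC50 by linear interpolation of the dose at which
    viability first crosses 50% (doses ascending, viability in %)."""
    points = list(zip(doses, viability))
    for (d0, v0), (d1, v1) in zip(points, points[1:]):
        if v0 >= 50 >= v1:  # the 50% crossing lies in this segment
            return d0 + (v0 - 50) * (d1 - d0) / (v0 - v1)
    return None  # viability never dropped to 50% in the tested range

# Hypothetical dose-response readings (µM, % viability)
est = ic50_interp([10, 50, 100, 200], [90, 70, 40, 20])
```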

Keywords: cancer treatment, cyclodextrins, drug delivery, folate receptors, reduced folate carriers

Procedia PDF Downloads 310
13638 Optimal Design of Tuned Inerter Damper-Based System for the Control of Wind-Induced Vibration in Tall Buildings through Cultural Algorithm

Authors: Luis Lara-Valencia, Mateo Ramirez-Acevedo, Daniel Caicedo, Jose Brito, Yosef Farbiarz

Abstract:

Controlling wind-induced vibrations, as well as aerodynamic forces, is an essential part of the structural design of tall buildings in order to guarantee the serviceability limit state of the structure. This paper presents a numerical investigation on the optimal design parameters of a Tuned Inerter Damper (TID) based system for the control of wind-induced vibration in tall buildings. The control system is based on the conventional TID, with the main difference that its location is changed from the ground level to the last two story levels of the structural system. The TID tuning procedure is based on an evolutionary cultural algorithm in which the optimum design variables, defined as the frequency and damping ratios, were searched according to the optimization criterion of minimizing the root mean square (RMS) response of displacements at the nth story of the structure. A Monte Carlo simulation was used to represent the dynamic action of the wind in the time domain, in which a time series derived from the Davenport spectrum using eleven harmonic functions with randomly chosen phase angles was reproduced. The above-mentioned methodology was applied to a case study derived from a 37-story, 144-m-tall prestressed concrete building in which the wind action overcomes the seismic action. The results showed that the optimally tuned TID is effective in reducing the RMS response of displacements by up to 25%, which demonstrates the feasibility of the system for the control of wind-induced vibrations in tall buildings.
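The wind simulation step described above, a time series synthesized from the Davenport spectrum with eleven harmonics and random phase angles, can be sketched as follows. The mean wind speed, frequency band, and drag coefficient below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def davenport_spectrum(f, u10=30.0, kappa=0.005):
    """One-sided Davenport along-wind velocity spectrum S(f).
    u10: mean wind speed at 10 m (m/s); kappa: surface drag coefficient."""
    x = 1200.0 * f / u10  # non-dimensional frequency (1200 m length scale)
    return 4.0 * kappa * u10**2 * x**2 / (f * (1.0 + x**2) ** (4.0 / 3.0))

def wind_time_series(t, n_harmonics=11, f_min=0.01, f_max=1.0, seed=0):
    """Fluctuating wind speed as a sum of harmonics with random phases
    (spectral-representation Monte Carlo method)."""
    rng = np.random.default_rng(seed)
    f = np.linspace(f_min, f_max, n_harmonics)
    df = f[1] - f[0]
    amp = np.sqrt(2.0 * davenport_spectrum(f) * df)    # harmonic amplitudes
    phi = rng.uniform(0.0, 2.0 * np.pi, n_harmonics)   # random phase angles
    return sum(a * np.cos(2.0 * np.pi * fi * t + p)
               for a, fi, p in zip(amp, f, phi))

t = np.arange(0.0, 600.0, 0.1)   # 10 minutes sampled at 10 Hz
u = wind_time_series(t)          # fluctuating component, to be added to the mean
```

Each random draw of the phase vector gives one Monte Carlo realization of the wind history used to excite the structural model.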

Keywords: evolutionary cultural algorithm, Monte Carlo simulation, tuned inerter damper, wind-induced vibrations

Procedia PDF Downloads 135
13637 Using Eigenvalues and Eigenvectors in Population Growth and Stability Obtaining

Authors: Abubakar Sadiq Mensah

Abstract:

Knowledge of the population growth of a nation is paramount for national planning. The population of a place is studied and a model is developed over a period of time; matrices are used to form the model for population growth. The eigenvalue λ of the matrix A and its corresponding eigenvector X, satisfying AX = λX, are calculated. The stable age distribution of the population is obtained using the eigenvalue and the characteristic polynomial. Hence, estimations can be made using eigenvalues and eigenvectors.
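As a minimal sketch of the AX = λX computation: for a Leslie-type population matrix, the dominant eigenvalue gives the long-run growth factor and its eigenvector gives the stable age distribution. The matrix entries below are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical 3-age-class Leslie matrix: first row holds fecundities,
# the sub-diagonal holds survival probabilities between age classes.
A = np.array([[0.0, 1.5, 1.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.5, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # index of the dominant eigenvalue
lam = eigvals[k].real                  # growth factor per time step (λ)
X = np.abs(eigvecs[:, k].real)
stable_age = X / X.sum()               # stable age distribution (sums to 1)

# The defining relation AX = λX holds for the dominant pair.
assert np.allclose(A @ eigvecs[:, k], eigvals[k] * eigvecs[:, k])
```

Here λ > 1, so this hypothetical population grows each time step, and the population's age structure converges to `stable_age` regardless of the initial distribution.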

Keywords: eigenvalues, eigenvectors, population, growth/stability

Procedia PDF Downloads 521
13636 Central American Security Issue: Civil War Legacy and Contemporary Challenges

Authors: Olga Andrianova, Lazar Jeifets

Abstract:

The security issue has always been one of the most sensitive and significant in the Latin American context, especially in the Central American region. Despite the fact that the time of the civil wars has ended, violence, delinquency, insecurity, and discrimination still exist and remain relevant in the 21st century. This article is dedicated to considering these problems, identifying their main causes, and proposing approaches to their solution.

Keywords: Central America, insecurity, instability, post-war countries, violence

Procedia PDF Downloads 473
13635 The Comparison of Personality Background of Volunteer and Non-Volunteer Subjects

Authors: Laszlo Dorner

Abstract:

Background: In the last few decades there has been significant discussion among researchers of prosocial behavior about the extent to which personality characteristics determine the quality and frequency of helping behaviors. Of these community activities, the most important is formal volunteering, which mainly takes place in civil services and organizations. Recently, many studies have appeared regarding the personality factors and motivations behind volunteering. Most of these studies found strong correlations between Agreeableness and Extraversion as global traits and both the time spent on volunteering and its frequency. Aims of research: In this research we investigate the relation between formal volunteer activities and global traits in a Hungarian volunteer sample. We hypothesise that the patterns reported in previous research hold in Hungary as well: volunteering would be related to Agreeableness and Extraversion. We also assume that the time spent on volunteering is related to these traits, since they could serve as indicators of long-term volunteering. Methods: We applied the Hungarian adaptation of the Big Five Questionnaire created by Caprara, Barbaranelli and Borgogni. This self-report questionnaire contains 132 items and explores 5 main traits examining the person's most important emotional and motivational personality features. This research took into account the most important socio-economic factors (age, gender, religiosity, income) which can determine volunteer activities per se. The data were evaluated with SPSS 19.0 statistical software. Sample: 92 volunteers (formal, mainly volunteers of the Hungarian Red Cross and hospice organizations) and 92 non-volunteers, with subsamples matched by age, gender and qualification.
Results: The volunteer subsample shows higher values of Energy and significantly higher values of Agreeableness and Openness; however, regarding Conscientiousness and Emotional Stability, the differences between the volunteer and non-volunteer subsamples are not significant.

Keywords: Big Five, comparative analysis, global traits, volunteering

Procedia PDF Downloads 350
13634 A Novel Method for Face Detection

Authors: H. Abas Nejad, A. R. Teymoori

Abstract:

Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, etc. in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby bypassing those frames from emotion classification, saves computational power. In this work, we propose a light-weight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at Key Emotion (KE) points using a textural statistical model, constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.

Keywords: neutral vs. emotion classification, Constrained Local Model, procrustes analysis, Local Binary Pattern Histogram, statistical model

Procedia PDF Downloads 338
13633 Predictors of Survival of Therapeutic Hypothermia Based on Analysis of a Consecutive American Inner City Population over 4 Years

Authors: Jorge Martinez, Brandon Roberts, Holly Payton Toca

Abstract:

Background: Therapeutic hypothermia (TH) is the international standard of care for all comatose patients after cardiac arrest, but criticism focuses on poor outcomes. We sought to develop criteria to identify American urban patients more likely to benefit from TH. Methods: A retrospective chart review of 107 consecutive adults undergoing TH in downtown New Orleans from 2010-2014 yielded records for 99 patients, with all 44 survivors or their families contacted for up to four years. Results: 69 males and 38 females with a mean age of 60.2 years showed 63 dead (58%) and 44 survivors (42%). The presenting cardiac rhythm was divided into shockable (pulseless ventricular tachycardia, ventricular fibrillation) and non-shockable (pulseless electrical activity, asystole). Presenting in shockable rhythms with ROSC <20 minutes were 21 patients with 15 (71%) survivors (p=.001). Time >20 minutes until ROSC in shockable rhythms had 5 patients with 3 survivors (78%, p=0.001). Presenting in non-shockable rhythms with ROSC <20 minutes were 54 patients with 18 survivors (33%, p=.001). ROSC >20 minutes in non-shockable rhythms had 19 patients with 2 survivors (8%, p=.001). Survivors of shockable rhythms showed 19 (100%) living post TH. 15 survivors (79%, n=19, p=.001) had a CPC score of 1 or 2, with 4 survivors (21%, n=19) having a CPC score of 3. A total of 25 survived non-shockable rhythms. Acute survival of patients with non-shockable rhythm showed 18 expired <72 hours (72%, n=25), with long-term survival of 4 patients (5%, n=74) and CPC scores of 1 or 2 (p=.001). Interestingly, patients with time to ROSC <20 minutes exhibiting more than one loss of sustained ROSC showed 100% mortality (p=.001). Patients presenting with shockable rhythm and >20 minutes to ROSC had overall survival of 70% (p=.001), but those undergoing >3 cardiac rhythm changes had 100% mortality (p=.001).
Conclusion: Patients presenting with shockable rhythms undergoing TH had overall acute survival of 70% followed by long-term survival of 100% after 4 years. In contrast, patients presenting with non-shockable rhythm had long-term survival of 5%. TH is not recommended for patients presenting with non-shockable rhythm and requiring greater than 20 minutes for restoration of ROSC.

Keywords: cardiac rhythm changes, Pulseless Electrical Activity (PEA), Therapeutic Hypothermia (TH)

Procedia PDF Downloads 211
13632 Sublethal Effects of Thiamethoxam-Lambda Cyhalothrin on the Life Table Parameters and Population Projection of Trialeurodes vaporariorum (Hemiptera: Aleyrodidae) and Its Parasitoid, Encarsia formosa (Hymenoptera: Aphelinidae)

Authors: Sevda Ddras, Fariba Mehrkhou, Remzi Atlihan, Maryam Fourouzan

Abstract:

The greenhouse whitefly, Trialeurodes vaporariorum Westwood (Hemiptera: Aleyrodidae), is one of the most important pests of vegetables and ornamental host plants. In this research, the effects of a sub-lethal concentration (LC30) of thiamethoxam-lambda cyhalothrin (TLC) on the biological properties, life table parameters and population projection of T. vaporariorum and its parasitoid, Encarsia formosa Gahan, were studied under controlled conditions (25 ± 5 ℃, R.H. 60 ± 10% and a photoperiod of 16:8 h (L:D)). Bioassays were conducted by dipping tomato leaves containing third-instar nymphs of the whitefly T. vaporariorum in the obtained LC30 concentration of eforia. The life table data were analyzed using the computer program TWOSEX–MSChart based on the age-stage, two-sex life table theory. The results showed that use of the sublethal concentration of TLC affected the biological properties and population growth parameters of the greenhouse whitefly by shortening the developmental time and adult longevity and decreasing the fecundity and population growth parameters. The LC30 concentration of TLC also had negative effects on the life history and life table parameters of E. formosa: it resulted in a prolonged developmental time and decreased adult longevity, survival rate and population growth parameters of E. formosa. Additionally, the population projection results were in accordance with the population growth rates of both the greenhouse whitefly and E. formosa. We conclude that TLC should not be used in integrated pest management programs where E. formosa exists.
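The population growth parameters referred to above (net reproductive rate R₀, intrinsic rate of increase r, finite rate λ, mean generation time T) are computed by TWOSEX-MSChart from the age-stage, two-sex life table. A simplified, age-only sketch with hypothetical survival (lx) and fecundity (mx) schedules illustrates the calculation:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical daily age-specific survival (lx) and fecundity (mx)
# schedules; real schedules come from the life table raw data.
age = np.arange(10)
lx = np.array([1.0, 0.98, 0.95, 0.90, 0.85, 0.80, 0.70, 0.50, 0.30, 0.10])
mx = np.array([0.0, 0.0, 0.0, 2.0, 4.0, 5.0, 3.0, 1.0, 0.5, 0.0])

R0 = float(np.sum(lx * mx))        # net reproductive rate (offspring/individual)

def euler_lotka(r):
    # The intrinsic rate of increase r is the zero of this function;
    # the (age + 1) exponent follows the age-stage, two-sex convention.
    return float(np.sum(np.exp(-r * (age + 1)) * lx * mx) - 1.0)

r = brentq(euler_lotka, 0.0, 2.0)  # bracketed root: f(0) = R0 - 1 > 0, f(2) < 0
lam = np.exp(r)                    # finite rate of increase per day
T = np.log(R0) / r                 # mean generation time (days)
```

A sublethal insecticide effect of the kind reported above would show up as lower lx and mx values and hence smaller r and λ for the treated cohort.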

Keywords: greenhouse whitefly, Encarsia formosa, thiamethoxam-lambda cyhalothrin, population projection, life table parameters

Procedia PDF Downloads 71
13631 Harmonizing Cities: Integrating Land Use Diversity and Multimodal Transit for Social Equity

Authors: Zi-Yan Chao

Abstract:

With the rapid development of urbanization and the increasing demand for efficient transportation systems, the interaction between land use diversity and transportation resource allocation has become a critical issue in urban planning. Achieving a balance of land use types, such as residential, commercial, and industrial areas, plays a crucial role in ensuring social equity and sustainable urban development. Simultaneously, optimizing multimodal transportation networks, including bus, subway, and car routes, is essential for minimizing total travel time and costs while ensuring fairness for all social groups, particularly in meeting the transportation needs of low-income populations. This study develops a bilevel programming model to address these challenges, with land use diversity as the foundation for measuring equity. The upper-level model maximizes land use diversity for balanced land distribution across regions. The lower-level model optimizes multimodal transportation networks to minimize travel time and costs while maintaining user equilibrium. The model also incorporates constraints to ensure fair resource allocation, such as balancing transportation accessibility and cost differences across various social groups. A solution approach is developed to solve the bilevel optimization problem, ensuring efficient exploration of the solution space for land use and transportation resource allocation. This study maximizes social equity by maximizing land use diversity and achieving user equilibrium with optimal transportation resource distribution. The proposed method provides a robust framework for addressing urban planning challenges, contributing to sustainable and equitable urban development.
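The abstract does not specify how land use diversity is measured; a common choice for such an upper-level objective is the entropy-based land-use mix index, sketched below under that assumption (the zone shares are illustrative):

```python
import numpy as np

def land_use_mix(shares):
    """Entropy-based land-use mix index: Shannon entropy of the land-use
    shares normalised by ln(k), so 1.0 means a perfectly balanced mix of
    the k land-use types and 0.0 means a single-use zone."""
    p = np.asarray(shares, dtype=float)
    k = len(p)
    p = p[p > 0] / p.sum()   # drop empty categories, renormalise
    return float(-np.sum(p * np.log(p)) / np.log(k))

# Hypothetical zone shares: residential / commercial / industrial.
balanced = land_use_mix([1/3, 1/3, 1/3])   # perfectly mixed zone
skewed = land_use_mix([0.9, 0.05, 0.05])   # residential-dominated zone
```

In a bilevel scheme of the kind described, an upper-level search (e.g. a genetic algorithm, as the keywords suggest) would adjust zone shares to raise this index, while each candidate allocation is evaluated by solving the lower-level user-equilibrium assignment.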

Keywords: bilevel programming model, genetic algorithms, land use diversity, multimodal transportation optimization, social equity

Procedia PDF Downloads 22
13630 Benefits of a Topical Emollient Product in the Management of Canine Nasal Hyperkeratosis

Authors: Christelle Navarro, Sébastien Viaud, Carole Gard, Bruno Jahier

Abstract:

Background: Idiopathic or familial nasal hyperkeratosis (NHK) may be considered a cosmetic issue in its uncomplicated form. Nevertheless, prevention of secondary lesions such as fissures or infections can be advised through proper management. The objective of this open-field study is to evaluate the benefits of a moisturizing balm in privately owned dogs with NHK, using an original validation grid for both investigator and owner assessments. Methods: Dogs with idiopathic or familial NHK received a vegetable-based ointment (Sensiderm® Balm, MP Labo, France) BID for 60 days. A global dermatological score (GDS) was defined as the sum of 4 criteria ("dryness," "lichenification," "crusts," and "affected area") on a 0 (none) to 3 (severe or > 2/3 extension) scale. Evaluation of this GDS (0-12) on D0, D30, and D60 by owners and investigators was the main outcome. The score's percentage decrease versus D0, the evolution of each individual score, the correlation between observers, and the evaluation of clinical improvement and animal discomfort on a VAS (0-10) during follow-up were analysed. Results: The global dermatological score decreased significantly over time (p<0.0001) for all observers. The decrease reached 44.9% and 54.3% at D30 and 54.5% and 62.3% at D60 for investigators and owners, respectively. The "dryness," "lichenification," and "affected area" scores decreased significantly and steadily over time compared to D0 for both investigators and owners (p < 0.001, and p = 0.001 for the investigator assessment of dryness). All but one score (lichenification) were correlated between observers at all times (only at D60 for crusts). Regardless of the observer, clinical improvement was always rated above 7. At D30 and until D60, "animal discomfort" was more than halved. Owner satisfaction was high as early as D30 (8.1/10). No adverse effects were reported.
Conclusion and clinical importance: The positive results confirm the benefits and safety of a moisturizing balm when used in dogs with uncomplicated NHK.

Keywords: hyperkeratosis, nose, dog, moisturizer

Procedia PDF Downloads 129
13629 Design of a Surveillance Drone with Computer Aided Durability

Authors: Maram Shahad Dana Anfal

Abstract:

This research paper presents the design of a surveillance drone with computer-aided durability and model analyses that provide a cost-effective and efficient solution for various applications. The quadcopter's design is based on a lightweight and strong structure made of materials such as aluminum and titanium, which provide a durable frame for the quadcopter. The structure of this product and the computer-aided durability system are both designed to minimize the need for frequent repairs or replacements, which will save time and money in the long run. Moreover, the study discusses the drone's ability to track, investigate, and deliver objects more quickly than traditional methods, making it a highly efficient and cost-effective technology. In this paper, a comprehensive analysis of the quadcopter's operation dynamics and limitations is presented. In both simulation and experimental data, the computer-aided durability system and the drone's design demonstrate their effectiveness, highlighting the potential for a variety of applications, such as search and rescue missions, infrastructure monitoring, and agricultural operations. The findings also provide insights into possible areas for improvement in the design and operation of the drone. Ultimately, this paper presents a reliable and cost-effective solution for surveillance applications by designing a drone with computer-aided durability and modeling. With its potential to save time and money, increase reliability, and enhance safety, it is a promising technology for the future of surveillance drones. Operation dynamic equations have been evaluated successfully for different flight conditions of a quadcopter. CAE modeling techniques have also been applied for modal risk assessment at operating conditions. Stress analyses have been performed under the loadings of the worst-case combined-motion flight conditions.

Keywords: drone, material, solidworks, hypermesh

Procedia PDF Downloads 144
13628 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms

Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli

Abstract:

Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, the biochemical oxygen demand (BOD) poses one of the greatest challenges: delayed BOD5 results from the lab, which take 7 to 8 days of analysis, hinder a wastewater treatment plant's (WWTP's) ability to react to different situations and meet treatment goals. Reducing BOD turnaround time from days to hours is our quest. This work presents a solution based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment. Furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process in order to catch anomalies sooner. In our system for continuous-time monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm for analyzing the data uses ML on a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors.
Each bioreactor contains input/output access to the wastewater sample (influent and effluent), hydraulic conduction tubes, pumps and valves for the batch sample and dilution water, an air supply for dissolved oxygen (DO) saturation, a cooler/heater for sample thermal stability, an optical DO sensor based on fluorescence quenching, pH, ORP, temperature and atmospheric pressure sensors, and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD system monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to/received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically from its initial conditions: DO (saturated) and initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in S. Paulo, Brazil.
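The four-equation kinetic model can be sketched as below. The rate law, rate constant, and yield coefficient are illustrative assumptions; in the described system, the parameters are fitted to the sensor data by the ML layer:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative first-order kinetics; the rate constant k and biomass
# yield Y are assumed values, not the paper's ML-fitted parameters.
k, Y = 0.25, 0.5   # uptake rate (L/(mg*h), illustrative) and biomass yield

def rhs(t, y):
    O, S, X, P = y          # dissolved oxygen, substrate, biomass, products
    r = k * S * X           # oxidation rate, limited by substrate and biomass
    return [-r,             # dO/dt: oxygen consumed by oxidation
            -r,             # dS/dt: organic matter consumed
            Y * r,          # dX/dt: biomass growth
            (1.0 - Y) * r]  # dP/dt: products (CO2 + H2O) formed

# Initial conditions: saturated DO, sample substrate, a biomass seed, P = 0.
y0 = [9.0, 5.0, 0.1, 0.0]   # mg/L
sol = solve_ivp(rhs, (0.0, 48.0), y0)

bod_exerted = y0[0] - sol.y[0][-1]  # oxygen depleted so far (mg/L)
```

Fitting the model to the first hours of the measured DO decay is what allows the BOD estimate to be extrapolated in hours rather than waiting the 5 days of the standard BOD5 test.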

Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning

Procedia PDF Downloads 73
13627 Clinical Prediction Rules for Using Open Kinetic Chain Exercise in Treatment of Knee Osteoarthritis

Authors: Mohamed Aly, Aliaa Rehan Youssef, Emad Sawerees, Mounir Guirgis

Abstract:

Relevance: Osteoarthritis (OA) is the most common degenerative joint disease seen in all populations. It causes disability and a substantial socioeconomic burden. Evidence supports that exercise is the most effective conservative treatment for patients with OA. Therapists' experience and clinical judgment play a major role in exercise prescription, and scientific evidence in this regard is lacking. The development of clinical prediction rules to identify patients who are most likely to benefit from exercise may help solve this dilemma. Purpose: This study investigated whether body mass index (BMI) and functional ability at baseline can predict patients' response to a selected exercise program. Approach: Fifty-six patients, aged 35 to 65 years, completed an exercise program consisting of open kinetic chain strengthening and passive stretching exercises. The program was given for 3 sessions per week, 45 minutes per session, for 6 weeks. Evaluation: At baseline and post treatment, pain severity was assessed using the numerical pain rating scale, whereas functional ability was assessed by the step test (ST), the timed up and go test (TUG) and the 50-foot timed walk test (50 FTW). After completing the program, a global rating of change (GROC) score greater than 4 was used to categorize patients as successful or non-successful. Thirty-eight patients (68%) had a successful response to the intervention. Logistic regression showed that BMI and the 50 FTW test were the only significant predictors. Based on the results, patients with a BMI less than 34.71 kg/m2 and a 50 FTW time less than 25.64 sec are 68% to 89% more likely to benefit from the exercise program. Conclusions: Clinicians should consider the described strengthening and flexibility exercise program for patients with a BMI less than 34.71 kg/m2 and a 50 FTW time faster than 25.64 seconds. The validity of these predictors should be investigated for other exercises.
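The resulting rule is simple enough to encode directly; a minimal sketch of the decision logic, using the two cutoffs reported in the abstract:

```python
def likely_to_benefit(bmi_kg_m2: float, walk_50ft_s: float) -> bool:
    """Clinical prediction rule from the study: patients with
    BMI < 34.71 kg/m^2 AND a 50-foot timed walk under 25.64 s are
    predicted to respond to the exercise program (GROC > 4)."""
    return bmi_kg_m2 < 34.71 and walk_50ft_s < 25.64

# A patient meeting both cutoffs is flagged as a likely responder.
print(likely_to_benefit(28.0, 20.0))   # both criteria met
print(likely_to_benefit(36.0, 20.0))   # BMI criterion failed
```

Note the rule is conjunctive: failing either cutoff removes the patient from the "likely to benefit" group, consistent with the 68% to 89% post-test probability reported for patients meeting both.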

Keywords: clinical prediction rule, knee osteoarthritis, physical therapy exercises, validity

Procedia PDF Downloads 423
13626 Shear Strength Envelope Characteristics of Lime-Treated Clays

Authors: Mohammad Moridzadeh, Gholamreza Mesri

Abstract:

The effectiveness of lime treatment of soils has commonly been evaluated in terms of improved workability and increased undrained unconfined compressive strength in connection with road and airfield construction. The most common method of strength measurement has been the unconfined compression test. However, if the objective of lime treatment is to improve the long-term stability of first-time or reactivated landslides in stiff clays and shales, permanent changes in the size and shape of clay particles must be realized to increase drained frictional resistance. Lime-soil interactions that may produce less platy and larger soil particles begin and continue with time under the highly alkaline pH environment. In this research, pH measurements are used to monitor the chemical environment and the progress of reactions. Atterberg limits are measured to identify changes in particle size and shape indirectly. Also, fully softened and residual strength measurements are used to examine the improvement in frictional resistance due to lime-soil interactions. The main variables are soil plasticity and mineralogy, lime content, water content, and curing period. The lime effect on frictional resistance is examined using samples of clays with different mineralogy and characteristics which may react with lime to various extents. Drained direct shear tests on reconstituted lime-treated clay specimens with various properties have been performed to measure the fully softened shear strength. To measure the residual shear strength, drained multiple-reversal direct shear tests on precut specimens were conducted. This way, soil particles are oriented along the direction of shearing to the maximum possible extent and provide minimum frictional resistance. This is applicable to reactivated landslides and to part of first-time landslides. The Brenna clay, which is the highly plastic lacustrine clay of Lake Agassiz causing slope instability along the banks of the Red River, is one of the soil samples used in this study.
The Brenna Formation, characterized as a uniform, soft to firm, dark grey glaciolacustrine clay with little or no visible stratification, is full of slickensided surfaces. The major source of sediment for the Brenna Formation was the highly plastic montmorillonitic Pierre Shale bedrock. The other soil used in this study is one of the main sources of slope instability in the Harris County Flood Control District (HCFCD), i.e., the Beaumont clay. The shear strengths of untreated and treated clays were obtained under various normal pressures to evaluate the nonlinearity of the shear strength envelope.

Keywords: Brenna clay, friction resistance, lime treatment, residual

Procedia PDF Downloads 159
13625 Evaluation of Polymerisation Shrinkage of Randomly Oriented Micro-Sized Fibre Reinforced Dental Composites Using Fibre-Bragg Grating Sensors and Their Correlation with Degree of Conversion

Authors: Sonam Behl, Raju, Ginu Rajan, Paul Farrar, B. Gangadhara Prusty

Abstract:

Reinforcing dental composites with micro-sized fibres can significantly improve their physio-mechanical properties. Short fibres can be oriented randomly within dental composites, providing quasi-isotropic reinforcing efficiency, unlike unidirectional/bidirectional fibre-reinforced composites, which enhance properties anisotropically. Thus, short-fibre-reinforced dental composites are becoming popular among practitioners. However, despite their popularity, resin-based dental composites are prone to failure on account of shrinkage during photo-polymerisation. Shrinkage in the structure may lead to marginal gap formation, causing secondary caries and thus ultimately inducing failure of the restoration. Traditional methods to evaluate polymerisation shrinkage using strain gauges, density-based measurements, dilatometers, or the bonded-disk technique focus on the average value of volumetric shrinkage. Moreover, the results obtained from traditional methods are sensitive to the specimen geometry. The present research aims to evaluate the real-time shrinkage strain at selected locations in the material with the help of optical fibre Bragg grating (FBG) sensors. Due to their miniature size (diameter 250 µm), FBG sensors can be easily embedded into small samples of dental composites. Furthermore, an FBG array in the system can map the real-time shrinkage strain at different regions of the composite. The evaluation of real-time shrinkage values may help to optimise the physio-mechanical properties of composites. Previously, FBG sensors have been used successfully to measure the polymerisation strains of anisotropic (unidirectional or bidirectional) reinforced dental composites. However, very few studies exist to establish the validity of FBG-based sensors for evaluating the volumetric shrinkage of composites reinforced with randomly oriented fibres.
The present study aims to fill this research gap and is focussed on establishing the use of FBG-based sensors for evaluating the shrinkage of dental composites reinforced with randomly oriented fibres. Three groups of specimens were prepared by mixing the resin (80% UDMA/20% TEGDMA) with 55% silane-treated BaAlSiO₂ particulate fillers, or by adding 5% micro-sized fibres of diameter 5 µm and length 250/350 µm along with 50% silane-treated BaAlSiO₂ particulate fillers into the resin. For measurement of the polymerisation shrinkage strain, an array of three fibre Bragg grating sensors was embedded at a depth of 1 mm into a circular Teflon mould of diameter 15 mm and depth 2 mm. The results obtained are compared with the traditional density-based method for evaluating volumetric shrinkage. The degree of conversion was measured using FTIR spectroscopy (Spotlight 400 FT-IR from PerkinElmer). It is expected that the average polymerisation shrinkage strain values for dental composites reinforced with micro-sized fibres will correlate directly with the measured degree of conversion values, implying that greater conversion of C=C double bonds to C-C single bonds also leads to higher shrinkage strain within the composite. Moreover, it could be established that the photonics approach helps assess the shrinkage at any point of interest in the material, suggesting that fibre Bragg grating sensors are a suitable means of measuring real-time polymerisation shrinkage strain for randomly fibre-reinforced dental composites as well.
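The conversion from the measured Bragg wavelength shift to shrinkage strain is not detailed in the abstract; the standard FBG relation, assuming constant temperature and a typical photo-elastic coefficient for silica fibre (p_e ≈ 0.22, an assumed value), can be sketched as:

```python
def fbg_strain(lambda_b_nm: float, d_lambda_nm: float,
               p_e: float = 0.22) -> float:
    """Axial strain from a Bragg wavelength shift via the standard FBG
    relation  d_lambda / lambda_B = (1 - p_e) * strain,  valid when the
    temperature is held constant during the measurement."""
    return d_lambda_nm / (lambda_b_nm * (1.0 - p_e))

# Example: a -12 pm shift on a 1550 nm grating corresponds to roughly
# -10 microstrain (compressive) at the sensor location.
eps = fbg_strain(1550.0, -0.012)
```

In practice, each grating in the three-sensor array reports its own local shrinkage strain, which is what allows the strain field to be mapped across the mould rather than averaged over the whole specimen.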

Keywords: dental composite, glass fibre, polymerisation shrinkage strain, fibre-Bragg grating sensors

Procedia PDF Downloads 154
13624 Temperature Contour Detection of Salt Ice Using Color Thermal Image Segmentation Method

Authors: Azam Fazelpour, Saeed Reza Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

The study uses a novel image analysis based on thermal imaging to detect the temperature contours created on a salt ice surface during transient phenomena. Thermal cameras detect objects by using their emissivities and IR radiance. The ice surface temperature is not uniform during transient processes: the temperature starts to increase from the boundary of the ice towards its center. Thermal cameras are able to report temperature changes on the ice surface at every individual moment. Various contours, which show different temperature areas, appear in the picture of the ice surface captured by a thermal camera. Identifying the exact boundaries of these contours is valuable for ice surface temperature analysis, and image processing techniques are used to extract each contour area precisely. In this study, several pictures are recorded while the temperature increases throughout the ice surface, and some are selected for processing at specific time intervals. An image segmentation method is applied to the images to determine the contour areas. Color thermal images are used to exploit the main information. The red, green and blue elements of the color images are investigated to find the best contour boundaries. Image enhancement and noise removal algorithms are applied to the images to obtain high-contrast, clear images. A novel edge detection algorithm based on differences in pixel color is established to determine the contour boundaries. In this method, the edges of the contours are obtained according to the properties of the red, blue and green image elements. The color image elements are assessed according to their information content: useful elements are processed further, while uninformative elements are removed to reduce the processing time. Neighboring pixels with close intensities are assigned to one contour, and differences in intensities determine the boundaries. The results are then verified by conducting experimental tests.
An experimental setup is built using ice samples and a thermal camera. To observe the created ice contours with the thermal camera, the samples, which are initially at -20 °C, are contacted with a warmer surface. Pictures are captured for 20 seconds, and the method is applied to five images captured at 5-second intervals. The study shows that the green image element carries no useful information; therefore, the boundary detection method is applied to the red and blue image elements. In this case study, the results indicate that the proposed algorithm detects the boundaries more effectively than other edge detection methods such as Sobel and Canny. Comparison between the contour detection of this method and the temperature analysis, which identifies the true boundaries, shows good agreement. This color image edge detection method is applicable to other similar cases, according to their image properties.
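The core idea of assigning neighboring pixels with close intensities to one contour, and marking boundaries where the red or blue intensities jump, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation; the threshold value and the 4-neighbor scheme are assumptions.

```python
import numpy as np

def contour_edges(image, threshold=30):
    """Mark edge pixels where adjacent red/blue intensities differ sharply.

    The green channel is ignored, following the study's finding that it
    carries no useful information. `threshold` is an assumed tuning value.
    """
    # Keep only the red and blue channels; widen the dtype so that
    # differences of uint8 values do not wrap around.
    rb = image[..., [0, 2]].astype(np.int32)
    # Absolute differences between horizontally and vertically adjacent
    # pixels, taking the larger of the red/blue differences.
    dx = np.abs(np.diff(rb, axis=1)).max(axis=-1)
    dy = np.abs(np.diff(rb, axis=0)).max(axis=-1)
    edges = np.zeros(image.shape[:2], dtype=bool)
    edges[:, :-1] |= dx > threshold   # edge between a pixel and its right neighbor
    edges[:-1, :] |= dy > threshold   # edge between a pixel and the one below
    return edges
```

Pixels whose neighbors stay within the threshold fall into the same contour region; a connected-components pass over `~edges` would then label each contour area.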

Keywords: color image processing, edge detection, ice contour boundary, salt ice, thermal image

Procedia PDF Downloads 314
13623 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity increasingly faces various social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events which are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research focuses on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet are developed; information in Romanian is of special interest to us. Obtaining the mentioned tools requires several steps, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles, constituting more than 150 thousand words, and classified them into 10 categories of social disasters. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in classifying and identifying texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets is used: we deal with the problem of evacuating inhabitants within the available time. Analysis methods such as reachability or coverability trees and the invariant technique are used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended with time, the PIPE analysis modules Generalized Stochastic Petri Nets (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis have been used. These modules helped us obtain the average number of persons in each room and other quantitative properties and characteristics of the system's dynamics.
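The Petri-net token game underlying such an evacuation model can be sketched minimally: places hold tokens (people in rooms), and transitions fire when their input places hold enough tokens. The place and transition names below are illustrative toy choices, not taken from the study's model.

```python
# Minimal token-game sketch of a Petri net for a toy evacuation scenario.
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    marking: dict                                     # place -> token count
    transitions: dict = field(default_factory=dict)   # name -> (inputs, outputs)

    def enabled(self, t):
        """A transition is enabled if every input place holds enough tokens."""
        ins, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= n for p, n in ins.items())

    def fire(self, t):
        """Firing consumes tokens from input places and produces them in outputs."""
        if not self.enabled(t):
            raise ValueError(f"{t} is not enabled")
        ins, outs = self.transitions[t]
        for p, n in ins.items():
            self.marking[p] -= n
        for p, n in outs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy model: three people move from a room through a corridor to the outside.
net = PetriNet(marking={"room": 3, "corridor": 0, "outside": 0})
net.transitions = {
    "leave_room": ({"room": 1}, {"corridor": 1}),
    "reach_exit": ({"corridor": 1}, {"outside": 1}),
}
# Fire enabled transitions until the net is dead (no transition enabled).
while net.enabled("leave_room") or net.enabled("reach_exit"):
    for t in ("leave_room", "reach_exit"):
        if net.enabled(t):
            net.fire(t)
```

Reachability analysis, as performed in PIPE, amounts to exploring all markings obtainable by such firing sequences; timed extensions attach delays to transitions to answer whether evacuation completes within the available time.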

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 197
13622 Sensing of Cancer DNA Using Resonance Frequency

Authors: Sungsoo Na, Chanho Park

Abstract:

Lung cancer is one of the most common fatal diseases in humans. Lung cancers are divided into small-cell lung cancer (SCLC) and non-small-cell lung cancer (NSCLC), and about 80% of lung cancers are NSCLC. Several studies have investigated the correlation between the epidermal growth factor receptor (EGFR) and NSCLCs; therefore, EGFR inhibitor drugs such as gefitinib and erlotinib have been used as lung cancer treatments. However, these treatments showed low response rates (10-20%) in clinical trials due to EGFR mutations that cause drug resistance. Patients with resistance to EGFR inhibitor drugs are usually positive for the KRAS mutation, so assessment of EGFR and KRAS mutations is essential for targeted therapy of NSCLC patients. To overcome the limitations of conventional therapies, both EGFR and KRAS mutations have to be monitored; in this work, only the detection of EGFR is presented. A variety of techniques have been presented for the detection of EGFR mutations. The standard method for detecting EGFR mutations in ctDNA relies on real-time polymerase chain reaction (PCR), which provides highly sensitive detection; however, the amplification step increases both cost and complexity. Other technologies such as BEAMing, next-generation sequencing (NGS), electrochemical sensors, and silicon nanowire field-effect transistors have also been presented, but they suffer from low sensitivity, high cost, or complex data analysis. In this report, we propose a label-free, highly sensitive detection method for lung cancer using a quartz crystal microbalance (QCM) based platform. The proposed platform is able to sense lung cancer mutant DNA with a limit of detection of 1 nM.
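The link between bound mass and resonance frequency in a QCM platform is conventionally described by the Sauerbrey relation; the abstract does not state which model the authors use, so the sketch below, with its standard quartz constants and an assumed 5 MHz crystal, is only an illustration of the general sensing principle.

```python
import math

# Standard quartz material constants used in the Sauerbrey relation.
RHO_Q = 2648.0    # density of quartz, kg/m^3
MU_Q = 2.947e10   # shear modulus of AT-cut quartz, Pa

def sauerbrey_mass(delta_f_hz, f0_hz, area_m2):
    """Adsorbed mass (kg) inferred from a frequency shift delta_f (Hz).

    Sauerbrey: delta_f = -(2 * f0^2 / (A * sqrt(rho_q * mu_q))) * delta_m,
    so a frequency *decrease* (negative delta_f) means added mass.
    """
    return -delta_f_hz * area_m2 * math.sqrt(RHO_Q * MU_Q) / (2.0 * f0_hz**2)

# Assumed example: a 5 MHz crystal with 1 cm^2 active area losing 1 Hz.
mass_kg = sauerbrey_mass(delta_f_hz=-1.0, f0_hz=5.0e6, area_m2=1.0e-4)
```

For these assumed parameters the sensitivity works out to roughly 17.7 ng per cm² per Hz, which is why even nanomolar DNA binding produces a measurable frequency shift.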

Keywords: cancer DNA, resonance frequency, quartz crystal microbalance, lung cancer

Procedia PDF Downloads 233
13621 Estimate Robert Gordon University's Scope Three Emissions by Nearest Neighbor Analysis

Authors: Nayak Amar, Turner Naomi, Gobina Edward

Abstract:

Scottish Higher Education Institutions (HEIs) must report their scope 1 and 2 emissions, whereas reporting scope 3 is optional. Scope 3 covers indirect emissions, which embody a significant component of the total carbon footprint; it is therefore important to record, measure, and report them accurately. Robert Gordon University (RGU) reported only business travel, grid transmission and distribution, water supply and transport, and recycling scope 3 emissions. This study estimates RGU's total scope 3 emissions by comparing it with an HEI of similar scale. The scope 3 emission reporting of sixteen Scottish HEIs was studied, and Glasgow Caledonian University was identified as the nearest neighbour by comparing student full-time equivalents, staff full-time equivalents, research-teaching split, budget, and foundation year. Apart from the peer, data were also collected from the Higher Education Statistics Agency database. This study estimated RGU's scope 3 emissions from procurement, student and staff commuting, and international student trips. The results showed that RGU's reporting covered only 11% of its scope 3 emissions. The major contributors to scope 3 emissions were procurement (48%), student commuting (21%), international student trips (16%), and staff commuting (4%); the estimated scope 3 emissions were more than 14 times the reported emissions. This study has shown the relative importance of each scope 3 emission source, giving HEIs guidance on where to focus their attention to capture the largest share of scope 3 emissions. Moreover, it has demonstrated that it is possible to estimate scope 3 emissions with limited data.
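The nearest-neighbour estimation reduces to scaling a peer institution's reported figure for an unreported category by a size ratio such as student full-time equivalents. The sketch below illustrates that arithmetic; all figures in it are invented for the example and are not the study's data.

```python
def scale_by_fte(peer_emissions_tco2e, peer_fte, target_fte):
    """Scale a peer institution's category emissions (tCO2e) to the target
    institution by the ratio of full-time-equivalent student numbers."""
    return peer_emissions_tco2e * (target_fte / peer_fte)

# Hypothetical figures, not from the study: the peer reports 12,000 tCO2e
# of procurement emissions with 15,000 student FTEs; the target has 12,000.
procurement_estimate = scale_by_fte(
    peer_emissions_tco2e=12_000, peer_fte=15_000, target_fte=12_000
)
```

Summing such per-category estimates (procurement, commuting, international travel) and comparing the total against the reported categories yields the kind of coverage percentage the study reports.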

Keywords: HEI, university, emission calculations, scope 3 emissions, emissions reporting

Procedia PDF Downloads 100
13620 GNSS-Aided Photogrammetry for Digital Mapping

Authors: Muhammad Usman Akram

Abstract:

This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site for use in future planning and development (P&D), or for further examination, exploration, research, and inspection. Surveying and mapping hard-to-access and hazardous areas with traditional techniques and methodologies is very difficult, time-consuming, and labor-intensive, and yields limited data with lower precision. In comparison, advanced techniques require less manpower and provide more precise output with a wide variety of data sets. In this experimentation, an aerial photogrammetry technique is used in which a UAV flies over an area, captures geocoded images, and produces a three-dimensional (3-D) model. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in digital image processing programs and computer-aided design software, yielding as outputs a dense point cloud, a digital elevation model (DEM), and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we obtain a digital map of the surveyed area. Finally, the processed data are compared with exact measurements taken on site; the error is accepted if it does not exceed the survey accuracy limits set by the concerned institutions.
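Among the flight parameters mentioned, the ground sampling distance follows directly from the camera geometry and flight altitude. A minimal sketch of that calculation, with assumed example camera values (not from the study):

```python
def gsd_cm_per_px(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """Ground sampling distance (cm/pixel) for a nadir-pointing camera.

    GSD = (sensor width * flight altitude) / (focal length * image width),
    converted so the result is in centimeters per pixel.
    """
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Assumed example: a 13.2 mm wide sensor, 8.8 mm lens, 5472 px image width,
# flown at 100 m altitude.
gsd = gsd_cm_per_px(sensor_width_mm=13.2, focal_length_mm=8.8,
                    altitude_m=100.0, image_width_px=5472)
```

Flight planning software typically inverts this relation: given a target GSD for the final orthophoto, it solves for the required flight altitude.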

Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry

Procedia PDF Downloads 32