Search results for: advanced modeling techniques
341 Functional Ingredients from Potato By-Products: Innovative Biocatalytic Processes
Authors: Salwa Karboune, Amanda Waglay
Abstract:
Recent studies indicate that health-promoting functional ingredients and nutraceuticals can help support and improve overall public health, which is timely given the aging of the population and the increasing cost of health care. The development of novel ‘natural’ functional ingredients is increasingly challenging. Biocatalysis offers powerful approaches to achieve this goal. Our recent research has focused on the development of innovative biocatalytic approaches towards the isolation of protein isolates from potato by-products and the generation of peptides. The potato is a vegetable whose high-quality proteins are often underestimated. In addition to their high proportion of essential amino acids, potato proteins possess angiotensin-converting enzyme-inhibitory potency, an ability to reduce plasma triglycerides associated with a reduced risk of atherosclerosis, and the ability to stimulate the release of the appetite-regulating hormone CCK. Potato proteins have long been considered economically infeasible to recover due to the low protein content (27% dry matter) found in the tuber (Solanum tuberosum). However, potatoes rank as the second-largest protein-supplying crop grown per hectare, after wheat. Potato proteins include patatin (40-45 kDa), protease inhibitors (5-25 kDa), and various high-MW proteins. Non-destructive techniques for the extraction of proteins from potato pulp and for the generation of peptides are needed in order to minimize functional losses and enhance quality. A promising approach for isolating the potato proteins was developed, which involves the use of multi-enzymatic systems containing selected glycosyl hydrolase enzymes that work synergistically to open the plant cell wall network. This enzymatic approach is advantageous due to: (1) the use of milder reaction conditions, (2) the high selectivity and specificity of enzymes, (3) the low cost, and (4) the ability to market natural ingredients. 
Another major benefit of this enzymatic approach is the elimination of a costly purification step; these multi-enzymatic systems can isolate proteins while fractionating them, owing to their specificity and selectivity and their minimal proteolytic activity. The isolated proteins were used for the enzymatic generation of active peptides. In addition, they were applied in a reduced-gluten cookie formulation, as consumers place high demand on convenient ready-to-eat snack foods with high nutritional quality and little to no gluten. The addition of potato protein significantly improved the textural hardness of reduced-gluten cookies, making them more comparable to cookies made with wheat flour alone. The presentation will focus on our recent ‘proof-of-principle’ results illustrating the feasibility and the efficiency of new biocatalytic processes for the production of innovative functional food ingredients from potato by-products, whose potential health benefits are increasingly being recognized.
Keywords: biocatalytic approaches, functional ingredients, potato proteins, peptides
Procedia PDF Downloads 380
340 Assessment of Groundwater Potential Sampled in Hand Dug Wells and Boreholes in Ado-Ekiti, Southwestern Nigeria
Authors: A. J. Olatunji, Adebolu Temitope Johnson
Abstract:
Groundwater samples were collected randomly from hand-dug wells and boreholes in parts of the Ado Ekiti metropolis and were subjected to quality assessment and characterization. Physicochemical analyses, which include the in-situ parameters (pH, turbidity, and electrical conductivity) and laboratory analysis of selected ionic concentrations, were carried out following standard methods. The hydrochemistry of the present study revealed relative mean cation concentrations in the order Ca2+ > Na+ > Mg2+ > Cu2+ > Fe > Mn2+ and anion concentrations in the order Cl- > NO3- > SO42- > F-, assessed against the World Health Organization (WHO) range of values for potable water. The results show that the values of certain parameters (Total Dissolved Solids (TDS), manganese, calcium, magnesium, fluoride, and sulphate) were below the Highest Desirable Level of the standards, while the values of other parameters (pH, electrical conductivity, turbidity, alkalinity, sodium, copper, chloride, and total hardness) fell between the Highest Desirable Level (HDL) and the Maximum Permissible Level (MPL) of the WHO drinking water standards. The reduction in the mean Total Dissolved Solids (TDS) concentration of most borehole samples follows from the fact that the water had been allowed to settle in overhead tanks before usage; samples were deliberately taken from these tanks because they represent what the people actually consume. The results also indicate slightly higher concentrations of these soluble ions in hand-dug well samples than in borehole samples, with the exception of borehole sample BH7, which uses a mono-pumping system. The in-situ parameters and ionic concentrations were further displayed on bar charts alongside the WHO standards for clearer pictorial comparison. 
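The comparison against the WHO HDL/MPL bands described above amounts to a simple three-way classification per parameter. The sketch below illustrates the idea; the threshold and measured values are illustrative placeholders, not the study's laboratory results or the official WHO figures.

```python
# Hypothetical mean values and WHO-style thresholds (units omitted).
WHO = {  # parameter: (highest desirable level, maximum permissible level)
    "TDS":      (500.0, 1500.0),
    "Chloride": (200.0,  600.0),
    "Fluoride": (  0.6,    1.5),
}
measured = {"TDS": 310.0, "Chloride": 240.0, "Fluoride": 0.4}

def classify(value, hdl, mpl):
    """Place a measured value relative to the HDL/MPL bands."""
    if value <= hdl:
        return "below HDL"
    if value <= mpl:
        return "between HDL and MPL"
    return "above MPL"

for name, value in measured.items():
    hdl, mpl = WHO[name]
    print(name, classify(value, hdl, mpl))
```

In this scheme, TDS would fall below the HDL while chloride sits in the HDL-MPL band, mirroring the two groups of parameters reported in the abstract.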
Deductions from field observation indices revealed the imprints of natural weathering, ion-exchange processes, and anthropogenic activities influencing groundwater quality. A strong degree of association was found to exist between sodium and chloride ions in both hand-dug well and borehole groundwater samples through the use of Pearson’s correlation coefficient; this association is further supported by the chemistry of the parent bedrock of the study area, because the chemistry of groundwater is a replica of its host rock. The correlation between these two ions likely dates back to the period of mountain building, indicating a common source from which they were released into the groundwater. Moreover, comparing the ionic species concentrations of all samples with the WHO standards, there were no anomalous increases or decreases in the laboratory analysis results; this reveals an insignificant state of pollution of the groundwater. The study and its sampling techniques were not set up to target the likely area and extent of groundwater pollution but its potability. It could be said that the samples were safe for human consumption.
Keywords: groundwater, physicochemical parameters, ionic concentrations, WHO standards
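The sodium-chloride association reported above rests on Pearson's correlation coefficient. A minimal sketch of that computation follows; the concentration values are hypothetical stand-ins, as the abstract does not reproduce the field data.

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical Na+ and Cl- concentrations (mg/L) from paired samples.
sodium   = [21.0, 34.5, 18.2, 40.1, 27.3, 31.8]
chloride = [30.5, 52.0, 26.8, 61.3, 40.9, 47.2]

print(pearson_r(sodium, chloride))  # a value near 1 indicates strong association
```

A coefficient close to +1, as with this toy data, is the kind of "strong degree of association" the study reports between the two ions.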
Procedia PDF Downloads 42
339 Magnetofluidics for Mass Transfer and Mixing Enhancement in a Micro Scale Device
Authors: Majid Hejazian, Nam-Trung Nguyen
Abstract:
Over the past few years, microfluidic devices have generated significant attention from industry and academia due to advantages such as small sample volume, low cost and high efficiency. Microfluidic devices have applications in chemical, biological and industrial analysis and can facilitate assays of bio-materials and chemical reactions, separation, and sensing. Micromixers are one of the important microfluidic concepts. Micromixers can work as stand-alone devices or be integrated in a more complex microfluidic system such as a lab on a chip (LOC). Micromixers are categorized as passive and active types. Passive micromixers rely only on the arrangement of the phases to be mixed, contain no moving parts, and require no external energy. Active micromixers require external fields such as pressure, temperature, electric and acoustic fields. Rapid and efficient mixing is important for many applications such as biological, chemical and biochemical analysis. Achieving fast and homogeneous mixing of multiple samples in microfluidic devices has been studied and discussed in the literature recently. Improvements in mixing rely on effective mass transport at the microscale but are currently limited by molecular diffusion due to the predominantly laminar flow at this size scale. Using a magnetic field to enhance mass transport is an effective solution for mixing enhancement in microfluidics. The use of a non-uniform magnetic field to improve mass transfer performance in a microfluidic device is demonstrated in this work. The phenomenon of mixing ferrofluid and DI-water streams has been reported before, but mass transfer enhancement of other non-magnetic species through a magnetic field has not been studied and evaluated extensively. In the present work, permanent magnets were used in a simple microfluidic device to create a non-uniform magnetic field. 
Two streams are introduced into the microchannel: one contains fluorescent dye mixed with diluted ferrofluid to induce enhanced mass transport of the dye, and the other is a non-magnetic DI-water stream. Mass transport enhancement of the fluorescent dye is evaluated using fluorescence measurement techniques. The concentration field is measured for different flow rates. Under the magnetic field, a body force is exerted on the paramagnetic stream, expanding the ferrofluid stream into the non-magnetic DI-water flow. The experimental results demonstrate that without a magnetic field, both the magnetic nanoparticles of the ferrofluid and the fluorescent dye rely solely on molecular diffusion to spread. The non-uniform magnetic field created by the permanent magnets around the microchannel, together with the diluted ferrofluid, can improve the mass transport of non-magnetic solutes in a microfluidic device. The susceptibility mismatch between the fluids results in a magnetoconvective secondary flow towards the magnets, which subsequently enhances the mass transport of the non-magnetic fluorescent dye. A significant enhancement in mass transport of the fluorescent dye was observed. The platform presented here could be used as a microfluidics-based micromixer for chemical and biological applications.
Keywords: ferrofluid, mass transfer, micromixer, microfluidics, magnetic
Procedia PDF Downloads 225
338 Influence of Iron Content in Carbon Nanotubes on the Intensity of Hyperthermia in the Cancer Treatment
Authors: S. Wiak, L. Szymanski, Z. Kolacinski, G. Raniszewski, L. Pietrzak, Z. Staniszewska
Abstract:
The term ‘cancer’ is given to a collection of related diseases that may affect any part of the human body. It is a pathological behaviour of cells arising from the breakdown of the processes that control cell proliferation, differentiation, and death. Although cancer is commonly considered a modern disease, the drastically growing number of new cases can arguably be linked to extensively prolonged life expectancy and enhanced techniques for cancer diagnosis. Magnetic hyperthermia therapy is a novel approach to cancer treatment, which may greatly contribute to higher efficiency of the therapy. Employing carbon nanotubes as nanocarriers for magnetic particles, it is possible to decrease the toxicity and invasiveness of the treatment by surface functionalisation. Despite appearing only in recent years, magnetic particle hyperthermia has already attracted the highest interest in the scientific and medical communities. The reason hyperthermia therapy brings so much hope for future cancer treatment lies in the effect it produces in malignant cells. Subjecting them to thermal shock results in the activation of numerous degradation processes inside and outside the cell. The heating process initiates mechanisms of DNA destruction, protein denaturation and induction of cell apoptosis, which may lead to tumour shrinkage and, in some cases, even complete disappearance of the cancer. The factors with the major impact on the final efficiency of the treatment include the temperatures generated inside the tissues, the time of exposure to the heating process, and the character of the individual cancer cell type. The vast majority of cancer cells are characterised by lower pH, persistent hypoxia and lack of nutrients, which can be attributed to abnormal microvasculature. Since these conditions are absent in healthy tissues, healthy tissues should not be seriously affected by the elevation of temperature. 
The aim of this work is to investigate the influence of iron content in iron-filled carbon nanotubes on their suitability as nanoparticles for cancer therapy. In the article, the development and demonstration of the method and the model device for hyperthermic selective destruction of cancer cells are presented. This method was based on the synthesis and functionalization of carbon nanotubes serving as nanocontainers for ferromagnetic material. The methodology for producing carbon ferromagnetic nanocontainers (FNCs) includes the synthesis of carbon nanotubes, their chemical and physical characterization, increasing the content of ferromagnetic material, and biochemical functionalization involving the attachment of addressing (targeting) molecules. The ferromagnetic nanocontainers were synthesised in a CVD and microwave plasma system. The research work has been financed from the budget of science as research project No. PBS2/A5/31/2013.
Keywords: hyperthermia, carbon nanotubes, cancer colon cells, radio frequency field
Procedia PDF Downloads 123
337 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System
Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii
Abstract:
Today, dairy farm experts and farmers well recognize the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production, manage the feeding system, indicate health abnormalities, and even help manage healthy calving times and processes. Traditionally, BCS measures are made by animal experts or trained technicians based on visual observations focusing on the pin bones, the pin, thurl and hook area, tail head shapes, hook angles, and the short and long ribs. Since the traditional technique is manual and subjective, it can yield inconsistent scores and is not cost-effective. Thus this paper proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points are automatically extracted from the cow image region by using image processing techniques. To be specific, there are 23 anatomical points in the regions of the ribs, hook bones, pin bone, thurl and tail head. These points are extracted by using block-region-based vertical and horizontal histogram methods. According to animal experts, the body condition scores depend mainly on the shape structure of these regions. Therefore the second module investigates some algebraic and geometric properties of the extracted anatomical points. Specifically, a second-order polynomial regression is applied to a subset of anatomical points to produce regression coefficients which are utilized as part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained by using a Markov classification process to assign a BCS to individual cows. 
The assigned BCS are then refined by using a multiple regression method to produce the final BCS for the dairy cows. In order to confirm the validity of the proposed method, a monitoring video camera was set up at the rotary milking parlor to take top-view images of cows. The proposed method extracts the key anatomical points and the corresponding feature vectors for each individual cow. Then the multiple regression calculator and the Markov chain classification process are utilized to produce the estimated body condition score for each cow. The experimental results, tested on 100 dairy cows from a self-collected dataset and a public benchmark dataset, are very promising, with an accuracy of 98%.
Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression
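The second module's feature construction, second-order polynomial regression over a subset of landmarks plus angle features, can be sketched as follows. The landmark coordinates, the chosen landmark subset, and the angle vertex are all hypothetical placeholders; the real values come from the histogram-based extraction step described above.

```python
import numpy as np

# Hypothetical (x, y) pixel coordinates of a subset of the 23 anatomical
# points along a body contour (e.g. hook to pin area).
x = np.array([10.0, 25.0, 40.0, 55.0, 70.0, 85.0])
y = np.array([52.0, 44.0, 40.5, 41.0, 46.0, 55.0])

# Second-order polynomial regression over the landmark subset; the three
# coefficients become part of the feature vector used for scoring.
coeffs = np.polyfit(x, y, deg=2)  # [a, b, c] of a*x^2 + b*x + c

def angle_deg(p, q, r):
    """Angle at vertex q (degrees) formed by landmarks p-q-r."""
    v1 = np.asarray(p, float) - np.asarray(q, float)
    v2 = np.asarray(r, float) - np.asarray(q, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Concatenate regression coefficients with an angle feature (e.g. at the thurl).
feature_vector = np.concatenate([coeffs, [angle_deg((10, 52), (40, 40.5), (85, 55))]])
print(feature_vector.shape)
```

A fuller system would repeat this for each scored region and feed the concatenated features to the classifier.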
Procedia PDF Downloads 161
336 Preparation of Metallic Nanoparticles with the Use of Reagents of Natural Origin
Authors: Anna Drabczyk, Sonia Kudlacik-Kramarczyk, Dagmara Malina, Bozena Tyliszczak, Agnieszka Sobczak-Kupiec
Abstract:
Nowadays, nano-sized materials are a very popular group of materials among scientists, and they find applications in a wide range of areas. A constantly increasing demand for nanomaterials, including metallic nanoparticles such as silver or gold, is therefore observed, and new routes for their preparation are sought. Considering the potential applications of nanoparticles, it is important to select an adequate methodology for their preparation, because it determines their size and shape. Among the most commonly applied methods of preparing nanoparticles, chemical and electrochemical techniques are leading. However, growing attention is currently directed to the biological or biochemical aspects of the syntheses of metallic nanoparticles. This is associated with a trend of developing new routes of preparation of given compounds according to the principles of green chemistry. These principles involve, e.g., the reduction of the use of toxic compounds in the synthesis as well as the reduction of the energy demand or minimization of the generated waste. As a result, a growing popularity of such components as natural plant extracts, infusions or essential oils is observed. Such natural substances may be used both as a reducing agent for metal ions and as a stabilizing agent for the formed nanoparticles; they can therefore replace the synthetic compounds previously used for the reduction of metal ions or for the stabilization of the obtained nanoparticle suspensions. Methods that proceed in the presence of the previously mentioned natural compounds are environmentally friendly and proceed without the application of any toxic reagents. Methodology: The presented research involves the preparation of silver nanoparticles using selected plant extracts, e.g. artichoke extract. Extracts of natural origin were used as reducing and stabilizing agents at the same time. 
Furthermore, the syntheses were carried out in the presence of an additional polymeric stabilizing agent. Next, features of the obtained nanoparticle suspensions, such as total antioxidant activity and the content of phenolic compounds, were characterized. The first of these assays involved the reaction with the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical. The content of phenolic compounds was determined using the Folin-Ciocalteu technique. Furthermore, determining the stability of the formed nanoparticle suspensions was also an essential issue. Conclusions: The research demonstrated that metallic nanoparticles may be obtained using plant extracts or infusions as the stabilizing or reducing agent. The methodology applied, i.e. the type of plant extract used during the synthesis, had an impact on the content of phenolic compounds as well as on the size and polydispersity of the obtained nanoparticles. What is more, it is possible to prepare nano-sized particles characterized by properties desirable from the viewpoint of their potential application, and such an effect may be achieved with the use of non-toxic reagents of natural origin. Furthermore, the proposed methodology stays in line with the principles of green chemistry.
Keywords: green chemistry principles, metallic nanoparticles, plant extracts, stabilization of nanoparticles
Procedia PDF Downloads 125
335 Spatio-Temporal Variation of Gaseous Pollutants and the Contribution of Particulate Matters in Chao Phraya River Basin, Thailand
Authors: Samart Porncharoen, Nisa Pakvilai
Abstract:
The elevated levels of air pollutants in regional atmospheric environments are a significant problem affecting human health in Thailand, particularly in the Chao Phraya River Basin. Of particular concern are particulate matter and gaseous pollutants, specifically along the river. The spatio-temporal study of air pollution in this real environment can therefore yield more accurate air quality data for formulating environmental policy in river basins. In order to inform such policy, a study was conducted over the period January to December 2015 to continually collect measurements of various pollutants in both urban and rural locations in the Chao Phraya River Basin. This study investigated the air pollutants in many diverse environments along the Chao Phraya River Basin, Thailand, in 2015. Multivariate analysis techniques, such as Principal Component Analysis (PCA) and path analysis, were utilised to classify air pollution in the surveyed locations. Measurements were collected in both urban and rural areas to see if significant differences existed between the two locations in terms of air pollution levels. Readings of various pollutants, together with meteorological parameters, were collected continually from Thai Pollution Control Department monitoring stations over the period January to December 2015. Of interest to this study were the readings of SO2, CO, NOx, O3, and PM10. Results showed daily arithmetic mean concentrations of SO2, CO, NOx, O3, and PM10 of 3±1 ppb, 0.5±0.5 ppm, 30±21 ppb, 19±16 ppb, and 40±20 µg/m³ at the urban location (Bangkok). Over the same period, the corresponding readings in the rural area (Ayutthaya) were 1±0.5 ppb, 0.1±0.05 ppm, 25±17 ppb, 30±21 ppb, and 35±10 µg/m³, respectively. This shows that the Bangkok sites were located in more highly polluted environments, dominated by sources emitted from vehicles. 
Further, the results were analysed to ascertain whether significant seasonal variation existed in the measurements. It was found that levels of both gaseous pollutants and particulate matter were higher in the dry season than in the wet season. More broadly, the results show that pollutant levels were highest in locations along the Chao Phraya River Basin known to have a large number of vehicles and biomass burning. This correlation suggests that the principal pollutants came from these anthropogenic sources. This study contributes to the body of knowledge surrounding ambient air pollution, such as particulate matter and gaseous pollutants, and more specifically air pollution along the Chao Phraya River Basin. Further, this study is one of the first to utilise continuous mobile monitoring along a river in order to gain accurate measurements during a data collection period. Overall, the results of this study can be used for formulating environmental policy in river basins in order to reduce the physical effects on human health.
Keywords: air pollution, Chao Phraya river basin, meteorology, seasonal variation, principal component analysis
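The PCA step mentioned above can be sketched directly from the covariance of standardized pollutant readings. The daily values below are hypothetical stand-ins loosely patterned on the reported urban and rural means; the study's real data span the full year of measurements.

```python
import numpy as np

# Hypothetical daily means [SO2 ppb, CO ppm, NOx ppb, O3 ppb, PM10 ug/m3].
X = np.array([
    [3.0, 0.50, 30.0, 19.0, 40.0],   # urban-like days
    [3.5, 0.60, 35.0, 18.0, 45.0],
    [2.8, 0.40, 28.0, 20.0, 38.0],
    [1.0, 0.10, 25.0, 30.0, 35.0],   # rural-like days
    [0.9, 0.10, 22.0, 32.0, 33.0],
    [1.2, 0.15, 26.0, 29.0, 36.0],
])

# PCA via the eigendecomposition of the correlation structure: the leading
# components summarize which pollutants vary together across sites.
Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each pollutant
cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()    # variance share per component
scores = Z @ eigvecs[:, order]                # day-by-day projections

print(explained[0])  # share of variance carried by the first component
```

With a strong urban/rural contrast like this, the first component dominates, which is what makes PCA useful for classifying the surveyed locations.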
Procedia PDF Downloads 286
334 Comparison of a Capacitive Sensor Functionalized with Natural or Synthetic Receptors Selective towards Benzo(a)Pyrene
Authors: Natalia V. Beloglazova, Pieterjan Lenain, Martin Hedstrom, Dietmar Knopp, Sarah De Saeger
Abstract:
In recent years, polycyclic aromatic hydrocarbons (PAHs), which represent a hazard to humans and entire ecosystems, have received increased interest due to their mutagenic, carcinogenic and endocrine-disrupting properties. They are formed in all incomplete combustion processes of organic matter and are, as a consequence, ubiquitous in the environment. Benzo(a)pyrene (BaP) is on the priority list published by the US Environmental Protection Agency (EPA) as the first PAH to be identified as a carcinogen and has often been used as a marker for PAH contamination in general. It can be found in different types of water samples; therefore, the European Commission set a limit value of 10 ng L−1 (10 ppt) for BaP in water intended for human consumption. Generally, different chromatographic techniques are used for PAH determination, but these assays require pre-concentration of the analyte, create large amounts of solvent waste, and are relatively time-consuming and difficult to perform on-site. An alternative robust, stand-alone, and preferably cheap solution is needed, for example a sensing unit which can be submerged in a river to monitor and continuously sample BaP. An affinity sensor based on capacitive transduction was developed. Natural antibodies or their synthetic analogues can be used as ligands. Ideally, the sensor should operate independently over a longer period of time, e.g. several weeks or months; therefore, the use of molecularly imprinted polymers (MIPs) was considered. MIPs are synthetic antibodies which are selective for a chosen target molecule. Their robustness allows application in environments for which biological recognition elements are unsuitable or denature. They can be reused multiple times, which is essential to meet the stand-alone requirement. BaP is a highly lipophilic compound and does not contain any functional groups in its structure, thus excluding non-covalent imprinting methods based on ionic interactions. 
Instead, the MIP syntheses were based on non-covalent hydrophobic and π-π interactions. Different polymerization strategies were compared, and the best results were demonstrated by the MIPs produced using electropolymerization. 4-Vinylpyridine (VP) and divinylbenzene (DVB) were used as the monomer and cross-linker in the polymerization reaction. The selectivity and recovery of the MIP were compared to those of a non-imprinted polymer (NIP). Electrodes were functionalized with a natural receptor (a monoclonal anti-BaP antibody) and with MIPs selective towards BaP. Different sets of electrodes were evaluated, and their properties such as sensitivity, selectivity and linear range were determined and compared. It was found that both receptors can reach a cut-off level comparable to the established maximum limit, and although the antibody showed better cross-reactivity and affinity, the MIPs were the more convenient receptor owing to their ability to regenerate and their stability in river water for up to 7 days.
Keywords: antibody, benzo(a)pyrene, capacitive sensor, MIPs, river water
Procedia PDF Downloads 304
333 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models
Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach
Abstract:
In the paper, a (hierarchical) approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on a statistical homogenization method, the method of conditional moments, combined with the recently introduced notion of the energy-equivalent inhomogeneity which, in this paper, is extended to include thermal effects. After exposition of the general principles, the approach is applied in the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity of constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which makes it possible to analyze composites with interphases using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill’s energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows inhomogeneities of various shapes and various models of interphases to be considered. This is illustrated by considering spherical particles with two models of interphases, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments. 
Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, analytically exhibiting the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the radius of the nanoparticles (for a fixed volume fraction) is examined for different interphase models, compared with other theoretical predictions, and discussed. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model
Procedia PDF Downloads 185
332 Exploring Accessible Filmmaking and Video for Deafblind Audiences through Multisensory Participatory Design
Authors: Aikaterini Tavoulari, Mike Richardson
Abstract:
Objective: This abstract presents a multisensory participatory design project, inspired by a deafblind PhD student's ambition to climb Mount Everest. The project aims to explore accessible routes for filmmaking and video content creation, catering to the needs of individuals with hearing and sight loss. By engaging participants from the Southwest of England, recruited through multiple networks, the project seeks to gather qualitative data and insights to inform the development of inclusive media practices. Design: The project uses a community-based participatory research design. The workshop will feature various stations that stimulate different senses, such as scent, touch, sight, hearing, and movement. Participants will have the opportunity to engage with these multisensory experiences, providing valuable feedback on their effectiveness and potential for enhancing accessibility in filmmaking and video content. Methods: Brief semi-structured interviews will be conducted to collect qualitative data, allowing participants to share their perspectives, challenges, and suggestions for improvement. The participatory design approach emphasizes the importance of involving the target audience in the creative process. By actively engaging individuals with hearing and sight loss, the project aims to ensure that their needs and preferences are central to the development of accessible filmmaking techniques and video content. This collaborative effort seeks to bridge the gap between content creators and diverse audiences, fostering a more inclusive media landscape. Results: The findings from this study will contribute to the growing body of research on accessible filmmaking and video content creation. Via inductive thematic analysis of the qualitative data collected through interviews and observations, the researchers aim to identify key themes, challenges, and opportunities for creating engaging and inclusive media experiences for deafblind audiences. 
The insights will inform the development of best practices and guidelines for accessible filmmaking, empowering content creators to produce more inclusive and immersive video content. Conclusion: The abstract targets the hybrid International Conference for Disability and Diversity in Canada (January 2025), as this platform provides an excellent opportunity to share the outcomes of the project with a global audience of researchers, practitioners, and advocates working towards inclusivity and accessibility in various disability domains. By presenting this research at the conference in person, the authors aim to contribute to the ongoing discourse on disability and diversity, highlighting the importance of multisensory experiences and participatory design in creating accessible media content for the deafblind community and the community with sensory impairments more broadly.
Keywords: vision impairment, hearing impairment, deafblindness, accessibility, filmmaking
Procedia PDF Downloads 453
331 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception
Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu
Abstract:
Opinion mining (OM) is one of the natural language processing (NLP) problems of determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM is usually collected from various social media platforms. In an era where social media has considerable control over companies’ futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, commonly used in computer vision (CV) problems, has lately begun to shape NLP approaches and language models (LM). This gave a sudden rise to the usage of pretrained language models (PTM), which contain language representations obtained by training on large datasets using self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, named-entity recognition (NER), question answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM by using a sizable corpus belonging to a large private company containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT).
The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since the experiments showed that their contribution to model performance was insignificant even though Turkish is a highly agglutinative and inflective language. The results show that using deep learning methods with pre-trained models and fine-tuning achieves about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The MUSE multilingual model shows better results than SVM, but it still performs worse than the monolingual BERT model.
Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish
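The SVM baseline above operates on bag-of-n-gram representations of text. As a minimal, self-contained sketch of that representation (the toy texts, labels, and overlap-based scoring below are illustrative assumptions, not the study's SVM or its 76,000-comment Turkish corpus):

```python
from collections import Counter

def word_ngrams(text, n_values=(1, 2)):
    """Tokenise on whitespace and count word n-grams (a bag of n-grams)."""
    tokens = text.lower().split()
    bag = Counter()
    for n in n_values:
        for i in range(len(tokens) - n + 1):
            bag[" ".join(tokens[i:i + n])] += 1
    return bag

# Toy class profiles standing in for a trained classifier (illustrative only)
train = {
    "positive": word_ngrams("great service very happy great product"),
    "negative": word_ngrams("terrible service very unhappy bad product"),
}

def classify(text):
    """Assign the label whose reference bag overlaps most with the text."""
    bag = word_ngrams(text)
    scores = {label: sum((bag & ref).values()) for label, ref in train.items()}
    return max(scores, key=scores.get)

print(classify("happy with the great product"))  # positive
```

A real pipeline would feed such n-gram counts (typically TF-IDF weighted) into an SVM classifier; the overlap score here is only for brevity.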
Procedia PDF Downloads 148
330 The Dynamics of a Droplet Spreading on a Steel Surface
Authors: Evgeniya Orlova, Dmitriy Feoktistov, Geniy Kuznetsov
Abstract:
Spreading of a droplet over a solid substrate is a key phenomenon observed in the following engineering applications: thin film coating, oil extraction, inkjet printing, and spray cooling of heated surfaces. Droplet cooling systems are known to be more effective than film or rivulet cooling systems. This is caused by the greater evaporation surface area of droplets compared with a film of the same mass and wetting surface, and the greater surface area of droplets is connected with the curvature of the interface. The location of the droplets on the cooling surface influences the heat transfer conditions. A close distance between the droplets provides intensive heat removal, but there is a possibility of their coalescence into a liquid film; a long distance leads to overheating of local areas of the cooling surface and the occurrence of thermal stresses. The location of droplets can be controlled by changing the roughness, structure, and chemical composition of the surface; thus, control of spreading can be implemented. The most important characteristic of the spreading of droplets on solid surfaces is the dynamic contact angle, which is a function of the contact line speed or capillary number. However, there is currently no universal equation that would describe the relationship between these parameters. This paper presents the results of experimental studies of water droplet spreading on metal substrates with different surface roughness. The effect of the droplet growth rate and the surface roughness on spreading characteristics was studied at low capillary numbers. The shadow method, using high speed video cameras recording up to 10,000 frames per second, was implemented. The droplet profile was analyzed by Axisymmetric Drop Shape Analysis techniques.
According to the change of the dynamic contact angle and the contact line speed, three sequential spreading stages were observed: a rapid increase in the dynamic contact angle; a monotonous decrease in the contact angle and the contact line speed; and formation of the equilibrium contact angle at a constant contact line. At a low droplet growth rate, the dynamic contact angle of the droplet spreading on the surfaces with the maximum roughness is found to increase throughout the spreading time. This is due to the fact that the friction force on such surfaces is significantly greater than the inertia force, and the contact line is pinned on the microasperities of the relief. At a high droplet growth rate, the contact angle decreases during the second stage even on the surfaces with the maximum roughness, as in this case the liquid does not fill the microcavities and the droplet moves over an “air cushion”, i.e. the interface is a liquid/gas/solid system. At such growth rates, pulsation of the liquid flow was also detected, and the droplet oscillates during spreading. Thus, the obtained results allow us to conclude that it is possible to control spreading by using the surface roughness and the growth rate of droplets as varied factors. The research findings may also be used for analyzing heat transfer in rivulet and drop cooling systems of high energy equipment.
Keywords: contact line speed, droplet growth rate, dynamic contact angle, shadow system, spreading
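The capillary number mentioned above relates contact-line speed to fluid properties: Ca = μU/σ, the ratio of viscous to surface-tension forces. A small sketch (the water property values are approximate room-temperature figures, assumed here for illustration only):

```python
def capillary_number(viscosity_pa_s, speed_m_s, surface_tension_n_m):
    """Ca = mu * U / sigma, a dimensionless ratio of viscous to capillary forces."""
    return viscosity_pa_s * speed_m_s / surface_tension_n_m

# Water at roughly 25 C (approximate values): mu ~ 0.00089 Pa*s, sigma ~ 0.072 N/m
ca = capillary_number(0.00089, 0.001, 0.072)  # contact-line speed of 1 mm/s
print(f"Ca = {ca:.2e}")  # on the order of 1e-5: the low-capillary-number regime
```

Values of Ca this small indicate that surface tension dominates viscous forces during spreading, consistent with the low-capillary-number conditions studied in the abstract.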
Procedia PDF Downloads 332
329 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods for soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques for spatially interpolating values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at un-sampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus, and potassium values in the soil of the study area. A limited number of samples were used in the training, validation, and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project at Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values).
For each macronutrient element, three types of maps were generated: with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element a base map using 236 actual samples and test maps using 118, 59, and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from a smaller number of real samples alone. For example, nitrogen maps produced from 118, 59, and 30 real samples have 78%, 62%, and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased similarity to 87%, 77%, and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN-predicted samples to achieve the specified level of accuracy.
Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping
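The abstract reports map similarity as percentages without defining the parameter set. One plausible cell-wise similarity measure (an assumption for illustration, not the authors' definition) is the fraction of grid cells whose values agree within a relative tolerance:

```python
def map_similarity(map_a, map_b, tolerance=0.1):
    """Fraction of grid cells whose values agree within a relative
    tolerance -- one plausible map-similarity parameter (an assumption)."""
    assert len(map_a) == len(map_b)
    agree = sum(
        1 for a, b in zip(map_a, map_b)
        if abs(a - b) <= tolerance * max(abs(a), abs(b), 1e-9)
    )
    return agree / len(map_a)

base = [1.0, 2.0, 3.0, 4.0]      # e.g. nitrogen values kriged from 236 samples
reduced = [1.05, 1.9, 3.5, 4.0]  # map from fewer real samples plus ANN predictions
print(map_similarity(base, reduced))  # 0.75
```

With such a measure, the sample reduction method's claim amounts to: maps built from few real samples plus ANN-predicted values score closer to the 236-sample base map than maps built from the few real samples alone.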
Procedia PDF Downloads 71
328 Influence of Torrefied Biomass on Co-Combustion Behaviors of Biomass/Lignite Blends
Authors: Aysen Caliskan, Hanzade Haykiri-Acma, Serdar Yaman
Abstract:
Co-firing of coal and biomass blends is an effective method to reduce the carbon dioxide emissions released by burning coals, thanks to the carbon-neutral nature of biomass. Besides, usage of biomass, a renewable and sustainable energy resource, mitigates the dependency on fossil fuels for power generation. However, most biomass species have negative aspects such as low calorific value and high moisture and volatile matter contents compared to coal. Torrefaction is a promising technique for upgrading the fuel properties of biomass through thermal treatment; it improves the calorific value of biomass along with substantial reductions in the moisture and volatile matter contents. In this context, several woody biomass materials including Rhododendron, hybrid poplar, and ash-tree were subjected to a torrefaction process in a horizontal tube furnace at 200°C under nitrogen flow. In this way, the solid residue obtained from torrefaction, also called 'biochar', was obtained and analyzed to monitor the variations taking place in biomass properties. On the other hand, some Turkish lignites from the Elbistan, Adıyaman-Gölbaşı and Çorum-Dodurga deposits were chosen as coal samples, since these lignites are of great importance to lignite-fired power stations in Turkey. These lignites were blended with the obtained biochars; the blending ratio of biochars was kept at 10 wt%, so the lignites were the dominant constituents in the fuel blends. Burning tests of the lignites, biomasses, biochars, and blends were performed using a thermogravimetric analyzer up to 900°C with a heating rate of 40°C/min under a dry air atmosphere. Based on these burning tests, properties relevant to burning characteristics, such as burning reactivity and burnout yields, could be compared to assess the effects of torrefaction and blending.
Besides, some characterization techniques including X-Ray Diffraction (XRD), Fourier Transform Infrared (FTIR) spectroscopy, and Scanning Electron Microscopy (SEM) were also applied to the untreated biomass and torrefied biomass (biochar) samples, the lignites, and their blends to examine the co-combustion characteristics elaborately. Results of this study revealed that blending lignite with 10 wt% biochar created synergistic behaviors during co-combustion in comparison to the individual burning of the constituent fuels. Burnout and ignition performances of each blend were compared, taking into account the lignite and biomass structures and characteristics, and the blend with the best co-combustion profile and ignition properties was selected. Even though the final burnouts of the lignites were decreased by the addition of biomass, the co-combustion process is a reasonable and sustainable solution due to its environmentally friendly benefits, such as reductions in net carbon dioxide (CO2), SOx, and hazardous organic chemicals derived from volatiles.
Keywords: burnout performance, co-combustion, thermal analysis, torrefaction pretreatment
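A simple way to see what "synergy" means here: compute the additive (no-interaction) baseline for a blend property as the weighted average of the constituent fuels at 10 wt% biochar; synergy shows up as the measured blend value deviating from this baseline. The calorific values below are hypothetical placeholders, not the study's measurements:

```python
def additive_blend(coal_value, biomass_value, biomass_fraction=0.10):
    """Additive (no-interaction) baseline for a blend property, e.g. a
    calorific value or a TGA burnout yield, at a given biomass mass fraction.
    A measured value above or below this weighted average indicates synergy."""
    return (1 - biomass_fraction) * coal_value + biomass_fraction * biomass_value

# Hypothetical calorific values (MJ/kg): lignite 15.0, biochar 22.0
print(additive_blend(15.0, 22.0))  # additive expectation for the 90/10 blend
```

The same weighted-average baseline is commonly applied to TGA burnout curves point by point when judging whether a blend burns non-additively.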
Procedia PDF Downloads 339
327 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping
Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello
Abstract:
Batch processes are widely used in the food industry and play an important role in the production of high added value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; proper determination of the reference set is clearly key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses; in such a context, the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process. As a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables’ trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm.
Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts’ evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input to the KNN classification algorithm. The method of Kassidas, MacGregor, and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity, and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique.
Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration
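At the core of all three methods compared above is the DTW alignment itself, which matches trajectories of unequal duration. A minimal classic DTW distance in pure Python (a generic textbook sketch with absolute difference as the local cost, not the KMT implementation used in the study; the two toy batches are invented):

```python
def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping distance between two 1-D trajectories
    of possibly different lengths (absolute difference as local cost)."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            # Each cell extends the cheapest of: insertion, deletion, match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Two batches of different duration tracing the same temperature profile
batch_a = [20, 25, 30, 30, 28]
batch_b = [20, 25, 25, 30, 30, 30, 28]
print(dtw_distance(batch_a, batch_b))  # 0.0: identical shape despite unequal lengths
```

Once every batch trajectory is warped onto a common time base this way, equal-length vectors can be fed to MPCA charts or a KNN classifier, as done in the study.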
Procedia PDF Downloads 167
326 Effects of Macro and Micro Nutrients on Growth and Yield Performances of Tomato (Lycopersicon esculentum MILL.)
Authors: K. M. S. Weerasinghe, A. H. K. Balasooriya, S. L. Ransingha, G. D. Krishantha, R. S. Brhakamanagae, L. C. Wijethilke
Abstract:
Tomato (Lycopersicon esculentum Mill.) is a major horticultural crop with an estimated global production of over 120 million metric tons, and it ranks first as a processing crop. The average tomato productivity in Sri Lanka (11 metric tons/ha) is much lower than the world average (24 metric tons/ha). To meet the tomato demand of the increasing population, productivity has to be intensified through agronomic techniques. Nutrition is one of the main factors that govern the growth and yield of tomato, and the soil, as the main nutrient source, affects plant growth and the quality of the produce. Continuous cropping, improper fertilizer usage, etc., cause widespread nutrient deficiencies; therefore, synthetic fertilizers and organic manures were introduced to enhance plant growth and maximize crop yields. In this study, the effects of macro and micronutrient supplementations on the improvement of growth and yield of tomato were investigated. The selected tomato variety was Maheshi, and plants were grown at the Regional Agricultural and Research Centre, Makadura, under the Department of Agriculture (DOA) recommended macronutrients and various combinations of Ontario-recommended dosages of secondary and micro fertilizer supplementations. There were six treatments in this experiment; each treatment was replicated three times, and each replicate consisted of six plants. Other than the DOA recommendation, five combinations of the Ontario-recommended dosage of secondary and micronutrients for tomato were used as treatments. The treatments were arranged in a Randomized Complete Block Design. All cultural practices were carried out according to the DOA recommendations. The mean data were subjected to statistical analysis using the SAS package and mean separation (Duncan’s Multiple Range Test at the 5% probability level) procedures. Treatments containing secondary and micronutrients significantly increased most of the growth parameters: plant height, plant girth, number of leaves, leaf area index, etc.
Fruits harvested from pots amended with macro, secondary, and micronutrients performed best in terms of total yield and yield quality compared to pots amended with the DOA-recommended dosage of fertilizer for tomato. This could be because the application of all essential macro and micro nutrients raised photosynthetic activity and promoted efficient translocation and utilization of photosynthates, causing rapid cell elongation and cell division in the actively growing regions of the plant and leading to stimulation of growth and yield. The experiment revealed and highlighted the requirement of essential macro, secondary, and micronutrient fertilizer supplementations for tomato farming. The study indicated that macro and micronutrient supplementation practices can influence the growth and yield performance of tomato, and it is a promising approach to attaining potential tomato yields.
Keywords: macro and micronutrients, tomato, SAS package, photosynthates
Procedia PDF Downloads 476
325 Effect of the Polymer Modification on the Cytocompatibility of Human and Rat Cells
Authors: N. Slepickova Kasalkova, P. Slepicka, L. Bacakova, V. Svorcik
Abstract:
Tissue engineering combines materials and techniques used for the improvement, repair, or replacement of tissue. Scaffolds, permanent or temporary materials, are used as support for the creation of "new cell structures". For this important component (the scaffold), a variety of materials can be used. The advantage of some polymeric materials is their cytocompatibility and possibility of biodegradation. Poly(L-lactic acid) (PLLA) is a biodegradable, semi-crystalline thermoplastic polymer that can be fully degraded into H2O and CO2. In this experiment, the effect of the surface modification of a biodegradable polymer (performed by plasma treatment) on various cell types was studied. The surface parameters and changes in the physicochemical properties of the modified PLLA substrates were studied by different methods: surface wettability was determined by goniometry, surface morphology and roughness were studied with atomic force microscopy, and chemical composition was determined using photoelectron spectroscopy. The physicochemical properties were studied in relation to the cytocompatibility of human osteoblasts (MG 63 cells), rat vascular smooth muscle cells (VSMC), and human stem cells (ASC) of the adipose tissue in vitro. Fluorescence microscopy was chosen to study and compare cell-material interactions. Important parameters of cytocompatibility, such as adhesion, proliferation, viability, shape, and spreading of the cells, were evaluated. It was found that the modification leads to a change in surface wettability depending on the time of modification. A short exposure time (10-120 s) can reduce the wettability of the aged samples, while exposure longer than 150 s causes an increase in the contact angle of the aged PLLA. The surface morphology is also significantly influenced by the duration of modification. The plasma treatment involves the formation of crystallites, whose number increases with increasing time of modification.
On the basis of the physicochemical property evaluation, the cells were cultivated on the selected samples. Cell-material interactions are strongly affected by the material's chemical structure and surface morphology. It was shown that the plasma treatment of PLLA has a positive effect on the adhesion, spreading, homogeneity of distribution, and viability of all cultivated cells. This effect was even more apparent for the VSMCs and ASCs, which homogeneously covered almost the whole surface of the substrate after 7 days of cultivation. The viability of these cells was high (more than 98% for VSMCs, 89-96% for ASCs). This experiment is one part of basic research that aims to easily create scaffolds for tissue engineering, with the subsequent use of stem cells and their "reorientation" towards bone cells or smooth muscle cells.
Keywords: poly(L-lactic acid), plasma treatment, surface characterization, cytocompatibility, human osteoblast, rat vascular smooth muscle cells, human stem cells
Procedia PDF Downloads 229
324 Linguistic Analysis of Borderline Personality Disorder: Using Language to Predict Maladaptive Thoughts and Behaviours
Authors: Charlotte Entwistle, Ryan Boyd
Abstract:
Recent developments in information retrieval techniques and natural language processing have allowed for greater exploration of psychological and social processes. Linguistic analysis methods for understanding behaviour have provided useful insights within the field of mental health. One area within mental health that has received little attention, though, is borderline personality disorder (BPD). BPD is a common mental health disorder characterised by instability of interpersonal relationships, self-image, and affect. It also manifests through maladaptive behaviours, such as impulsivity and self-harm. Examination of language patterns associated with BPD could allow for a greater understanding of the disorder and its links to maladaptive thoughts and behaviours. Language analysis methods could also be used in a predictive way, such as by identifying indicators of BPD or predicting maladaptive thoughts, emotions, and behaviours. Additionally, associations uncovered between language and maladaptive thoughts and behaviours could then be applied at a more general level. This study explores linguistic characteristics of BPD, and their links to maladaptive thoughts and behaviours, through the analysis of social media data. Data were collected from a large corpus of posts from the publicly available social media platform Reddit, namely from the ‘r/BPD’ subreddit, where people identify as having BPD. Data were collected using the Python Reddit API Wrapper and included all users who had posted within the BPD subreddit. All posts were manually inspected to ensure that they were not posted by someone who clearly did not have BPD, such as people posting about a loved one with BPD. These users were then tracked across all other subreddits in which they had posted, and data from these subreddits were also collected. Additionally, data were collected from a random control group of Reddit users.
Disorder-relevant behaviours, such as self-harming or aggression-related behaviours, outlined within Reddit posts were coded by expert raters. All posts and comments were aggregated by user and split by subreddit. Language data were then analysed using the Linguistic Inquiry and Word Count (LIWC) 2015 software. LIWC is a text analysis program that identifies and categorises words based on linguistic and paralinguistic dimensions, psychological constructs, and personal concern categories. Statistical analyses of linguistic features could then be conducted. Findings revealed distinct linguistic features associated with BPD, based on Reddit posts, which differentiated these users from a control group. Language patterns were also found to be associated with the occurrence of maladaptive thoughts and behaviours. Thus, this study demonstrates that there are indeed linguistic markers of BPD present on social media. It also implies that language could be predictive of maladaptive thoughts and behaviours associated with BPD. These findings are of importance as they suggest potential for clinical interventions to be provided based on the language of people with BPD to try to reduce the likelihood of maladaptive thoughts and behaviours occurring, for example, by social media tracking or engaging people with BPD in expressive writing therapy. Overall, this study has provided a greater understanding of the disorder and how it manifests through language and behaviour.
Keywords: behaviour analysis, borderline personality disorder, natural language processing, social media data
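LIWC-style analysis reduces each text to per-category word rates. A simplified analogue can be sketched as follows; the two-category mini-dictionary and the example post are invented for illustration (LIWC 2015 itself uses proprietary dictionaries with dozens of categories):

```python
from collections import Counter

# Hypothetical mini-dictionary in the spirit of LIWC categories
CATEGORIES = {
    "negemo": {"sad", "hate", "hurt", "angry"},
    "i_pronoun": {"i", "me", "my", "myself"},
}

def category_rates(text):
    """Per-category word rates (fraction of total tokens), a simplified
    analogue of LIWC output for one document."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens) or 1
    return {
        cat: sum(counts[w] for w in words) / total
        for cat, words in CATEGORIES.items()
    }

post = "i hate how sad i feel and i hurt"  # invented example post
print(category_rates(post))
```

Group comparisons like those in the study then amount to statistical tests on such rates aggregated per user, e.g. comparing the distribution of a category's rate between the BPD and control groups.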
Procedia PDF Downloads 353
323 Text Mining Past Medical History in Electrophysiological Studies
Authors: Roni Ramon-Gonen, Amir Dori, Shahar Shelly
Abstract:
Background and objectives: Healthcare professionals produce abundant textual information in their daily clinical practice. The extraction of insights from all the gathered information, mainly unstructured and lacking in normalization, is one of the major challenges in computational medicine. In this respect, text mining assembles different techniques to derive valuable insights from unstructured textual data, which has made it especially relevant in medicine. A neurological patient’s history allows the clinician to define the patient’s symptoms and, along with the results of the nerve conduction study (NCS) and electromyography (EMG) test, assists in formulating a differential diagnosis. Past medical history (PMH) helps to direct the latter. In this study, we aimed to identify relevant PMH, understand which PMHs are common among patients in the referral cohort and documented by the medical staff, and examine the differences by sex and age in a large cohort based on textual-format notes. Methods: We retrospectively identified all patients with abnormal NCS between May 2016 and February 2022. Age, gender, and all NCS report attributes were recorded, including the summary text. All patients’ histories were extracted from the text report by a query. Basic text cleansing and data preparation were performed, as well as lemmatization. Very popular words (like ‘left’ and ‘right’) were deleted, and several words were replaced with their abbreviations. A bag-of-words approach was used to perform the analyses. Different visualizations that are common in text analysis were created to easily grasp the results. Results: We identified 5282 unique patients. Three thousand and five (57%) patients had documented PMH, of whom 60.4% (n=1817) were males. The total median age was 62 years (range 0.12-97.2 years), and the majority of patients (83%) presented after the age of forty years. The top two documented medical histories were diabetes mellitus (DM) and surgery.
DM was observed in 16.3% of the patients, and surgery in 15.4%. Other frequent patient histories (among the top 20) were fracture, cancer (ca), motor vehicle accident (MVA), leg, lumbar, discopathy, back, and carpal tunnel release (CTR). When separating the data by sex, we can see that DM and MVA are more frequent among males, while cancer and CTR are less frequent. On the other hand, the top medical history in females was surgery, followed by DM. Other frequent histories among females were breast cancer, fractures, and CTR. In the younger population (ages 18 to 26), the frequent PMHs were surgery, fractures, trauma, and MVA. Discussion: By applying text mining approaches to unstructured data, we were able to better understand which medical histories are more relevant in these circumstances and to gain additional insights regarding sex and age differences. These insights might help to collect epidemiological demographic data as well as raise new hypotheses. One limitation of this work is that each clinician might use different words or abbreviations to describe the same condition; therefore, using a coding system could be beneficial.
Keywords: abnormal studies, healthcare analytics, medical history, nerve conduction studies, text mining, textual analysis
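The preprocessing described in the methods (dropping overly frequent laterality words, replacing multi-word terms with abbreviations, bagging words) can be sketched as below; the abbreviation map and the two example notes are hypothetical, not drawn from the study's records:

```python
from collections import Counter

# Illustrative mappings -- a real system would use a curated clinical lexicon
ABBREV = {"diabetes mellitus": "dm", "motor vehicle accident": "mva",
          "carpal tunnel release": "ctr"}
DROP = {"left", "right"}  # overly frequent laterality words, removed as in the study

def prepare(note):
    """Minimal cleaning sketch: lowercase, strip commas, abbreviate
    multi-word terms, drop laterality words, return a bag of words."""
    text = note.lower().replace(",", " ")
    for phrase, abbr in ABBREV.items():
        text = text.replace(phrase, abbr)
    tokens = [t for t in text.split() if t not in DROP]
    return Counter(tokens)

notes = ["Diabetes mellitus, left leg fracture",
         "Motor vehicle accident, right carpal tunnel release"]
bag = Counter()
for note in notes:
    bag += prepare(note)
print(bag.most_common(3))  # most frequent PMH terms across the toy notes
```

Aggregating such bags over thousands of reports, split by sex or age group, yields exactly the kind of frequency comparisons reported in the results.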
Procedia PDF Downloads 96
322 The Different Effects of Mindfulness-Based Relapse Prevention Group Therapy on QEEG Measures in Various Severity Substance Use Disorder Involuntary Clients
Authors: Yu-Chi Liao, Nai-Wen Guo, Chun‑Hung Lee, Yung-Chin Lu, Cheng-Hung Ko
Abstract:
Objective: The incidence of behavioral addictions, especially substance use disorders (SUDs), is gradually being taken more seriously, given their various physical health problems. Mindfulness-based relapse prevention (MBRP) has become a treatment option for promoting long-term health behavior change in recent years. MBRP is a structured protocol that integrates formal meditation practices with the cognitive-behavioral approach of relapse prevention treatment by teaching participants not to engage in reappraisal or savoring techniques. However, considering SUDs as a complex brain disease, questionnaires and symptom evaluation are not sufficient to evaluate the effect of MBRP; neurophysiological biomarkers such as the quantitative electroencephalogram (QEEG) may more accurately represent the curative effects. This study attempted to find neurophysiological indicators of MBRP in involuntary clients with SUDs of various severity. Participants and Methods: Thirteen participants (all males) completed an 8-week mindfulness-based treatment provided by trained, licensed clinical psychologists. The behavioral data were from the Severity of Dependence Scale (SDS) and the Negative Mood Regulation Scale (NMR) before and after MBRP treatment. The QEEG data were recorded simultaneously with an executive attention task battery, the Comprehensive Nonverbal Attention Test (CNAT). Two-way repeated-measures (treatment * severity) ANOVA and independent t-tests were used for statistical analysis. Results: The thirteen participants were regrouped into high substance dependence (HS) and low substance dependence (LS) groups by the SDS cut-off. At pretest, the HS group showed a higher SDS total score and a lower gamma wave in the Go/No Go task of the CNAT. Both groups showed a main effect of treatment: a lower frontal theta/beta ratio (TBR) during the simple reaction time task of the CNAT, and fewer delay errors on the CNAT after MBRP. There was no other difference in the CNAT between groups.
However, after MBRP, the HS group showed markedly greater progress than the LS group in improving SDS and NMR scores. As for the neurophysiological index, the frontal TBR of the HS group during the Go/No Go task of the CNAT decreased more than that of the LS group. In addition, the LS group showed a significant reduction in the gamma wave on the Go/No Go task of the CNAT. Conclusion: The QEEG data support that MBRP can restore the prefrontal function of involuntary addicts and lower their errors in executive attention tasks. Moreover, the improvement after MBRP for addicts with high addiction severity is significantly greater than for those with low severity, both in QEEG indicators and in negative emotion regulation. Future directions include investigating the reasons for the differences in efficacy among different severities of addiction.
Keywords: mindfulness, involuntary clients, QEEG, emotion regulation
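The frontal theta/beta ratio used above as the neurophysiological index is derived from EEG band powers. The following minimal Python sketch is illustrative only: the band edges and sampling rate are common conventions, not values taken from this study, and the signal is synthetic.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Sum the periodogram of `signal` over the [low, high) Hz band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return psd[(freqs >= low) & (freqs < high)].sum()

def theta_beta_ratio(eeg, fs=256.0):
    """TBR: theta-band (4-8 Hz) power divided by beta-band (13-30 Hz) power."""
    return band_power(eeg, fs, 4.0, 8.0) / band_power(eeg, fs, 13.0, 30.0)

# Synthetic check: a signal dominated by a 6 Hz rhythm with a weak 20 Hz
# component should yield a TBR well above 1.
t = np.arange(0, 4.0, 1.0 / 256)
eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)
print(theta_beta_ratio(eeg) > 1.0)  # True
```

In practice, QEEG software computes band powers with Welch averaging over artifact-free epochs; the plain periodogram here keeps the sketch dependency-free beyond NumPy.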
321 Teaching English as a Foreign Language: Insights from the Philippine Context
Authors: Arlene Villarama, Micol Grace Guanzon, Zenaida Ramos
Abstract:
This paper provides insights into teaching English as a foreign language in the Philippines. The authors reviewed relevant theories and literature and provided an analysis of the issues in teaching English in the Philippine setting in the light of these theories. The authors conducted an investigation in Bagong Barrio National High School (BBNHS), a public school in Caloocan City. The institution has a population of nearly 3,000 students. The performance of 365 randomly chosen respondents was scrutinised. The study highlighted the factors behind the success of teaching English as a foreign language to Filipino children, including the respondents' family background, surroundings, way of living, and their behavior and understanding regarding education. The results show that there is a significant relationship between the demonstrative, communal, and logical areas that affect the efficacy of introducing English as a foreign language. Filipino children, by nature, are adventurous and naturally joyful even over little things. They are born with natural skills and capabilities to discover new things. They value activities and work that ignite their curiosity. They love to be recognised and are inspired the most when given the assurance of acceptance and belongingness. Fun is the appealing influence that ignites and motivates learning. The magic word is excitement. The study reveals the many facets of the accumulation and transmission of erudition: in the introduction and administration of English as a foreign language, knowledge runs and passes through different channels of diffusion. Along the way, there are particles that act as obstructions in the protocols through which knowledge is to be gathered. Data gained from the respondents reveal a reality that is beyond one's imagination. One significant factor behind the inefficacy of understanding and using English as a foreign language is an erroneous notion gained from an old belief handed down from generation to generation.
This accepted perception about the power and influence of the use of language gives the novices either a negative or a positive notion. The investigation shows that a high number of dislikes of the use of English can be traced to beliefs about the story of how the English language came into existence. The belief that only the great and the influential have the right to use English as a means of communication kills the joy of acceptance. These notions have to be examined so as to resolve, if not eradicate, the misconceptions that lie behind the substance of the matter. The result of the authors' research depicts a substantial correlation between the emotional (demonstrative), social (communal), and intellectual (logical) areas. The focus of this paper is to bring out the right notions and disclose the misconceptions with regard to teaching English as a foreign language. It concentrates on the emotional, social, and intellectual areas of Filipino learners and how these areas affect the transmittance and accumulation of learning. The authors' aim is to formulate logical ways and techniques that would open up new beginnings in the understanding and acceptance of the subject matter.
Keywords: accumulation, behaviour, facets, misconceptions, transmittance
320 The Readaptation of the Subscale 3 of the NLit-IT (Nutrition Literacy Assessment Instrument for Italian Subjects)
Authors: Virginia Vettori, Chiara Lorini, Vieri Lastrucci, Giulia Di Pisa, Alessia De Blasi, Sara Giuggioli, Guglielmo Bonaccorsi
Abstract:
The design of the Nutrition Literacy Assessment Instrument (NLit) responds to the need for a tool to adequately assess the construct of nutrition literacy (NL), which is strictly connected to the quality of the diet and to nutritional health status. The NLit was originally developed and validated in the US context, and it was recently validated for Italian people too (NLit-IT), involving a sample of N = 74 adults. The results of the cross-cultural adaptation of the tool confirmed its validity, since it was established that the level of NL contributed to predicting the level of adherence to the Mediterranean diet (convergent validity). Additionally, the results obtained proved that the internal consistency and reliability of the NLit-IT were good (Cronbach’s alpha (ρT) = 0.78; 95% CI, 0.69–0.84; intraclass correlation coefficient (ICC) = 0.68; 95% CI, 0.46–0.85). However, Subscale 3 of the NLit-IT, “Household Food Measurement”, showed lower values of ρT and ICC (ρT = 0.27; 95% CI, 0.1–0.55; ICC = 0.19; 95% CI, 0.01–0.63) than the entire instrument. Subscale 3 includes nine items, which consist of written questions and the corresponding pictures of the meals. In particular, items 2, 3, and 8 of Subscale 3 had the lowest level of correct answers. The purpose of the present study was to identify the factors that influenced the internal consistency and reliability of Subscale 3 of the NLit-IT using the focus group methodology. A panel of seven experts was formed, involving professionals in the fields of public health nutrition, dietetics, and health promotion, all of whom were trained on the concepts of nutrition literacy and food appearance. A member of the group led the discussion, which was oriented toward identifying the reasons for the low levels of reliability and internal consistency. The members of the group discussed the level of comprehension of the items and how they could be readapted.
From the discussion, it emerged that the written questions were clear and easy to understand, but it was observed that the representations of the meals needed to be improved. First, it was decided to introduce a fork or a spoon as a size reference to better convey the dimensions of the food portion (items 1, 4, and 8). Additionally, the flat plate of items 3 and 5 should be substituted with a soup plate because, in the Italian national context, it is common to eat pasta or rice from this kind of plate. Second, specific measures should be considered for some kinds of foods, such as a brick of yogurt instead of a cup of yogurt (items 1 and 4). Lastly, it was decided to retake the photos of the meals using professional photographic techniques. In conclusion, we noted that the graphical representation of the items strongly influenced the level of participants' comprehension of the questions; moreover, the research group agreed that the level of knowledge about nutrition and food portion sizes is low in the general population.
Keywords: nutritional literacy, cross-cultural adaptation, misinformation, food design
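The internal-consistency statistic quoted above, Cronbach's alpha, can be computed directly from an item-score matrix. A minimal sketch follows; the demo responses are hypothetical, not the NLit-IT data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses from 5 participants on 4 items:
demo = [[3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
        [1, 2, 1, 2],
        [3, 3, 4, 3]]
print(round(cronbach_alpha(demo), 2))  # 0.94
```

A low alpha, as reported for Subscale 3, signals that item responses do not covary strongly with the total score, which is what motivated the item-by-item review in the focus group.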
319 Need for Elucidation of Palaeoclimatic Variability in the High Himalayan Mountains: A Multiproxy Approach
Authors: Sheikh Nawaz Ali, Pratima Pandey, P. Morthekai, Jyotsna Dubey, Md. Firoze Quamar
Abstract:
High mountain glaciers are among the most sensitive recorders of climate change, because they respond to the combined effect of snowfall and temperature. The Himalayan glaciers have been studied at a good pace during the last decade. However, owing to its large ecological diversity and geographical spread, a major part of the Indian Himalaya remains uninvestigated, and hence the palaeoclimatic patterns, as well as the chronology of past glaciations in particular, remain controversial for the entire Indian Himalayan transect. The Himalayan glaciers are nourished by two important climatic systems, viz. the southwest summer monsoon and the mid-latitude westerlies; however, the relative influence of these systems is yet to be understood. Nevertheless, the existing chronology (mostly exposure ages) indicates that, irrespective of geographical position, glaciers seem to grow during phases of enhanced Indian summer monsoon (ISM). The Himalayan mountain glaciers are referred to as the third pole or the water tower of Asia, as they form a huge reservoir of fresh water supplies for the Asian countries. Mountain glaciers are sensitive probes of the local climate and thus present an opportunity and a challenge to interpret climates of the past as well as to predict future changes. The principal objective of all palaeoclimatic studies is to develop predictive models and scenarios. However, it has been found that the glacial chronologies bracket only the major phases of climatic events, and other climatic proxies are sparse in the Himalaya. This is the reason that compilations of data on rapid climatic change during the Holocene show major gaps in this region. Sedimentation in proglacial lakes, conversely, is more continuous and hence can be used to reconstruct a more complete record of past climatic variability, modulated by the changing ice volume of the valley glacier.
The Himalayan region has numerous proglacial lacustrine deposits formed during the late Quaternary period. However, only a few such deposits have been studied so far. Therefore, it is high time that efforts were made to systematically map the moraines located in different climatic zones, reconstruct the local and regional moraine stratigraphy, and use multiple dating techniques to bracket the events of glaciation. Besides this, emphasis must be placed on carrying out multiproxy studies of the lacustrine sediments, which will provide high-resolution palaeoclimatic data from the alpine region of the Himalaya. Although the Himalayan glaciers fluctuated in accordance with changing climatic conditions (natural forcing), it is too early to arrive at any conclusion. It is crucial to generate multiproxy data sets covering wider geographical and ecological domains, taking into consideration the multiple parameters that directly or indirectly influence glacier mass balance as well as the local climate of a region.
Keywords: glacial chronology, palaeoclimate, multiproxy, Himalaya
318 Optimizing the Effectiveness of Docetaxel with Solid Lipid Nanoparticles: Formulation, Characterization, in Vitro and in Vivo Assessment
Authors: Navid Mosallaei, Mahmoud Reza Jaafari, Mohammad Yahya Hanafi-Bojd, Shiva Golmohammadzadeh, Bizhan Malaekeh-Nikouei
Abstract:
Background: Docetaxel (DTX), a potent anticancer drug derived from the European yew tree, is effective against various human cancers by inhibiting microtubule depolymerization. Solid lipid nanoparticles (SLNs) have gained attention as drug carriers for enhancing drug effectiveness and safety. SLNs, submicron-sized lipid-based particles, can passively target tumors through the "enhanced permeability and retention" (EPR) effect, providing stability, drug protection, and controlled release while being biocompatible. Methods: The SLN formulation included biodegradable lipids (Compritol and Precirol), hydrogenated soy phosphatidylcholine (H-SPC) as a lipophilic co-surfactant, and Poloxamer 188 as a non-ionic polymeric stabilizer. Two SLN preparation techniques, probe sonication and microemulsion, were assessed. Characterization encompassed the SLNs' morphology, particle size, zeta potential, matrix, and encapsulation efficiency. In vitro cytotoxicity and cellular uptake studies were conducted using mouse colorectal (C-26) and human malignant melanoma (A-375) cell lines, comparing SLN-DTX with Taxotere®. In vivo studies evaluated tumor inhibitory efficacy and survival in mice with colorectal (C-26) tumors, comparing SLN-DTX with Taxotere®. Results: SLN-DTX demonstrated stability, with an average size of 180 nm, a low polydispersity index (PDI) of 0.2, and an encapsulation efficiency of 98.0 ± 0.1%. Differential scanning calorimetry (DSC) suggested amorphous encapsulation of DTX within the SLNs. In vitro studies revealed that SLN-DTX exhibited nearly equivalent cytotoxicity to Taxotere®, depending on concentration and exposure time. Cellular uptake studies demonstrated superior intracellular DTX accumulation with SLN-DTX. In a C-26 mouse model, SLN-DTX at 10 mg/kg outperformed Taxotere® at 10 and 20 mg/kg, with no significant differences in body weight changes and a remarkably high survival rate of 60%.
Conclusion: This study concludes that SLN-DTX, prepared using the probe sonication technique, offers stability and enhanced therapeutic effects. It displayed almost the same in vitro cytotoxicity as Taxotere® but showed superior cellular uptake. In a mouse model, SLN-DTX effectively inhibited tumor growth, with 10 mg/kg outperforming even 20 mg/kg of Taxotere®, without adverse body weight changes and with higher survival rates. This suggests that SLN-DTX has the potential to reduce adverse effects while maintaining or enhancing docetaxel's therapeutic profile, making it a promising drug delivery strategy suitable for industrialization.
Keywords: docetaxel, Taxotere®, solid lipid nanoparticles, enhanced permeability and retention effect, drug delivery, cancer chemotherapy, cytotoxicity, cellular uptake, tumor inhibition
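The encapsulation efficiency figure reported above (98.0 ± 0.1%) is conventionally obtained by the indirect method: drug not recovered free in the supernatant after separating the nanoparticles is taken as entrapped. A minimal sketch with hypothetical amounts, not the study's raw data:

```python
def encapsulation_efficiency(total_drug_mg, free_drug_mg):
    """EE% by the indirect method: drug not found free in the supernatant
    is assumed to be entrapped in the lipid matrix."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

# Hypothetical: 10 mg of DTX added, 0.2 mg recovered free after separation.
print(round(encapsulation_efficiency(10.0, 0.2), 1))  # 98.0
```

The direct method (dissolving the particles and assaying the entrapped drug) is an alternative; the two should agree when drug recovery is complete.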
Procedia PDF Downloads 83317 Biotechnology Approach: A Tool of Enhancement of Sticky Mucilage of Pulicaria Incisa (Medicinal Plant) for Wounds Treatment
Authors: Djamila Chabane, Asma Rouane, Karim Arab
Abstract:
Depending on the chemical substances responsible for the pharmacological effects, a future therapeutic drug might be produced by extraction from whole plants or from callus cultures initiated from some of their parts. Optimized callus culture protocols now offer the possibility of using cell culture techniques for vegetative propagation and open the way for further studies on secondary metabolites and drug development. In Algerian traditional medicine, Pulicaria incisa (Asteraceae) is used in the treatment of everyday troubles (stomachache, headache, cold, sore throat, and rheumatic arthralgia). Field findings revealed that many healers use fresh parts (leaves, flowers) of this plant to treat skin wounds. This study aims to evaluate, on dermal wounds of animal models, the healing efficiency of an artisanal cream prepared from sticky mucilage isolated from calluses. Callus cultures were initiated from reproductive explants (young inflorescences) excised from adult plants, transferred to an MS basal medium supplemented with growth regulators, and maintained in the dark for months. Many callus types were obtained, with various colors and aspects (friable, compact). Several subcultures of the calli were performed to enhance mucilage accumulation. After extraction, the mucilage extracts were tested on animal models as follows. The wound healing potential was studied by causing dermal wounds (1 cm diameter) on the dorsolumbar part of Rattus norvegicus; different samples of the cream were applied after hair removal, on three rats each, including two controls (one treated with Vaseline and one without any treatment) and two experimental groups (experimental group 1, treated with the reference ointment Madecassol®, and experimental group 2, treated with the callus mucilage cream), for a period of seventeen days. The evolution of the healing activity was estimated by calculating the percentage reduction of the area of the wounds treated with each tested compound, compared to the controls, using AutoCAD software.
The healing effect of the cream prepared from callus mucilage reached 99.79%, compared to 99.76% for Madecassol®. In terms of treatment time, significant healing activity was observed after 17 days, comparable to that of the reference pharmaceutical product, without any wound infection. Madecassol® is effective because it stimulates and regulates the production of collagen, a fibrous matrix essential for wound healing. The mucilage extracts also showed a high capacity to heal the skin without any infection. In light of this pharmacological activity, we suggest using calluses produced by in vitro culture to produce new compounds for skin care and treatment.
Keywords: calluses, Pulicaria incisa, mucilage, wounds
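The percentage reduction of wound area measured above from the AutoCAD-traced areas follows the standard wound-contraction formula. A minimal sketch with illustrative numbers, not the study's measurements:

```python
import math

def percent_wound_contraction(initial_area_cm2, current_area_cm2):
    """Percentage reduction of the wound area relative to day 0."""
    return (initial_area_cm2 - current_area_cm2) / initial_area_cm2 * 100.0

# A 1-cm-diameter circular wound (area = pi * r^2, about 0.785 cm^2)
# that is nearly closed by day 17 (illustrative values only):
day0 = math.pi * 0.5 ** 2
day17 = 0.002
print(round(percent_wound_contraction(day0, day17), 2))  # 99.75
```

Comparing this percentage across treatment groups on the same day is what allows the cream to be benchmarked against Madecassol® and the controls.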
316 A Development of an Innovator Teachers Training Curriculum to Create Instructional Innovation According to the Active Learning Approach to Enhance Learning Achievement of Private Schools in Phayao Province
Authors: Palita Sooksamran, Katcharin Mahawong
Abstract:
This research aims to present the development of an innovator teachers training curriculum for creating instructional innovation according to the active learning approach to enhance learning achievement. The research and development process was carried out in three steps. Step 1, the study of the needs necessary to develop a training curriculum: the inquiry was conducted with a sample of teachers in private schools in Phayao province that provide basic education, using a questionnaire administered to 176 people; the sample was defined using a table of random numbers and stratified sampling, with the school as the stratum. Step 2, training curriculum development: the tools used were the developed training curriculum and curriculum assessments, with nine experts checking the appropriateness of the draft curriculum; the statistics used in the data analysis were the mean (x̄) and standard deviation (S.D.). Step 3, the study of the effectiveness of the training curriculum: a one-group pretest/posttest design was applied. The sample consisted of 35 teachers from private schools in Phayao province, who volunteered to attend on their own. The results of the research showed that: 1. The essential needs, ranked by needs index in descending order, were the selection and creation of multimedia, videos, and applications for learning management (the highest), the development of multimedia, videos, and applications for learning management, and the selection of innovative learning management techniques and methods for solving learning problems, respectively. 2. The components of the training curriculum include principles, aims, scope of content, training activities, learning materials and resources, and supervision and evaluation.
The scope of the curriculum consists of basic knowledge about learning management innovation, active learning, lesson plan design, learning materials and resources, learning measurement and evaluation, implementation of lesson plans in the classroom, and supervision and monitoring. The quality of the draft training curriculum was evaluated at the highest level. The experts suggested that the course objectives should be worded to convey outcomes. 3. Regarding the effectiveness of the training curriculum: 1) the teachers' cognitive outcomes in creating innovative learning management showed a high relative gain score; 2) the teachers' learning management ability according to the active learning approach, as assessed by two education supervisors, was overall very high; 3) among the teachers' instructional innovations based on the active learning approach to enhance learning achievement, 7 were evaluated as outstanding works and 26 passed the standard; 4) the overall learning achievement of students who learned from the 35 sample teachers showed a high relative gain score; and 5) the teachers' satisfaction with the training curriculum was at the highest level.
Keywords: training curriculum, innovator teachers, active learning approach, learning achievement
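The "relative gain score" reported for both teachers and students is commonly computed as the share of the possible improvement actually achieved. A minimal sketch assuming this standard formula; the scores are illustrative, not the study's data:

```python
def relative_gain(pre, post, max_score):
    """Relative (normalized) gain: the fraction of the possible
    improvement (max_score - pre) actually achieved, as a percentage."""
    return (post - pre) / (max_score - pre) * 100.0

# Illustrative: a teacher moving from 12/30 on the pretest to 24/30 on the
# posttest realizes two thirds of the available improvement.
print(round(relative_gain(12, 24, 30), 1))  # 66.7
```

Because the denominator shrinks as the pretest score rises, the measure rewards improvement by high and low pretest scorers on a comparable scale.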
315 Developing Computational Thinking in Early Childhood Education
Authors: Kalliopi Kanaki, Michael Kalogiannakis
Abstract:
Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students’ exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators are investigating the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just like reading, writing, and arithmetic are at the moment. In this paper, doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children’s computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational, framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children with the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while at the same time getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no specific reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two classes of second grade.
The interventions were designed with respect to the thematic units of the curriculum of the physical science courses, as a part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, 6-7 year old children worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research were met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for the proposed purpose, its ability to monitor the user's progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research that deserve attention are noted.
Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses
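The object-oriented concepts PhysGramming introduces (classes, objects, and attributes) can be illustrated with a short Python sketch of a matching-game card. This is hypothetical code for illustration, not PhysGramming's actual syntax:

```python
class GameCard:
    """A card in a matching game: a class bundles attributes (state)
    with behavior, which is the core idea young learners meet first."""

    def __init__(self, label, image):
        self.label = label    # attribute: the card's caption
        self.image = image    # attribute: the picture file shown
        self.face_up = False  # attribute: current state of the card

    def flip(self):
        """Turn the card over."""
        self.face_up = not self.face_up

    def matches(self, other):
        """Two cards match when their labels are equal."""
        return self.label == other.label

# Two objects (instances) of the same class, each with its own attribute values:
sun = GameCard("sun", "sun.png")
moon = GameCard("moon", "moon.png")
sun.flip()
print(sun.face_up, sun.matches(moon))  # True False
```

The same class/object/attribute distinction underlies a child's game in PhysGramming, even when it is expressed through visual blocks rather than typed code.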
314 Thinking Lean in ICU: A Time Motion Study Quantifying ICU Nurses’ Multitasking Time Allocation
Authors: Fatma Refaat Ahmed, Sally Mohamed Farghaly
Abstract:
Context: Intensive care unit (ICU) nurses often face pressure and constraints in their work, leading to the rationing of care when demands exceed available time and resources. Observations suggest that ICU nurses are frequently distracted from their core nursing roles by non-core tasks. This study aims to provide evidence on ICU nurses' multitasking activities and explore the association between nurses' personal and clinical characteristics and their time allocation. Research Aim: The aim of this study is to quantify the time spent by ICU nurses on multitasking activities and investigate the relationship between their personal and clinical characteristics and time allocation. Methodology: A self-observation form utilizing the "Diary" recording method was used to record the number of tasks performed by ICU nurses and the time allocated to each task category. Nurses also reported on the distractions encountered during their nursing activities. A convenience sample of 60 ICU nurses participated in the study, with each nurse observed for one nursing shift (6 hours), amounting to a total of 360 hours. The study was conducted in two ICUs within a university teaching hospital in Alexandria, Egypt. Findings: The results showed that ICU nurses completed 2,730 direct patient-related tasks and 1,037 indirect tasks during the 360-hour observation period. Nurses spent an average of 33.65 minutes on ventilator care-related tasks, 14.88 minutes on tube care-related tasks, and 10.77 minutes on inpatient care-related tasks. Additionally, nurses spent an average of 17.70 minutes on indirect care tasks per hour. The study identified correlations between nursing time and nurses' personal and clinical characteristics. Theoretical Importance: This study contributes to the existing research on ICU nurses' multitasking activities and their relationship with personal and clinical characteristics. 
The findings shed light on the significant time spent by ICU nurses on direct care for mechanically ventilated patients and on the distractions that require attention from ICU managers. Data Collection: Data were collected using self-observation forms completed by the participating ICU nurses. The forms recorded the number of tasks performed, the time allocated to each task category, and any distractions encountered during nursing activities. Analysis Procedures: The collected data were analyzed to quantify the time spent on different tasks by ICU nurses. Correlations between nursing time and nurses' personal and clinical characteristics were also examined. Question Addressed: This study addressed the question of how ICU nurses allocate their time across multitasking activities and whether there is an association between nurses' personal and clinical characteristics and time allocation. Conclusion: The findings of this study emphasize the need for a lean evaluation of ICU nurses' activities to identify and address potential gaps in patient care and distractions. Implementing lean techniques can improve efficiency, safety, clinical outcomes, and satisfaction for both patients and nurses, ultimately enhancing the quality of care and organizational performance in the ICU setting.
Keywords: motion study, ICU nurse, lean, nursing time, multitasking activities
313 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are gaining a foothold in the meat market, envisioning more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted to grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on the corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of the total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of the coefficient of determination (R²), hypothesis testing, and the pattern of residuals.
Good regression models were found, the R² values being 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to a good fat segmentation, making this visual approach to quantifying the different fat fractions in dry-cured ham slices simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious, and time-consuming chemical determinations. As a future perspective, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will therefore be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
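The fat fractions above are computed as percentages of the slice area in the segmented grey-scale image. A minimal NumPy sketch of that final area-ratio step on a toy image, using a simple intensity threshold; the full pipeline's Canny edge enhancement is omitted here, and the threshold value is illustrative:

```python
import numpy as np

def fat_percentage(gray, slice_mask, fat_threshold=200):
    """Fat pixels (bright in a grey-scale ham image) as a percentage of the
    slice area. `gray` is a 2-D uint8 image; `slice_mask` marks the pixels
    belonging to the slice (excluding background)."""
    fat = (gray >= fat_threshold) & slice_mask
    return 100.0 * fat.sum() / slice_mask.sum()

# Toy 4x4 "image": background 0, lean meat ~120, fat ~230.
gray = np.array([[0,   0,   0, 0],
                 [0, 230, 120, 0],
                 [0, 120, 230, 0],
                 [0,   0,   0, 0]], dtype=np.uint8)
slice_mask = gray > 0
print(fat_percentage(gray, slice_mask))  # 50.0
```

In the actual method, the edge-enhanced segmentation further splits the fat mask into intermuscular and intramuscular components, each reported as a percentage of the same slice area.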
312 Telogen Effluvium: A Modern Hair Loss Concern and the Interventional Strategies
Authors: Chettyparambil Lalchand Thejalakshmi, Sonal Sabu Edattukaran
Abstract:
Hair loss is one of the main issues that contemporary society is dealing with. It can be attributed to a wide range of factors, ranging from one's genetic composition to the anxiety we experience on a daily basis. Telogen effluvium [TE] is a condition that causes temporary hair loss after a stressor that shocks the body and causes the hair follicles to rest temporarily, leading to hair loss. Most frequently, women are the ones who raise these concerns. Extreme illness or trauma, an emotional or important life event, rapid weight loss and crash dieting, a severe scalp skin problem, a new medication, or ceasing hormone therapy are examples of potential causes. Men frequently do not notice hair thinning over time, but shedding may be easily identified in women with long hair, which can occasionally result in bias, because women tend to be more concerned with aesthetics and the beauty standards of society and frequently present with these concerns. The woman, who formerly possessed a full head of hair, is worried about the hair loss from her scalp. There are several cases of hair loss reported every day, and telogen effluvium is said to be the most prevalent of them all, without any hereditary risk factors. While the patient has a loss in hair volume, baldness is not the result of this condition. The exponentially growing dermatology and aesthetic medicine division has found this problem to be the most common and also the easiest to treat, since it is feasible for these patients to regrow their hair, unlike those with scarring alopecia, in which the follicle itself is damaged and non-viable. Telogen effluvium comes in two different forms: acute and chronic. Acute TE, with hair loss lasting less than three months, occurs in all age groups, while chronic TE, with hair loss lasting more than six months, is more common in those between the ages of 30 and 60. Both kinds are prevalent throughout all age groups, regardless of this predominance.
It takes between three and six months for the lost hair to grow back, although this condition is readily reversed by eliminating the stressors. After shedding their hair, patients frequently describe having noticeable fringes on their forehead. The current medical treatments for this condition include topical corticosteroids, systemic corticosteroids, minoxidil and finasteride, and CNDPA (caffeine, niacinamide, panthenol, dimethicone, and an acrylate polymer). Individual terminal hair growth was increased by 10% as a result of the novel CNDPA intervention. Botulinum toxin A, scalp micro-needling, platelet-rich plasma therapy [PRP], and sessions of multivitamin mesotherapy injections are some recently refined techniques that can partially or completely reverse hair loss. Also, supplements such as Nutrafol and biotin have been shown to produce effective outcomes. There is little evidence to support the claim that applying sulfur-rich ingredients to the scalp, such as onion juice, can help TE patients' hair regenerate.
Keywords: dermatology, telogen effluvium, hair loss, modern hair loss treatments