Search results for: flow visualization techniques
394 An Assessment of the Trend and Pattern of Vital Registration System in Shiroro Local Government Area of Niger State, Nigeria
Authors: Aliyu Bello Mohammed
Abstract:
Vital registration or registration of vital events is one of the three major sources of demographic data in Nigeria. The other two are the population census and the sample survey. The former is judged to be an indispensable source of demographic data because it provides information on vital statistics and population trends between two census periods. Various literary works, however, depict the vital registration system in Nigeria as incapable of providing accurate data for the country. The study has both theoretical and practical significance. The trends and patterns of vital registration have not received adequate research interest in Sub-Saharan Africa in general and Nigeria in particular. This has created a gap in understanding the extent and consequences of the problem in the African sub-region. Practically, the study also captures the policy interventions of government and Non-Governmental Organizations (NGOs) that would help enlighten the public on the importance of vital registration in Nigeria. Furthermore, feasible policy strategies that will enhance the trends and patterns of vital registration in society would emanate from the study. The study adopted a cross-sectional survey design and applied multi-stage sampling techniques to sample 230 respondents from the general public in the study area. The first stage involved splitting the local government into wards. The second stage involved selecting streets, while the third stage involved selecting households. In all, 6 wards were sampled for the study. The study utilized both primary and secondary sources of data. The primary sources of data used were the questionnaire, focus group discussion (FGD) and in-depth interview (IDI) guides, while the secondary sources of data were journals and books, newspapers and magazines. Twelve FGD sessions with 96 study participants and five IDI sessions with the heads of vital registration facilities were conducted. The quantitative data were analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics like tables, frequencies and percentages were employed in presenting and interpreting the data. Information from the qualitative data was transcribed and ordered into themes to ensure that outstanding points of the responses were noted. The following conclusions were drawn from the study: the available vital registration facilities are not adequate and were not evenly distributed in the study area; there is a lack of awareness and knowledge of the existence and importance of vital registration among the majority of the people in the local government; vital registration centres are distant from people’s residences; most births in the area were not registered, and even among the few births that were registered, the majority were registered after the limited period for registration. The study also reveals that socio-economic index, educational level and distance of facilities from residences are determinants of access to vital registration facilities. The study concludes by discussing the need for a reliable and accurate vital registration system if Nigeria’s vision of becoming one of the top 20 economies in the world by 2020 is to be realized.
Keywords: trends, patterns, vital registration, assessment
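Purely as an illustration, a minimal sketch (in Python rather than the SPSS package named above, and with entirely invented responses) of the frequency-and-percentage tabulation that this kind of descriptive analysis relies on:

```python
# Hypothetical illustration of a frequency/percentage table of the kind described above.
# The survey variable and response labels are invented for the example.
import pandas as pd

responses = pd.Series(
    ["registered", "not registered", "not registered", "registered late",
     "not registered", "registered", "not registered"],
    name="birth_registration_status",
)

freq = responses.value_counts()                      # absolute frequencies
pct = responses.value_counts(normalize=True) * 100   # percentages

table = pd.DataFrame({"Frequency": freq, "Percentage": pct.round(1)})
print(table)
```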
Procedia PDF Downloads 254
393 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide
Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu
Abstract:
This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin-film transistors (NW JL-TFTs) with nickel silicide contact. For the nickel silicide film, this work uses a two-step annealing to form an ultra-thin, uniform and low sheet resistance (Rs) Ni silicide film. The NW JL-TFT with nickel silicide contact exhibits good electrical properties, including high driving current (>10⁷ Å), subthreshold slope (186 mV/dec.), and low parasitic resistance. In addition, this work also compares the electrical characteristics of NW JL-TFTs with nickel silicide and non-silicide contacts. Nickel silicide techniques are widely used for high-performance devices as devices scale down, owing to the source/drain sheet resistance issue. Therefore, the self-aligned silicide (salicide) technique is presented to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, no bridging failure, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to the special doping profile, the JL-TFT has some advantages such as a lower thermal budget, which allows easier integration with high-k/metal gate than conventional MOSFETs (Metal Oxide Semiconductor Field-Effect Transistors), a longer effective channel length than conventional MOSFETs, and avoidance of complicated source/drain engineering. To solve the turn-off problem of the JL-TFT, an ultra-thin body (UTB) structure is needed to reach a fully depleted channel region in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. Therefore, this work demonstrates ultra-thin poly-Si nanowire junctionless thin-film transistors with nickel silicide contact. This work investigates the low-temperature formation of the nickel silicide layer by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. Notably, this work uses a two-step annealing to form an ultra-thin, uniform and low sheet resistance (Rs) Ni silicide film. The first step promoted Ni diffusion through a thin interfacial amorphous layer. Then, the unreacted metal was lifted off after the first step. The second step was an annealing to lower the sheet resistance and firmly merge the silicide phase. The ultra-thin poly-Si nanowire junctionless thin-film transistor (NW JL-TFT) with nickel silicide contact is demonstrated, which reveals high driving current (>10⁷ Å), subthreshold slope (186 mV/dec.), and low parasitic resistance. In the silicide film analysis, the second annealing step was applied to lower the sheet resistance and firmly merge the silicide phase. In short, the NW JL-TFT with nickel silicide contact exhibits competitive short-channel behavior and improved drive current.
Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide
Procedia PDF Downloads 238
392 The Women’s Empowerment and Children’s Well-Being in Italy: An Empirical Research Starting From the Capability Approach
Authors: Alba Francesca Canta
Abstract:
The present is one of those times when what normally seems to constitute a reason for living vanishes, particularly in times of crisis, during which certainties of all times crumble and critical issues emerge, especially in already problematic areas such as the role of women and children. This paper aims to explore the issue of gender and highlight the importance of education for people’s development and well-being. The study is part of the broader framework of the capability approach, a multidimensional approach based on the need to consider a person’s wealth by virtue of their opportunity and freedom to live a ‘life of worth’. The results of empirical research conducted in 2020 will be presented, the main objective of which was to measure, through qualitative (projective techniques, focus groups, interviews with key informants) and quantitative (questionnaire) methods, the level of empowerment of women in two Italian territories and the consequent well-being of their children. By means of a relationship study, the present research results show that a higher level of women’s empowerment corresponds to a higher level of children’s well-being in a positive, virtuous process. The opportunity structure and education are the main driving forces behind both women’s empowerment and children’s well-being, emphasizing the importance of education in gender culture as a key factor for the development of the whole society. Among all the traumatic events that broke the harmony of the world and caused an abrupt turn in all areas of society, the crises of democracy and education are some of the harshest. Nevertheless, education continues to be a fundamental pillar of Global Development Agendas, and above all, democratic education is the main factor in the development of a generative society, capable of forming people who know how to live in society. In this context, recovering democratic and inclusive education can be the key to a breakthrough. In the capability approach, Sen and other scholars consider education from two different perspectives: a. education as a fundamental right capable of influencing other real fields of people’s lives (e.g., being educated to prevent illness, to vote, etc.) and b. widespread communitarian education, tolerant, inclusive, democratic, and respectful, capable of forming human beings. This kind of educational system can directly lead to a general process of gender education that presupposes respect for essential principles: equality, uniqueness, and the participation of all in the processes of defining a democratic society. Many practices of women’s and children’s exclusion essentially derive from social factors (norms, values, quality of institutions, relations of power, educational and cultural practices) that can build strong barriers. Respect for these principles and education in gender culture could foster the renewal of society and the acquisition of fundamental skills for a generative and inclusive society, such as critical skills, cosmopolitan skills, and narrative imagination.
Keywords: capability approach, children’s well-being, education, women’s empowerment
Procedia PDF Downloads 67
391 Improving Patient Outcomes for Aspiration Pneumonia
Authors: Mary Farrell, Maria Soubra, Sandra Vega, Dorothy Kakraba, Joanne Fontanilla, Moira Kendra, Danielle Tonzola, Stephanie Chiu
Abstract:
Pneumonia is the most common infectious cause of hospitalizations in the United States, with more than one million admissions annually and costs of $10 billion every year, making it the 8th leading cause of death. Aspiration pneumonia is an aggressive type of pneumonia that results from inhalation of oropharyngeal secretions and/or gastric contents and is preventable. The authors hypothesized that an evidence-based aspiration pneumonia clinical care pathway could reduce 30-day hospital readmissions and mortality rates, while improving the overall care of patients. We conducted a retrospective chart review on 979 patients discharged with aspiration pneumonia from January 2021 to December 2022 at Overlook Medical Center. The authors identified patients who were coded with aspiration pneumonia and/or stable sepsis. Secondarily, we identified 30-day readmission rates for aspiration pneumonia from a skilled nursing facility (SNF). The Aspiration Pneumonia Clinical Care Pathway starts in the emergency department (ED) with the initiation of antimicrobials within 4 hours of admission and early recognition of aspiration. Once this is identified, a swallow test is initiated by the bedside nurse, and if the patient demonstrates dysphagia, they are maintained on strict nothing by mouth (NPO) followed by a speech and language pathologist (SLP) referral for an appropriate modified diet recommendation. Aspiration prevention techniques included the avoidance of straws, 45-degree positioning, no talking during meals, taking small bites, placement of the aspiration wrist band, and consuming meals out of the bed in a chair. Nursing education was conducted with a newly created online learning module about aspiration pneumonia. The authors identified 979 patients, with an average age of 73.5 years old, who were diagnosed with aspiration pneumonia on the index hospitalization. These patients were reviewed for a 30-day readmission for aspiration pneumonia or stable sepsis, and mortality rates from January 2021 to December 2022 at Overlook Medical Center (OMC). The 30-day readmission rates were significantly lower in the cohort that received the clinical care pathway (35.0% vs. 27.5%, p = 0.011). When evaluating the mortality rates in the pre- and post-intervention cohorts, the authors discovered that the mortality rates were lower in the post-intervention cohort (23.7% vs. 22.4%, p = 0.61). Mortality among non-white (self-reported as non-white) patients was lower in the post-intervention cohort (34.4% vs. 21.0%, p = 0.05). Patients who reported being current smokers/vapers in the pre and post cohorts had increased mortality rates (5.9% vs. 22%). There was a decrease in mortality for the male population but an increase in mortality for women in the pre and post cohorts (19% vs. 25%). The authors attributed this increase in mortality in the post-intervention cohort to more active smokers, more former smokers, and more patients being admitted from a SNF. This research identified that implementation of an Aspiration Pneumonia Clinical Care Pathway showed a statistically significant decrease in readmission rates and in mortality rates in non-whites. The 30-day readmission rates were lower in the cohort that received the clinical care pathway (35.0% vs. 27.5%, p = 0.011).
Keywords: aspiration pneumonia, mortality, quality improvement, 30-day pneumonia readmissions
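As an illustration only, a minimal sketch of how a pre- versus post-intervention comparison of readmission proportions can be tested. The cohort sizes below are invented (the abstract reports only the 979-patient total and the rates), and the abstract does not state which statistical test was used, so this will not reproduce the exact p-value.

```python
# Hypothetical sketch of comparing 30-day readmission proportions between the
# pre- and post-intervention cohorts. Cohort sizes are NOT given in the abstract;
# the counts below are invented purely to illustrate the calculation.
import numpy as np
from scipy.stats import chi2_contingency

n_pre, n_post = 490, 489            # assumed split of the 979 patients
rate_pre, rate_post = 0.350, 0.275  # readmission rates reported in the abstract

readmit_pre = round(n_pre * rate_pre)
readmit_post = round(n_post * rate_post)

table = np.array([
    [readmit_pre,  n_pre - readmit_pre],    # pre-intervention: readmitted / not
    [readmit_post, n_post - readmit_post],  # post-intervention: readmitted / not
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```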
Procedia PDF Downloads 63
390 Training for Safe Tree Felling in the Forest with Symmetrical Collaborative Virtual Reality
Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti
Abstract:
In forestry, one of the most common pieces of equipment still used today for pruning, felling, and processing trees is the chainsaw. However, chainsaw use entails serious dangers and one of the highest accident rates in both professional and non-professional work. Felling is proportionally the most dangerous phase, both in severity and frequency, because of the risk of being hit by the tree the operator wants to cut down. To avoid this, a correct sequence of chainsaw cuts must be taught for the different conditions of the tree. Virtual reality (VR) makes it possible to virtually simulate chainsaw use without danger of injury. The limitations of the existing applications are as follows. The existing platforms are not symmetrically collaborative because the trainee is only in virtual reality, and the trainer can only see the virtual environment on a laptop or PC, which results in an inefficient teacher-learner relationship. Moreover, most applications only involve the use of a virtual chainsaw, and the trainee thus cannot feel the real weight and inertia of a real chainsaw. Finally, existing applications simulate only a few cases of tree felling. The objectives of this research were to implement and test a symmetrical collaborative training application based on VR and mixed reality (MR) with an overlap between real and virtual chainsaws in MR. The research and training platform was developed for the Meta Quest 2 head-mounted display. The research and training platform application is based on the Unity 3D engine and the Presence Platform Interaction SDK (PPI-SDK) developed by Meta. PPI-SDK avoids the use of controllers and enables hand tracking and MR. With the combination of these two technologies, it was possible to overlay a virtual chainsaw with a real chainsaw in MR and synchronize their movements in VR. This ensures that the user feels the weight of the actual chainsaw, tightens the muscles, and performs the appropriate movements during the test, allowing the user to learn the correct body posture. The chainsaw works only if the right sequence of cuts is made to fell the tree. Contact detection is done by Unity's physics system, which allows the interaction of objects that simulate real-world behavior. Each cut of the chainsaw is defined by a so-called collider, and the felling of the tree can only occur if the colliders are activated in the right order, simulating a safe felling technique. In this way, the user can learn how to use the chainsaw safely. The system is also multiplayer, so the student and the instructor can experience VR together in a symmetrical and collaborative way. The platform simulates the following tree-felling situations with safe techniques: felling a tree tilted forward, felling a medium-sized tree tilted backward, felling a large tree tilted backward, sectioning the trunk on the ground, and cutting branches. The application is being evaluated on a sample of university students through a special questionnaire. The results are expected to test both the increase in learning compared to a theoretical lecture and the immersiveness and telepresence of the platform.
Keywords: chainsaw, collaborative symmetric virtual reality, mixed reality, operator training
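As an illustration only: the actual application implements the cut-ordering rule with Unity colliders; the following is a language-agnostic Python sketch of that ordering logic, with hypothetical cut names and sequence.

```python
# Sketch of the cut-ordering logic described above: the tree is felled only if the
# cuts are registered in the prescribed safe sequence. The cut names and their
# order are hypothetical; the real application implements this with Unity colliders.
SAFE_SEQUENCE = ["top_notch_cut", "bottom_notch_cut", "felling_cut"]

class FellingExercise:
    def __init__(self, safe_sequence):
        self.safe_sequence = safe_sequence
        self.performed = []

    def register_cut(self, cut_name):
        """Called when the virtual chainsaw activates a cut collider."""
        expected = self.safe_sequence[len(self.performed)]
        if cut_name != expected:
            self.performed.clear()  # wrong order: the exercise restarts
            return f"Unsafe cut '{cut_name}' (expected '{expected}'). Sequence reset."
        self.performed.append(cut_name)
        if self.performed == self.safe_sequence:
            return "Safe sequence completed: the tree falls."
        return f"Cut '{cut_name}' accepted, continue."

exercise = FellingExercise(SAFE_SEQUENCE)
for cut in ["top_notch_cut", "bottom_notch_cut", "felling_cut"]:
    print(exercise.register_cut(cut))
```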
Procedia PDF Downloads 107
389 Distributed Listening in Intensive Care: Nurses’ Collective Alarm Responses Unravelled through Auditory Spatiotemporal Trajectories
Authors: Michael Sonne Kristensen, Frank Loesche, James Foster, Elif Ozcan, Judy Edworthy
Abstract:
Auditory alarms play an integral role in intensive care nurses’ daily work. Most medical devices in the intensive care unit (ICU) are designed to produce alarm sounds in order to make nurses aware of immediate or prospective safety risks. The utilisation of sound as a carrier of crucial patient information is highly dependent on nurses’ presence, both physical and mental. For ICU nurses, especially the ones who work with stationary alarm devices at the patient bed space, it is a challenge to display ‘appropriate’ alarm responses at all times as they have to navigate with great flexibility in a complex work environment. While being primarily responsible for a small number of allocated patients, they are often required to engage with other nurses’ patients, relatives, and colleagues at different locations inside and outside the unit. This work explores the social strategies used by a team of nurses to comprehend and react to the information conveyed by the alarms in the ICU. Two main research questions guide the study: To what extent do alarms from a patient bed space reach the relevant responsible nurse by direct auditory exposure? By which means do responsible nurses get informed about their patients’ alarms when not directly exposed to the alarms? A comprehensive video-ethnographic field study was carried out to capture and evaluate alarm-related events in an ICU. The study involved close collaboration with four nurses who wore eye-level cameras and ear-level binaural audio recorders during several work shifts. At all times, the entire unit was monitored by multiple video and audio recorders. From a data set of hundreds of hours of recorded material, information about the nurses’ location, social interaction, and alarm exposure at any point in time was coded in a multi-channel replay interface. The data show that responsible nurses’ direct exposure and awareness of the alarms of their allocated patients vary significantly depending on workload, social relationships, and the location of the patient’s bed space. Distributed listening is deliberately employed by the nursing team as a social strategy to respond adequately to alarms, but the patterns of information flow prompted by alarm-related events are not uniform. Auditory Spatiotemporal Trajectory (AST) is proposed as a methodological label to designate the integration of temporal, spatial and auditory load information. As a mixed-method metric, it provides tangible evidence of how nurses’ individual alarm-related experiences differ from one another and from stationary points in the ICU. Furthermore, it is used to demonstrate how alarm-related information reaches the individual nurse through principles of social and distributed cognition, and how that information relates to the actual alarm event. Thereby it bridges a long-standing gap in the literature on medical alarm utilisation between, on the one hand, initiatives to measure objective data of the medical sound environment without consideration for any human experience, and, on the other hand, initiatives to study subjective experiences of the medical sound environment without detailed evidence of the objective characteristics of the environment.
Keywords: auditory spatiotemporal trajectory, medical alarms, social cognition, video-ethnography
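A minimal sketch, with invented timestamps and identifiers, of how time-stamped location intervals and alarm events could be joined to flag direct auditory exposure of the responsible nurse; it is only an illustration of the idea, not the study's actual coding scheme.

```python
# Hypothetical sketch: join time-stamped location intervals with alarm events to
# flag whether the responsible nurse was at the bed space when an alarm sounded.
# Timestamps, nurse IDs and bed spaces are invented for illustration.
from datetime import datetime

location_log = [  # (nurse, location, start, end)
    ("nurse_A", "bed_3",    datetime(2017, 5, 2, 9, 0),  datetime(2017, 5, 2, 9, 20)),
    ("nurse_A", "corridor", datetime(2017, 5, 2, 9, 20), datetime(2017, 5, 2, 9, 35)),
]

alarms = [  # (bed_space, responsible_nurse, time)
    ("bed_3", "nurse_A", datetime(2017, 5, 2, 9, 10)),
    ("bed_3", "nurse_A", datetime(2017, 5, 2, 9, 30)),
]

def directly_exposed(nurse, bed_space, when):
    """True if the nurse was at the alarming bed space at the alarm time."""
    return any(n == nurse and loc == bed_space and start <= when <= end
               for n, loc, start, end in location_log)

for bed, nurse, t in alarms:
    exposed = directly_exposed(nurse, bed, t)
    print(f"{t.time()} alarm at {bed}: responsible {nurse} directly exposed = {exposed}")
```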
Procedia PDF Downloads 191
388 Social and Economic Aspects of Unlikely but Still Possible Welfare to Work Transitions from Long-Term Unemployed
Authors: Andreas Hirseland, Lukas Kerschbaumer
Abstract:
In Germany, over the past years there have constantly been about one million long-term unemployed who did not benefit from the prospering labor market, while most short-term unemployed did. Instead, they are continuously dependent on welfare and sometimes precarious short-term employment, experiencing work poverty. Long-term unemployment thus turns into a main obstacle to regular employment, especially if accompanied by other impediments such as low-level education (school/vocational), poor health (especially chronic illness), advanced age (older than fifty), immigrant status, motherhood or engagement in care for other relatives. Almost two thirds of all welfare recipients have multiple impediments which hinder a successful transition from welfare back to sustainable and sufficient employment. Hiring them is often considered an investment too risky for employers. Therefore, formal application schemes based on formal qualification certificates and vocational biographies might reduce employers’ risks but at the same time are not helpful for the long-term unemployed and welfare recipients. The panel survey ‘Labor market and social security’ (PASS; ~15,000 respondents in ~10,000 households), carried out by the Institute for Employment Research (the research institute of the German Federal Labor Agency), shows that their chance to get back to work tends to fall to nil. Only 66 cases of such unlikely transitions could be observed. In a sequential explanatory mixed-method study, the very scarce ‘success stories’ of unlikely transitions from long-term unemployment to work were explored by qualitative inquiry: in-depth interviews with a focus on biography, accompanied by qualitative network techniques, in order to get a more detailed insight into the relevant actors involved in the processes which promote the transition from being a welfare recipient to work. There is strong evidence that sustainable transitions are influenced by biographical resources like habits of network use, a set of informal skills and particularly a resilient way of dealing with obstacles, combined with contextual factors, rather than by job-placement procedures promoted by Job-Centers according to activation rules or by following formal paths of application. On the employer’s side, small and medium-sized enterprises are often found to give job opportunities to a wider variety of applicants, often based on a slowly but steadily growing relationship leading to employment. According to these results, it is possible to show and discuss some limitations of (German) activation policies targeting welfare dependency and long-term unemployment. Based on these findings, indications for more supportive small-scale measures in the field of labor-market policies are suggested to help the long-term unemployed with multiple impediments overcome their situation.
Keywords: against-all-odds, economic sociology, long-term unemployment, mixed-methods
Procedia PDF Downloads 238
387 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas
Authors: Michel Soto Chalhoub
Abstract:
Building code-related literature provides recommendations on normalizing approaches to the calculation of the dynamic properties of structures. Most building codes make a distinction among types of structural systems, construction material, and configuration through a numerical coefficient in the expression for the fundamental period. The period is then used in normalized response spectra to compute base shear. The typical parameter used in simplified code formulas for the fundamental period is the overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared to those built with homogeneous materials such as steel, or with concrete under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying numbers of bays, numbers of floors, overall building heights, and individual story heights. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis where the computed period served as the dependent variable, while five building properties served as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings in special conditions due to the level of concrete damage, aging, or materials quality control during construction. Overall results of the present analysis show that simplified code formulas for fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. The conclusion above is confirmed by the analytical model, where fundamental periods were computed using numerical techniques and eigenvalue solutions. This recommendation is particularly relevant to code upgrades in less developed countries where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging. However, we excluded this study from the present paper and left it for future research as it has its own peculiarities and requires a different type of analysis.
Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra
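For illustration only, a minimal sketch of the two period estimates compared above, assuming a simple lumped-mass shear-building idealization. The masses, stiffnesses and the height-based code-formula coefficients are placeholders, not the values used in the study.

```python
# Sketch of the fundamental-period computation described above, assuming a simple
# lumped-mass shear-building model. Masses, stiffnesses and the code-formula
# coefficients are illustrative placeholders, not the values used in the study.
import numpy as np
from scipy.linalg import eigh

n_floors = 5
story_height = 3.0                      # m (assumed)
m = 3.0e5                               # kg per floor (assumed)
k = 2.5e8                               # N/m story stiffness (assumed)

M = np.eye(n_floors) * m
K = np.zeros((n_floors, n_floors))
for i in range(n_floors):               # assemble tridiagonal shear-building stiffness
    K[i, i] = 2 * k if i < n_floors - 1 else k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

eigvals, _ = eigh(K, M)                 # solves K*phi = omega^2 * M * phi
omega1 = np.sqrt(eigvals[0])            # lowest circular frequency
T_eigen = 2 * np.pi / omega1

H = n_floors * story_height
Ct, x = 0.075, 0.75                     # placeholder code-formula coefficients, T = Ct*H^x
T_code = Ct * H ** x

print(f"Eigenvalue period: {T_eigen:.3f} s, simplified-formula period: {T_code:.3f} s")
```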
Procedia PDF Downloads 232
386 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector
Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini
Abstract:
Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face different issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally-occurring radioactive tracers, but also for the nuclear industry linked to the mining sector. In geological samples, the location and identification of the radioactive-bearing minerals at the thin-section scale remain a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most of the natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is interesting for relating radionuclide concentrations to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method has been developed thanks to Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy by a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from the ²³⁸U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to the efficiency of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within that area. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its decay products in geo-materials by coupling it with scanning electron microscope characterizations. The direct application of this dual modality (energy-position) of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of ²²⁶Ra radioactivity are now being actively studied.
Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products
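As a side note, the quoted relative resolution can be converted into an absolute peak width; a short arithmetic sketch follows (the Gaussian peak shape is our assumption, not stated above).

```python
# Converting the quoted relative resolution into an absolute peak width.
# Assumes an approximately Gaussian peak shape (our assumption).
energy_keV = 4647.0           # alpha line at which the resolution is quoted
resolution = 0.172            # 17.2 % FWHM

fwhm_keV = resolution * energy_keV
sigma_keV = fwhm_keV / 2.355  # FWHM = 2*sqrt(2*ln 2)*sigma for a Gaussian

print(f"FWHM ~ {fwhm_keV:.0f} keV, sigma ~ {sigma_keV:.0f} keV")
```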
Procedia PDF Downloads 151
385 Precise Determination of the Residual Stress Gradient in Composite Laminates Using a Configurable Numerical-Experimental Coupling Based on the Incremental Hole Drilling Method
Authors: A. S. Ibrahim Mamane, S. Giljean, M.-J. Pac, G. L’Hostis
Abstract:
Fiber reinforced composite laminates are particularly subject to residual stresses due to their heterogeneity and the complex chemical, mechanical and thermal mechanisms that occur during their processing. Residual stresses are now well known to cause damage accumulation, shape instability, and behavior disturbance in composite parts. Many works exist in the literature on techniques for minimizing residual stresses, mainly in thermosetting and thermoplastic composites. To study in depth the influence of processing mechanisms on the formation of residual stresses and to minimize them by establishing a reliable correlation, it is essential to be able to measure very precisely the profile of residual stresses in the composite. Residual stresses are important data to consider when sizing composite parts and predicting their behavior. The incremental hole drilling method is very effective in measuring the gradient of residual stresses in composite laminates. This method is semi-destructive and consists of incrementally drilling a hole through the thickness of the material and measuring the relaxation strains around the hole for each increment using three strain gauges. These strains are then converted into residual stresses using a matrix of coefficients. These coefficients, called calibration coefficients, depend on the diameter of the hole and the dimensions of the gauges used. The reliability of the incremental hole drilling method depends on the accuracy with which the calibration coefficients are determined. These coefficients are calculated using a finite element model. The samples’ features and the experimental conditions must be considered in the simulation. Any mismatch can lead to inadequate calibration coefficients, thus introducing errors in the residual stresses. Several calibration coefficient correction methods exist for isotropic materials, but there is a lack of information on this subject concerning composite laminates. In this work, a Python program was developed to automatically generate the adequate finite element model. This model allowed us to perform a parametric study to assess the influence of experimental errors on the calibration coefficients. The results highlighted the sensitivity of the calibration coefficients to the considered errors and gave an order of magnitude of the precision required on the experimental device to obtain reliable measurements. On the basis of these results, improvements were proposed to the experimental device. Furthermore, a numerical method was proposed to correct the calibration coefficients for different types of materials, including thick composite parts for which the analytical approach is too complex. This method consists of taking the experimental errors into account in the simulation. Accurate measurement of the experimental errors (such as the eccentricity of the hole, the angular deviation of the gauges from their theoretical position, or errors in the increment depth) is therefore necessary. The aim is to determine the residual stresses more precisely and to expand the validity domain of the incremental hole drilling technique.
Keywords: fiber reinforced composites, finite element simulation, incremental hole drilling method, numerical correction of the calibration coefficients, residual stresses
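Purely as an illustration, a minimal numerical sketch of the strain-to-stress conversion that the calibration coefficients serve. The coefficient matrix and strain readings below are invented; in the method described above, the coefficients come from the finite element model.

```python
# Sketch of converting measured relaxation strains into residual stresses through a
# calibration-coefficient matrix, as described above. The coefficient values and
# strain readings are hypothetical.
import numpy as np

# Relaxation strains from the three gauges for one increment (invented values)
eps = np.array([-85.0, -40.0, 25.0]) * 1e-6

# Hypothetical 3x3 calibration matrix linking the in-plane stresses
# (sigma_x, sigma_y, tau_xy) to the three gauge strains: eps = A @ sigma
A = np.array([
    [-3.2e-12, -0.9e-12,  0.0],
    [-2.0e-12, -2.0e-12, -1.5e-12],
    [-0.9e-12, -3.2e-12,  0.0],
])  # units: 1/Pa (invented magnitudes)

sigma = np.linalg.solve(A, eps)   # residual stresses for this increment, in Pa
print("sigma_x, sigma_y, tau_xy [MPa]:", np.round(sigma / 1e6, 1))
```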
Procedia PDF Downloads 132
384 Functional Ingredients from Potato By-Products: Innovative Biocatalytic Processes
Authors: Salwa Karboune, Amanda Waglay
Abstract:
Recent studies indicate that health-promoting functional ingredients and nutraceuticals can help support and improve overall public health, which is timely given the aging of the population and the increasing cost of health care. The development of novel ‘natural’ functional ingredients is increasingly challenging. Biocatalysis offers powerful approaches to achieve this goal. Our recent research has been focusing on the development of innovative biocatalytic approaches towards the preparation of protein isolates from potato by-products and the generation of peptides. Potato is a vegetable whose high-quality proteins are underestimated. In addition to their high proportion of essential amino acids, potato proteins possess angiotensin-converting enzyme-inhibitory potency, an ability to reduce plasma triglycerides associated with a reduced risk of atherosclerosis, and an ability to stimulate the release of the appetite-regulating hormone CCK. Potato proteins have long been considered not economically feasible due to the low protein content (27% dry matter) found in the tuber (Solanum tuberosum). However, potatoes rank as the second-largest protein-supplying crop grown per hectare, following wheat. Potato proteins include patatin (40-45 kDa), protease inhibitors (5-25 kDa), and various high-MW proteins. Non-destructive techniques for the extraction of proteins from potato pulp and for the generation of peptides are needed in order to minimize functional losses and enhance quality. A promising approach for isolating the potato proteins was developed, which involves the use of multi-enzymatic systems containing selected glycosyl hydrolase enzymes that work synergistically to open the plant cell wall network. This enzymatic approach is advantageous due to: (1) the use of milder reaction conditions, (2) the high selectivity and specificity of enzymes, (3) the low cost, and (4) the ability to market natural ingredients. Another major benefit of this enzymatic approach is the elimination of a costly purification step; indeed, these multi-enzymatic systems have the ability to isolate proteins while fractionating them, due to their specificity and selectivity with minimal proteolytic activities. The isolated proteins were used for the enzymatic generation of active peptides. In addition, they were applied in a reduced-gluten cookie formulation, as consumers are placing a high demand on easy, ready-to-eat snack foods with high nutritional quality and limited to no gluten incorporation. The addition of potato protein significantly improved the textural hardness of reduced-gluten cookies, making them more comparable to those made with wheat flour alone. The presentation will focus on our recent ‘proof-of-principle’ results illustrating the feasibility and the efficiency of new biocatalytic processes for the production of innovative functional food ingredients from potato by-products, whose potential health benefits are increasingly being recognized.
Keywords: biocatalytic approaches, functional ingredients, potato proteins, peptides
Procedia PDF Downloads 380
383 Assessment of Groundwater Potential Sampled in Hand Dug Wells and Boreholes in Ado-Ekiti, Southwestern Nigeria
Authors: A. J. Olatunji, Adebolu Temitope Johnson
Abstract:
Groundwater samples were collected randomly from hand-dug wells and boreholes in parts of the Ado Ekiti metropolis and were subjected to quality assessment and characterization. Physicochemical analyses, which include the in-situ parameters (pH, turbidity, and electrical conductivity) and laboratory analysis of selected ionic concentrations, were carried out following standard methods. The hydrochemistry of the present study revealed relative mean concentrations of cations in the order Ca²⁺ > Na⁺ > Mg²⁺ > Cu²⁺ > Fe > Mn²⁺ and that of anions in the order Cl⁻ > NO₃⁻ > SO₄²⁻ > F⁻, respectively, considering the World Health Organization (WHO) standard range of values for potable water. The results show that the values of certain parameters (Total Dissolved Solids (TDS), manganese, calcium, magnesium, fluoride, and sulphate) were below the Highest Desirable Level of the standards, while the values of some other parameters (pH, electrical conductivity, turbidity, alkalinity, sodium, copper, chloride, and total hardness) were within the range between the Highest Desirable Level (HDL) and the Maximum Permissible Level (MPL) of the World Health Organization (WHO) drinking water standards. The reduction in the mean Total Dissolved Solids (TDS) concentration of most borehole samples is consistent with the fact that the water had been allowed to settle in overhead tanks before usage; we agreed in the course of sampling to take the samples that way because that represents what the people actually consume. It also indicates why there was a slightly higher concentration of these soluble ions in hand-dug well samples than in borehole samples, with the exception of borehole sample seven (BH7), because BH7 uses a mono-pumping system. These in-situ parameters and ionic concentrations were further displayed and/or represented on bar charts along with the WHO standards for better pictorial clarification. Deductions from field observation indices revealed the imprints of natural weathering, ion-exchange processes, and anthropogenic activities influencing groundwater quality. A strong degree of association was found to exist between sodium and chloride ions in both hand-dug well and borehole groundwater samples through the use of Pearson’s correlation coefficient; this association is further supported by the chemistry of the parent bedrock of the study area, because the chemistry of groundwater is a replica of its host rock. The correlation of those two ions must have begun in the period of mountain building, indicating an identical source from which they were released to the groundwater. Moreover, comparing the ionic species concentrations of all samples with the WHO standards, there were no anomalous increases or decreases in the laboratory analysis results; this simply reveals an insignificant state of pollution of the groundwater. The study and its sampling techniques were not set up to target the likely area and extent of groundwater pollution but its potability. It could be said that the samples were safe for human consumption.
Keywords: groundwater, physicochemical parameters, ionic concentrations, WHO standards
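As an illustration only, a minimal sketch of the Pearson correlation check between sodium and chloride mentioned above, with invented concentration values.

```python
# Sketch of the sodium-chloride association check using Pearson's correlation
# coefficient. The concentration values (mg/L) are invented for illustration.
from scipy.stats import pearsonr

sodium   = [12.4, 18.1, 9.8, 22.5, 15.3, 30.2, 11.0]   # Na+ (mg/L), hypothetical
chloride = [20.1, 29.5, 15.7, 37.8, 24.9, 49.6, 18.2]  # Cl- (mg/L), hypothetical

r, p_value = pearsonr(sodium, chloride)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
```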
Procedia PDF Downloads 42
382 Influence of Iron Content in Carbon Nanotubes on the Intensity of Hyperthermia in the Cancer Treatment
Authors: S. Wiak, L. Szymanski, Z. Kolacinski, G. Raniszewski, L. Pietrzak, Z. Staniszewska
Abstract:
The term ‘cancer’ is given to a collection of related diseases that may affect any part of the human body. It is a pathological behaviour of cells arising from a breakdown in the processes that control the proliferation, differentiation, and death of particular cells. Although cancer is commonly considered a modern disease, there are beliefs that the drastically growing number of new cases can be linked to the extensively prolonged life expectancy and enhanced techniques for cancer diagnosis. Magnetic hyperthermia therapy is a novel approach to cancer treatment, which may greatly contribute to a higher efficiency of the therapy. Employing carbon nanotubes as nanocarriers for magnetic particles, it is possible to decrease the toxicity and invasiveness of the treatment by surface functionalisation. Despite appearing only in recent years, magnetic particle hyperthermia has already become of the highest interest in the scientific and medical environment. The reason why hyperthermia therapy brings so much hope for the future treatment of cancer lies in the effect that it produces in malignant cells. Subjecting them to thermal shock results in the activation of numerous degradation processes inside and outside the cell. The heating process initiates mechanisms of DNA destruction, protein denaturation and induction of cell apoptosis, which may lead to tumour shrinkage, and in some cases, it may even cause the complete disappearance of cancer. The factors which have the major impact on the final efficiency of the treatment include the temperatures generated inside the tissues, the time of exposure to the heating process, and the character of an individual cancer cell type. The vast majority of cancer cells are characterised by lower pH, persistent hypoxia and lack of nutrients, which can be associated with abnormal microvasculature. Since these conditions are not present in healthy tissues, such tissues should not be seriously affected by the elevation of the temperature. The aim of this work is to investigate the influence of the iron content of iron-filled carbon nanotubes on the desired nanoparticles for cancer therapy. In the article, the development and demonstration of the method and the model device for the hyperthermic selective destruction of cancer cells are presented. This method was based on the synthesis and functionalization of carbon nanotubes serving as ferromagnetic material nanocontainers. The methodology for producing the ferromagnetic carbon nanocontainers (FNCs) includes the synthesis of carbon nanotubes, chemical and physical characterization, increasing the content of the ferromagnetic material, and biochemical functionalization involving the attachment of the key addresses. The ferromagnetic nanocontainers were synthesised in a CVD and microwave plasma system. The research work has been financed from the budget of science as research project No. PBS2/A5/31/2013.
Keywords: hyperthermia, carbon nanotubes, colon cancer cells, radio frequency field
Procedia PDF Downloads 123
381 Understanding the Impact of Out-of-Sequence Thrust Dynamics on Earthquake Mitigation: Implications for Hazard Assessment and Disaster Planning
Authors: Rajkumar Ghosh
Abstract:
Earthquakes pose significant risks to human life and infrastructure, highlighting the importance of effective earthquake mitigation strategies. Traditional earthquake modelling and mitigation efforts have largely focused on the primary fault segments and their slip behaviour. However, earthquakes can exhibit complex rupture dynamics, including out-of-sequence thrust (OOST) events, which occur on secondary or subsidiary faults. This abstract examines the impact of OOST dynamics on earthquake mitigation strategies and their implications for hazard assessment and disaster planning. OOST events challenge conventional seismic hazard assessments by introducing additional fault segments and potential rupture scenarios that were previously unrecognized or underestimated. Consequently, these events may increase the overall seismic hazard in affected regions. The study reviews recent case studies and research findings that illustrate the occurrence and characteristics of OOST events. It explores the factors contributing to OOST dynamics, such as stress interactions between fault segments, fault geometry, and mechanical properties of fault materials. Moreover, it investigates the potential triggers and precursory signals associated with OOST events to enhance early warning systems and emergency response preparedness. The abstract also highlights the significance of incorporating OOST dynamics into seismic hazard assessment methodologies. It discusses the challenges associated with accurately modelling OOST events, including the need for improved understanding of fault interactions, stress transfer mechanisms, and rupture propagation patterns. Additionally, the abstract explores the potential for advanced geophysical techniques, such as high-resolution imaging and seismic monitoring networks, to detect and characterize OOST events. Furthermore, the abstract emphasizes the practical implications of OOST dynamics for earthquake mitigation strategies and urban planning. It addresses the need for revising building codes, land-use regulations, and infrastructure designs to account for the increased seismic hazard associated with OOST events. It also underscores the importance of public awareness campaigns to educate communities about the potential risks and safety measures specific to OOST-induced earthquakes. This study sheds light on the impact of out-of-sequence thrust dynamics on earthquake mitigation. By recognizing and understanding OOST events, researchers, engineers, and policymakers can improve hazard assessment methodologies, enhance early warning systems, and implement effective mitigation measures. By integrating knowledge of OOST dynamics into urban planning and infrastructure development, societies can strive for greater resilience in the face of earthquakes, ultimately minimizing the potential for loss of life and infrastructure damage.
Keywords: earthquake mitigation, out-of-sequence thrust, seismic, satellite imagery
Procedia PDF Downloads 90
380 Advancements in Electronic Sensor Technologies for Tea Quality Evaluation
Authors: Raana Babadi Fathipour
Abstract:
Tea, second only to water in global consumption rates, holds a significant place as the beverage of choice for many around the world. The process of fermenting tea leaves plays a crucial role in determining its ultimate quality, traditionally assessed through meticulous observation by tea tasters and laboratory analysis. However, advancements in technology have paved the way for innovative electronic sensing platforms like the electronic nose (e-nose), electronic tongue (e-tongue), and electronic eye (e-eye). These cutting-edge tools, coupled with sophisticated data processing algorithms, not only expedite the assessment of tea's sensory qualities based on consumer preferences but also establish new benchmarks for this esteemed bioactive product to meet burgeoning market demands worldwide. By harnessing intricate data sets derived from electronic signals and deploying multivariate statistical techniques, these technological marvels can enhance accuracy in predicting and distinguishing tea quality with unparalleled precision. In this contemporary exploration, a comprehensive overview is provided of the most recent breakthroughs and viable solutions aimed at addressing forthcoming challenges in the realm of tea analysis. Utilizing bio-mimicking Electronic Sensory Perception systems (ESPs), researchers have developed innovative technologies that enable precise and instantaneous evaluation of the sensory-chemical attributes inherent in tea and its derivatives. These sophisticated sensing mechanisms are adept at deciphering key elements such as aroma, taste, and color profiles, transitioning valuable data into intricate mathematical algorithms for classification purposes. Through their adept capabilities, these cutting-edge devices exhibit remarkable proficiency in discerning various teas with respect to their distinct pricing structures, geographic origins, harvest epochs, fermentation processes, storage durations, quality classifications, and potential adulteration levels. While voltammetric and fluorescent sensor arrays have emerged as promising tools for constructing electronic tongue systems proficient in scrutinizing tea compositions, potentiometric electrodes continue to serve as reliable instruments for meticulously monitoring taste dynamics within different tea varieties. By implementing a feature-level fusion strategy within predictive models, marked enhancements can be achieved regarding efficiency and accuracy levels. Moreover, by establishing intrinsic linkages through pattern recognition methodologies between sensory traits and biochemical makeup found within tea samples, further strides are made toward enhancing our understanding of this venerable beverage's complex nature.
Keywords: classifier system, tea, polyphenol, sensor, taste sensor
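For illustration only, a minimal sketch of the kind of multivariate workflow described above (dimensionality reduction followed by classification) applied to simulated sensor-array responses; it is not the authors' actual pipeline.

```python
# Sketch of a multivariate sensor-array workflow of the kind described above:
# dimensionality reduction (PCA) followed by a simple classifier. The sensor
# responses and tea-grade labels are simulated, not real e-nose/e-tongue data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_per_class, n_sensors = 30, 12
grades = ["grade_A", "grade_B", "grade_C"]

# Simulated responses: each tea grade shifts the 12-sensor fingerprint slightly
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, n_sensors))
               for i, _ in enumerate(grades)])
y = np.repeat(grades, n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
model.fit(X_train, y_train)
print(f"Hold-out accuracy on simulated data: {model.score(X_test, y_test):.2f}")
```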
Procedia PDF Downloads 0
379 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System
Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii
Abstract:
Today, dairy farm experts and farmers have well recognized the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production, manage the feeding system, serve as an indicator of health abnormalities, and even help manage healthy calving times and processes. Traditionally, BCS measurements are done by animal experts or trained technicians based on visual observations focusing on the pin bones, pin, thurl and hook area, tail head shapes, hook angles, and short and long ribs. Since the traditional technique is very manual and subjective, the results can lead to different scores, and the procedure is not cost-effective. Thus, this paper proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points from the cow image region are automatically extracted by using image processing techniques. To be specific, there are 23 anatomical points in the regions of the ribs, hook bones, pin bone, thurl and tail head. These points are extracted by using block-region-based vertical and horizontal histogram methods. According to animal experts, the body condition scores depend mainly on the shape structure of these regions. Therefore, the second module investigates some algebraic and geometric properties of the extracted anatomical points. Specifically, a second-order polynomial regression is applied to a subset of anatomical points to produce the regression coefficients, which are utilized as part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained by using a Markov classification process to assign a BCS to individual cows. Then the assigned BCS are revised by using a multiple regression method to produce the final BCS for the dairy cows. In order to confirm the validity of the proposed method, a monitoring video camera was set up at the milk rotary parlor to take top-view images of cows. The proposed method extracts the key anatomical points and the corresponding feature vectors for each individual cow. Then the multiple regression calculator and Markov chain classification process are utilized to produce the estimated body condition score for each cow. The experimental results, tested on 100 dairy cows from a self-collected dataset and a public benchmark dataset, are very promising, with an accuracy of 98%.
Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression
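A minimal sketch, with invented anatomical-point coordinates, of the second-order polynomial regression and angle features described above; the Markov classification and final multiple regression steps are not reproduced here.

```python
# Sketch of the feature-extraction step described above: fitting a second-order
# polynomial to a subset of anatomical points and using the regression coefficients
# (plus an example angle) as features. The point coordinates are invented.
import numpy as np

# Hypothetical (x, y) pixel coordinates of anatomical points along the hook-pin region
x = np.array([10, 25, 40, 55, 70, 85, 100], dtype=float)
y = np.array([52, 44, 39, 37, 38, 43, 50], dtype=float)

# Second-order polynomial regression: y ~ a*x^2 + b*x + c
a, b, c = np.polyfit(x, y, deg=2)

# Example angle feature at a landmark defined by three points (invented)
p_left, p_apex, p_right = np.array([25, 44.0]), np.array([55, 37.0]), np.array([85, 43.0])
v1, v2 = p_left - p_apex, p_right - p_apex
angle_deg = np.degrees(np.arccos(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))

feature_vector = np.array([a, b, c, angle_deg])
print("feature vector:", np.round(feature_vector, 3))
```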
Procedia PDF Downloads 161
378 Preparation of Metallic Nanoparticles with the Use of Reagents of Natural Origin
Authors: Anna Drabczyk, Sonia Kudlacik-Kramarczyk, Dagmara Malina, Bozena Tyliszczak, Agnieszka Sobczak-Kupiec
Abstract:
Nowadays, nano-sized materials are a very popular group of materials among scientists. What is more, these materials find applications in a wide range of areas. Therefore, a constantly increasing demand for nanomaterials, including metallic nanoparticles such as silver or gold ones, is observed, and new routes for their preparation are sought. Among the most commonly applied methods of nanoparticle preparation, chemical and electrochemical techniques are leading. However, growing attention is currently directed to the biological or biochemical aspects of the syntheses of metallic nanoparticles. This is associated with the trend of developing new routes for the preparation of given compounds according to the principles of green chemistry. These principles involve, e.g., the reduction of the use of toxic compounds in the synthesis as well as the reduction of the energy demand or minimization of the generated waste. As a result, a growing popularity of the use of such components as natural plant extracts, infusions or essential oils is observed. Such natural substances may be used both as a reducing agent for metal ions and as a stabilizing agent for the formed nanoparticles; therefore, they can replace synthetic compounds previously used for the reduction of metal ions or for the stabilization of the obtained nanoparticle suspension. Methods that proceed in the presence of the previously mentioned natural compounds are environmentally friendly and proceed without the application of any toxic reagents. Methodology: The presented research involves the preparation of silver nanoparticles using selected plant extracts, e.g. artichoke extract. Extracts of natural origin were used as reducing and stabilizing agents at the same time. Furthermore, the syntheses were carried out in the presence of an additional polymeric stabilizing agent. Next, such features of the obtained nanoparticle suspensions as total antioxidant activity and the content of phenolic compounds were characterized. The first of the mentioned studies involved the reaction with the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical. The content of phenolic compounds was determined using the Folin-Ciocalteu technique. Furthermore, an essential issue was also determining the stability of the formed nanoparticle suspensions. Conclusions: In the research, it was demonstrated that metallic nanoparticles may be obtained using plant extracts or infusions as stabilizing or reducing agents. The methodology applied, i.e., the type of plant extract used during the synthesis, had an impact on the content of phenolic compounds as well as on the size and polydispersity of the obtained nanoparticles. What is more, it is possible to prepare nano-sized particles that will be characterized by properties desirable from the viewpoint of their potential application, and such an effect may be achieved with the use of non-toxic reagents of natural origin. Furthermore, the proposed methodology stays in line with the principles of green chemistry.
Keywords: green chemistry principles, metallic nanoparticles, plant extracts, stabilization of nanoparticles
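Total antioxidant activity from the DPPH assay is commonly expressed as a radical-scavenging percentage; a short sketch with invented absorbance readings follows (the abstract does not report its raw data or exact calculation).

```python
# Common way of expressing DPPH radical-scavenging activity as a percentage.
# The absorbance values are invented; the abstract does not report raw readings.
A_control = 0.842   # absorbance of the DPPH solution without sample
A_sample = 0.317    # absorbance after reaction with the nanoparticle suspension

scavenging_pct = (A_control - A_sample) / A_control * 100
print(f"DPPH radical-scavenging activity ~ {scavenging_pct:.1f} %")
```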
Procedia PDF Downloads 125
377 Spatio-Temporal Variation of Gaseous Pollutants and the Contribution of Particulate Matters in Chao Phraya River Basin, Thailand
Authors: Samart Porncharoen, Nisa Pakvilai
Abstract:
The elevated levels of air pollutants in regional atmospheric environments are a significant problem that affects human health in Thailand, particularly in the Chao Phraya River Basin. Of concern are issues surrounding ambient air pollution such as particulate matter and gaseous pollutants, and more specifically air pollution along the river. Therefore, a spatio-temporal study of air pollution in this real environment can provide more accurate air quality data for making formalized environmental policy in river basins. In order to inform such a policy, a study was conducted from January to December 2015 to continually collect measurements of various pollutants in both urban and regional locations in the Chao Phraya River Basin. This study investigated the air pollutants in many diverse environments along the Chao Phraya River Basin, Thailand, in 2015. Multivariate analysis techniques such as Principal Component Analysis (PCA) and path analysis were utilised to classify air pollution in the surveyed locations. Measurements were collected in both urban and rural areas to see if significant differences existed between the two locations in terms of air pollution levels. The meteorological parameters and various pollutants were collected continually from Thai Pollution Control Department monitoring stations from January to December 2015. Of interest to this study were the readings of SO₂, CO, NOₓ, O₃, and PM₁₀. Results showed daily arithmetic mean concentrations of SO₂, CO, NOₓ, O₃, and PM₁₀ of 3±1 ppb, 0.5±0.5 ppm, 30±21 ppb, 19±16 ppb, and 40±20 µg/m³ in the urban location (Bangkok). During the same time period, the readings for the same measurements in the rural area (Ayutthaya) were 1±0.5 ppb, 0.1±0.05 ppm, 25±17 ppb, 30±21 ppb, and 35±10 µg/m³, respectively. This shows that Bangkok is located in a highly polluted environment dominated by sources emitted from vehicles. Further, the results were analysed to ascertain if significant seasonal variation existed in the measurements. It was found that levels of both gaseous pollutants and particulate matter in the dry season were higher than in the wet season. More broadly, the results show that levels of pollutants were measured highest in locations along the Chao Phraya River Basin known to have a large number of vehicles and biomass burning. This correlation suggests that the principal pollutants were from these anthropogenic sources. This study contributes to the body of knowledge surrounding ambient air pollution, such as particulate matter and gaseous pollutants, and more specifically air pollution along the Chao Phraya River Basin. Further, this study is one of the first to utilise continuous mobile monitoring along a river in order to gain accurate measurements during the data collection period. Overall, the results of this study can be used for making formalized environmental policy in river basins in order to reduce the physical effects on human health.
Keywords: air pollution, Chao Phraya river basin, meteorology, seasonal variation, principal component analysis
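As an illustration only, a minimal sketch of a dry- versus wet-season comparison of PM₁₀ levels using Welch's t-test on simulated daily values; the abstract does not state which test was used.

```python
# Illustration of the dry- vs. wet-season comparison mentioned above using a Welch
# t-test on daily PM10 values. The daily values are simulated around the reported
# urban annual mean (40 +/- 20 ug/m3); this is not the study's actual analysis.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
pm10_dry = rng.normal(loc=48, scale=18, size=120)   # simulated dry-season days
pm10_wet = rng.normal(loc=33, scale=15, size=120)   # simulated wet-season days

t_stat, p_value = ttest_ind(pm10_dry, pm10_wet, equal_var=False)
print(f"dry mean = {pm10_dry.mean():.1f}, wet mean = {pm10_wet.mean():.1f}, "
      f"t = {t_stat:.2f}, p = {p_value:.2e}")
```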
Procedia PDF Downloads 286
376 Comparison of a Capacitive Sensor Functionalized with Natural or Synthetic Receptors Selective towards Benzo(a)Pyrene
Authors: Natalia V. Beloglazova, Pieterjan Lenain, Martin Hedstrom, Dietmar Knopp, Sarah De Saeger
Abstract:
In recent years, polycyclic aromatic hydrocarbons (PAHs), which represent a hazard to humans and to entire ecosystems, have been receiving increased interest due to their mutagenic, carcinogenic and endocrine-disrupting properties. They are formed in all incomplete combustion processes of organic matter and are, as a consequence, ubiquitous in the environment. Benzo(a)pyrene (BaP) is on the priority list published by the US Environmental Protection Agency (US EPA) as the first PAH to be identified as a carcinogen and has often been used as a marker for PAH contamination in general. It can be found in different types of water samples; therefore, the European Commission set a limit value of 10 ng L-1 (10 ppt) for BaP in water intended for human consumption. Different chromatographic techniques are generally used for PAH determination, but these assays require pre-concentration of the analyte, create large amounts of solvent waste, are relatively time consuming and are difficult to perform on-site. An alternative robust, stand-alone, and preferably cheap solution is needed, for example a sensing unit which can be submerged in a river to monitor and continuously sample BaP. An affinity sensor based on capacitive transduction was developed. Natural antibodies or their synthetic analogues can be used as ligands. Ideally, the sensor should operate independently over a longer period of time, e.g. several weeks or months; therefore, the use of molecularly imprinted polymers (MIPs) was considered. MIPs are synthetic antibodies which are selective for a chosen target molecule. Their robustness allows application in environments for which biological recognition elements are unsuitable or denature, and they can be reused multiple times, which is essential to meet the stand-alone requirement. BaP is a highly lipophilic compound and does not contain any functional groups in its structure, thus excluding non-covalent imprinting methods based on ionic interactions. Instead, the MIP syntheses were based on non-covalent hydrophobic and π-π interactions. Different polymerization strategies were compared, and the best results were obtained with MIPs produced by electropolymerization. 4-Vinylpyridine (VP) and divinylbenzene (DVB) were used as monomer and cross-linker in the polymerization reaction. The selectivity and recovery of the MIP were compared to a non-imprinted polymer (NIP). Electrodes were functionalized with a natural receptor (a monoclonal anti-BaP antibody) and with MIPs selective towards BaP. Different sets of electrodes were evaluated, and their properties such as sensitivity, selectivity and linear range were determined and compared. It was found that both receptors can reach a cut-off level comparable to the established limit value and that, although the antibody showed better cross-reactivity and affinity, the MIPs were the more convenient receptor owing to their regenerability and their stability in river water for up to 7 days.
Keywords: antibody, benzo(a)pyrene, capacitive sensor, MIPs, river water
Procedia PDF Downloads 304
375 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models
Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach
Abstract:
In the paper, a (hierarchical) approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on the statistical homogenization method (the method of conditional moments) combined with the recently introduced notion of the energy-equivalent inhomogeneity, which is extended here to include thermal effects. After exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity with a constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows composites with interphases to be analyzed using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill's energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows inhomogeneities of various shapes and various models of interphases to be considered. This is illustrated for spherical particles with two interphase models, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments. Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. Dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the nanoparticle radius (for a fixed volume fraction of nanoparticles) is compared, for the different interphase models, with other theoretical predictions and discussed in that context. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model
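As a rough, hedged illustration only (the paper's own closed-form expressions are not reproduced here), the energy-equivalence idea can be stated as the requirement that the single equivalent particle stores the same energy as the original particle together with its interphase under identical loading:

```latex
\frac{1}{2}\int_{V_{\mathrm{eq}}}\boldsymbol{\varepsilon}:\mathbf{C}^{\mathrm{eq}}:\boldsymbol{\varepsilon}\,\mathrm{d}V
\;=\;
\frac{1}{2}\int_{V_{p}}\boldsymbol{\varepsilon}:\mathbf{C}^{p}:\boldsymbol{\varepsilon}\,\mathrm{d}V
\;+\;U_{\mathrm{interphase}}
```

where $U_{\mathrm{interphase}}$ is a surface-energy term for the Gurtin-Murdoch model or a displacement-jump (spring-layer) term for the spring layer model; an analogous condition on the thermo-elastic part of the energy yields the equivalent coefficient of thermal expansion.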
Procedia PDF Downloads 186
374 Exploring Accessible Filmmaking and Video for Deafblind Audiences through Multisensory Participatory Design
Authors: Aikaterini Tavoulari, Mike Richardson
Abstract:
Objective: This abstract presents a multisensory participatory design project, inspired by a deafblind PhD student's ambition to climb Mount Everest. The project aims to explore accessible routes for filmmaking and video content creation, catering to the needs of individuals with hearing and sight loss. By engaging participants from the Southwest area of England, recruited through multiple networks, the project seeks to gather qualitative data and insights to inform the development of inclusive media practices. Design: The project follows a community-based participatory research design. The workshop will feature various stations that stimulate different senses, such as scent, touch, sight, hearing, and movement. Participants will have the opportunity to engage with these multisensory experiences, providing valuable feedback on their effectiveness and potential for enhancing accessibility in filmmaking and video content. Methods: Brief semi-structured interviews will be conducted to collect qualitative data, allowing participants to share their perspectives, challenges, and suggestions for improvement. The participatory design approach emphasizes the importance of involving the target audience in the creative process. By actively engaging individuals with hearing and sight loss, the project aims to ensure that their needs and preferences are central to the development of accessible filmmaking techniques and video content. This collaborative effort seeks to bridge the gap between content creators and diverse audiences, fostering a more inclusive media landscape. Results: The findings from this study will contribute to the growing body of research on accessible filmmaking and video content creation. Via inductive thematic analysis of the qualitative data collected through interviews and observations, the researchers aim to identify key themes, challenges, and opportunities for creating engaging and inclusive media experiences for deafblind audiences. The insights will inform the development of best practices and guidelines for accessible filmmaking, empowering content creators to produce more inclusive and immersive video content. Conclusion: The abstract targets the hybrid International Conference for Disability and Diversity in Canada (January 2025), as this platform provides an excellent opportunity to share the outcomes of the project with a global audience of researchers, practitioners, and advocates working towards inclusivity and accessibility across disability domains. By presenting this research at the conference in person, the authors aim to contribute to the ongoing discourse on disability and diversity, highlighting the importance of multisensory experiences and participatory design in creating accessible media content for the deafblind community and, more broadly, for people with sensory impairments.
Keywords: vision impairment, hearing impairment, deafblindness, accessibility, filmmaking
Procedia PDF Downloads 45
373 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception
Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu
Abstract:
Opinion mining (OM) is one of the natural language processing (NLP) problems concerned with determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable control over companies' futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level; companies therefore opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm commonly used in computer vision (CV) problems has lately started to shape NLP approaches and language models (LM). This gave a sudden rise to the usage of pretrained language models (PTM), which contain language representations obtained by training on large datasets with self-supervised learning objectives. PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, named-entity recognition (NER), question answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). MUSE is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT is, in our case, a monolingual model based on transformer neural networks; it uses masked language modelling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed that their contribution to model performance was insignificant even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The multilingual MUSE model shows better results than SVM but still performs worse than the monolingual BERT model.
Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish
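A minimal sketch of the traditional baseline named above (a bag of word n-grams fed to a linear SVM), assuming a tiny placeholder set of Turkish comments; the actual 76,000-comment corpus, its labels, and any BERT/MUSE fine-tuning code are not reproduced here.

```python
# Traditional OM baseline: TF-IDF over word unigrams/bigrams + linear SVM.
# The comments and labels below are invented placeholders, not the study corpus.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

comments = [
    "ürün harika, çok beğendim",            # positive
    "kargo çok geç geldi, memnun değilim",  # negative
    "fiyatı ne iyi ne kötü",                # neutral
    "kalite beklediğimden iyi çıktı",       # positive
    "bir daha asla almam",                  # negative
]
labels = ["positive", "negative", "neutral", "positive", "negative"]

baseline = Pipeline([
    ("ngrams", TfidfVectorizer(ngram_range=(1, 2))),  # bag of word 1- and 2-grams
    ("svm", LinearSVC()),                             # linear support vector classifier
])
baseline.fit(comments, labels)
print(baseline.predict(["teslimat hızlıydı ama paket hasarlıydı"]))
```

In the reported comparison, a baseline of this kind reached roughly 83% accuracy, against about 88% for MUSE and about 94% for fine-tuned BERT.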
Procedia PDF Downloads 148
372 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods for soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons researchers use predictive soil mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques for spatially interpolating values at unobserved locations from observations at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points; hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an artificial neural network (ANN) scheme was used to predict macronutrient values at unsampled points. ANNs have become a popular prediction tool because they eliminate certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples was used in the training, validation and testing phases of the ANN (pattern reconstruction structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project, Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural-network-predicted values). For each macronutrient element, three types of maps were generated, with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were also produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding maps produced from the smaller numbers of real samples alone. For example, nitrogen maps produced from 118, 59 and 30 real samples had 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN-predicted samples to achieve the specified level of accuracy.
Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping
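The sketch below illustrates the general pattern described above, assuming only point coordinates as predictors and synthetic nitrogen values: a back-propagation feed-forward network is trained on the real samples, "virtual" values are predicted at unsampled locations, and real plus virtual points are pooled before interpolation. The feature choices and numbers are assumptions, not the authors' data or code.

```python
# Sketch: feed-forward ANN predicting a soil macronutrient at unsampled points,
# then pooling actual + virtual values as input for kriging. Synthetic data only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
xy_real = rng.uniform(0, 1000, size=(118, 2))                  # surveyed coordinates (m)
n_real = 0.002 * xy_real[:, 0] + 0.001 * xy_real[:, 1] + rng.normal(0, 0.2, 118)

ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000, random_state=1),
)
ann.fit(xy_real, n_real)                                       # back-propagation training

xy_virtual = rng.uniform(0, 1000, size=(118, 2))               # unsampled locations
n_virtual = ann.predict(xy_virtual)                            # ANN "virtual" values

coords = np.vstack([xy_real, xy_virtual])                      # 236 points in total,
values = np.concatenate([n_real, n_virtual])                   # used as kriging input
```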
Procedia PDF Downloads 71
371 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping
Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello
Abstract:
Batch processes are widely used in the food industry and play an important role in the production of high-added-value products such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; proper determination of the reference set is clearly key to the correct signalization of non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses, so the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process; as a consequence, traditional techniques such as MPCA-based charts are not directly suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when inputted to the KNN classification algorithm. Kassidas, MacGregor and Taylor's method (named KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity and 90.3% specificity in batch classification, and was therefore considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy on the testing portion using the KNN classification technique.
Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration
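For orientation only, the sketch below pairs a plain dynamic-programming DTW cost with a nearest-neighbour vote over reference batches of unequal duration. It is generic DTW, not the specific Kassidas-MacGregor-Taylor (KMT) variant evaluated in the study, and the toy temperature trajectories are invented.

```python
# Sketch: classic DTW cost between two unequal-length batch trajectories,
# used as the distance in a 1-NN classifier over a reference set of batches.
import numpy as np

def dtw_cost(a, b):
    """Cumulative DTW alignment cost between 1-D series a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_predict(query, reference_batches, labels, k=1):
    """Classify a new batch trajectory by its k nearest reference batches under DTW."""
    dists = [dtw_cost(query, ref) for ref in reference_batches]
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy example: chocolate temperature trajectories of different durations (minutes).
conforming = np.sin(np.linspace(0, 3, 495)) * 5 + 45
nonconforming = np.sin(np.linspace(0, 3, 600)) * 5 + 49
new_batch = np.sin(np.linspace(0, 3, 550)) * 5 + 45.5
print(knn_predict(new_batch, [conforming, nonconforming],
                  ["conforming", "non-conforming"]))
```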
Procedia PDF Downloads 168
370 Communication Skills for Physicians: Adaptation to the Third Gender and Language Cross Cultural Influences
Authors: Virginia Guillén Cañas, Miren Agurtzane Ortiz-Jauregi, Sonia Ruiz De Azua, Naiara Ozamiz
Abstract:
We focus on the role of communication skills in several key aspects of medicine. Among the most relevant competencies of a health professional is an adequate capacity for communication, which influences the satisfaction of professionals and patients, therapeutic compliance, conflict prevention, improvement of clinical outcomes and the efficiency of health services. We define empathy as sympathy and connection to others and the capability to communicate this understanding. Some factors favoring empathy are female gender, younger age, and specialty choice. Third gender, or third sex, is a concept that allows a person not to be categorized in a binary way but along a continuum, giving the choice of moving along it; this point of view recognizes three or more genders. The subject of Ethics and Clinical Communication is dedicated to sensitizing students to the importance and effectiveness of a good therapeutic relationship. We are also interested in other communicational aspects related to empathy, such as active listening, assertiveness, and basic and advanced social skills. Objectives: 1. To facilitate the approach of students in the Medicine Degree to the reality of the medical profession. 2. To analyze outcome variables of interest in communication. 3. To establish an interactive process to detect areas of improvement in the learning process of physicians throughout their professional careers. Design: A comparative study with a cross-sectional approach was conducted in successive academic year cohorts of health professional students at a public Basque university. Four communicational aspects were evaluated through questionnaires in Basque, Spanish and English: the active listening questionnaire, the TECA empathy questionnaire, the ACDA questionnaire and the EHS Social Skills Scale. Types of interventions for improving skills: interpersonal skills training, empathy intervention, writing about experiential learning, drama through role plays, communication skills training, problem-based learning, patient interview videos, empathy-focused training, and discussion. Results: The study identified the need for cross-cultural adaptation and for avoiding a binary gender distinction. The students enjoyed all the techniques in comparison with the usual master class. Participation was moderate, as these participative methodologies are not yet common at the university. With respect to empathy, men showed a greater empathic capacity to fully understand women (p < 0.05). With regard to assertiveness, there were no differences between men and women in self-assertiveness, but women were more heteroassertive than men. Conclusions: These findings suggest that educational interventions with adequate feedback can be effective in maintaining and enhancing empathy in undergraduate medical students.
Keywords: physician's communicational skills, patient satisfaction, third gender, cross cultural adaptation
Procedia PDF Downloads 193
369 Effects of Macro and Micro Nutrients on Growth and Yield Performances of Tomato (Lycopersicon esculentum MILL.)
Authors: K. M. S. Weerasinghe, A. H. K. Balasooriya, S. L. Ransingha, G. D. Krishantha, R. S. Brhakamanagae, L. C. Wijethilke
Abstract:
Tomato (Lycopersicon esculentum Mill.) is a major horticultural crop with an estimated global production of over 120 million metric tons, and it ranks first as a processing crop. The average tomato productivity in Sri Lanka (11 metric tons/ha) is much lower than the world average (24 metric tons/ha). To meet the tomato demand of the increasing population, productivity has to be intensified through agronomic techniques. Nutrition is one of the main factors governing the growth and yield of tomato, and soil, the main nutrient source, affects plant growth and the quality of the produce. Continuous cropping, improper fertilizer usage, etc., cause widespread nutrient deficiencies; therefore, synthetic fertilizers and organic manures were introduced to enhance plant growth and maximize crop yields. In this study, the effects of macro- and micronutrient supplementation on the growth and yield of tomato were investigated. The tomato variety Maheshi was selected, and plants were grown at the Regional Agricultural Research Centre, Makadura, under the Department of Agriculture (DOA) recommended macronutrients and various combinations of the Ontario-recommended dosages of secondary and micronutrient supplementation. There were six treatments in this experiment; each treatment was replicated three times, and each replicate consisted of six plants. Apart from the DOA recommendation, five combinations of the Ontario-recommended dosage of secondary and micronutrients for tomato were used as treatments. The treatments were arranged in a randomized complete block design. All cultural practices were carried out according to the DOA recommendations. The mean data were subjected to statistical analysis using the SAS package, and mean separation was performed with Duncan's Multiple Range test at the 5% probability level. Treatments containing secondary and micronutrients significantly increased most of the growth parameters: plant height, plant girth, number of leaves, leaf area index, etc. Fruits harvested from pots amended with macro-, secondary and micronutrients performed best in terms of total yield and yield quality compared with pots amended with the DOA-recommended dosage of fertilizer for tomato. This could be due to the application of all essential macro- and micronutrients, which raises photosynthetic activity and promotes efficient translocation and utilization of photosynthates, causing rapid cell elongation and cell division in the actively growing regions of the plant and thereby stimulating growth and yield. The experiment revealed and highlighted the requirement of essential macro-, secondary and micronutrient fertilizer supplementation for tomato farming. The study indicated that macro- and micronutrient supplementation practices can influence the growth and yield performance of tomato and that this is a promising approach to obtaining potential tomato yields.
Keywords: macro and micronutrients, tomato, SAS package, photosynthates
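A hedged sketch of the analysis pattern described above (a randomized complete block design ANOVA followed by a mean-separation test), using synthetic yields; statsmodels is used here in place of SAS, and Tukey's HSD stands in for Duncan's Multiple Range test, which statsmodels does not provide.

```python
# Sketch: RCBD analysis of a yield response - treatment + block ANOVA followed by a
# pairwise mean comparison. Data are synthetic placeholders, not the study's results.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
treatments = [f"T{i}" for i in range(1, 7)]        # six fertilizer treatments
blocks = ["B1", "B2", "B3"]                        # three replicates (blocks)
rows = [(t, b, 20 + 2 * i + rng.normal(0, 1))
        for i, t in enumerate(treatments) for b in blocks]
df = pd.DataFrame(rows, columns=["treatment", "block", "yield_t_ha"])

model = ols("yield_t_ha ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                        # RCBD ANOVA table
print(pairwise_tukeyhsd(df["yield_t_ha"], df["treatment"]))   # mean separation
```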
Procedia PDF Downloads 476
368 Effect of the Polymer Modification on the Cytocompatibility of Human and Rat Cells
Authors: N. Slepickova Kasalkova, P. Slepicka, L. Bacakova, V. Svorcik
Abstract:
Tissue engineering combines materials and techniques used for the improvement, repair or replacement of tissue. Scaffolds, permanent or temporary materials, are used as supports for the creation of "new cell structures", and a variety of materials can be used for this important component. The advantage of some polymeric materials is their cytocompatibility and the possibility of biodegradation. Poly(L-lactic acid) (PLLA) is a biodegradable, semi-crystalline thermoplastic polymer that can be fully degraded into H2O and CO2. In this experiment, the effect of surface modification of the biodegradable polymer (performed by plasma treatment) on various cell types was studied. The surface parameters and changes in the physicochemical properties of the modified PLLA substrates were studied by different methods: surface wettability was determined by goniometry, surface morphology and roughness were studied with atomic force microscopy, and chemical composition was determined using photoelectron spectroscopy. The physicochemical properties were studied in relation to the cytocompatibility of human osteoblasts (MG 63 cells), rat vascular smooth muscle cells (VSMC), and human stem cells (ASC) of adipose tissue in vitro. Fluorescence microscopy was chosen to study and compare the cell-material interaction, and important parameters of cytocompatibility such as adhesion, proliferation, viability, shape and spreading of the cells were evaluated. It was found that the modification changes the surface wettability depending on the duration of treatment: short exposure times (10-120 s) can reduce the wettability of the aged samples, whereas exposures longer than 150 s increase the contact angle of the aged PLLA. The surface morphology is also significantly influenced by the duration of modification; the plasma treatment leads to the formation of crystallites, whose number increases with increasing treatment time. On the basis of the evaluation of the physicochemical properties, the cells were cultivated on the selected samples. Cell-material interactions are strongly affected by the material's chemical structure and surface morphology. It was proved that the plasma treatment of PLLA has a positive effect on the adhesion, spreading, homogeneity of distribution and viability of all cultivated cells. This effect was even more apparent for the VSMCs and ASCs, which homogeneously covered almost the whole surface of the substrate after 7 days of cultivation; the viability of these cells was high (more than 98% for VSMCs and 89-96% for ASCs). This experiment is part of basic research aimed at creating scaffolds for tissue engineering with the subsequent use of stem cells and their "reorientation" towards bone cells or smooth muscle cells.
Keywords: poly(L-lactic acid), plasma treatment, surface characterization, cytocompatibility, human osteoblast, rat vascular smooth muscle cells, human stem cells
Procedia PDF Downloads 229
367 Spatial Pattern of Farm Mechanization: A Micro Level Study of Western Trans-Ghaghara Plain, India
Authors: Zafar Tabrez, Nizamuddin Khan
Abstract:
Agriculture in India in the pre-green revolution period was mostly controlled by terrain, climate and edaphic factors. After the introduction of innovative factors and technological inputs, the green revolution occurred and the agricultural scene witnessed great change. In the development of India's agriculture, the speedy and extensive introduction of technological change is one of the crucial factors. This technological change consists of the adoption of farming techniques such as the use of fertilisers, pesticides and fungicides, improved varieties of seeds, modern agricultural implements, improved irrigation facilities, and contour bunding for the conservation of moisture and soil, which are developed through research and calculated to bring about diversification, increased production and greater economic return to the farmers. The green revolution in India took place during the late 1960s, equipped with technological inputs such as high-yielding variety seeds, assured irrigation, and modern machines and implements. Initially the revolution started in Punjab, Haryana and western Uttar Pradesh; with the efforts of the government, agricultural planners and policy makers, modern technocratic agricultural development schemes were later implemented in backward and marginal regions of the country as well. The agriculture sector occupies centre stage in India's social security and overall economic welfare. The country has attained self-sufficiency in food grain production and also has a sufficient buffer stock. India's first Prime Minister, Jawaharlal Nehru, said 'everything else can wait but not agriculture'. There is still continuous change in technological inputs and cropping patterns. Keeping these points in view, the author attempts to investigate extensively the mechanization of agriculture and its change, selecting the western Trans-Ghaghara plain as a case study with the block as the unit of study. The study area includes the districts of Gonda, Balrampur, Bahraich and Shravasti, which incorporate 44 blocks. The study is based on secondary sources of data by blocks for the years 1997 and 2007. It may be observed that there is a wide range of variation and change in farm mechanization, i.e., in agricultural machinery such as wooden and iron ploughs, advanced harrows and cultivators, advanced thrasher machines, sprayers, advanced sowing instruments, tractors, etc. It may further be noted that, due to the continuous decline in the size of land holdings and the outflux of people either to the same kinds of work elsewhere or to employment in non-agricultural sectors, the magnitude and direction of agricultural systems are affected in the study area, which is one of the marginalized regions of Uttar Pradesh, India.
Keywords: agriculture, technological inputs, farm mechanization, food production, cropping pattern
Procedia PDF Downloads 313
366 Linguistic Analysis of Borderline Personality Disorder: Using Language to Predict Maladaptive Thoughts and Behaviours
Authors: Charlotte Entwistle, Ryan Boyd
Abstract:
Recent developments in information retrieval techniques and natural language processing have allowed for greater exploration of psychological and social processes. Linguistic analysis methods for understanding behaviour have provided useful insights within the field of mental health. One area within mental health that has received little attention, though, is borderline personality disorder (BPD). BPD is a common mental health disorder characterised by instability of interpersonal relationships, self-image and affect. It also manifests through maladaptive behaviours, such as impulsivity and self-harm. Examination of language patterns associated with BPD could allow for a greater understanding of the disorder and its links to maladaptive thoughts and behaviours. Language analysis methods could also be used in a predictive way, such as by identifying indicators of BPD or predicting maladaptive thoughts, emotions and behaviours. Additionally, associations uncovered between language and maladaptive thoughts and behaviours could then be applied at a more general level. This study explores linguistic characteristics of BPD, and their links to maladaptive thoughts and behaviours, through the analysis of social media data. Data were collected from a large corpus of posts from the publicly available social media platform Reddit, namely from the 'r/BPD' subreddit, where people identify as having BPD. Data were collected using the Python Reddit API Wrapper and included all users who had posted within the BPD subreddit. All posts were manually inspected to ensure that they were not posted by someone who clearly did not have BPD, such as people posting about a loved one with BPD. These users were then tracked across all other subreddits in which they had posted, and data from these subreddits were also collected. Additionally, data were collected from a random control group of Reddit users. Disorder-relevant behaviours outlined within Reddit posts, such as self-harming or aggression-related behaviours, were coded by expert raters. All posts and comments were aggregated by user and split by subreddit. Language data were then analysed using the Linguistic Inquiry and Word Count (LIWC) 2015 software. LIWC is a text analysis program that identifies and categorises words based on linguistic and paralinguistic dimensions, psychological constructs and personal concern categories. Statistical analyses of linguistic features could then be conducted. Findings revealed distinct linguistic features associated with BPD, based on Reddit posts, which differentiated these users from the control group. Language patterns were also found to be associated with the occurrence of maladaptive thoughts and behaviours. Thus, this study demonstrates that there are indeed linguistic markers of BPD present on social media. It also implies that language could be predictive of maladaptive thoughts and behaviours associated with BPD. These findings are important as they suggest potential for clinical interventions to be provided based on the language of people with BPD to try to reduce the likelihood of maladaptive thoughts and behaviours occurring, for example by social media tracking or by engaging people with BPD in expressive writing therapy. Overall, this study has provided a greater understanding of the disorder and how it manifests through language and behaviour.
Keywords: behaviour analysis, borderline personality disorder, natural language processing, social media data
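A hedged sketch of the collection step only, using the Python Reddit API Wrapper (PRAW) named in the abstract; the credentials, limits, and field choices are placeholders, and the manual screening, control-group sampling, and LIWC 2015 analysis are not reproduced here.

```python
# Sketch: collect r/BPD posts with PRAW, then track each poster across the other
# subreddits they have posted in. Credentials and limits are placeholders.
import praw

reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                     user_agent="bpd-language-study")

bpd_posts, authors = [], set()
for submission in reddit.subreddit("BPD").new(limit=1000):
    bpd_posts.append({"author": str(submission.author), "subreddit": "BPD",
                      "text": f"{submission.title} {submission.selftext}"})
    if submission.author is not None:
        authors.add(str(submission.author))

# Track each r/BPD poster across the other subreddits in which they have posted.
other_posts = []
for name in authors:
    for submission in reddit.redditor(name).submissions.new(limit=100):
        if submission.subreddit.display_name.lower() != "bpd":
            other_posts.append({"author": name,
                                "subreddit": submission.subreddit.display_name,
                                "text": f"{submission.title} {submission.selftext}"})
```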
Procedia PDF Downloads 353
365 Text Mining Past Medical History in Electrophysiological Studies
Authors: Roni Ramon-Gonen, Amir Dori, Shahar Shelly
Abstract:
Background and objectives: Healthcare professionals produce abundant textual information in their daily clinical practice. The extraction of insights from all the gathered information, which is mainly unstructured and lacking in normalization, is one of the major challenges in computational medicine. In this respect, text mining assembles different techniques to derive valuable insights from unstructured textual data, which has made it especially relevant in medicine. A neurological patient's history allows the clinician to define the patient's symptoms and, along with the results of the nerve conduction study (NCS) and electromyography (EMG) test, assists in formulating a differential diagnosis. Past medical history (PMH) helps to direct the latter. In this study, we aimed to identify relevant PMH, understand which PMHs are common among patients in the referral cohort and documented by the medical staff, and examine differences by sex and age in a large cohort based on free-text notes. Methods: We retrospectively identified all patients with an abnormal NCS between May 2016 and February 2022. Age, gender, and all NCS attributes were recorded, including the summary text. All patient histories were extracted from the text report by a query. Basic text cleansing and data preparation were performed, as well as lemmatization. Very common words (like 'left' and 'right') were deleted, and several words were replaced with their abbreviations. A bag-of-words approach was used to perform the analyses, and several visualizations common in text analysis were created to easily grasp the results. Results: We identified 5,282 unique patients, of whom 3,005 (57%) had a documented PMH; 60.4% (n=1,817) of these were males. The overall median age was 62 years (range 0.12-97.2 years), and the majority of patients (83%) presented after the age of forty years. The two most frequently documented medical histories were diabetes mellitus (DM), observed in 16.3% of the patients, and surgery, observed in 15.4%. Other frequent patient histories (among the top 20) were fracture, cancer (ca), motor vehicle accident (MVA), leg, lumbar, discopathy, back and carpal tunnel release (CTR). When separating the data by sex, DM and MVA were more frequent among males, while cancer and CTR were less frequent. Among females, the top medical history was surgery, followed by DM; other frequent histories in females were breast cancer, fractures, and CTR. In the younger population (ages 18 to 26), the frequent PMHs were surgery, fractures, trauma, and MVA. Discussion: By applying text mining approaches to unstructured data, we were able to better understand which medical histories are most relevant in these circumstances and to gain additional insights regarding sex and age differences. These insights might help in collecting epidemiological and demographic data as well as in raising new hypotheses. One limitation of this work is that each clinician might use different words or abbreviations to describe the same condition; therefore, using a coding system could be beneficial.
Keywords: abnormal studies, healthcare analytics, medical history, nerve conduction studies, text mining, textual analysis
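The sketch below illustrates the kind of bag-of-words pipeline described above: lowercasing, mapping of long phrases to their abbreviations, removal of very common words such as 'left' and 'right', lemmatization, and term counts split by sex. The two example reports, the stop list, and the abbreviation map are invented placeholders, not patient data.

```python
# Sketch of a bag-of-words frequency analysis of past-medical-history text, split
# by sex. Requires nltk with the 'wordnet' corpus downloaded; inputs are invented.
import re
from collections import Counter
from nltk.stem import WordNetLemmatizer

ABBREV = {"diabetes mellitus": "dm", "motor vehicle accident": "mva",
          "carpal tunnel release": "ctr"}
STOP = {"left", "right", "history", "of", "and", "status", "post"}

lemmatizer = WordNetLemmatizer()

def tokens(report: str):
    """Lowercase, map phrases to abbreviations, drop stop words, lemmatize."""
    text = report.lower()
    for phrase, abbr in ABBREV.items():
        text = text.replace(phrase, abbr)
    words = re.findall(r"[a-z]+", text)
    return [lemmatizer.lemmatize(w) for w in words if w not in STOP]

reports = [("M", "History of diabetes mellitus and motor vehicle accident, left leg fracture"),
           ("F", "Status post carpal tunnel release; breast cancer; lumbar discopathy")]

by_sex = {"M": Counter(), "F": Counter()}
for sex, report in reports:
    by_sex[sex].update(tokens(report))
print(by_sex["M"].most_common(5))
print(by_sex["F"].most_common(5))
```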
Procedia PDF Downloads 96