Search results for: single inductor multi output (SIMO)
992 Localized and Time-Resolved Velocity Measurements of Pulsatile Flow in a Rectangular Channel
Authors: R. Blythman, N. Jeffers, T. Persoons, D. B. Murray
Abstract:
The exploitation of flow pulsation in micro- and mini-channels is a potentially useful technique for enhancing cooling of high-end photonics and electronics systems. It is thought that pulsation alters the thickness of the hydrodynamic and thermal boundary layers, and hence affects the overall thermal resistance of the heat sink. Although the fluid mechanics and heat transfer are inextricably linked, it can be useful to decouple the parameters to better understand the mechanisms underlying any heat transfer enhancement. Using two-dimensional, two-component particle image velocimetry, the current work intends to characterize the heat transfer mechanisms in pulsating flow with a mean Reynolds number of 48 by experimentally quantifying the hydrodynamics of a generic liquid-cooled channel geometry. Flows circulated through the test section by a gear pump are modulated using a controller to achieve sinusoidal flow pulsations with Womersley numbers of 7.45 and 2.36 and an amplitude ratio of 0.75. It is found that the transient characteristics of the measured velocity profiles are dependent on the speed of oscillation, in accordance with the analytical solution for flow in a rectangular channel. A large velocity overshoot is observed close to the wall at high frequencies, resulting from the interaction of near-wall viscous stresses and inertial effects of the main fluid body. The steep velocity gradients at the wall are indicative of augmented heat transfer, although the local flow reversal may reduce the upstream temperature difference in heat transfer applications. While unsteady effects remain evident at the lower frequency, the annular effect subsides and retreats from the wall. The shear rate at the wall is increased during the accelerating half-cycle and decreased during deceleration compared to steady flow, suggesting that the flow may experience both enhanced and diminished heat transfer during a single period. Hence, the thickness of the hydrodynamic boundary layer is reduced for positively moving flow during one half of the pulsation cycle at the investigated frequencies. It is expected that the size of the thermal boundary layer is similarly reduced during the cycle, leading to intervals of heat transfer enhancement.Keywords: Heat transfer enhancement, particle image velocimetry, localized and time-resolved velocity, photonics and electronics cooling, pulsating flow, Richardson’s annular effect
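As a quick illustration of the Womersley numbers quoted in this abstract, the sketch below shows how such a value is computed; the channel half-height, pulsation frequency and kinematic viscosity are assumed illustrative values, not those used in the study.
```python
import numpy as np

# Womersley number: alpha = L * sqrt(omega / nu), with omega = 2*pi*f.
# L (channel half-height), f and nu below are assumed values for illustration.
nu = 1.0e-6          # kinematic viscosity of water at ~20 degC (m^2/s)
L = 0.5e-3           # channel half-height (m), assumed
f = 10.0             # pulsation frequency (Hz), assumed

alpha = L * np.sqrt(2 * np.pi * f / nu)
print(f"Womersley number: {alpha:.2f}")
```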
Procedia PDF Downloads 350
991 Relations between the Internal Employment Conditions of International Organizations and the Characteristics of the National Civil Service
Authors: Renata Hrecska
Abstract:
This research seeks to fully examine the internal employment law of international organizations by comparing it with the characteristics of the national civil service. The aim of the research is to compare the legal system that has developed over many centuries and the relatively new internal staffing regulations to find out what solution schemes can help each other through mutual legal development in order to respond effectively to the social challenges of everyday life. Generally, the rules of civil service of any country or international entity have in common that they have, in their pragmatics inherently, the characteristic that makes them serving public interests. Though behind the common base there are many differences: there is the clear fragmentation of state regulation and the unity of organizational regulation. On the other hand, however, this difference disappears to some extent: the public service regulation of international organizations can be considered uniform until we examine it within, but not outside an organization. As soon as we compare the different organizations we may find many different solutions for staffing regulations. It is clear that the national civil service is a strong model for international organizations, but the question may be whether the staffing policy of international organizations can serve the national civil service as an example, too. In this respect, the easiest way to imagine a legislative environment would be to have a single comprehensive code, the general part of which is the Civil Service Act itself, and the specific part containing specific, necessarily differentiating rules for each layer of the civil service. Would it be advantageous to follow the footsteps of the leading international organizations, or is there any speciality in national level civil service that we cannot avoid during regulating processes? In addition to the above, the personal competencies of officials working in international organizations and public administrations also show a high degree of similarity, regardless of the type of employment. Thus, the whole public service system is characterized by the fundamental and special values that a person capable of holding a public office must be able to demonstrate, in some cases, even without special qualifications. It is also interesting how we can compare the two spheres of employment in light of the theory of Lawyer Louis Brandeis, a judge at the US Supreme Court, who formulated a complex theory of profession as distinguished from other occupations. From this point of view we can examine the continuous development of research and specialized knowledge at work; the community recognition and social status; that to what extent we can see a close-knit professional organization of altruistic philosophy; that how stability grows in the working conditions due to the stability of the profession; and that how the autonomy of the profession can prevail.Keywords: civil service, comparative law, international organizations, regulatory systems
Procedia PDF Downloads 138
990 The Impact of Hosting an On-Site Vocal Concert in Preschool on Music Inspiration and Learning Among Preschoolers
Authors: Meiying Liao, Poya Huang
Abstract:
The aesthetic domain is one of the six major domains in the Taiwanese preschool curriculum, encompassing visual arts, music, and dramatic play. Its primary objective is to cultivate children’s abilities in exploration and awareness, expression and creation, and response and appreciation. The purpose of this study was to explore the effects of hosting a vocal music concert on aesthetic inspiration and learning among preschoolers in a preschool setting. The primary research method employed was a case study focusing on a private preschool in Northern Taiwan that organized a school-wide event featuring two vocalists. The concert repertoires included children’s songs, folk songs, and arias performed in Mandarin, Hakka, English, German, and Italian. In addition to professional performances, preschool teachers actively participated by presenting a children’s song. A total of 5 classes, comprising approximately 150 preschoolers, along with 16 teachers and staff, participated in the event. Data collection methods included observation, interviews, and documents. Results indicated that both teachers and children thoroughly enjoyed the concert, with high levels of acceptance when the program was appropriately designed and hosted. Teachers reported that post-concert discussions with children revealed the latter’s ability to recall people, events, and elements observed during the performance, expressing their impressions of the most memorable segments. The concert effectively achieved the goals of the aesthetic domain, particularly in fostering response and appreciation. It also inspired preschoolers’ interest in music. Many teachers noted an increased desire for performance among preschoolers after exposure to the concert, with children imitating the performers and their expressions. Remarkably, one class extended this experience by incorporating it into the curriculum, autonomously organizing a high-quality concert in the music learning center. Parents also reported that preschoolers enthusiastically shared their concert experiences at home. In conclusion, despite being a single event, the positive responses from preschoolers towards the music performance suggest a meaningful impact. These experiences extended into the curriculum, as firsthand exposure to performances allowed teachers to deepen related topics, fostering a habit of autonomous learning in the designated learning centers.
Keywords: concert, early childhood music education, aesthetic education, music development
Procedia PDF Downloads 51
989 Impact of UV on Toxicity of Zn²⁺ and ZnO Nanoparticles to Lemna minor
Authors: Gabriela Kalcikova, Gregor Marolt, Anita Jemec Kokalj, Andreja Zgajnar Gotvajn
Abstract:
Since the 1990s, nanotechnology has been one of the fastest-growing fields of science. Nanomaterials are increasingly becoming part of many products and technologies, and metal oxide nanoparticles are among the most widely used nanomaterials. Zinc oxide nanoparticles (nZnO) are widely applied because of their versatile properties; they are used in products including plastics, paints, food, batteries, solar cells and cosmetics. nZnO is also a very effective photocatalyst used for water treatment. This expanding application of nZnO increases its possible occurrence in the environment. In the aquatic ecosystem, nZnO interacts with natural environmental factors such as UV radiation, and it is therefore essential to evaluate possible interactions between them. In this context, the aim of our study was to evaluate the combined ecotoxicity of nZnO and Zn²⁺ on the duckweed Lemna minor in the presence or absence of UV. Inhibition of vegetative growth of Lemna minor was monitored over a period of 7 days in multi-well plates. After the experiment, the specific growth rate was determined. The ZnO nanoparticles used had a primary size of 13.6 ± 1.7 nm. The test was conducted with nominal nZnO and Zn²⁺ (in the form of ZnCl₂) concentrations of 1, 10 and 100 mg/L. The experiment was repeated in the presence of UV of natural intensity (8 h UV, 10 W/m² UVA, 0.5 W/m² UVB). The concentration of Zn during the test was determined by ICP-MS. In the regular experiment (absence of UV), the specific growth rate was slightly increased by low concentrations of nZnO and Zn²⁺ in comparison to the control. However, 10 and 100 mg/L of Zn²⁺ resulted in 45% and 68% inhibition of the specific growth rate, respectively. In the case of nZnO, both concentrations (10 and 100 mg/L) resulted in a similar ~30% inhibition, and the response was not dose-dependent. The lack of a dose-response relationship is often observed for nanoparticles; a possible explanation is that physical impacts prevail over chemical ones. In the presence of UV, the toxicity of Zn²⁺ increased, and 100 mg/L of Zn²⁺ caused total inhibition of the specific growth rate (100%). On the other hand, 100 mg/L of nZnO resulted in lower inhibition (19%) in comparison to the experiment without UV (30%). It is thus expected that the tested nZnO has low photoactivity but good UV absorption and/or reflective properties, and thus protects duckweed against UV impacts. The measured concentration of Zn in the test suspension decreased by only about 4% after 168 h in the case of ZnCl₂, whereas the concentration of Zn in the nZnO test decreased by 80%. It is expected that nZnO partially dissolved in the medium while agglomeration and sedimentation of particles took place, so the concentration of Zn at the water level decreased. The results of our study indicate that UV of natural intensity does not increase the toxicity of nZnO but that nZnO slightly protects the plant against negative UV effects. When the Zn²⁺ and nZnO results are compared, it seems that dissolved Zn plays a central role in nZnO toxicity.
Keywords: duckweed, environmental factors, nanoparticles, toxicity
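For reference, the specific growth rate and percentage inhibition reported above are typically computed as in the sketch below; the frond counts are hypothetical placeholders, not data from the study.
```python
import numpy as np

def specific_growth_rate(n_start, n_end, days):
    """Average specific growth rate mu = (ln(N_t) - ln(N_0)) / t."""
    return (np.log(n_end) - np.log(n_start)) / days

# Hypothetical frond counts after 7 days (control vs. 100 mg/L Zn2+)
mu_control = specific_growth_rate(12, 96, 7)
mu_treated = specific_growth_rate(12, 24, 7)

inhibition = (mu_control - mu_treated) / mu_control * 100
print(f"Inhibition of specific growth rate: {inhibition:.0f}%")
```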
Procedia PDF Downloads 338
988 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recursive neural networks has shown a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. The nonlinear feature extraction performed by neural networks involves a variety of transformations and mathematical optimizations, whereas principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach using deep machine learning has shown high accuracy. The paper's findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization of the neural network models enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
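The spectrogram preprocessing step described above (a windowed Fourier transform of the detector signal, flattened into a feature vector for the classifier) can be sketched as follows; the sampling rate, window choice and placeholder waveform are assumptions, not the study's settings.
```python
import numpy as np
from scipy.signal import spectrogram

# Hypothetical detector waveform: real data would come from the simulated
# CdTe readout described in the abstract.
fs = 1e6                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.05, 1 / fs)
pulse = np.random.randn(t.size)            # placeholder signal

# Short-time Fourier analysis with an explicit window choice; the window
# ("hann" here) is the tunable element the abstract refers to.
f, seg_t, Sxx = spectrogram(pulse, fs=fs, window="hann",
                            nperseg=256, noverlap=128)

# Flatten the time-frequency map into one feature vector per event,
# ready to be fed to a CNN/RNN or an ensemble of classifiers.
features = np.log1p(Sxx).reshape(-1)
print(features.shape)
```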
Procedia PDF Downloads 154
987 The Sr-Nd Isotope Data of the Platreef Rocks from the Northern Limb of the Bushveld Igneous Complex: Evidence of Contrasting Magma Composition and Origin
Authors: Tshipeng Mwenze, Charles Okujeni, Abdi Siad, Russel Bailie, Dirk Frei, Marcelene Voigt, Petrus Le Roux
Abstract:
The Platreef is a platinum group element (PGE) deposit in the northern limb of the Bushveld Igneous Complex (BIC) which was emplaced as a series of mafic and ultramafic sills between the Main Zone (MZ) and the country rocks. The PGE mineralisation in the Platreef is hosted in different rock types, and its distribution and style vary with depth and along strike. This study contributes towards understanding the processes involved in the genesis of the Platreef. Twenty-four Platreef (2 harzburgites, 4 olivine pyroxenites, 17 feldspathic pyroxenites and 1 gabbronorite) and a few MZ (1 gabbronorite and 1 leucogabbronorite) quarter core samples were collected from four drill cores (e.g., TN754, TN200, SS339, and OY482) and analysed for whole-rock Sr-Nd isotope data. The results show positive ɛNd values (+3.53 to +7.51) for the harzburgites, suggesting that their parental magmas were derived from the depleted mantle. The remaining Platreef rocks have negative ɛNd values (-2.91 to -22.88) and show significant variations in Sr-Nd isotopic composition. The first group of Platreef samples has relatively high isotopic compositions (ɛNd= -2.91 to -5.68; ⁸⁷Sr/⁸⁶Sri= 0.709177-0.711998). The second group of Platreef samples has Sr ratios (⁸⁷Sr/⁸⁶Sri= 0.709816-0.712106) overlapping with those of the first group but slightly lower ɛNd values (-7.44 to -8.39). Lastly, the third group of Platreef samples has lower ɛNd values (-10.82 to -14.32) and lower Sr ratios (⁸⁷Sr/⁸⁶Sri= 0.707545-0.710042) than the samples of the two Platreef groups mentioned above. There is, however, one Platreef sample with an ɛNd value (-5.26) in the range of the first group, but its Sr ratio (0.707281) is the lowest, even when compared to samples of the third Platreef group. There are also five other Platreef samples with either anomalous ɛNd values or Sr ratios, which makes it difficult to assess their isotopic compositions relative to the other samples. These isotopic variations in the Platreef samples indicate both multiple sources and multiple magma chambers in which varying crustal contamination styles operated during the evolution of these magmas prior to their emplacement as sills in the Platreef setting. Furthermore, the MZ rocks have different Sr-Nd isotopic compositions (for the OY482 gabbronorite [ɛNd= +0.65; ⁸⁷Sr/⁸⁶Sri= 0.711746]; for the TN754 leucogabbronorite [ɛNd= -7.44; ⁸⁷Sr/⁸⁶Sri= 0.709322]), which not only indicate different MZ magma chambers but also magmas different from those of the Platreef. Although the Platreef is still considered a single stratigraphic unit in the northern limb of the BIC, its genesis involved multiple magmatic processes which evolved independently of each other.
Keywords: crustal contamination styles, magma chambers, magma sources, multiple sills emplacement
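For readers unfamiliar with the notation, the ɛNd values quoted above are conventionally expressed relative to the chondritic uniform reservoir (CHUR); a standard formulation (not spelled out in the abstract) is:
```latex
\varepsilon_{\mathrm{Nd}} =
\left[
\frac{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{sample}}}
     {\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}} - 1
\right] \times 10^{4}
```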
Procedia PDF Downloads 170
986 Development of a Table-Top Composite Wire Fabrication System for Additive Manufacturing
Authors: Krishna Nand, Mohammad Taufik
Abstract:
Fused Filament Fabrication (FFF) is one of the most popular additive manufacturing (AM) technologies. In FFF technology, a wire-form material (filament) is fed into a heated chamber, where it is converted into a semi-solid form and extruded through a nozzle to be deposited on the build platform to fabricate the part. FFF technology is expanding and covering the market at a very rapid rate, so the need for raw materials for 3D printing is also increasing. The cost of 3D printing is directly affected by filament cost. To make 3D printing more economical, a compact and portable filament/wire extrusion system is needed. Wire extrusion systems to extrude ordinary wire/filament made of a single material are available in the market. However, extrusion systems to make a composite wire/filament are not available. Hence, in this study, initial efforts have been made to develop a table-top composite wire extruder. The developed system consists of mechanical parts, electronic parts, and a control system. The mechanical parts include a multiple-channel hopper, extrusion screw, melting chamber and nozzle, cooling zone, and spool winder, while the electronic parts include motors, a heater, a temperature sensor, and cooling fans. A control board has been used to control the various process parameters, such as temperature and motor speed. For the production of composite wire/filament, two different materials can be fed through the two channels of the hopper, to be mixed and carried to the heated zone by the extrusion screw. The extrusion screw is rotated by a motor, and the speed of this motor is controlled by the controller according to the required material extrusion rate. In the heated zone, the material melts with the help of a heating element and is extruded out of the nozzle in the form of a wire. The developed system occupies less floor space due to the vertical orientation of its heating chamber. It is capable of extruding ordinary as well as composite filament compatible with the 3D printers available in the market. Further, the developed system could be employed in the research and development of materials, processing, and characterization for 3D printers. The developed system presented in this study could be a good choice for hobbyists and researchers dealing with the fused filament fabrication process, allowing them to reduce 3D printing costs significantly by recycling waste material into 3D printer feed material. Further, it could also be explored as an alternative for filament production at the commercial level.
Keywords: additive manufacturing, 3D Printing, filament extrusion, pellet extrusion
Procedia PDF Downloads 171
985 Compression-Extrusion Test to Assess Texture of Thickened Liquids for Dysphagia
Authors: Jesus Salmeron, Carmen De Vega, Maria Soledad Vicente, Mireia Olabarria, Olaia Martinez
Abstract:
Dysphagia, or difficulty in swallowing, mostly affects elderly people: 56-78% of the institutionalized and 44% of the hospitalized. Thickening liquid food is a necessary measure in this situation because it reduces the risk of penetration-aspiration. Until now, and as proposed by the American Dietetic Association in 2002, possible consistencies have been categorized into three groups according to their viscosity: nectar (50-350 mPa•s), honey (350-1750 mPa•s) and pudding (>1750 mPa•s). The adequate viscosity level should be identified for every patient, according to her/his impairment. Nevertheless, a recent systematic review on the dysphagia diet indicated that there is no evidence to suggest any transition of clinical relevance between the three levels proposed. It was also stated that other physical properties of the bolus (slipperiness, density or cohesiveness, among others) could influence swallowing in affected patients and could contribute to the amount of remaining residue. Texture parameters therefore need to be evaluated as a possible alternative to viscosity. The aim of this study was to evaluate the instrumental compression-extrusion test as a possible tool to characterize changes over time in water thickened with various products at the three theoretical consistencies. Six commercial thickeners were used: NM® (NM), Multi-thick® (M), Nutilis Powder® (Nut), Resource® (R), Thick&Easy® (TE) and Vegenat® (V), all of them with a modified starch base. Only one of them, Nut, also contained 6.4% gum (guar, tara and xanthan). They were prepared as indicated in the instructions of each product, dispensing the corresponding amount for nectar, honey and pudding consistencies in 300 mL of tap water at 18ºC-20ºC. The mixture was stirred for about 30 s. Once it was homogeneously spread, it was dispensed into 30 mL plastic glasses, always to the same height. Each of these glasses was used as a measuring point. Viscosity was measured using a rotational viscometer (ST-2001, Selecta, Barcelona). The extrusion-compression test was performed using a TA.XT2i texture analyzer (Stable Micro Systems, UK) with a 25 mm diameter cylindrical probe (SMSP/25). Penetration distance was set at 10 mm and the speed at 3 mm/s. Measurements were made at 1, 5, 10, 20, 30, 40, 50 and 60 minutes from the moment the samples were mixed. From the force (g)–time (s) curves obtained in the instrumental assays, the maximum force peak (F) was chosen as the reference parameter. Viscosity (mPa•s) and F (g) proved to be highly correlated and developed similarly over time, following time-dependent quadratic models. It was possible to predict viscosity using F as an independent variable, as they were linearly correlated. In conclusion, the compression-extrusion test could be an alternative and useful tool to assess the physical characteristics of thickened liquids.
Keywords: compression-extrusion test, dysphagia, texture analyzer, thickener
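A minimal sketch of the data treatment described above (a quadratic model of F over time and the linear correlation between F and viscosity) is given below; the force and viscosity values are invented for illustration, not the study's measurements.
```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical measurements: maximum force peak F (g) and viscosity (mPa·s)
# recorded at the same time points for one thickener.
time_min = np.array([1, 5, 10, 20, 30, 40, 50, 60])
force_g = np.array([35, 52, 70, 95, 112, 121, 126, 128])
visc_mpas = np.array([300, 520, 760, 1100, 1350, 1480, 1540, 1570])

# Time-dependent quadratic model for F(t), as described in the abstract
coeffs = np.polyfit(time_min, force_g, deg=2)
print("F(t) ≈ %.3f t² + %.2f t + %.1f" % tuple(coeffs))

# Linear association between F and viscosity (basis for predicting viscosity)
r, p = pearsonr(force_g, visc_mpas)
slope, intercept = np.polyfit(force_g, visc_mpas, deg=1)
print(f"Pearson r = {r:.3f} (p = {p:.3g}); viscosity ≈ {slope:.1f}·F + {intercept:.0f}")
```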
Procedia PDF Downloads 372
984 MCD-017: Potential Candidate from the Class of Nitroimidazoles to Treat Tuberculosis
Authors: Gurleen Kour, Mowkshi Khullar, B. K. Chandan, Parvinder Pal Singh, Kushalava Reddy Yumpalla, Gurunadham Munagala, Ram A. Vishwakarma, Zabeer Ahmed
Abstract:
New chemotherapeutic compounds against multidrug-resistant Mycobacterium tuberculosis (Mtb) are urgently needed to combat drug resistance in tuberculosis (TB). Apart from in-vitro potency against the target, physicochemical and pharmacokinetic properties play an imperative role in the process of drug discovery. We have identified novel nitroimidazole derivatives with potential activity against Mycobacterium tuberculosis. One lead candidate, MCD-017, showed potent activity against the H37Rv strain (MIC = 0.5 µg/ml) and was further evaluated in the process of drug development. Methods: Basic physicochemical parameters like solubility and lipophilicity (LogP) were evaluated. Thermodynamic solubility was determined in PBS buffer (pH 7.4) using LC/MS-MS. The partition coefficient (Log P) of the compound was determined between octanol and phosphate buffered saline (PBS at pH 7.4) at 25°C by the microscale shake flask method. The compound followed Lipinski’s rule of five, which is predictive of good oral bioavailability, and was further evaluated for metabolic stability. In-vitro metabolic stability was determined in rat liver microsomes. The hepatotoxicity of the compound was also determined in the HepG2 cell line. The in vivo pharmacokinetic profile of the compound after oral dosing was also obtained using BALB/c mice. Results: The compound exhibited favorable solubility and lipophilicity. The physical and chemical properties of the compound were used as a first assessment of its drug-like properties. The compound obeyed Lipinski’s rule of five, with a molecular weight < 500, a number of hydrogen bond donors (HBD) < 5 and a number of hydrogen bond acceptors (HBA) of not more than 10. The log P of the compound was less than 5, and the compound is therefore expected to exhibit good absorption and permeation. Pooled rat liver microsomes were prepared from rat liver homogenate for measuring metabolic stability; 99% of the compound was not metabolized and remained intact. The compound did not exhibit cytotoxicity in HepG2 cells up to 40 µg/ml. The compound revealed a good pharmacokinetic profile at a dose of 5 mg/kg administered orally, with a half-life (t1/2) of 1.15 hours, a Cmax of 642 ng/ml, a clearance of 4.84 ml/min/kg and a volume of distribution of 8.05 l/kg. Conclusion: The emergence of multidrug-resistant (MDR) and extensively drug-resistant (XDR) tuberculosis emphasizes the need for novel drugs active against tuberculosis, and physicochemical and pharmacokinetic properties therefore need to be evaluated in the early stages of drug discovery to reduce the attrition associated with poor drug exposure. In summary, it can be concluded that MCD-017 may be considered a good candidate for further preclinical and clinical evaluations.
Keywords: mycobacterium tuberculosis, pharmacokinetics, physicochemical properties, hepatotoxicity
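The Lipinski rule-of-five screen mentioned above can be expressed as a simple check; the descriptor values passed in below are hypothetical, since the abstract does not report the actual numbers for MCD-017.
```python
def lipinski_pass(mw, logp, hbd, hba):
    """Return True if a compound satisfies Lipinski's rule of five:
    MW < 500, logP < 5, H-bond donors <= 5, H-bond acceptors <= 10."""
    return mw < 500 and logp < 5 and hbd <= 5 and hba <= 10

# Hypothetical descriptor values for an MCD-017-like nitroimidazole;
# the actual values are not reported in the abstract.
print(lipinski_pass(mw=360.0, logp=2.1, hbd=1, hba=6))  # -> True
```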
Procedia PDF Downloads 460
983 A Seven Year Single-Centre Study of Dental Implant Survival in Head and Neck Oncology Patients
Authors: Sidra Suleman, Maliha Suleman, Stephen Brindley
Abstract:
Oral rehabilitation of head and neck cancer patients plays a crucial role in the quality of life for such individuals post-treatment. Placement of dental implants or implant-retained prostheses can help restore oral function and aesthetics, which is often compromised following surgery. Conventional prosthodontic techniques can be insufficient in rehabilitating such patients due to their altered anatomy and reduced oral competence. Hence, there is a strong clinical need for the placement of dental implants. With an increasing incidence of head and neck cancer patients, the demand for such treatment is rising. Aim: The aim of the study was to determine the survival rate of dental implants in head and neck cancer patients placed at the Restorative and Maxillofacial Department, Royal Stoke University Hospital (RSUH), United Kingdom. Methodology: All patients who received dental implants between January 1, 2013 to December 31, 2020 were identified. Patients were excluded based on three criteria: 1) non-head and neck cancer patients, 2) no outpatient follow-up post-implant placement 3) provision of non-dental implants. Scanned paper notes and electronic records were extracted and analyzed. Implant survival was defined as fixtures that had remained in-situ / not required removal. Sample: Overall, 61 individuals were recruited from the 143 patients identified. The mean age was 64.9 years, with a range of 35 – 89 years. The sample included 37 (60.7%) males and 24 (39.3%) females. In total, 211 implants were placed, of which 40 (19.0%) were in the maxilla, 152 (72.0%) in the mandible and 19 (9.0%) in autogenous bone graft sites. Histologically 57 (93.4%) patients had squamous cell carcinoma, with 43 (70.5%) patients having either stage IVA or IVB disease. As part of treatment, 42 (68.9%) patients received radiotherapy, which was carried out post-operatively for 29 (69.0%) cases. Whereas 21 (34.4%) patients underwent chemotherapy, 13 (61.9%) of which were post-operative. The Median follow-up period was 21.9 months with a range from 0.9 – 91.4 months. During the study, 23 (37.7%) patients died and their data was censored beyond the date of death. Results: In total, four patients who had received radiotherapy had one implant failure each. Two mandibular implants failed secondary to osteoradionecrosis, and two maxillary implants did not survive as a result of failure to osseointegrate. The overall implant survival rates were 99.1% at three years and 98.1% at both 5 and 7 years. Conclusions: Although this data shows that implant failure rates are low, it highlights the difficulty in predicting which patients will be affected. Future studies involving larger cohorts are warranted to further analyze factors affecting outcomes.Keywords: oncology, dental implants, survival, restorative
Procedia PDF Downloads 238
982 Evaluation of Cardiac Rhythm Patterns after Open Surgical Maze-Procedures from Three Years' Experiences in a Single Heart Center
Authors: J. Yan, B. Pieper, B. Bucsky, H. H. Sievers, B. Nasseri, S. A. Mohamed
Abstract:
Since the clinical introduction of cardiac implantable electronic monitoring devices (CIMD), regular follow-up with long-term continuous monitoring of heart rhythm patterns has been possible, helping to optimize the efficacy of medications. Extensive analysis of circadian rhythm properties can disclose the distribution of arrhythmic events, which may support appropriate medication according to a rate- or rhythm-control strategy and minimize consequent afflictions. 348 patients (69 ± 0.5 years, 61.8% male) with pre-existing atrial fibrillation (AF), undergoing primary ablation therapy combined with coronary or valve operations and secondary implantation of CIMDs, were involved and divided into 3 groups: PAAF (paroxysmal AF) (n=99, 68.7% male), PEAF (persistent AF) (n=94, 62.8% male), and LSPEAF (long-standing persistent AF) (n=155, 56.8% male). All patients participated in a three-year ambulant follow-up (3, 6, 9, 12, 18, 24, 30 and 36 months). The burden of atrial fibrillation recurrence was assessed using the cardiac monitoring devices, whereby attack frequencies and their circadian patterns were systematically analyzed. Anticoagulants and regular anti-arrhythmic medications were evaluated, the latter being classified into rate-control and rhythm-control regimens. Patients in the PEAF group showed the lowest AF burden after the surgical ablation procedures compared to both of the other subtypes (p < 0.05). The AF recurrences were predominantly attacks shorter than one hour, mostly within 10 minutes (p < 0.05), regardless of AF subtype. Concerning the circadian distribution of the recurrent attacks, frequent AF attacks were mostly recorded in the morning in the PAAF group (p < 0.05), while patients with pre-existing PEAF reported fewer attack-induced discomforts in the latter half of the night, and those with LSPEAF only when they were not physically active after the primary surgical ablation. The different AF subtypes presented distinct therapeutic efficacies after appropriate surgical ablation procedures and distinct recurrence properties in terms of circadian distribution. Optimizing the medical regimen and drug dosages to maintain therapeutic success requires closer attention to detailed assessment during long-term follow-up. The rate-control strategy plays a much more important role than rhythm-control in the ongoing follow-up examinations.
Keywords: atrial fibrillation, CIMD, MAZE, rate-control, rhythm-control, rhythm patterns
Procedia PDF Downloads 159
981 An Exploratory Study on the Effect of a Fermented Dairy Product on Self-Reported Gut Complaints in US Recreational Athletes
Authors: Kersch-Counet C., Fransen K. H. S., Broyd M., Nyakayiru J. D. O. A., Schoemaker M. H., Mallee L. F., Bovee-Oudenhoven I. M. J.
Abstract:
Background: Around one third of people, including athletes, suffer from feelings of gut discomfort. Fermentation of dairy is a process that has been associated with products that can improve gut health. However, insight in (potential) health benefits of most fermented foods is limited to chemical analyses and in-vitro models. Objective: The aim of this open-label, single-arm explorative trial was to investigate in a real life setting the effect of consumption of a fermented whey product for 3 weeks on self-perceived physical and mental wellbeing and digestive issues in 150 US recreational athletes (20-50 years of age) with self-reported gut complaints at enrolment. Methods: Participants living at the West-Coast of the US received for 3 weeks a daily powder of 15 g of BiotisTM Fermentis to be mixed in water using a supplied shaker. Weekly questionnaires were conducted by MMR research to study the effect on physical/mental health issues and self-perceived gut complaints. Non-parametric tests (e.g., Friedman test) were used to assess statistical differences over time while the Kruskal-Wallis and Wilcoxon signed-rank tests were used for sub-groups analysis. Results: Bloating, stress and anxiety were the top 3 issues of the US recreational athletes. Satisfaction of physical wellbeing increased significantly throughout the 3-weeks of fermented whey product consumption (p<0.0005). Combined digestive issues decreased significantly after 2- and 3-weeks of product consumption, with bloating showing a significant reduction (p<0.05). There was a trend that self-reported stress levels reduced after 3 weeks and participants said to significantly feel more active, energetic, and vital (p<0.05). Subgroup analysis showed that gender and habitual protein supplement consumption were associated with specific health issues and modulated the response to the fermented dairy product. Conclusion: Daily consumption of the fermented BiotisTM Fermentis product is associated with a reduction in self-perceived gastrointestinal symptoms and improved overall wellbeing and mood state in US recreational athletes. This large nutrition and health consumer study brings valuable insights in self-reported gut complaints of recreational athletes in the US and their response to a fermented dairy product. A controlled clinical trial in a targeted population is recommended to scientifically substantiate the product effect as observed in this explorative study.Keywords: real-life study, digestive health, fermented whey, sports
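The nonparametric workflow named above (Friedman test across repeated weekly measurements, Wilcoxon signed-rank test for paired comparisons) can be sketched as follows; the score arrays are hypothetical placeholders, not trial data.
```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(1)
# Hypothetical weekly bloating scores (lower = better) for 30 participants
baseline = rng.integers(3, 8, size=30).astype(float)
week2 = baseline - rng.uniform(0.0, 1.5, size=30)
week3 = baseline - rng.uniform(0.5, 2.0, size=30)

# Friedman test: any change across the repeated weekly measurements?
stat, p = friedmanchisquare(baseline, week2, week3)
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4f}")

# Wilcoxon signed-rank test: paired comparison of week 3 vs. baseline
w_stat, w_p = wilcoxon(week3, baseline)
print(f"Wilcoxon week3 vs baseline: W = {w_stat:.1f}, p = {w_p:.4f}")
```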
Procedia PDF Downloads 282
980 Investigation on Single Nucleotide Polymorphism in Candidate Genes and Their Association with Occurrence of Mycobacterium avium Subspecies Paratuberculosis Infection in Cattle
Authors: Ran Vir Singh, Anuj Chauhan, Subhodh Kumar, Rajesh Rathore, Satish Kumar, B Gopi, Sushil Kumar, Tarun Kumar, Ramji Yadav, Donna Phangchopi, Shoor Vir Singh
Abstract:
Paratuberculosis, caused by Mycobacterium avium subspecies paratuberculosis (MAP), is a chronic granulomatous enteritis affecting ruminants. It is responsible for significant economic losses in the livestock industry worldwide. This organism is also of public health concern due to an unconfirmed link to Crohn’s disease. Susceptibility to paratuberculosis has been suggested to have a genetic component with low to moderate heritability. A number of SNPs in various candidate genes have been observed to affect susceptibility to paratuberculosis. The objective of this study was to explore the association of various SNPs in the candidate genes and a QTL region with MAP. A total of 117 SNPs from SLC11A1, IFNG, CARD15, TLR2, TLR4, CLEC7A, CD209, SP110, ANKARA2, PGLYRP1 and one QTL were selected for study. A total of 1222 cattle from various organized herds, gaushalas and farmer herds were screened for MAP infection by Johnin intradermal skin test, AGID, serum ELISA, fecal microscopy, fecal culture and IS900 blood PCR. Based on the results of these tests, a case and control population of 200 and 183 animals, respectively, was established for the study. The 117 SNPs from the 10 candidate genes and one QTL were validated/tested in our case and control population by the PCR-RFLP technique. Data were analyzed using SAS 9.3 software. Statistical analysis revealed that 107 out of the 117 SNPs were not significantly associated with the occurrence of MAP. Only SNP rs55617172 of TLR2, rs8193046 and rs8193060 of TLR4, rs110353594 and rs41654445 of CLEC7A, rs208814257 of CD209, rs41933863 of ANKRA2, two loci {SLC11A1 (53C/G)} and {IFNG (185 G/r)}, and SNP rs41945014 in the QTL region were significantly associated with MAP. Six of the 10 significant SNPs, viz. rs110353594 and rs41654445 from CLEC7A, rs8193046 and rs8193060 from TLR4, rs109453173 from SLC11A1 and rs208814257 from CD209, were validated in a new case and control population. Out of these, only one SNP, rs8193046 of the TLR4 gene, was found to be significantly associated with the occurrence of MAP in cattle. The odds ratio indicates that animals with the AG genotype were more susceptible to MAP, and this finding is in accordance with an earlier report. Hence, it reaffirms that the AG genotype can serve as a reliable genetic marker for identifying more susceptible cattle in future selection against MAP infection in cattle.
Keywords: SNP, candidate genes, paratuberculosis, cattle
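The genotype-association step described above can be illustrated with a 2x2 odds-ratio calculation; the counts below are hypothetical, not the study's data.
```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table for one SNP: rows = genotype carrier status
# (AG vs. other genotypes), columns = MAP status (case, control).
table = [[70, 40],    # AG carriers:    cases, controls
         [130, 143]]  # other genotypes: cases, controls

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
# OR > 1 would indicate higher susceptibility for AG carriers.
```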
Procedia PDF Downloads 361
979 Microglia Activation in Animal Model of Schizophrenia
Authors: Esshili Awatef, Manitz Marie-Pierre, Eßlinger Manuela, Gerhardt Alexandra, Plümper Jennifer, Wachholz Simone, Friebe Astrid, Juckel Georg
Abstract:
Maternal immune activation (MIA) resulting from maternal viral infection during pregnancy is a known risk factor for schizophrenia. The neural mechanisms by which maternal infections increase the risk for schizophrenia remain unknown, although the prevailing hypothesis argues that activation of the maternal immune system induces changes in the maternal-fetal environment that might interact with fetal brain development. This may lead to an activation of fetal microglia, inducing long-lasting functional changes in these cells. Based on post-mortem analyses showing an increased number of activated microglial cells in patients with schizophrenia, it can be hypothesized that these cells contribute to disease pathogenesis and may be actively involved in the gray matter loss observed in such patients. In the present study, we hypothesize that prenatal treatment with the inflammatory agent Poly(I:C) during embryogenesis contributes to microglial activation in the offspring, which may therefore represent a contributing factor to the pathogenesis of schizophrenia and underlines the need for new pharmacological treatment options. Pregnant rats were treated with a single intraperitoneal injection of Poly(I:C) or saline on gestation day 17. Brains of control and Poly(I:C) offspring were removed and cut into 20-μm-thick coronal sections using a cryostat. Brain slices were fixed and immunostained with an Iba1 antibody. Subsequently, Iba1-immunoreactivity was detected using a goat anti-rabbit secondary antibody. The sections were viewed and photographed under a microscope. The immunohistochemical analysis revealed an increase in microglial cell number in the prefrontal cortex in the offspring of Poly(I:C)-treated rats as compared to the controls injected with NaCl. However, no significant differences in microglial activation were observed in the cerebellum among the groups. Prenatal immune challenge with Poly(I:C) was able to induce long-lasting changes in the offspring brains. This led to higher activation of microglial cells in the prefrontal cortex, a brain region critical for many higher brain functions, including working memory and cognitive flexibility, which might be implicated in possible changes in cortical neuropil architecture in schizophrenia. Further studies will be needed to clarify the association between microglial cell activation and schizophrenia-related behavioral alterations.
Keywords: Microglia, neuroinflammation, PolyI:C, schizophrenia
Procedia PDF Downloads 422
978 Walking across the Government of Egypt: A Single Country Comparative Study of the Past and Current Condition of the Government of Egypt
Authors: Homyr L. Garcia, Jr., Anne Margaret A. Rendon, Carla Michaela B. Taguinod
Abstract:
Nothing is constant in this world but change. This is the reality wherein a lot of people fail to recognize and maybe, it is because of the fact that some see things that are happening with little value or no value at all until it’s gone. For the past years, Egypt was known for its stable government. It was able to withstand a lot of problems and crisis which challenged their country in ways which can never be imagined. In the present time, it seems like in just a snap of a finger, the said stability vanished and it was immediately replaced by a crisis which resulted to a failure in some parts of their government. In addition, this problem continued to worsen and the current situation of Egypt is just a reflection or a result of it. On the other hand, as the researchers continued to study the reasons why the government of Egypt is unstable, they concluded that there might be a possibility that they will be able to produce ways in which their country could be helped or improved. The instability of the government of Egypt is the product of combining all the problems which affects the lives of the people. Some of the reasons that the researchers found are the following: 1) unending doubts of the people regarding the ruling capacity of elected presidents, 2) removal of President Mohamed Morsi in position, 3) economic crisis, 4) a lot of protests and revolution happened, 5) resignation of the long term President Hosni Mubarak and 6) the office of the President is most likely available only to the chosen successor. Also, according to previous researches, there are two plausible scenarios for the instability of Egypt: 1) a military intervention specifically the Supreme Council of the Armed Forces or SCAF, resulting from a contested succession and 2) an Islamist push for political power which highlights the claim that religion is a hindrance towards the development of their country and government. From the eight possible reasons, the researchers decided that they will be focusing on economic crisis since the instability is more clearly seen in the country’s economy which directly affects the people and the government itself. In addition, they made a hypothesis which states that stable economy is a prerequisite towards a stable government. If they will be able to show how this claim is true by using the Social Autopsy Research Design for the qualitative method and Pearson’s correlation coefficient for the quantitative method, the researchers might be able to produce a proposal on how Egypt can stabilize their government and avoid such problems. Also, the hypothesis will be based from the Rational Action Theory which is a theory for understanding and modeling social and economy as well as individual behavior.Keywords: Pearson’s correlation coefficient, rational action theory, social autopsy research design, supreme council of the armed forces (SCAF)
Procedia PDF Downloads 412
977 DNA Hypomethylating Agents Induced Histone Acetylation Changes in Leukemia
Authors: Sridhar A. Malkaram, Tamer E. Fandy
Abstract:
Purpose: 5-Azacytidine (5AC) and decitabine (DC) are DNA hypomethylating agents. We recently demonstrated that both drugs increase the enzymatic activity of the histone deacetylase enzyme SIRT6. Accordingly, we are comparing the changes H3K9 acetylation changes in the whole genome induced by both drugs using leukemia cells. Description of Methods & Materials: Mononuclear cells from the bone marrow of six de-identified naive acute myeloid leukemia (AML) patients were cultured with either 500 nM of DC or 5AC for 72 h followed by ChIP-Seq analysis using a ChIP-validated acetylated-H3K9 (H3K9ac) antibody. Chip-Seq libraries were prepared from treated and untreated cells using SMARTer ThruPLEX DNA- seq kit (Takara Bio, USA) according to the manufacturer’s instructions. Libraries were purified and size-selected with AMPure XP beads at 1:1 (v/v) ratio. All libraries were pooled prior to sequencing on an Illumina HiSeq 1500. The dual-indexed single-read Rapid Run was performed with 1x120 cycles at 5 pM final concentration of the library pool. Sequence reads with average Phred quality < 20, with length < 35bp, PCR duplicates, and those aligning to blacklisted regions of the genome were filtered out using Trim Galore v0.4.4 and cutadapt v1.18. Reads were aligned to the reference human genome (hg38) using Bowtie v2.3.4.1 in end-to-end alignment mode. H3K9ac enriched (peak) regions were identified using diffReps v1.55.4 software using input samples for background correction. The statistical significance of differential peak counts was assessed using a negative binomial test using all individuals as replicates. Data & Results: The data from the six patients showed significant (Padj<0.05) acetylation changes at 925 loci after 5AC treatment versus 182 loci after DC treatment. Both drugs induced H3K9 acetylation changes at different chromosomal regions, including promoters, coding exons, introns, and distal intergenic regions. Ten common genes showed H3K9 acetylation changes by both drugs. Approximately 84% of the genes showed an H3K9 acetylation decrease by 5AC versus 54% only by DC. Figures 1 and 2 show the heatmaps for the top 100 genes and the 99 genes showing H3K9 acetylation decrease after 5AC treatment and DC treatment, respectively. Conclusion: Despite the similarity in hypomethylating activity and chemical structure, the effect of both drugs on H3K9 acetylation change was significantly different. More changes in H3K9 acetylation were observed after 5 AC treatments compared to DC. The impact of these changes on gene expression and the clinical efficacy of these drugs requires further investigation.Keywords: DNA methylation, leukemia, decitabine, 5-Azacytidine, epigenetics
Procedia PDF Downloads 154
976 Planning Railway Assets Renewal with a Multiobjective Approach
Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida
Abstract:
Transportation infrastructure systems are fundamental in modern society and the economy. However, they need modernization, maintenance, and reinforcement interventions, which require large investments. In many countries, accumulated intervention delays arise from aging and intense use, magnified by the financial constraints of the past. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructure (e.g., railways, roads). This problem requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model aims at minimizing three objectives: (i) the yearly investment peak, by evenly spreading investment throughout multiple years; (ii) total cost, which includes extra maintenance costs incurred from renewal backlogs; (iii) priority delays related to work start postponements on the higher priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by having railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for pursuing the research now presented. The methodology, inspired by a real case study and tested with real data, reflects aspects of the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services. The model cannot, therefore, allow for an accumulation of works on the same line, which may cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over 5 years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest in the world), so it is not expected that considerably larger problems appear in real life; an average 25-year backlog and a ten-year project horizon were considered. Despite the very large increase in the number of decision variables (200 times as large), the computational time cost did not increase very significantly. It is thus expected that just about any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., degradation of objective (i)), solutions improve considerably in the remaining two objectives.
Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling
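A toy, weighted-sum sketch of a renewal-scheduling linear program in the spirit of the model described above is shown below; the sections, costs, priorities and trade-off weights are assumptions, not the authors' formulation.
```python
import numpy as np
from scipy.optimize import linprog

# x[s, t] is the fraction of section s renewed in year t; P is the yearly
# investment peak. Objectives (peak, priority-weighted delay) are combined
# into one weighted sum for illustration.
S, T = 3, 3
cost = np.array([10.0, 6.0, 8.0])     # renewal cost per section (assumed)
prio = np.array([3.0, 1.0, 2.0])      # priority weight per section (assumed)
alpha, beta = 1.0, 0.1                # trade-off weights between objectives

n = S * T + 1                         # last variable is the peak P
c = np.zeros(n)
c[-1] = alpha                         # minimize the investment peak ...
for s in range(S):
    for t in range(T):
        c[s * T + t] = beta * prio[s] * t   # ... plus priority-weighted delay

# Peak definition: sum_s cost[s] * x[s, t] <= P for every year t
A_ub = np.zeros((T, n))
for t in range(T):
    for s in range(S):
        A_ub[t, s * T + t] = cost[s]
    A_ub[t, -1] = -1.0
b_ub = np.zeros(T)

# Completion: each section must be fully renewed over the planning horizon
A_eq = np.zeros((S, n))
for s in range(S):
    A_eq[s, s * T:(s + 1) * T] = 1.0
b_eq = np.ones(S)

bounds = [(0, 1)] * (S * T) + [(0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.x[:-1].reshape(S, T).round(2), "peak =", round(res.x[-1], 2))
```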
Procedia PDF Downloads 151
975 Anatomical and Histochemical Investigation of the Leaf of Vitex agnus-castus L.
Authors: S. Mamoucha, J. Rahul, N. Christodoulakis
Abstract:
Introduction: Nature has been the source of medicinal agents since the dawn of the human existence on Earth. Currently, millions of people, in the developing world, rely on medicinal plants for primary health care, income generation and lifespan improvement. In Greece, more than 5500 plant taxa are reported while about 250 of them are considered to be of great pharmaceutical importance. Among the plants used for medical purposes, Vitex agnus-castus L. (Verbenaceae) is known since ancient times. It is a small tree or shrub, widely distributed in the Mediterranean basin up to the Central Asia. It is also known as chaste tree or monks pepper. Theophrastus mentioned the shrub several times, as ‘agnos’ in his ‘Enquiry into Plants’. Dioscorides mentioned the use of V. agnus-castus for the stimulation of lactation in nursing mothers and the treatment of several female disorders. The plant has important medicinal properties and a long tradition in folk medicine as an antimicrobial, diuretic, digestive and insecticidal agent. Materials and methods: Leaves were cleaned, detached, fixed, sectioned and investigated with light and Scanning Electron Microscopy (SEM). Histochemical tests were executed as well. Specific histochemical reagents (osmium tetroxide, H2SO4, vanillin/HCl, antimony trichloride, Wagner’ s reagent, Dittmar’ s reagent, potassium bichromate, nitroso reaction, ferric chloride and di methoxy benzaldehyde) were used for the sub cellular localization of secondary metabolites. Results: Light microscopical investigations of the elongated leaves of V. agnus-castus revealed three layers of palisade parenchyma, just below the single layered adaxial epidermis. The spongy parenchyma is rather loose. Adaxial epidermal cells are larger in magnitude, compared to those of the abaxial epidermis. Four different types of capitate, secreting trichomes, were localized among the abaxial epidermal cells. Stomata were observed at the abaxial epidermis as well. SEM revealed the interesting arrangement of trichomes. Histochemical treatment on fresh and plastic embedded tissue sections revealed the nature and the sites of secondary metabolites accumulation (flavonoids, steroids, terpenes). Acknowledgment: This work was supported by IKY - State Scholarship Foundation, Athens, Greece.Keywords: Vitex agnus-castus, leaf anatomy, histochemical reagents, secondary metabolites
Procedia PDF Downloads 388
974 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms
Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli
Abstract:
Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges: delayed BOD5 results from the laboratory, which take 7 to 8 analysis days, hinder a wastewater treatment plant's (WWTP) ability to react to different situations and meet treatment goals. This work presents a solution that restores this ability; reducing BOD turnaround time from days to hours is our quest. The solution is based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment. Furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process to catch anomalies sooner. In our system for continuous monitoring of the BOD removed by the effluent treatment process, the DT algorithm for analyzing the data applies ML to a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access to the wastewater sample (influent and effluent); hydraulic conduction tubes; pumps and valves for the batch sample and dilution water; an air supply for dissolved oxygen (DO) saturation; a cooler/heater for sample thermal stability; an optical DO sensor based on fluorescence quenching; pH, ORP, temperature, and atmospheric pressure sensors; and a local PLC/CPU for the TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically, linked to its initial conditions: DO (saturated) and initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimization of the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in S. Paulo, Brazil.
Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning
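A minimal sketch of a four-state kinetic model of the kind described above (DO, organic substrate, biomass, products) is shown below; the rate law and all parameter values are illustrative assumptions, not the authors' calibrated model.
```python
import numpy as np
from scipy.integrate import solve_ivp

k, Y, kd = 0.05, 0.4, 0.005          # rate, yield and decay constants (assumed)

def rhs(t, y):
    do, s, x, p = y                  # DO, substrate, biomass, products
    growth = k * s * x               # substrate oxidation rate (assumed form)
    return [-(1 - Y) * growth,       # d(DO)/dt: oxygen consumed
            -growth,                 # dS/dt: organic matter removed
            Y * growth - kd * x,     # dX/dt: biomass growth minus decay
            (1 - Y) * growth]        # dP/dt: CO2 + H2O produced

y0 = [8.0, 6.0, 2.0, 0.0]            # DO near saturation; diluted sample; P = 0
sol = solve_ivp(rhs, (0.0, 48.0), y0, t_eval=np.linspace(0, 48, 97))

# BOD exerted over the window ~ oxygen consumed by the (diluted) sample
bod_exerted = y0[0] - sol.y[0]
print(f"O2 consumed after 48 h: {bod_exerted[-1]:.2f} mg/L")
```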
Procedia PDF Downloads 77
973 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement
Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes
Abstract:
Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on the TI’s AWR 1843 BOOST radar, utilizing a coarse classification approach distinguishing between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of clustering techniques, we propose a 2-Level clustering approach. This approach builds on the state-of-the-art Density-based spatial clustering of applications with noise (DBSCAN). The objective is to first cluster objects based on their velocity, then refine the analysis by clustering based on position. The initial level identifies groups of objects with similar velocities and movement patterns. The subsequent level refines the analysis by considering the spatial distribution of these objects. The clusters obtained from the first level serve as input for the second level of clustering. Our proposed technique surpasses the classical DBSCAN algorithm in terms of geometrical metrics, including homogeneity, completeness, and V-score. Relevant cluster features are extracted and utilized to classify objects using an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board. The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology
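The two-level clustering idea described above can be sketched as follows; the synthetic point cloud and the eps/min_samples parameters are assumptions, not the tuned values from the study.
```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 3))        # columns: x (m), y (m), velocity (m/s)

def two_level_dbscan(pts, eps_v=0.5, eps_xy=1.0, min_samples=5):
    """Level 1: cluster detections by velocity; level 2: refine by position."""
    labels = -np.ones(len(pts), dtype=int)
    next_label = 0
    vel_labels = DBSCAN(eps=eps_v, min_samples=min_samples).fit_predict(pts[:, 2:3])
    for v in set(vel_labels) - {-1}:
        idx = np.where(vel_labels == v)[0]
        pos_labels = DBSCAN(eps=eps_xy, min_samples=min_samples).fit_predict(pts[idx, :2])
        for p in set(pos_labels) - {-1}:
            labels[idx[pos_labels == p]] = next_label
            next_label += 1
    return labels               # -1 marks noise; cluster features feed the SVM

print(two_level_dbscan(points))
```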
Procedia PDF Downloads 94972 Dry Modifications of PCL/Chitosan/PCL Tissue Scaffolds
Authors: Ozan Ozkan, Hilal Turkoglu Sasmazel
Abstract:
Natural polymers are widely used in tissue engineering applications because of their biocompatibility, biodegradability, and solubility in the physiological medium. Synthetic polymers are also widely utilized in tissue engineering applications because they carry no risk of infectious diseases and do not cause immune system reactions. However, the disadvantages of both polymer types prevent their efficient individual use as tissue scaffolds. Therefore, the idea of using natural and synthetic polymers together as a single 3D hybrid scaffold, which has the advantages of both and the disadvantages of neither, has entered the literature. On the other hand, even though these hybrid structures support cell adhesion and/or proliferation, various surface modification techniques are applied to their surfaces to create topographical changes and to obtain the reactive functional groups required for the immobilization of biomolecules, especially on the surfaces of the synthetic polymers, in order to improve cell adhesion and proliferation. The study presented here aimed to improve the surface functionality and topography of layer-by-layer electrospun 3D poly-epsilon-caprolactone/chitosan/poly-epsilon-caprolactone hybrid tissue scaffolds by using atmospheric pressure plasma methods, and thus to improve the cell adhesion and proliferation of these tissue scaffolds. The creation of functional hydroxyl and amine groups and of topographical changes on the scaffold surfaces was realized by using two different atmospheric pressure plasma systems (nozzle type and dielectric barrier discharge (DBD) type) operated under different gas media (air, Ar+O2, Ar+N2). The plasma modification time and distance for the nozzle type plasma system, as well as the plasma modification time and the gas flow rate for the DBD type plasma system, were optimized by monitoring the changes in surface hydrophilicity with contact angle measurements. The topographical and chemical characterizations of the modified biomaterial surfaces were carried out with SEM and ESCA, respectively. The results showed that the atmospheric pressure plasma modifications carried out with both the nozzle type plasma and the DBD plasma caused topographical and functionality changes on the surfaces of the layer-by-layer electrospun tissue scaffolds. However, the shelf life studies indicated that the hydrophilicity introduced to the surfaces was mainly due to the functionality changes. According to the optimized results, samples treated with nozzle type air plasma modification applied for 9 minutes from a distance of 17 cm and with Ar+O2 DBD plasma modification applied for 1 minute under a 70 cm3/min O2 flow rate were found to have the highest hydrophilicity compared to the pristine samples.Keywords: biomaterial, chitosan, hybrid, plasma
Procedia PDF Downloads 277971 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments
Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz
Abstract:
Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, in the north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, where optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment's characteristics. This aligns with the well-known “uniqueness of place” paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited focus in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle; later, limited systematic hyperparameter tuning using grid search suggested a modification to 270 days. However, despite the significance of this hyperparameter for hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.Keywords: LSTMs, streamflow, hyperparameters, hydrology
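To make the role of the input sequence length concrete, the hedged PyTorch sketch below builds hourly training windows of configurable length and passes them through a single-timescale LSTM; the architecture, the candidate lengths, and the placeholder forcing/discharge arrays are illustrative assumptions and do not reproduce the MTS-LSTM used in the study.

```python
import numpy as np
import torch
import torch.nn as nn

def make_sequences(forcings, discharge, seq_len):
    """Build (X, y) samples where each input holds `seq_len` hours of
    meteorological forcings and the target is the discharge of the last hour."""
    X, y = [], []
    for i in range(seq_len, len(discharge)):
        X.append(forcings[i - seq_len:i])
        y.append(discharge[i])
    X = torch.tensor(np.array(X), dtype=torch.float32)
    y = torch.tensor(np.array(y), dtype=torch.float32)
    return X, y

class StreamflowLSTM(nn.Module):
    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.lstm(x)            # out: (batch, seq_len, hidden)
        return self.head(out[:, -1])     # predict from the last time step

# The hyperparameter of interest: the input sequence length, in hours
forcings = np.random.rand(5000, 4)       # placeholder hourly forcings
discharge = np.random.rand(5000)         # placeholder hourly discharge
for seq_len in (168, 336, 720):          # candidate lengths (1, 2 and 4 weeks)
    X, y = make_sequences(forcings, discharge, seq_len)
    model = StreamflowLSTM(n_features=4)
    pred = model(X[:32])                 # single forward pass as a sanity check
    print(seq_len, tuple(X.shape), tuple(pred.shape))
```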
Procedia PDF Downloads 76970 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction
Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun
Abstract:
Usability has become a basic requirement for a product from the consumer's perspective, and failing this requirement ends with the customer not using the product. Identifying usability issues by analyzing the quantitative and qualitative data collected from usability testing and evaluation activities aids the process of product design, yet the lack of studies on analysis methodologies for qualitative text data in the usability field limits the potential of these data for more useful applications. At the same time, the possibility of analyzing qualitative text data has grown with the rapid development of data analysis fields such as natural language processing, which enables computers to understand human language, and machine learning, which provides predictive models and clustering tools. Therefore, this research aims to study the capability of text processing algorithms in the analysis of qualitative text data collected from usability activities. This research utilized datasets collected from an LG neckband headset usability experiment, consisting of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with the text-processing algorithm, includes embedding the comments into a vector space, labeling them with the subject and product physical feature data, and clustering to validate the resulting comment vector clusters. The result shows 'volume and music control button' as the usability feature that matches best with the clusters of comment vectors: the centroid comments of one cluster emphasized button positions, while the centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons were designed separately, the participants experienced less confusion, and thus the comments mentioned only the buttons' positions. When the volume and music control buttons were designed as a single button, the participants experienced interface issues regarding the buttons, such as the operating methods of functions and confusion between the functions' buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms in analyzing qualitative text data from usability testing and evaluations.Keywords: usability, qualitative data, text-processing algorithm, natural language processing
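A minimal sketch of the comment-embedding and clustering step, assuming a TF-IDF representation and k-means with scikit-learn; the actual study may use a different vectorization and clustering algorithm, and of course the real survey comments rather than the toy examples below.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "volume button is too close to the music control button",
    "I kept pressing the wrong button for volume",
    "the play button position is hard to reach",
    "sound quality is great but the band feels heavy",
    "neckband is comfortable to wear all day",
]

# Embed the comments into a vector space (TF-IDF is an illustrative choice)
vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform(comments)

# Cluster the comment vectors; k=2 is illustrative
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Inspect the terms closest to each centroid to interpret the clusters
terms = vectorizer.get_feature_names_out()
for c, centroid in enumerate(km.cluster_centers_):
    top = np.argsort(centroid)[::-1][:5]
    print(f"cluster {c}:", [terms[i] for i in top])
```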
Procedia PDF Downloads 286969 EverPro as the Missing Piece in the Plant Protein Portfolio to Aid the Transformation to Sustainable Food Systems
Authors: Aylin W Sahin, Alice Jaeger, Laura Nyhan, Gregory Belt, Steffen Münch, Elke K. Arendt
Abstract:
Our current food systems cause an increase in malnutrition, resulting in more people being overweight or obese in the Western world. Additionally, our natural resources are under enormous pressure, and greenhouse gas emissions increase yearly, contributing significantly to climate change. Hence, transforming our food systems is of the highest priority. Plant-based food products have a lower environmental impact than their animal-based counterparts, representing a more sustainable protein source. However, most plant-based protein ingredients, such as soy and pea, lack indispensable amino acids and are extremely limited in their functionality and, thus, in their food application potential. They are known to have low solubility in water and to change their properties during processing. The low solubility presents the biggest challenge in the development of milk alternatives, leading to inferior protein content and protein quality in the dairy alternatives on the market. Moreover, plant-based protein ingredients often possess an off-flavour, which makes them less attractive to consumers. EverPro, a plant-protein isolate originating from Brewer's Spent Grain, the most abundant by-product of the brewing industry, represents the missing piece in the plant protein portfolio. With a protein content of >85%, it is of high nutritional value and includes all indispensable amino acids, which allows the protein quality gap of plant proteins to be closed. Moreover, it possesses high techno-functional properties: it is fully soluble in water (101.7 ± 2.9%), has a high fat absorption capacity (182.4 ± 1.9%), and has a foaming capacity superior to that of soy or pea protein. This makes EverPro suitable for a vast range of food applications. Furthermore, it does not cause changes in viscosity during the heating and cooling of dispersions, such as beverages. Besides its outstanding nutritional and functional characteristics, the production of EverPro has a much lower environmental impact than dairy or other plant protein ingredients. Life cycle assessment analysis showed that EverPro has the lowest impact on global warming compared to soy protein isolate, pea protein isolate, whey protein isolate, and egg white powder. It also contributes significantly less to freshwater eutrophication, marine eutrophication, and land use than the protein sources mentioned above. EverPro is a prime example of a sustainable ingredient and the type of plant protein the food industry has been waiting for: nutritious, multi-functional, and environmentally friendly.Keywords: plant-based protein, upcycled, brewers' spent grain, low environmental impact, highly functional ingredient
Procedia PDF Downloads 83968 Robust Electrical Segmentation for Zone Coherency Delimitation Base on Multiplex Graph Community Detection
Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad
Abstract:
The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution is electrical segmentation, which consists of creating coherence zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on the sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that utilizes a unified representation to compute a flattening of all layers. This unified representation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. Our experiments show when, and in which contexts, robust electrical segmentation is beneficial.Keywords: community detection, electrical segmentation, multiplex graph, power grid
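The sketch below illustrates the flattening idea with NetworkX: the layers of the multiplex graph (one per operating situation, all sharing the same buses) are aggregated into a single weighted graph, on which a standard community detection routine is run. The layer weights, the toy grid, and the use of greedy modularity in place of the authors' penalized unified representation are all assumptions.

```python
import networkx as nx
from networkx.algorithms import community

def flatten_layers(layers, layer_weights=None):
    """Aggregate a multiplex graph (one nx.Graph per operating situation, all
    sharing the same buses) into a single weighted graph by summing edge weights."""
    layer_weights = layer_weights or [1.0] * len(layers)
    flat = nx.Graph()
    for lw, g in zip(layer_weights, layers):
        flat.add_nodes_from(g.nodes)
        for u, v, data in g.edges(data=True):
            w = lw * data.get("weight", 1.0)
            if flat.has_edge(u, v):
                flat[u][v]["weight"] += w
            else:
                flat.add_edge(u, v, weight=w)
    return flat

# Two toy layers representing two grid situations over the same five buses
g1 = nx.Graph([(1, 2, {"weight": 2.0}), (2, 3, {"weight": 1.5}), (4, 5, {"weight": 2.2})])
g2 = nx.Graph([(1, 2, {"weight": 1.8}), (3, 4, {"weight": 0.2}), (4, 5, {"weight": 2.0})])

flat = flatten_layers([g1, g2])
zones = community.greedy_modularity_communities(flat, weight="weight")
print([sorted(z) for z in zones])
```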
Procedia PDF Downloads 83967 Development and Validation of a Semi-Quantitative Food Frequency Questionnaire for Use in Urban and Rural Communities of Rwanda
Authors: Phenias Nsabimana, Jérôme W. Some, Hilda Vasanthakaalam, Stefaan De Henauw, Souheila Abbeddou
Abstract:
Tools for dietary assessment in adults are limited in low- and middle-income settings. The objective of this study was to develop and validate a semi-quantitative food frequency questionnaire (FFQ) against the multiple-pass 24-hour recall tool for use in urban and rural Rwanda. A total of 212 adults (154 females and 58 males), aged 18-49, including 105 urban and 107 rural residents from the four regions of Rwanda, were recruited for the present study. A multiple-pass 24-hour recall technique was used to collect dietary data in both urban and rural areas in four rounds, on different days (one weekday and one weekend day), each separated by a period of three months, from November 2020 to October 2021. The details of all the foods and beverages consumed over the 24-hour period of the day prior to the interview were collected during face-to-face interviews. A list of foods, beverages, and commonly consumed recipes was developed by the study researchers and ten research assistants from the different regions of Rwanda. Non-standard recipes were collected when the information was available. A single semi-quantitative FFQ was also developed in the same group discussion prior to the beginning of the data collection. The FFQ was collected at the beginning and the end of the data collection period. Data were collected digitally. The amounts of energy and macronutrients contributed by each food, recipe, and beverage will be computed based on the nutrient composition reported in food composition tables and the weight consumed. Median energy and nutrient contents of the food intakes from the FFQ and 24-hour recalls and the median differences (24-hour recall minus FFQ) will be calculated. Kappa, Spearman, Wilcoxon, and Bland-Altman plot statistics will be computed to evaluate the correlation and agreement between the estimated nutrient and energy intakes found by the two methods. Differences will be tested for significance, and all analyses will be done with STATA 11. Data collection was completed in November 2021. Data cleaning is ongoing, and the data analysis is expected to be completed by July 2022. A developed and validated semi-quantitative FFQ will then be available for use in dietary assessment. The developed FFQ will help researchers collect reliable data that will support policy makers in planning proper dietary change interventions in Rwanda.Keywords: food frequency questionnaire, reproducibility, 24-H recall questionnaire, validation
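As an illustration of the planned agreement analysis (the study itself uses STATA 11), the Python/SciPy sketch below computes a Spearman correlation, a Wilcoxon signed-rank test, and Bland-Altman bias and limits of agreement on placeholder intake data; the numbers are invented, and the weighted Kappa step for categorical agreement is omitted.

```python
import numpy as np
from scipy import stats

# Placeholder per-subject energy intakes (kcal/day) from the two methods
ffq = np.array([2100, 1850, 2400, 1950, 2250, 2000], dtype=float)
recall = np.array([1980, 1900, 2300, 2050, 2150, 1900], dtype=float)

# Correlation and paired difference tests analogous to those planned in the study
rho, p_rho = stats.spearmanr(ffq, recall)
w_stat, p_w = stats.wilcoxon(ffq, recall)

diff = recall - ffq                      # 24-hour recall minus FFQ
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)            # Bland-Altman limits of agreement
print(f"Spearman rho={rho:.2f} (p={p_rho:.3f}), Wilcoxon p={p_w:.3f}")
print(f"Bland-Altman bias={bias:.0f} kcal, limits of agreement ±{loa:.0f} kcal")
```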
Procedia PDF Downloads 145966 Disability Management and Occupational Health Enhancement Program in Hong Kong Hospital Settings
Authors: K. C. M. Wong, C. P. Y. Cheng, K. Y. Chan, G. S. C. Fung, T. F. O. Lau, K. F. C. Leung, J. P. C. Fok
Abstract:
Hospital Authority (HA) is the statutory body that manages all public hospitals in Hong Kong. The Occupational Care Medicine Service (OMCS) is an in-house multi-disciplinary team responsible for injury management in HA. Hospital administrative services (AS) provide essential support in daily hospital operations to facilitate the provision of quality healthcare services. An occupational health enhancement program in the Tai Po Hospital (TPH) domestic service supporting unit (DSSU) was piloted in 2013 with a satisfactory outcome; the keys to success were staff engagement and management support. Riding on this success, the program was rolled out to another 5 AS departments of Alice Ho Miu Ling Nethersole Hospital (AHNH) and TPH in 2015. This paper highlights the indispensable components of a disability management and occupational health enhancement program in hospital settings. Objectives: 1) facilitate the workplace to support staff with health problems affecting work, 2) enhance staff occupational health. Methodology: The hospital Occupational Safety and Health (OSH) team and the AS departments (catering, linen services, and DSSU) of AHNH and TPH worked closely with OMCS. Focus group meetings and worksite visits were conducted with frontline staff engagement. OSH hazards were identified and corresponding OSH improvement measures were introduced, e.g., the invention of a high dusting device to minimize working at height, a tailor-made linen cart to minimize back bending at work, etc. Specific MHO trainings were offered to each AS department. A disability management workshop was provided to supervisors in order to enhance their knowledge and skills in return-to-work (RTW) facilitation. Based on the injured staff member's health condition, OMCS would provide a work recommendation, and an RTW plan was formulated with the engagement of the staff member and their supervisor. Genuine communication among stakeholders, with expectation management, paved the way for realistic goal setting and the success of our program. Outcome: After implementation of the program, a significant drop of 26% in musculoskeletal disorder-related sickness absence days was noted in 2016 compared to the average of 2013-2015. The improvement was attributed to the innovative OSH improvement measures, teamwork, staff engagement, and management support. Staff and supervisor feedback was very encouraging: 90% of respondents rated the program as very satisfactory in the program evaluation. This program exemplifies good work sharing among departments to support staff in need.Keywords: disability management, occupational health, return to work, occupational medicine
Procedia PDF Downloads 216965 Beware the Trolldom: Speculative Interests and Policy Implications behind the Circulation of Damage Claims
Authors: Antonio Davola
Abstract:
Moving from the evaluations made by Richard Posner in his judgment in the case Carhart v. Halaska, the paper seeks to analyse the so-called 'litigation troll' phenomenon and the development of a damage claims market, i.e., a market in which the right to propose claims is voluntarily exchangeable for money and can be asserted by private buyers. The aim of our study is to assess whether the implementation of a 'damage claims market' might represent a resource for victims or if, on the contrary, it might operate solely as a speculation tool for private investors. The analysis will move from the US experience and will then focus on the EU framework. Firstly, the paper will analyse the relation between the litigation troll phenomenon and patent troll activity: even though these activities are considered similar by Posner, a comparative study shows how these practices significantly differ in their impact on the market and on consumer protection, even when starting from similar economic perspectives. The second part of the paper will focus on the main specific concerns related to litigation trolling activity. The main issues that will be addressed are the risk that the circulation of damage claims might spur non-meritorious litigation and the implications of the misalignment between the victim of a tort and the actual plaintiff in court arising from the sale of a claim. In its third part, the paper will focus on the opportunities and benefits that the introduction and regulation of a claims market might imply both for potential claim sellers and buyers, in order to ultimately assess whether such a solution might actually increase individuals' legal empowerment. Through the damage claims market, compensation would be granted more quickly and easily to consumers who have suffered harm: tort victims would, in fact, be compensated instantly upon the sale of their claims without any burden of proof. On the other hand, claim buyers would profit from the gap between the amount that a consumer would accept for an immediate refund and the compensation awarded in court. In the fourth part of the paper, the analysis will focus on the legal legitimacy of litigation trolling activity in the US and the EU frameworks. Even though there is no express provision that forbids the sale of the right to pursue a claim in court - or that deems such a right to be non-transferable - the procedural laws of individual States (especially in the EU panorama) must be taken into account in evaluating this aspect. The fifth and final part of the paper will summarize the various data collected to offer an evaluation of whether, and through which normative solutions, litigation trolling might bring benefits for competition, and of its overall effect on consumer protection.Keywords: competition, claims, consumer's protection, litigation
Procedia PDF Downloads 233964 Frequency Response of Complex Systems with Localized Nonlinearities
Authors: E. Menga, S. Hernandez
Abstract:
Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy for a single component than for assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies made of linear components joined together at interfaces. From a modelling and computational point of view, these types of joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs are able to run nonlinear analyses in the time domain. They treat the whole structure as nonlinear even if there is one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology for obtaining the nonlinear frequency response of structures, whose nonlinearities can be considered as localized sources, is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and allows the Nonlinear Frequency Response Functions (NLFRFs) to be obtained through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and explaining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the Modal Database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first one is a two-DOF spring-mass-damper system, and the second example takes into account a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered as acting linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber
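A schematic numerical illustration of the FRF-updating idea, for a two-DOF system with a localized cubic spring: the underlying linear model is corrected with an equivalent (describing-function) stiffness at the nonlinear DOF and iterated to convergence. All masses, stiffnesses, damping levels, the cubic coefficient, and the relaxed fixed-point scheme are assumptions chosen for illustration; they do not reproduce the authors' NL SDMM formulation (implemented in MATLAB).

```python
import numpy as np

# Two-DOF spring-mass-damper with a localized cubic spring acting on DOF 0
M = np.diag([1.0, 1.0])
K = np.array([[2000.0, -1000.0], [-1000.0, 2000.0]])
C = 0.002 * K                        # light proportional damping (assumed)
k3 = 1.0e5                           # cubic stiffness coefficient (assumed)
F = np.array([10.0, 0.0])            # harmonic force amplitude on DOF 0

def harmonic_response(w, dK=None):
    """Steady-state response X(w) of the (possibly modified) linear system."""
    Kmod = K if dK is None else K + dK
    return np.linalg.solve(Kmod + 1j * w * C - w**2 * M, F)

def nonlinear_response(w, n_iter=100, relax=0.5):
    """Correct the underlying linear model with an equivalent stiffness for the
    local cubic spring (first-harmonic describing function), iterating a relaxed
    fixed point until the equivalent stiffness stabilises."""
    k_eq = 0.0
    for _ in range(n_iter):
        dK = np.zeros((2, 2))
        dK[0, 0] = k_eq                          # localized modification only
        X = harmonic_response(w, dK)
        k_new = 0.75 * k3 * abs(X[0]) ** 2       # equivalent cubic stiffness
        k_eq = relax * k_new + (1 - relax) * k_eq
    return X

for w in (25.0, 31.6, 45.0, 54.8):               # excitation frequencies, rad/s
    lin, nl = harmonic_response(w), nonlinear_response(w)
    print(f"w={w:5.1f}  |X1| linear={abs(lin[0]):.4e}  nonlinear={abs(nl[0]):.4e}")
```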
Procedia PDF Downloads 270963 Consensus Reaching Process and False Consensus Effect in a Problem of Portfolio Selection
Authors: Viviana Ventre, Giacomo Di Tollo, Roberta Martino
Abstract:
The portfolio selection problem includes the evaluation of many criteria that are difficult to compare directly and is characterized by uncertain elements. The portfolio selection problem can be modeled as a group decision problem in which several experts are invited to present their assessments. In this context, it is important to study and analyze the process of reaching a consensus among group members. Indeed, due to the various differences among experts, reaching consensus is not necessarily simple or easily achievable. Moreover, the concept of consensus is accompanied by the concept of false consensus, which is particularly interesting in the dynamics of group decision-making processes. False consensus can alter the evaluation and selection phase of the alternatives and is the consequence of the decision maker's inability to recognize that his preferences are conditioned by subjective structures. The present work aims to investigate the dynamics of consensus attainment in a group decision problem in which equivalent portfolios are proposed. In particular, the study aims to analyze the impact of the subjective structure of the decision maker during the evaluation and selection phase of the alternatives. The experimental framework is therefore divided into three phases. In the first phase, experts are asked to evaluate the characteristics of all portfolios individually, without peer comparison, arriving independently at the selection of the preferred portfolio. The experts' evaluations are used to obtain individual Analytical Hierarchical Processes that define the weight each expert gives to all criteria with respect to the proposed alternatives. This step provides insight into how the decision maker's decision process develops, step by step, from goal analysis to alternative selection. The second phase describes the decision maker's state through Markov chains. In fact, the individual weights obtained in the first phase can be reinterpreted as transition weights from one state to another. Thus, with the construction of the individual transition matrices, the possible next state of the expert is determined from the individual weights at the end of the first phase. Finally, the experts meet, and the process of reaching consensus is analyzed by considering the individual states obtained at the previous stage and the false consensus bias. The work contributes to the study of the impact of subjective structures, quantified through the Analytical Hierarchical Process, and of how they combine with the false consensus bias in group decision-making dynamics and the consensus reaching process in problems involving the selection of equivalent portfolios.Keywords: analytical hierarchical process, consensus building, false consensus effect, markov chains, portfolio selection problem
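The sketch below illustrates, under stated assumptions, the two quantitative ingredients mentioned above: an AHP priority vector obtained from a pairwise-comparison matrix (principal eigenvector) and a Markov step that propagates the expert's state using those weights as transition probabilities. The pairwise judgments and the way the weights are turned into a transition matrix are illustrative; the paper does not spell out its exact construction.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority vector of an AHP pairwise-comparison matrix
    (principal eigenvector, normalised to sum to 1)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

def next_state_distribution(transition, current):
    """One step of the Markov chain describing the expert's state."""
    return current @ transition

# Illustrative pairwise comparisons of 3 equivalent portfolios by one expert
pairwise = np.array([[1.0,   3.0, 0.5],
                     [1/3.,  1.0, 0.25],
                     [2.0,   4.0, 1.0]])
w = ahp_weights(pairwise)
print("AHP weights:", np.round(w, 3))

# Reinterpret the (row-normalised) weights as transition probabilities between
# preference states, one row per current state (illustrative construction)
T = np.tile(w, (3, 1))
current = np.array([1.0, 0.0, 0.0])      # expert currently prefers portfolio A
print("Next-state distribution:", np.round(next_state_distribution(T, current), 3))
```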
Procedia PDF Downloads 98