Search results for: slow ontology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 869

119 Increasing the Dialogue in Workplaces Enhances the Age-Friendly Organisational Culture and Helps Employees Face Work-Related Dilemmas

Authors: Heli Makkonen, Eini Hyppönen

Abstract:

The ageing of employees, the availability of workforce, and employees’ engagement in work are today’s challenges in the field of health care and social services, and particularly in the care of older people. Therefore, it is important to enhance both the attractiveness of the work in the field of older people’s care and the retention of employees in the field, and also to pay attention to the length of careers. The length of careers can be affected, for example, by developing an age-friendly organisational culture. Changing the organisational culture in a workplace is, however, a slow process which requires engagement from employees and enhanced dialogue between them. This article presents an example of age-friendly organisational culture in an older people’s care unit and presents the results of developing this organisational culture to meet the identified development challenges. In this research-based development process, the cycles used in action research were applied. Three workshops were arranged for employees in a service home for older people. The workshops served as interventions, and the employees and their manager were given several consecutive assignments to complete between them. In addition to the workshops, the employees benchmarked two other service homes. In the workshops, data was collected by observing and documenting the conversations. Thematic analysis was then used to identify the factors connected to an age-friendly organisational culture. By analysing the data and comparing it to previous studies, we recognised some dilemmas that were hindering or enhancing the attractiveness of work and the retention of employees in this nursing home. After each intervention, the process was reflected upon and evaluated, and the next steps were planned. The areas of development identified in the study were related to, for example, the flexibility of work, holistic ergonomics, the physical environment at the workplace, and the workplace culture.
Some of the areas of development were taken over by the work community and carried out in cooperation with, for example, occupational health care. We encouraged the work community, and the employees provided us with information about their progress. In this research project, the focus was on the development of the workplace culture and, in particular, on the development of the culture of interaction. The workshops revealed employees’ attitudes and strong opinions, which can be a challenge from the point of view of the attractiveness of work and the retention of employees in the field. On the other hand, the data revealed that the work community has an interest in developing the dialogue within it. Enhancing the dialogue gave the employees the opportunity and resources to face even challenging dilemmas related to the attractiveness of work and the retention of employees in the field. Psychological safety was also enhanced at the same time. The results of this study are part of a broader study that aims at building a model for extending older employees’ careers.

Keywords: age-friendliness, attractiveness of work, dialogue, older people, organisational culture, workplace culture

Procedia PDF Downloads 53
118 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and the material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process in which the etched material depends on the feed speed of the jet at each instant during the process. On the other hand, PLA processes are usually described as discrete systems, and the total removed material is calculated by the summation of the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, then consecutive shots are close enough and the behaviour can be similar to that of a continuous process. Using this approximation, a generic continuous model can be described for both processes.
The inverse problem is usually solved for this kind of process by simply controlling dwell time in proportion to the required depth of milling at each single pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
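The linear first guess mentioned above can be sketched in a few lines: if the etched depth is modeled as the footprint matrix acting on the dwell times, the dwell times follow from a least-squares solve. The Gaussian footprint, the grid, and the target profile below are all invented for illustration; the abstract's actual contribution replaces this linear guess with a discrete adjoint optimization valid beyond shallow etching.

```python
import numpy as np

# Hypothetical illustration of the linear (shallow-etch) model: the depth at
# each surface pixel is the sum of contributions from the beam footprint
# centred at every dwell position.

def beam_footprint(x, centre, width=1.0):
    """Assumed Gaussian energy distribution of the beam."""
    return np.exp(-((x - centre) ** 2) / (2 * width ** 2))

n = 50
x = np.linspace(0.0, 10.0, n)
# Footprint matrix: column j is the etch-rate profile of a dwell at x[j]
E = np.array([beam_footprint(x, c) for c in x]).T

target_depth = 1.0 + 0.5 * np.sin(x)   # invented prescribed freeform profile

# Linear least-squares guess for the dwell times (non-negativity ignored here)
dwell, *_ = np.linalg.lstsq(E, target_depth, rcond=None)
residual = np.linalg.norm(E @ dwell - target_depth)
```

For a smooth target the linear solve reproduces the profile closely; the adjoint approach is needed once the surface-dependent removal rate makes the forward model nonlinear.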

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 257
117 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval

Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle

Abstract:

Information regarding the post-mortem interval (PMI) in criminal investigations is vital to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as ‘grave-wax’, is formed when post-mortem adipose tissue is converted into a solid material that is heavily comprised of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation is able to slow down the decomposition process. Therefore, analysing the changes in the patterns of fatty acids during the early decomposition process may make it possible to estimate the period of burial, and hence the PMI. The current study investigated the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition can be shown to be associated with the duration of the burial, and hence may be used to estimate PMI. Adipose tissue from the abdominal region of domestic pigs (Sus scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9) and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters using a gas chromatography system. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period. Levels rose (to 116 and 60.2 mg/mL, respectively) and then fell from the second week onwards to reach 19.3 and 18.3 mg/mL, respectively, at week 6.
Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively), followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in the concentrations, several new fatty acids appeared in the latter weeks, while other fatty acids that were detectable in the time-zero sample were lost. There are several possible ways to utilise fatty acid analysis as a basic technique for approximating PMI: the quantification of marker fatty acids and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could be an additional tool to the techniques already available for estimating the PMI of a corpse.
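The quantification step ("levels of the marker fatty acids were quantified from their respective standards") is, in its simplest form, a single-point external-standard calculation. The sketch below assumes direct proportionality between peak area and concentration; the peak areas and the standard concentration are invented for illustration and are not data from the study.

```python
# Hypothetical single-point external-standard quantification for GC peaks.
# Assumes a linear detector response through the origin.

def quantify(peak_area_sample, peak_area_standard, conc_standard_mg_ml):
    """Concentration by direct proportionality to the standard's response."""
    return peak_area_sample / peak_area_standard * conc_standard_mg_ml

# Example: a 50 mg/mL standard giving a peak area of 1.0e6 would place a
# sample peak of 1.384e6 at 69.2 mg/mL (values chosen for illustration).
c16_conc = quantify(1.384e6, 1.0e6, 50.0)
```

In practice a multi-point calibration curve would be preferred, but the single-point form shows the arithmetic behind the reported mg/mL values.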

Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval

Procedia PDF Downloads 103
116 Self-Sensing Concrete Nanocomposites for Smart Structures

Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi

Abstract:

In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures through the identification of behavioural anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquakes. While traditional sensors can be applied only at a limited number of points, providing only partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be inserted within concrete elements, transforming the structures themselves into sets of widespread sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite has been obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored.
Among conductive carbon nanofillers, carbon nanotubes seem particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to the nanofiller dispersion and to the influence of the amount of nano-inclusions in the cement matrix need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mix, the electrical properties of the hardened composites and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
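The piezoresistive sensing principle described in the abstract reduces, in its simplest form, to inferring strain from the fractional change in resistance through a gauge factor. The sketch below is a minimal illustration of that correlation; the gauge factor value is an assumed placeholder, not a measured property of the nanocomposite in the study.

```python
# Minimal sketch of the self-sensing principle: strain inferred from the
# fractional resistance change via a gauge factor (GF).
# GF = 100 below is an assumed, illustrative value.

def strain_from_resistance(r, r0, gauge_factor=100.0):
    """Strain estimated as (dR / R0) / GF."""
    return (r - r0) / r0 / gauge_factor

# A 1% resistance increase with GF = 100 corresponds to 100 microstrain
eps = strain_from_resistance(101.0, 100.0)
```

Calibrating the gauge factor against known mechanical loads is precisely what the electromechanical tests in the experimental campaign accomplish.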

Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring

Procedia PDF Downloads 206
115 Policy Initiatives That Increase Mass-Market Participation of Fuel Cell Electric Vehicles

Authors: Usman Asif, Klaus Schmidt

Abstract:

In recent years, the development of alternative fuel vehicles has helped to reduce carbon emissions worldwide. As the number of vehicles continues to increase, the energy demand will also increase. Therefore, we must consider automotive technologies that are efficient and less harmful to the environment in the long run. Battery Electric Vehicles (BEVs) have gained popularity in recent years because of their lower maintenance, lower fuel costs, and lower carbon emissions. Nevertheless, BEVs show several disadvantages, such as slow charging times and lower range than traditional combustion-powered vehicles. These factors keep many people from switching to BEVs. The authors of this research believe that these limitations can be overcome by using fuel cell technology. Fuel cells convert the chemical energy of hydrogen into electrical energy to power the motor, thereby replacing the heavy lithium batteries that are expensive and hard to recycle. Also, in contrast to battery-powered electric vehicles, Fuel Cell Electric Vehicles (FCEVs) offer higher ranges and shorter refuelling times and are therefore competitive with electric vehicles. However, FCEVs have not gained the same popularity as electric vehicles due to stringent legal frameworks, underdeveloped infrastructure, high fuel transport and storage costs, and the expense of fuel cell technology itself. This research focuses on the legal frameworks for hydrogen-powered vehicles, and on how a change in these policies may improve hydrogen fueling infrastructure and lower hydrogen transport and storage costs. These policies may also facilitate reductions in fuel cell technology costs. In order to attain a better framework, a number of countries have developed conceptual roadmaps. These roadmaps have set out a series of objectives to increase the access of FCEVs to their respective markets.
This research will specifically focus on policies in Japan, Europe, and the USA in their attempt to shape the automotive industry of the future. The researchers also suggest additional policies that may help to accelerate the advancement of FCEVs to mass-markets. The approach was to provide a solid literature review using resources from around the globe. After a subsequent analysis and synthesis of this review, the authors concluded that in spite of existing legal challenges that have hindered the advancement of fuel-cell technology in the automobile industry in the past, new initiatives that enhance and advance the very same technology in the future are underway.

Keywords: fuel cell electric vehicles, fuel cell technology, legal frameworks, policies and regulations

Procedia PDF Downloads 93
114 An Approach to Study the Biodegradation of Low Density Polyethylene Using Microbial Strains of Bacillus subtilis, Aspergillus niger, and Pseudomonas fluorescens in Different Media Forms and Salt Conditions

Authors: Monu Ojha, Rahul Rana, Satywati Sharma, Kavya Dashora

Abstract:

The global production rate of plastics has increased enormously, and global demand for polyethylene resins, namely high-density polyethylene (HDPE), linear low-density polyethylene (LLDPE) and low-density polyethylene (LDPE), is expected to rise drastically. These resins accumulate in the environment, posing a potential ecological threat, as they degrade at a very slow rate and remain in the environment indefinitely. The aim of the present study was to investigate the potential of commonly found soil microbes, namely Bacillus subtilis, Aspergillus niger and Pseudomonas fluorescens, to biodegrade LDPE in the lab under solid and liquid media conditions, as well as in soil in the presence of 1% salt. This study was conducted at the Indian Institute of Technology, Delhi, India from July to September, when the average temperature and RH (relative humidity) were 33 degrees Celsius and 80%, respectively. It revealed that the weight loss of an LDPE strip obtained from the market, of approximately 4 x 6 cm dimensions, was greater in liquid broth media than in solid agar media. The percentage weight loss by P. fluorescens, A. niger and B. subtilis observed after 80 days of incubation was 15.52, 9.24 and 8.99%, respectively, in broth media, and 6.93, 2.18 and 4.76% in agar media. LDPE strips from the same source and of the same dimensions were buried in soil in the presence of the above microbes with 1% salt (NaCl, obtained from commercial table salt), at the same temperature and RH of 33 degrees Celsius and 80%. It was found that the rate of degradation was higher in soil than under lab conditions. The weight loss of the LDPE strips under these conditions was found to be 32.98, 15.01 and 17.09% for P. fluorescens, A. niger and B. subtilis, respectively. The breaking strength was found to be 9.65 N, 29 N and 23.85 N for P. fluorescens, A. niger and B. subtilis, respectively.
SEM analysis conducted on a Zeiss EVO 50 confirmed that the surface of LDPE becomes physically weak after biological treatment. There was an increase in surface roughness, indicating surface erosion of the LDPE film. FTIR (Fourier-transform infrared spectroscopy) analysis of the degraded LDPE films showed stretching of the aldehyde group at 3334.92 and 3228.84 cm-1, and C–C=C symmetric stretching of the aromatic ring at 1639.49 cm-1. There was also C=O stretching of the aldehyde group at 1735.93 cm-1. An N=O bend was also observed at 1365.60 cm-1, along with C–O stretching of the ether group at 1217.08 and 1078.21 cm-1.
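The gravimetric measure reported throughout the abstract is the percentage weight loss of a strip relative to its initial weight. The sketch below shows that arithmetic; the weights are invented for illustration and merely chosen to reproduce one of the reported percentages.

```python
# Sketch of the gravimetric degradation measure: percentage weight loss of
# an LDPE strip relative to its initial weight. Weights are hypothetical.

def percent_weight_loss(w_initial, w_final):
    return (w_initial - w_final) / w_initial * 100.0

# e.g. a strip going from 0.500 g to 0.4224 g has lost 15.52% of its weight
loss = percent_weight_loss(0.500, 0.4224)
```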

Keywords: microbial degradation, LDPE, Aspergillus niger, Bacillus subtilis, Pseudomonas fluorescens, common salt

Procedia PDF Downloads 141
113 Toxicity and Biodegradability of Veterinary Antibiotic Tiamulin

Authors: Gabriela Kalcikova, Igor Bosevski, Ula Rozman, Andreja Zgajnar Gotvajn

Abstract:

Antibiotics are extensively used in human medicine and also in animal husbandry to prevent or control infections. Recently, a lot of attention has been paid to veterinary antibiotics, because their global consumption is increasing and is expected to reach 106,600 tons in 2030. Most veterinary antibiotics are introduced into the environment via animal manure, which is used as fertilizer. One such veterinary antibiotic is tiamulin. It is used in the form of the fumarate for the treatment of pigs and poultry, for the prophylaxis and treatment of dysentery, pneumonia and mycoplasmal infections, but its environmental impact is practically unknown. Tiamulin has been found to be very persistent in animal manure, and it is thus expected that it can be transported into the aquatic environment during rainfall and affect various organisms. To assess its environmental impact, it is necessary to evaluate its biodegradability and its toxicity to organisms from different levels of the food chain. Therefore, the aim of our study was to evaluate the ready biodegradability and the toxicity of tiamulin fumarate to various organisms. The bioassays used included the luminescent bacterium Vibrio fischeri, heterotrophic and nitrifying microorganisms of activated sludge, the water flea Daphnia magna and the duckweed Lemna minor. For each species, EC₅₀ values were calculated. A ready-biodegradability test was used, which provides information about the biodegradability of tiamulin under the most common environmental conditions. The results of our study showed that tiamulin affects the selected organisms differently. The most sensitive organisms were the water fleas, with 48hEC₅₀ = 14.2 ± 4.8 mg/L, and the duckweed, with 168hEC₅₀ = 22.6 ± 0.8 mg/L. Higher concentrations of tiamulin (from 10 mg/L) significantly affected the photosynthetic pigment content of the duckweed, and concentrations above 80 mg/L caused visible chlorosis.
This is in agreement with previous studies showing a significant effect of tiamulin on green algae and cyanobacteria. Tiamulin had a low effect on microorganisms. The lowest toxicity was observed for the heterotrophic microorganisms (30minEC₅₀ = 1656 ± 296 mg/L), followed by Vibrio fischeri (30minEC₅₀ = 492 ± 21 mg/L), while the most sensitive were the nitrifying microorganisms (30minEC₅₀ = 183 ± 127 mg/L). The reason is most probably the mode of action of tiamulin, which is effective against gram-positive bacteria, while gram-negative bacteria (e.g., Vibrio fischeri) are more tolerant to it. Biodegradation of tiamulin was very slow, with a long lag phase of 20 days. The maximal degradation reached 40 ± 2% within the 43 days of the test, so tiamulin, like other antibiotics (e.g., ciprofloxacin), is not readily biodegradable. Tiamulin is a widely used antibiotic in veterinary medicine and is thus present in the environment. According to our results, tiamulin can have a negative effect on water fleas and duckweeds, but at concentrations several orders of magnitude higher than those found in any environmental compartment. Tiamulin has low toxicity to the tested microorganisms, but it has very low biodegradability and is thus potentially persistent in the environment.
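One common way to derive an EC₅₀ value like those reported above is log-linear interpolation between the two concentrations bracketing the 50% effect level (a full dose-response model fit is the more rigorous alternative). The data points below are invented for illustration and do not come from the study.

```python
import math

# Hedged sketch of EC50 estimation by log-linear interpolation between the
# two tested concentrations that bracket the 50% effect level.
# Concentrations and effects below are invented example data.

def ec50_interpolated(concs, effects):
    """concs ascending (mg/L); effects as fractions of the maximal response."""
    for (c1, e1), (c2, e2) in zip(zip(concs, effects), zip(concs[1:], effects[1:])):
        if e1 <= 0.5 <= e2:
            # interpolate on log-concentration, where dose-response curves
            # are approximately linear around the midpoint
            f = (0.5 - e1) / (e2 - e1)
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% effect level not bracketed by the data")

ec50 = ec50_interpolated([1, 5, 10, 20, 50], [0.05, 0.2, 0.4, 0.6, 0.9])
```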

Keywords: antibiotics, biodegradability, tiamulin, toxicity

Procedia PDF Downloads 161
112 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil

Authors: Yasser R. Tawfic, Mohamed A. Eid

Abstract:

Foundation differential settlement and the resulting tilting of the supported structure is an occasionally encountered engineering problem. It may be caused by overloading, changes in ground soil properties or unsupported nearby excavations. Engineering thinking points directly toward the logical solution for such a problem: uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet grouted soil-crete columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economic, fast and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to cause a balancing settlement on the less-settled side, which must be done carefully and at a proper rate. This principle was applied in the stabilization of the Leaning Tower of Pisa by soil extraction from the ground surface. In this research, the authors attempt to introduce a new solution with a different point of view: the micro-tunneling technique is presented here as an intentional cause of ground deformation. In general, micro-tunneling is expected to induce limited ground deformations. Thus, the researchers propose to apply the technique to form small unsupported holes in the ground to produce the target deformations. This shall be done in four phases:
• Application of one or more micro-tunnels, depending on the existing differential settlement value, under the raised side of the tilted structure.
• For each individual tunnel, the lining shall be pulled out from both sides (from the jacking and receiving shafts) at a slow rate.
• If required, according to calculations and site records, an additional surface load can be applied on the raised foundation side.
• Finally, strengthening soil grouting shall be applied for stabilization after adjustment.
A finite element based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and the induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. Comparison of the results also shows the importance of position selection and of the gradual effect of the tunnel group. Thus, a new engineering solution is presented to one of the challenges of structural and geotechnical engineering.
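The abstract computes surface settlements with a finite element model; as a back-of-envelope counterpart, tunneling-induced surface settlement is classically idealised as a Gaussian trough (Peck's empirical form). The sketch below uses that classical form with illustrative parameters; it is not the paper's FEM model.

```python
import math

# Classical Gaussian (Peck-type) settlement trough above a tunnel.
# s_max and i below are illustrative, not values from the study.

def settlement(x, s_max, i):
    """Surface settlement at horizontal offset x from the tunnel axis.
    s_max: settlement above the axis; i: trough-width parameter."""
    return s_max * math.exp(-x ** 2 / (2 * i ** 2))

# Directly above the axis the settlement equals s_max; at x = i it has
# dropped to exp(-0.5), about 60.7% of s_max.
s0 = settlement(0.0, 10.0, 3.0)   # mm, hypothetical
si = settlement(3.0, 10.0, 3.0)
```

Positioning such troughs under the raised side of a tilted structure is the intuition behind the proposed corrective micro-tunneling, with the FEM then capturing soil-structure interaction that the closed form cannot.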

Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures

Procedia PDF Downloads 186
111 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health

Authors: Minna Pikkarainen, Yueqiang Xu

Abstract:

The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the strictly regulated field of official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: “correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions”, which we call the connected health approach. Currently, issues related to security, privacy, consumer consent and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. A MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today’s unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. In this research, the work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors: 1) the individual (citizen or professional controlling/using the services), i.e. the data subject, 2) services providing personal data (e.g. startups providing data collection apps or devices), 3) health and wellness services utilizing the aforementioned data, and 4) services authorizing access to this data under the individual’s explicit consent.
Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models and proposes a fifth type of healthcare data model, the MyData Blockchain Platform. This new architecture is developed using the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization enabled by blockchain, i.e. the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.
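The tamper-evidence property that a blockchain contributes to consent management can be shown with a toy hash chain: each consent record commits to the hash of the previous one, so any later modification is detectable. This is a generic illustration with invented field names, not the paper's actual platform architecture.

```python
import hashlib
import json

# Toy tamper-evident ledger of consent records (illustrative only).

def add_record(chain, record):
    """Append a record that commits to the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any modified or reordered record breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev": prev_hash},
                             sort_keys=True)
        if block["prev"] != prev_hash or \
           block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = block["hash"]
    return True

ledger = []
add_record(ledger, {"subject": "citizen-1", "grants": "wellness-app", "scope": "steps"})
add_record(ledger, {"subject": "citizen-1", "revokes": "wellness-app"})
ok = verify(ledger)                               # untampered chain verifies
ledger[0]["record"]["scope"] = "all-data"         # simulate tampering
tampered_ok = verify(ledger)                      # verification now fails
```

A production platform adds distributed consensus and access control on top of this basic integrity mechanism.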

Keywords: blockchain, health data, platform, action design

Procedia PDF Downloads 75
110 Gene Expression Meta-Analysis of Potential Shared and Unique Pathways Between Autoimmune Diseases Under anti-TNFα Therapy

Authors: Charalabos Antonatos, Mariza Panoutsopoulou, Georgios K. Georgakilas, Evangelos Evangelou, Yiannis Vasilopoulos

Abstract:

The extended tissue damage and severe clinical outcomes of autoimmune diseases, accompanied by high annual costs to the overall health care system, highlight the need for an efficient therapy. Increasing knowledge of the pathophysiology of specific chronic inflammatory diseases, namely Psoriasis (PsO), the Inflammatory Bowel Diseases (IBD) comprising Crohn’s disease (CD) and Ulcerative colitis (UC), and Rheumatoid Arthritis (RA), has provided insights into the underlying mechanisms that maintain the inflammation, such as Tumor Necrosis Factor alpha (TNF-α). Hence, anti-TNFα biological agents pose an ideal therapeutic approach. Despite the efficacy of anti-TNFα agents, several clinical trials have shown that 20-40% of patients do not respond to the treatment. Nowadays, high-throughput technologies have been recruited to elucidate the complex interactions in multifactorial phenotypes, the most ubiquitous being transcriptome quantification analyses. In this context, a random-effects meta-analysis of available gene expression cDNA microarray datasets was performed between responders and non-responders to anti-TNFα therapy in patients with IBD, PsO, and RA. Publicly available datasets were systematically searched from inception to the 10th of November 2020 and selected for further analysis if they assessed the response to anti-TNFα therapy with clinical score indexes from inflamed biopsies. Specifically, 4 IBD (79 responders/72 non-responders), 3 PsO (40 responders/11 non-responders) and 2 RA (16 responders/6 non-responders) datasets were selected. After the separate pre-processing of each dataset, 4 separate meta-analyses were conducted: three disease-specific ones and a single combined meta-analysis on the disease-specific results. The MetaVolcano R package (v.1.8.0) was utilized for a random-effects meta-analysis through the Restricted Maximum Likelihood (REML) method.
The top 1% of the most consistently perturbed genes in the included datasets was highlighted through the TopConfects approach while maintaining a 5% False Discovery Rate (FDR). Genes were considered Differentially Expressed (DEGs) if they had P ≤ 0.05 and |log2(FC)| ≥ log2(1.25) and were perturbed in at least 75% of the included datasets. Over-representation analysis was performed using Gene Ontology and Reactome pathways for both the up- and down-regulated genes in all 4 meta-analyses. Protein-protein interaction networks were also incorporated in the subsequent analyses with STRING v11.5 and Cytoscape v3.9. The disease-specific meta-analyses detected multiple distinct pro-inflammatory and immune-related down-regulated genes for each disease, such as NFKBIA, IL36, and IRAK1, respectively. Pathway analyses revealed unique and shared pathways between the diseases, such as Neutrophil Degranulation and Signaling by Interleukins. The combined meta-analysis unveiled 436 DEGs, 86 of which were up- and 350 down-regulated, confirming the aforementioned shared pathways and genes, as well as uncovering genes that participate in anti-inflammatory pathways, namely IL-10 signaling. The identification of key biological pathways and regulatory elements is imperative for the accurate prediction of a patient’s response to biological drugs. Meta-analysis of such gene expression data could aid the challenging effort to unravel the complex interactions implicated in the response to anti-TNFα therapy in patients with PsO, IBD, and RA, as well as distinguish gene clusters and pathways that are altered across this heterogeneous phenotype.
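The DEG criterion stated above (P ≤ 0.05, |log2(FC)| ≥ log2(1.25), perturbed in at least 75% of datasets) can be expressed as a simple filter. The gene records below are invented examples chosen only to exercise each branch of the rule; NFKBIA is named because the abstract mentions it, but the numbers are not the study's.

```python
import math

# Sketch of the DEG filter described in the meta-analysis.
# Each gene: (meta P-value, summary log2 fold change,
#             number of datasets where perturbed, total datasets)

def is_deg(p, log2fc, n_perturbed, n_datasets):
    return (p <= 0.05
            and abs(log2fc) >= math.log2(1.25)
            and n_perturbed / n_datasets >= 0.75)

genes = {
    "NFKBIA": (0.01, -0.60, 4, 4),   # passes: significant, strong, consistent
    "GENE_X": (0.20, -0.80, 4, 4),   # fails the P-value threshold
    "GENE_Y": (0.01,  0.10, 4, 4),   # fold change below log2(1.25) ~ 0.32
}
degs = [g for g, args in genes.items() if is_deg(*args)]
```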

Keywords: anti-TNFα, autoimmune, meta-analysis, microarrays

Procedia PDF Downloads 148
109 Controlling the Release of Cyt C and L-Dopa from pNIPAM-AAc Nanogel Based Systems

Authors: Sulalit Bandyopadhyay, Muhammad Awais Ashfaq Alvi, Anuvansh Sharma, Wilhelm R. Glomm

Abstract:

Release of drugs from nanogels and nanogel-based systems can occur under the influence of external stimuli such as temperature, pH, and magnetic fields. pNIPAm-AAc nanogels respond to the combined action of both temperature and pH, the former mostly determined by hydrophilic-to-hydrophobic transitions above the volume phase transition temperature (VPTT), while the latter is controlled by the degree of protonation of the carboxylic acid groups. These nanogel-based systems are promising candidates in the field of drug delivery. Combining nanogels with magneto-plasmonic nanoparticles (NPs) introduces imaging and targeting modalities along with stimuli-response in one hybrid system, thereby incorporating multifunctionality. Fe@Au core-shell NPs possess an optical signature in the visible spectrum owing to the localized surface plasmon resonance (LSPR) of the Au shell, and superparamagnetic properties stemming from the Fe core. Although several synthesis methods exist to control the size and physico-chemical properties of pNIPAm-AAc nanogels, there is no comprehensive study of how incorporating one or more layers of NPs affects these nanogels. In addition, effective determination of the VPTT of the nanogels is a challenge, which complicates their use in biological applications. Here, we have modified the swelling-collapse properties of pNIPAm-AAc nanogels by combining them with Fe@Au NPs using different solution-based methods. The hydrophilic-hydrophobic transition of the nanogels above the VPTT has been confirmed to be reversible. Further, an analytical method has been developed to deduce the average VPTT, which is found to be 37.3°C for the nanogels and 39.3°C for nanogel-coated Fe@Au NPs. An opposite swelling-collapse behaviour is observed for the latter, where the Fe@Au NPs act as bridge molecules pulling together the gelling units. 
Thereafter, Cyt C, a model protein drug, and L-Dopa, a drug used in the clinical treatment of Parkinson’s disease, were loaded separately into the nanogels and nanogel-coated Fe@Au NPs using a modified breathing-in mechanism. This gave high loading and encapsulation efficiencies (L-Dopa: ~9% and 70 µg/mg of nanogels; Cyt C: ~30% and 10 µg/mg of nanogels, respectively). The release kinetics of L-Dopa, monitored using UV-vis spectrophotometry, was observed to be rather slow (over several hours), with the highest release occurring under a combination of high temperature (above the VPTT) and acidic conditions. The release of L-Dopa from nanogel-coated Fe@Au NPs was the fastest, accounting for release of almost 87% of the initially loaded drug in ~30 hours. The chemical structure of the drug, the drug incorporation method, the location of the drug, and the presence of Fe@Au NPs largely alter the drug release mechanism and kinetics of these nanogels and nanogel-coated Fe@Au NPs.
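The loading figures quoted above combine two standard measures, which can be sketched as follows. This uses the conventional definitions of encapsulation efficiency and loading capacity; the input amounts below are hypothetical, chosen only to be consistent with the reported ~30% and 10 µg/mg Cyt C values, and do not come from the study.

```python
def encapsulation_efficiency(loaded_ug, initial_ug):
    """Percent of the initially offered drug that ends up in the carrier
    (standard definition; the authors' exact protocol may differ)."""
    return 100.0 * loaded_ug / initial_ug

def loading_capacity(loaded_ug, carrier_mg):
    """Drug loaded per milligram of nanogel, in ug/mg."""
    return loaded_ug / carrier_mg

# Hypothetical run: 5 mg of nanogels offered 167 ug of Cyt C retain 50 ug,
# consistent with the reported ~30% efficiency and 10 ug/mg loading:
loaded = 50.0
print(round(encapsulation_efficiency(loaded, 167.0)))  # 30
print(loading_capacity(loaded, 5.0))                   # 10.0
```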

Keywords: controlled release, nanogels, volume phase transition temperature, l-dopa

Procedia PDF Downloads 307
108 Secondhand Clothing and the Future of Fashion

Authors: Marike Venter de Villiers, Jessica Ramoshaba

Abstract:

In recent years, the fashion industry has been associated with the exploitation of both people and resources. This is largely due to the emergence of the fast fashion concept, which entails rapid and continual style changes in which clothes quickly lose their appeal, fall out of fashion, and are then disposed of. This cycle often involves appalling working conditions in sweatshops with low wages and child labor, and a significant amount of textile waste that ends up in landfills. Although awareness of the negative implications of ‘mindless fashion production and consumption’ is growing, fast fashion remains a popular choice among the youth. This is especially prevalent in South Africa, a poverty-stricken country where a vast number of young adults are unemployed. Despite this poverty, the celebrity-conscious culture and the fashion products frequently portrayed on South Africa’s growing, intrusive social media platforms pressure consumers to purchase fashion and luxury products. Young adults are therefore more vulnerable to the temptation to purchase fast fashion products. A possible solution to the detrimental effects of the fast fashion industry on the environment is the revival of the secondhand clothing trend. Although the popularity of secondhand clothing has gained momentum among selected consumer segments, its adoption rate remains slow. The main purpose of this study was to explore consumers’ perceptions of the secondhand clothing trend and to gain insight into the factors that inhibit its adoption. This study also aimed to investigate whether consumers are aware of the negative implications of the fast fashion industry and their likelihood of shifting their clothing purchases to secondhand clothing. By means of a quantitative study, fifty young females were asked to complete a semi-structured questionnaire. 
The researcher approached females between the ages of 18 and 35 in a face-to-face setting. The results indicated that although the respondents were aware of the negative consequences of fast fashion, they lacked detailed insight into its pertinent effects on the environment. Further, a number of factors inhibit their decision to buy from secondhand stores: firstly, the latest trends were not always available in secondhand stores; secondly, the convenience of shopping at a chain store outweighs the inconvenience of searching for and finding a secondhand store; and lastly, they perceived secondhand clothing to pose a hygiene risk. The findings of this study provide fashion marketers and secondhand clothing stores with insight into how they can incorporate the secondhand clothing trend into their strategies and marketing campaigns in an attempt to make the fashion industry more sustainable.

Keywords: eco-friendly fashion, fast fashion, secondhand clothing

Procedia PDF Downloads 109
107 Plastic Waste Sorting by the People of Dakar

Authors: E. Gaury, P. Mandausch, O. Picot, A. R. Thomas, L. Veisblat, L. Ralambozanany, C. Delsart

Abstract:

In Dakar, demographic and spatial growth were accompanied by a 50% increase in household waste in the city between 1988 and 2008. In addition, a change in the nature of household waste was observed between 1990 and 2007: the share of plastic increased by 15% between 2004 and 2007 in Dakar. Plastics represent the seventh most-produced category of household waste per year in Senegal, and the share of plastic in household and similar waste is 9%. Waste management in the city of Dakar is a complex process involving a multitude of formal and informal actors with different perceptions and objectives. The objective of this study was to understand the motivations that could lead to sorting action, as well as the perception of plastic waste sorting within the Dakar population (households and institutions). The central question was: which factors play a role in the sorting action? In an attempt to answer this, two approaches were developed: (1) An exploratory qualitative study based on semi-structured interviews with two groups of individuals concerned with the sorting of plastic waste: on the one hand, the experts in charge of waste management, and on the other, the households producing plastic waste. This study served as the basis for formulating the hypotheses and thus for the quantitative analysis. (2) A quantitative study using a questionnaire survey of households producing plastic waste in order to test the previously formulated hypotheses. The objective was to obtain quantitative results, representative of the population of Dakar, on the behavior and the process inherent in the adoption of plastic waste sorting. The exploratory study shows that the perception of state responsibility varies between institutions and households. Public institutions perceive this as a shared responsibility because the problem of plastic waste affects many sectors (health, environmental education, etc.). 
Their involvement is geared more towards raising awareness and educating young people. As state action is limited, the emergence of private companies in this sector seems logical, as they are setting up collection networks to develop a recycling activity. The state plays a moral support role in these activities and encourages companies to do more. The quantitative analysis of how the population of Dakar understands the action of sorting plastic waste demonstrated the attitudes and constraints inherent in its adoption. Cognitive attitude, knowledge, and visible consequences were shown to correlate positively with sorting behavior. Thus, it would seem that the population of Dakar is more sensitive to what they see and what they know when adopting sorting behavior. It was also shown that the strongest constraints that could slow down sorting behavior were the complexity of the process, the time it demands, and the lack of infrastructure in which to deposit plastic waste.

Keywords: behavior, Dakar, plastic waste, waste management

Procedia PDF Downloads 63
106 Risk Based Inspection and Proactive Maintenance for Civil and Structural Assets in Oil and Gas Plants

Authors: Mohammad Nazri Mustafa, Sh Norliza Sy Salim, Pedram Hatami Abdullah

Abstract:

Civil and structural assets normally have an average design life of more than 30 years. Adding to this advantage, the assets are normally subjected to a slow degradation process. Because repair and strengthening work for these assets does not normally depend on plant shutdown, the maintenance and integrity restoration of these assets is mostly done on an “as required” and “run to failure” basis. However, unlike in other industries, the exposure in an oil and gas environment is harsher as a result of corrosive soil and groundwater, chemical spills, frequent wetting and drying, icing and de-icing, steam and heat, etc. Due to this type of exposure, and to the level of structural defects and rectification increasing in line with the increasing age of plants, asset integrity assessment requires a more defined scope and procedures based on risk and asset criticality. This leads to the establishment of a risk-based inspection and proactive maintenance procedure for civil and structural assets. To date, there is hardly any procedure or guideline for the integrity assessment and systematic inspection and maintenance of onshore civil and structural assets. Group Technical Solutions has developed a procedure and guideline that takes into consideration credible failure scenarios, asset risk and criticality from a process safety and structural engineering perspective, structural importance, and modeling and analysis, among others. Detailed inspection that includes destructive and non-destructive tests (DT & NDT) and structural monitoring is also being performed to quantify defects, assess their severity and impact on integrity, and identify the timeline for integrity restoration. Each defect and its credible failure scenario is assessed against the risk to people, the environment, reputation, and production loss. This technical paper is intended to share the established procedure and guideline and their execution in oil & gas plants. 
In line with the overall roadmap, the procedure and guideline will form part of specialized solutions to increase production and to meet the “Operational Excellence” target while extending the service life of civil and structural assets. As a result of implementation, the management of civil and structural assets is now done more systematically, and the “fire-fighting” mode of maintenance is gradually being phased out and replaced by a proactive and preventive approach. This technical paper will also set the criteria and pose the challenge to the industry for innovative repair and strengthening methods for civil & structural assets in the oil & gas environment, in line with safety, constructability, and the continuous modification and revamp of plant facilities to meet production demand.
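Ranking each defect against the four risk categories named above lends itself to a simple likelihood-times-consequence scheme. The sketch below is an illustrative risk-matrix calculation only, not the Group Technical Solutions procedure; the 1-5 scales and the example scores are hypothetical.

```python
def risk_rank(likelihood, consequences):
    """Simple risk ranking: the defect's likelihood (1-5) times the worst
    consequence score (1-5) across the categories named in the abstract --
    people, environment, reputation, and production loss. Illustrative
    scheme, not the actual procedure."""
    return likelihood * max(consequences)

# A hypothetical corroded pipe-rack footing: likely (4), with the
# production-loss consequence (4) dominating the other categories:
scores = {"people": 2, "environment": 1, "reputation": 2, "production_loss": 4}
print(risk_rank(4, scores.values()))  # 16
```

Taking the worst consequence across categories (rather than, say, the sum) is one common convention; a real procedure would define the scales and aggregation rule explicitly.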

Keywords: assets criticality, credible failure scenario, proactive and preventive maintenance, risk based inspection

Procedia PDF Downloads 371
105 Specific Earthquake Ground Motion Levels That Would Affect Medium-To-High Rise Buildings

Authors: Rhommel Grutas, Ishmael Narag, Harley Lacbawan

Abstract:

Construction of high-rise buildings is a means to address the increasing population in Metro Manila, Philippines. The existence of the Valley Fault System within the metropolis and other nearby active faults poses threats to a densely populated city. Distant, shallow, large-magnitude earthquakes have the potential to generate slow, long-period vibrations that would affect medium-to-high rise buildings. Heavy damage and building collapse are consequences of prolonged shaking of the structure: if the ground and the building have almost the same period, a resonance effect causes prolonged shaking of the building. Microzoning the long-period ground response would aid the seismic design of medium to high-rise structures. The shear-wave velocity structure of the subsurface is an important parameter for evaluating ground response. Borehole drilling is one of the conventional methods of determining the shear-wave velocity structure; however, it is expensive. As an alternative geophysical exploration method, microtremor array measurements can be used to infer the structure of the subsurface. A microtremor array measurement system was used to survey fifty sites around Metro Manila, including some municipalities of Rizal and Cavite. Measurements were carried out during the day under good weather conditions. The team was composed of six persons for the deployment and simultaneous recording of the microtremor array sensors. The instruments were laid on the ground away from sewage systems and leveled using the adjustment legs and bubble level. A total of four sensors were deployed for each site: three at the vertices of an equilateral triangle and one at the centre. The circular arrays were set up with a maximum side length of approximately four kilometers, and the shortest side length for the smallest array was approximately 700 meters. Each recording lasted twenty to sixty minutes. 
From the recorded data, f-k analysis was applied to obtain phase velocity curves, and an inversion technique was then applied to construct the shear-wave velocity structure. This project provided a microzonation map of the metropolis and a profile showing the long-period response of the deep sedimentary basin underlying Metro Manila, which would be useful to local administrators in their land use planning and in the earthquake-resistant design of medium to high-rise buildings.
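Once a shear-wave velocity structure is known, a first-order estimate of the long-period site response follows from the standard quarter-wavelength relation. The sketch below uses that textbook approximation, T = 4H/Vs, with hypothetical basin numbers; it is not the authors' full f-k inversion or microzonation result.

```python
def fundamental_site_period(thickness_m, vs_m_per_s):
    """Quarter-wavelength estimate of a sediment layer's fundamental
    period, T = 4H / Vs (standard approximation, not the authors'
    inversion)."""
    return 4.0 * thickness_m / vs_m_per_s

# A hypothetical 1500 m deep basin with an average Vs of 1000 m/s
# resonates at a long period -- the range that affects medium-to-high
# rise buildings:
print(fundamental_site_period(1500, 1000))  # 6.0 (seconds)
```

A building whose own fundamental period approaches this site period is the resonance case the abstract warns about.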

Keywords: earthquake, ground motion, microtremor, seismic microzonation

Procedia PDF Downloads 448
104 Acceleration of Adsorption Kinetics by Coupling Alternating Current with Adsorption Process onto Several Adsorbents

Authors: A. Kesraoui, M. Seffen

Abstract:

Applications of adsorption onto activated carbon for water treatment are well known. The process has been demonstrated to be widely effective for removing dissolved organic substances from wastewaters, but it has a major drawback: high operating cost. The main goal of our research work is to improve the retention capacity of Tunisian biomass for the depollution of industrial wastewater and the retention of pollutants considered toxic. The biosorption process is based on the retention of molecules and ions onto a solid surface composed of biological materials. Evaluating the potential of these materials is important in order to propose them as an alternative to the generally expensive adsorption processes used to remove organic compounds; indeed, these materials are very abundant in nature and low in cost. The biosorption process is certainly effective at removing pollutants, but it exhibits slow kinetics. Improving biosorption rates is a challenge in making this process competitive with oxidation and with adsorption onto lignocellulosic fibers. In this context, alternating current appears as a new, original, and very interesting alternative for accelerating chemical reactions. Our main goal is to accelerate the retention of dyes (indigo carmine, methylene blue) and phenol by using this new alternative: alternating current. The adsorption experiments were performed in a batch reactor by adding some of the adsorbents to 150 mL of pollutant solution at the desired concentration and pH. The electrical part of the setup comprises a current source that delivers an alternating voltage of 2 to 15 V; it is connected to a voltmeter that allows us to read the voltage. In a 150 mL cell, we immersed two zinc electrodes with a distance of 4 cm between them. 
Thanks to alternating current, we succeeded in improving the performance of activated carbon by increasing the speed of the indigo carmine adsorption process and reducing the treatment time. We also studied the influence of alternating current on the biosorption rate of methylene blue onto Luffa cylindrica fibers and the hybrid material (Luffa cylindrica-ZnO). The results showed that the alternating current accelerated the biosorption of methylene blue onto both the Luffa cylindrica and the Luffa cylindrica-ZnO hybrid material and increased the adsorbed amount of methylene blue on both adsorbents. In order to improve the removal of phenol, we coupled the alternating current with biosorption onto two adsorbents: Luffa cylindrica and the hybrid material (Luffa cylindrica-ZnO). Here too, the alternating current improved the performance of the adsorbents by increasing the speed of the adsorption process and the adsorption capacity, and by reducing the processing time.
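Acceleration of adsorption kinetics like that described above is commonly quantified by fitting a kinetic model. The sketch below uses the pseudo-second-order model often fitted to dye biosorption data; the model choice, the equilibrium capacity qe, and the rate constants k2 are all hypothetical and are not the authors' fitted values.

```python
def pseudo_second_order_qt(qe, k2, t):
    """Adsorbed amount at time t (mg/g) under the pseudo-second-order
    model: q_t = (k2 * qe^2 * t) / (1 + k2 * qe * t). A common kinetics
    model for dye biosorption; qe and k2 here are illustrative."""
    return (k2 * qe**2 * t) / (1 + k2 * qe * t)

qe = 50.0            # hypothetical equilibrium capacity, mg/g
k2_ac, k2_plain = 0.002, 0.001   # hypothetical rate constants, g/(mg*min)

# A larger rate constant (e.g. under alternating current) means more of
# the dye is taken up within the same 60-minute window:
print(pseudo_second_order_qt(qe, k2_ac, 60) > pseudo_second_order_qt(qe, k2_plain, 60))  # True
```

In both cases q_t approaches qe at long times; the rate constant only changes how fast the plateau is reached, which matches the abstract's claim of shorter treatment times.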

Keywords: adsorption, alternating current, dyes, modeling

Procedia PDF Downloads 132
103 Nuclear Near Misses and Their Learning for Healthcare

Authors: Nick Woodier, Iain Moppett

Abstract:

Background: It is estimated that one in ten patients admitted to hospital will suffer an adverse event in their care. While the majority of these will result in low harm, patients are being significantly harmed by the processes meant to help them. Healthcare, therefore, seeks to make improvements in patient safety by taking learning from other industries that are perceived to be more mature in their management of safety events. Of particular interest to healthcare are ‘near misses,’ those events that almost happened but for an intervention. Healthcare does not have any guidance as to how best to manage and learn from near misses to reduce the chances of harm to patients. The authors, as part of a larger study of near-miss management in healthcare, sought to learn from the UK nuclear sector to develop principles for how healthcare can identify, report, and learn from near misses to improve patient safety. The nuclear sector was chosen as an exemplar due to its status as an ultra-safe industry. Methods: A Grounded Theory (GT) methodology, augmented by a scoping review, was used. Data collection included interviews, scenario discussion, field notes, and the literature. The review protocol is accessible online. The GT aimed to develop theories about how the nuclear sector manages near misses, with a focus on defining them and clarifying how best to support reporting and analysis to extract learning. Near misses related to radiation release or exposure were the focus. Results: Eight nuclear interviews contributed to the GT across nuclear power, decommissioning, weapons, and propulsion. The scoping review identified 83 articles across a range of safety-critical industries, with only six focused on nuclear. The GT identified that the nuclear sector has a particular focus on precursors and low-level events, with regulation supporting their management. 
Exploration of definitions highlighted the importance of several interventions in a sequence of events, interventions that should not rely solely on humans, as human actions cannot be assumed to be robust barriers. Regarding reporting and analysis, no consistent methods were identified, but for learning, the role of operating experience learning groups was identified as an exemplar. The safety culture across the nuclear sector, however, was heard to vary, which undermined the reporting of near misses and other safety events. Some parts of the industry described their focus on near misses as new, and said that despite potential risks existing, progress to mitigate hazards is slow. Conclusions: Healthcare often sees ‘nuclear,’ as well as other ultra-safe industries such as ‘aviation,’ as homogenous. However, the findings here suggest significant differences in safety culture and maturity across various parts of the nuclear sector. Healthcare can take learning from some aspects of near-miss management in nuclear, such as how near misses are defined and how learning is shared through operating experience networks. However, healthcare also needs to recognise that variability exists across industries and that, comparably, it may be more mature in some areas of safety.

Keywords: culture, definitions, near miss, nuclear safety, patient safety

Procedia PDF Downloads 82
102 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova

Abstract:

The contamination of food by microbial agents is a common problem in the industry, especially in the elaboration of animal source products. Incorrect manipulation of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or at least slow down the growth of pathogens, especially spoilage, infectious, or toxigenic bacteria. These methods are usually carried out at low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. To accomplish this objective, the authors propose a three-dimensional differential equation model whose components are: bacterial growth; the release, production, and artificial incorporation of bacteriocins; and changes in the pH level of the medium. All three dimensions are constantly influenced by the temperature of the medium. Secondly, this model is adapted to an idealized situation of cross-contamination in animal source food processing, the study agents being both the animal product and the contact surface. Thirdly, stochastic simulations and a parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model was the finding that, although bacterial growth can be stopped at lower temperatures, even lower ones are needed to eradicate it. 
However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials, while, on the other hand, higher temperatures accelerate bacterial growth. In other respects, the use of bacteriocins is an effective alternative in the short and medium terms. Moreover, a low pH level is an indicator of bacterial growth, since many spoilage bacteria are lactic acid bacteria. Lastly, processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing method times. In addition, the proposed mathematical model is broadly applicable and can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact of allergenic foods.
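The structure of the three-dimensional model described above, bacterial density, bacteriocin concentration, and pH, all modulated by temperature, can be illustrated with a toy simulation. Every rate constant below is hypothetical and chosen only to exhibit the qualitative behavior the abstract reports (temperature scales growth, bacteriocin inhibits it, growth acidifies the medium); this is not the authors' model or parameterization.

```python
def simulate(hours=48.0, dt=0.01, temp=4.0):
    """Euler integration of a toy three-state system: bacterial density N,
    bacteriocin concentration B, and medium pH. Illustrative only."""
    N, B, pH = 1e3, 0.0, 6.5
    growth = 0.05 * temp           # growth rate rises with temperature
    t = 0.0
    while t < hours:
        dN = growth * N * (1 - N / 1e9) - 0.8 * B * N  # logistic growth minus inhibition
        dB = 0.01                  # constant artificial bacteriocin dosing
        dpH = -1e-11 * N           # acid production lowers pH as N grows
        N = max(N + dN * dt, 0.0)
        B += dB * dt
        pH += dpH * dt
        t += dt
    return N, B, pH

# Colder processing keeps the final bacterial count far lower, matching
# the qualitative conclusion of the abstract:
n_cold, _, _ = simulate(temp=4.0)
n_warm, _, _ = simulate(temp=20.0)
print(n_cold < n_warm)  # True
```

With these toy parameters the dosed bacteriocin eventually outpaces the cold-temperature growth rate but not the warm one, reproducing the trade-off the abstract discusses between temperature control and biotic inhibition.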

Keywords: bacteriocins, cross-contamination, mathematical model, temperature

Procedia PDF Downloads 119
101 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA that uses catalogues to develop area or smoothed-seismicity sources is limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation, and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; in low strain rate regions, where such data is scarce, this is especially challenging. Integrating faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, at a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are placed on the fault, at a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. 
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (the southeast of France) where the fault is assumed to have a low strain rate.
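The core of any slip-rate-to-rate conversion is a moment balance: the fault's long-term moment accumulation rate must equal the moment released by its earthquakes. The sketch below shows that basic balance for a single characteristic magnitude, using the standard moment-magnitude relation M0 = 10^(1.5 Mw + 9.1); it is not the SHERIFS algorithm itself, which distributes the budget over many single-fault and fault-to-fault ruptures, and the fault parameters are hypothetical.

```python
def seismic_moment(mw):
    """Scalar seismic moment in N*m from moment magnitude:
    M0 = 10^(1.5 * Mw + 9.1) (standard relation)."""
    return 10 ** (1.5 * mw + 9.1)

def characteristic_rate(slip_rate_mm_yr, area_km2, mw, mu=3e10):
    """Annual rate of characteristic Mw events consuming the full moment
    budget: rate = mu * A * s / M0(Mw). Basic moment balance only, not
    the SHERIFS distribution over rupture sets."""
    moment_rate = mu * (area_km2 * 1e6) * (slip_rate_mm_yr * 1e-3)  # N*m/yr
    return moment_rate / seismic_moment(mw)

# A hypothetical slow fault: 0.1 mm/yr slip over a 300 km^2 rupture area,
# released entirely in Mw 6.5 events -> a multi-millennial recurrence:
rate = characteristic_rate(0.1, 300.0, 6.5)
print(round(1 / rate), "year recurrence interval")
```

The very long recurrence interval this yields for a low strain rate fault is exactly why such regions are so sensitive to the slip-rate and segmentation assumptions the abstract highlights.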

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 34
100 Liquid Food Sterilization Using Pulsed Electric Field

Authors: Tanmaya Pradhan, K. Midhun, M. Joy Thomas

Abstract:

Increasing the shelf life and improving the quality are important objectives for the success of the packaged liquid food industry. One of the methods by which this can be achieved is by deactivating the micro-organisms present in the liquid food through pasteurization. Pasteurization is done by heating, but heat treatment has serious disadvantages, such as reductions in food quality, flavour, taste, and colour, which has led to the development of alternative methods such as treatment using UV radiation, high pressure, nuclear irradiation, and pulsed electric fields. In recent years, the use of the pulsed electric field (PEF) for inactivation of the microbial content in food has been gaining popularity. PEF applies a very high electric field for a short time to inactivate microorganisms, for which a high voltage pulsed power source is required. Pulsed power sources used for PEF treatment are usually in the range of 5 kV to 50 kV. Different pulse shapes are used, such as exponentially decaying and square wave pulses. Exponentially decaying pulses are generated by high power switches with only turn-on capability, which therefore discharge the total energy stored in the capacitor bank. These pulses have a sudden onset, and therefore a high rate of rise, but a very slow decay, which yields extra heat that is ineffective in microbial inactivation. Square pulses can be produced by the incomplete discharge of a capacitor with the help of a switch having both on and off control, or by using a pulse-forming network. In this work, a pulsed power-based system is designed with the help of high voltage capacitors and solid-state switches (IGBTs) for the inactivation of pathogenic micro-organisms in liquid foods such as fruit juices. The high voltage generator is based on the Marx generator topology, which can produce variable amplitude, frequency, and pulse width according to the requirements. 
Liquid food is treated in a chamber where the pulsed electric field is produced between stainless steel electrodes using the pulsed output voltage of the supply. Preliminary bacterial inactivation tests were performed by treating orange juice inoculated with Escherichia coli. With the help of the developed pulsed power source and the chamber, the inoculated orange juice was PEF-treated. The voltage was varied to obtain a peak electric field of up to 15 kV/cm. For a total treatment time of 200 µs, a 30% reduction in the bacterial count was observed. The detailed results and analysis will be presented in the final paper.
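The two headline numbers above, 15 kV/cm and a 30% count reduction, can be related with two standard formulas: the nominal field E = V/d, and the microbial log reduction -log10(N/N0). The sketch below applies these textbook definitions; the 1 cm electrode gap is a hypothetical value, not stated in the abstract.

```python
import math

def field_strength_kv_cm(voltage_kv, gap_cm):
    """Nominal electric field in the treatment chamber, E = V / d."""
    return voltage_kv / gap_cm

def log_reduction(surviving_fraction):
    """Microbial log reduction, -log10(N / N0)."""
    return -math.log10(surviving_fraction)

# The reported 30% count reduction (70% surviving) is a modest log
# reduction; thermal pasteurization typically targets around 5 log:
print(round(log_reduction(0.70), 2))        # 0.15
# 15 kV across a hypothetical 1 cm gap gives the reported peak field:
print(field_strength_kv_cm(15.0, 1.0))      # 15.0 kV/cm
```

Framing the result as a log reduction makes it easy to compare the preliminary PEF figures against conventional pasteurization targets.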

Keywords: Escherichia coli bacteria, high voltage generator, microbial inactivation, pulsed electric field, pulsed forming line, solid-state switch

Procedia PDF Downloads 152
99 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. 
Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
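The distillation step at the heart of a framework like FKD can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: it assumes a Hinton-style temperature-softened KL distillation loss and plain parameter averaging on the server, and the function names and temperature value are illustrative:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened teacher and student outputs,
    scaled by T^2 as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, T)       # teacher (reference) distribution
    q = softmax(student_logits, T)       # student distribution
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

def federated_average(client_weights):
    """Server-side aggregation: plain average of client parameter vectors."""
    return np.mean(np.stack(client_weights), axis=0)
```

In a full FKD round, each client would minimize a combination of its local task loss and `kd_loss` against the shared teacher's outputs, so that only distilled knowledge or compact student updates, never raw data, crosses the network.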

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 47
98 Challenges Faced by the Parents of Mentally Challenged Children in India

Authors: Chamaraja Parulli

Abstract:

Family is an important social institution devoted to the growth of a child, and parents are the important agents of socialization. Mentally challenged children are those affected by intellectual disability, which is manifested by limitations in intellectual functioning and adaptive behavior. Intellectual disability affects about 3-4 percent of the general population and is caused by genetic conditions, problems during pregnancy, problems during childbirth, or illness. It is among the world’s most complex and challenging issues, and the stigmatization of disability results in social and economic marginalization. Parents of mentally challenged children experience a very high level of parenting stress, significantly more than that perceived by parents of children without disability. The prevalence of the severe mental disorder schizophrenia is about 1.1 percent of the total population in India, while the overall lifetime occurrence rate of mental disorders is 11 to 12 percent. Although the government has a separate program for mental health, the segment is marred by a lack of adequate doctors and infrastructure. Mentally retarded children have certain limitations in mental functioning and skills, which make them slow learners in speaking, walking, and taking care of their personal needs such as dressing and eating. Accepting a child with a mental handicap is difficult for the parents and the whole family, as they face many problems, including those of management, finance, and deprivation of rest and leisure. These problems span educational, psychological, social, emotional, financial, and family-related issues. The study brought out various difficulties and problems faced by the parents as well as family members. 
The findings revealed that mental retardation is not only a medico-psychological problem but also a socio-cultural one. The results, however, indicate that the quality of life of a family having children with mental retardation can be improved to a great extent by building up a child-friendly ambience at home. The main aim of the present study is to assess the problems faced by the parents of mentally challenged children, using personal interview data collected from parents residing in Shimoga District of Karnataka State, India, who were selected using a stratified random sampling method. Organizing effective intervention programs for parents, family, society, and educational institutions towards reducing family stress, augmenting the family’s strengths, increasing the child’s competence, and enhancing the positive attitudes and values of society will go a long way toward the peaceful existence of mentally challenged children.

Keywords: mentally challenged children, intellectual disability, special children, social infrastructure, differently abled, psychological stress, marginalization

Procedia PDF Downloads 91
97 Interaction between Cognitive Control and Language Processing in Non-Fluent Aphasia

Authors: Izabella Szollosi, Klara Marton

Abstract:

Aphasia can be defined as a weakness in accessing linguistic information. Accessing linguistic information is strongly related to information processing, which in turn is associated with the cognitive control system. According to the literature, a deficit in the cognitive control system interferes with language processing and contributes to non-fluent speech performance. The aim of our study was to explore this hypothesis by investigating how cognitive control interacts with language performance in participants with non-fluent aphasia. Cognitive control is a complex construct that includes working memory (WM) and the ability to resist proactive interference (PI). Based on previous research, we hypothesized that impairments in domain-general (DG) cognitive control abilities have negative effects on language processing. In contrast, better DG cognitive control functioning supports goal-directed behavior in language-related processes as well. Since stroke itself might slow down information processing, it is important to examine its negative effects on both cognitive control and language processing. Participants (N=52) in our study were individuals with non-fluent Broca’s aphasia (N = 13), with transcortical motor aphasia (N=13), individuals with stroke damage without aphasia (N=13), and unimpaired speakers (N = 13). All participants performed various computer-based tasks targeting cognitive control functions such as WM and resistance to PI in both linguistic and non-linguistic domains. Non-linguistic tasks targeted primarily DG functions, while linguistic tasks targeted more domain specific (DS) processes. The results showed that participants with Broca’s aphasia differed from the other three groups in the non-linguistic tasks. They performed significantly worse even in the baseline conditions. In contrast, we found a different performance profile in the linguistic domain, where the control group differed from all three stroke-related groups. 
The three groups with impairment performed more poorly than the controls but similarly to each other in the verbal baseline condition. In the more complex verbal PI condition, however, participants with Broca’s aphasia performed significantly worse than all the other groups. Participants with Broca’s aphasia demonstrated the most severe language impairment and the highest vulnerability in tasks measuring DG cognitive control functions. The results support the notion that the more severe the cognitive control impairment, the more severe the aphasia. Thus, our findings suggest a strong interaction between cognitive control and language. Individuals with the most severe and most general cognitive control deficit - participants with Broca’s aphasia - showed the most severe language impairment. Individuals with better DG cognitive control functions demonstrated better language performance. While all participants with stroke damage showed impaired cognitive control functions in the linguistic domain, participants with better language skills also performed better in tasks that measured non-linguistic cognitive control functions. The overall results indicate that the level of cognitive control deficit interacts with language functions in individuals along the language spectrum (from severe to no impairment). However, future research is needed to determine the directionality of this interaction.

Keywords: cognitive control, information processing, language performance, non-fluent aphasia

Procedia PDF Downloads 98
96 A Hybrid LES-RANS Approach to Analyse Coupled Heat Transfer and Vortex Structures in Separated and Reattached Turbulent Flows

Authors: C. D. Ellis, H. Xia, X. Chen

Abstract:

Experimental and computational studies investigating heat transfer in separated flows have been of increasing importance over the last 60 years, as efforts are being made to understand and improve the efficiency of components such as combustors, turbines, heat exchangers, nuclear reactors and cooling channels. Understanding not only the time-mean heat transfer properties but also the unsteady properties is vital for the design of these components. As computational power increases, more sophisticated methods of modelling these flows become available. The hybrid LES-RANS approach has been applied to a blunt leading edge flat plate, utilising a structured grid at a moderate Reynolds number of 20300 based on the plate thickness. In the region close to the wall, the RANS method is implemented with two turbulence models: the one-equation Spalart-Allmaras model and Menter’s two-equation SST k-ω model. The LES region occupies the flow away from the wall and is formulated without any explicit subgrid scale LES modelling. Hybridisation is achieved between the two methods by blending based on the nearest wall distance. Validation of the flow was obtained by assessing the mean velocity profiles in comparison to similar studies. The vortex structures of the flow were identified by utilising the λ2 criterion to locate vortex cores. The qualitative structure of the flow compared well with experiments at a similar Reynolds number. This identified the 2D roll-up of the shear layer, breaking down via the Kelvin-Helmholtz instability. Through this instability the flow progressed into hairpin-like structures, elongating as they advanced downstream. Proper Orthogonal Decomposition (POD) analysis has been performed on the full flow field and on the surface temperature of the plate. As expected, the breakdown of POD modes for the full field revealed a relatively slow decay compared to the surface temperature field. 
Both POD fields identified that the most energetic fluctuations occurred in the separated and recirculation region of the flow. Later modes of the surface temperature field showed these fluctuations dominating the time-mean region of maximum heat transfer and flow reattachment. Building on the current research, future work will track the movement of the vortex cores and the location and magnitude of temperature hot spots on the plate. This information will support the POD and statistical analysis performed, to further identify qualitative relationships between the vortex dynamics and the response of the surface heat transfer.
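The snapshot POD used above can be computed directly via the singular value decomposition. The sketch below is a generic illustration under standard assumptions (snapshots stored as columns, mode energy measured by squared singular values), not the study's own code:

```python
import numpy as np

def pod_modes(snapshots):
    """Proper Orthogonal Decomposition of a snapshot matrix.

    snapshots: array of shape (n_points, n_times), one flow-field
    snapshot per column. Returns spatial modes U, singular values s,
    temporal coefficients Vt, and the normalized energy of each mode.
    """
    mean_field = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean_field           # fluctuations about the time mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)         # fraction of fluctuation energy per mode
    return U, s, Vt, energy
```

A slowly decaying `energy` spectrum, as reported here for the full flow field, means many modes are needed to capture the fluctuations, whereas a field dominated by its leading modes, like the surface temperature, decays quickly.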

Keywords: heat transfer, hybrid LES-RANS, separated and reattached flow, vortex dynamics

Procedia PDF Downloads 204
95 Influence of Biochar Application on Growth, Dry Matter Yield and Nutrition of Corn (Zea mays L.) Grown on Sandy Loam Soils of Gujarat, India

Authors: Pravinchandra Patel

Abstract:

Sustainable agriculture in sandy loam soil generally faces large constraints due to low water-holding and nutrient-retention capacity and accelerated mineralization of soil organic matter. There is a need to increase soil organic carbon for higher crop productivity and soil sustainability. Recently, biochar has been considered a ‘sixth element’ that works as a catalyst for increasing crop yield, soil fertility, and soil sustainability and for mitigating climate change. Biochar was generated at the Sansoli Farm of Anand Agricultural University, Gujarat, India by slow pyrolysis at 250-400°C in the absence of oxygen, using two kilns, from corn stover (Zea mays L.), cluster bean stover (Cyamopsis tetragonoloba), and Prosopis juliflora wood. There were 16 treatments: four organic sources (three biochars, from corn stover (MS), cluster bean stover (CB), and Prosopis juliflora wood (PJ), plus farmyard manure (FYM)) at two rates of application (5 and 10 metric tons/ha), giving eight organic-source treatments. These eight treatments were applied both with the recommended dose of fertilizers (RDF) (80-40-0 kg/ha N-P-K) and without RDF. Application of corn stover biochar @ 10 metric tons/ha along with RDF (RDF+MS) increased dry matter (DM) yield, crude protein (CP) yield, chlorophyll content, and plant height (at 30 and 60 days after sowing) more than the CB and PJ biochars and FYM. Nutrient uptake of P, K, Ca, Mg, S, and Cu was significantly increased with the application of RDF + corn stover biochar @ 10 metric tons/ha, while uptake of N and Mn was significantly increased with RDF + corn stover biochar @ 5 metric tons/ha. 
It was found that soil application of corn stover biochar @ 10 metric tons/ha along with the recommended dose of chemical fertilizers (RDF+MS) exhibited the highest impact, giving significantly higher dry matter and crude protein yields and larger removal of nutrients from the soil, and it was also beneficial for the build-up of nutrients in the soil. It also showed significantly higher organic carbon content and cation exchange capacity in the sandy loam soil. The lower dose of corn stover biochar @ 5 metric tons/ha (RDF+MS) remained second best for increasing dry matter and crude protein yields of the forage corn crop, which ultimately resulted in larger removals of nutrients from the soil. This study highlights the importance of combining biochar with the recommended dose of fertilizers and its synergistic effect on nutrient retention, organic carbon content, and water-holding capacity in sandy loam soil, and hence the amendment value of biochar in such soils.

Keywords: biochar, corn yield, plant nutrient, fertility status

Procedia PDF Downloads 119
94 The Implementation of Human Resource Information System in the Public Sector: An Exploratory Study of Perceived Benefits and Challenges

Authors: Aneeqa Suhail, Shabana Naveed

Abstract:

The public sector (in both developed and developing countries) has gone through various waves of radical reforms in recent decades. In Pakistan, under the influence of New Public Management (NPM) reforms, best practices of the private sector are being introduced in the public sector to modernize public organizations. The Human Resource Information System (HRIS) has been popular in the private sector and proven to be a successful system; therefore, it is being adopted in the public sector too. However, implementation of private business practices in public organizations is very challenging due to differences in context. This implementation becomes even more critical in Pakistan due to a centralizing tendency and lack of autonomy in public organizations. Adoption of HRIS by public organizations in Pakistan raises several questions: What challenges are faced by public organizations in the implementation of HRIS? Are benefits of HRIS such as efficiency, process integration, and cost reduction achieved? How is the previous system improved by this change, and what are the impacts? Yet it is an under-researched topic, especially in public enterprises. This study contributes to the existing body of knowledge by empirically exploring the benefits and challenges of implementing HRIS in public organizations. The research adopts a case study approach and uses qualitative data based on in-depth interviews conducted at various levels in the hierarchy, including top management, departmental heads, and employees. The unit of analysis is LESCO, the Lahore Electric Supply Company, a state-owned entity that generates, transmits, and distributes electricity to four big cities in Punjab, Pakistan. The findings of the study show that LESCO has not achieved the benefits of HRIS as established in the literature. The implementation process remained quite slow and costly. Various functions of HR are still in isolation, and integration is a big challenge for the organization. 
Although the data is automated, the previous system of manual record maintenance and paperwork is still in use, resulting in parallel practices. The findings also identified resistance to change from top management and the labor workforce, lack of commitment and technical knowledge, and costly vendors as major barriers to the effective implementation of HRIS. The paper suggests some potential actions to overcome these barriers and to enhance the effective implementation of HR technology. The findings are explained in light of an institutional logics perspective: HRIS’s new logic of an automated and integrated HR system is in sharp contrast with the prevailing logic of process-oriented manual data maintenance, leading to resistance to change and deadlock.

Keywords: human resource information system, technological changes, state-owned enterprise, implementation challenges

Procedia PDF Downloads 124
93 Nascent Federalism in Nepal: An Observational Review in its Evolution

Authors: C. Shekhar Parajulee

Abstract:

Nepal practiced a centralized unitary governing system for a long time and moved to a federal system after the promulgation of the new constitution on 20 September 2015. This marked a big paradigm shift in governance: there are now three levels of government, one federal government in the center, seven provincial governments, and 753 local governments. Federalism refers to a political governing system with multiple tiers of government working together with coordination; it is preferred for self-rule and shared rule. Though it has opened the door for the rights of the people, political stability, state restructuring, and sustainable peace and development, there are many prospects and challenges for its proper implementation. This research analyzes the discourses of federalism implementation in Nepal with special reference to one of the seven provinces, Gandaki. Federalism is a new phenomenon in Nepali politics, and informed debates on it are required for its right evolution; this research will add value in that regard. Moreover, tracking its evolution and exploring the attitudes and behaviors of key actors and stakeholders in a new experiment with a new governing system is also important. The administrative and political system of Gandaki province, in terms of service delivery and development, will be critically examined. Besides demonstrating the performance of the provincial government and assembly, the research will analyze the inter-governmental relations of Gandaki with the other two tiers of government. For this research, people from provincial and local governments (elected representatives and government employees), provincial assembly members, academicians, civil society leaders, and journalists are being interviewed. The interview findings will be analyzed and supplemented with published documents. Just moving into a federal structure is not the solution: as with the other provincial governments, Gandaki had to start from scratch. 
It gradually took the shape of a government and has been functioning sluggishly. The provincial government faces many challenges, which have badly hindered its plans and actions. Additionally, fundamental laws, infrastructure, and human resources are found to be insufficient at the sub-national level. Lack of clarity in jurisdiction is another main challenge. The Nepali Constitution assumes cooperation, coexistence, and coordination as the fundamental principles of federalism, which, unfortunately, appear to be lacking among the three tiers of government despite their efforts. Though the devolution of power to sub-national governments is essential for the successful implementation of federalism, it has apparently been delayed due to the centralized mentality of the bureaucracy as well as of political leaders. This research will highlight the reasons for the delay in the implementation of federalism. There might be multiple underlying reasons for the slow pace of implementation, and identifying them is very tough. Moreover, the federal spirit is found to be absent in the main players of today's political system, which is a big irony. So, there are some doubts about whether the federal system in Nepal is merely a keepsake or something substantive.

Keywords: federalism, inter-governmental relations, Nepal, provincial government

Procedia PDF Downloads 175
92 Polypyrrole as Bifunctional Materials for Advanced Li-S Batteries

Authors: Fang Li, Jiazhao Wang, Jianmin Ma

Abstract:

The practical application of Li-S batteries is hampered by poor cycling stability caused by electrolyte-dissolved lithium polysulfides. Dual functionalities, namely strong chemical adsorption and high conductivity, are highly desired in an ideal host material for a sulfur-based cathode. Polypyrrole (PPy), as a conductive polymer, has been widely studied as a matrix for sulfur cathodes due to its high conductivity and strong chemical interaction with soluble polysulfides. Thus, a novel cathode structure consisting of a free-standing sulfur-polypyrrole cathode and a polypyrrole-coated separator was designed for flexible Li-S batteries. The PPy materials show strong interaction with dissolved polysulfides, which could suppress the shuttle effect and improve cycling stability. In addition, the synthesized PPy film with a rough surface acts as a current collector, which improves the adhesion of sulfur materials and restrains volume expansion, enhancing structural stability during cycling. To further enhance cycling stability, a PPy-coated separator was also applied, which could confine polysulfides to the cathode side to alleviate the shuttle effect. Moreover, the PPy layer coated on the commercial separator is much lighter than other reported interlayers. A soft-packaged flexible Li-S battery was designed and fabricated to test the practical application of the designed cathode and separator; it could power a device consisting of 24 light-emitting diode (LED) lights. Moreover, the soft-packaged flexible battery still shows relatively stable cycling performance after repeated bending, indicating its potential application in flexible batteries. A novel vapor phase deposition method was also applied to prepare a uniform polypyrrole layer coated on a sulfur/graphene aerogel composite. 
The polypyrrole layer simultaneously acts as host and adsorbent for efficient suppression of polysulfide dissolution through strong chemical interaction. Density functional theory (DFT) calculations reveal that the polypyrrole traps lithium polysulfides through stronger bonding energy. In addition, the deflation of the sulfur/graphene hydrogel during the vapor phase deposition process enhances the contact of sulfur with the matrix, resulting in high sulfur utilization and good rate capability. As a result, the synthesized polypyrrole-coated sulfur/graphene aerogel composite delivers specific discharge capacities of 1167 mAh g⁻¹ and 409.1 mAh g⁻¹ at 0.2 C and 5 C, respectively. The capacity is maintained at 698 mAh g⁻¹ at 0.5 C after 500 cycles, showing an ultra-slow decay rate of 0.03% per cycle.
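The quoted 0.03% per cycle is the average capacity fade over the 500 cycles. A quick sanity check of that figure (the initial 0.5 C capacity of roughly 820 mAh g⁻¹ is inferred here from the reported numbers, not stated in the abstract):

```python
def per_cycle_decay(initial_capacity, final_capacity, cycles):
    """Average capacity fade per cycle, as a percentage of initial capacity."""
    return (1.0 - final_capacity / initial_capacity) / cycles * 100.0

# With an assumed initial 0.5 C capacity of ~820 mAh/g, retaining
# 698 mAh/g after 500 cycles corresponds to roughly 0.03%/cycle.
decay = per_cycle_decay(820.0, 698.0, 500)
```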

Keywords: polypyrrole, strong chemical interaction, long-term stability, Li-S batteries

Procedia PDF Downloads 109
91 Linguistic Insights Improve Semantic Technology in Medical Research and Patient Self-Management Contexts

Authors: William Michael Short

Abstract:

‘Semantic Web’ technologies such as the Unified Medical Language System Metathesaurus, SNOMED-CT, and MeSH have been touted as transformational for the way users access online medical and health information, enabling both the automated analysis of natural-language data and the integration of heterogeneous health-related resources distributed across the Internet through the use of standardized terminologies that capture concepts and the relationships between concepts that are expressed differently across datasets. However, the approaches that have so far characterized ‘semantic bioinformatics’ have not yet fulfilled the promise of the Semantic Web for medical and health information retrieval applications. This paper argues, from the perspective of cognitive linguistics and cognitive anthropology, that four features of human meaning-making must be taken into account before the potential of semantic technologies can be realized for this domain. First, many semantic technologies operate exclusively at the level of the word. However, texts convey meanings in ways beyond lexical semantics. For example, transitivity patterns (distributions of active or passive voice) and modality patterns (configurations of modal constituents like may, might, could, would, should) convey experiential and epistemic meanings that are not captured by single words. Language users also naturally associate stretches of text with discrete meanings, so that whole sentences can be ascribed senses similar to the senses of words (so-called ‘discourse topics’). Second, natural language processing systems tend to operate according to the principle of ‘one token, one tag’. For instance, occurrences of the word sound must be disambiguated for part of speech: in context, is sound a noun or a verb or an adjective? In syntactic analysis, deterministic annotation methods may be acceptable. 
But because natural language utterances are typically characterized by polyvalency and ambiguities of all kinds (including intentional ambiguities), such methods leave the meanings of texts highly impoverished. Third, ontologies tend to be disconnected from everyday language use and so struggle in cases where single concepts are captured through complex lexicalizations that involve profile shifts or other embodied representations. More problematically, concept graphs tend to capture ‘expert’ technical models rather than ‘folk’ models of knowledge and so may not match users’ common-sense intuitions about the organization of concepts in prototypical structures rather than Aristotelian categories. Fourth, and finally, most ontologies do not recognize the pervasively figurative character of human language. However, since the time of Galen the widespread use of metaphor in the linguistic usage of both medical professionals and lay persons has been recognized. In particular, metaphor is a well-documented linguistic tool for communicating experiences of pain. Because semantic medical knowledge-bases are designed to help capture variations within technical vocabularies – rather than the kinds of conventionalized figurative semantics that practitioners as well as patients actually utilize in clinical description and diagnosis – they fail to capture this dimension of linguistic usage. The failure of semantic technologies in these respects degrades the efficiency and efficacy not only of medical research, where information retrieval inefficiencies can lead to direct financial costs to organizations, but also of care provision, especially in contexts of patients’ self-management of complex medical conditions.
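The ‘one token, one tag’ limitation discussed above can be made concrete with a toy example. The rules below are purely illustrative, a hypothetical sketch rather than a real tagger: a deterministic tagger must commit the token *sound* to exactly one part of speech per context, and it has no way to represent genuine or intentional ambiguity:

```python
def tag_sound(sentence):
    """Toy deterministic tagger for the single token 'sound'.

    Assigns exactly one part-of-speech tag from crude local-context
    rules; real utterances may support several readings at once,
    which a one-token-one-tag scheme cannot express.
    """
    words = sentence.lower().split()
    i = words.index("sound")
    nxt = words[i + 1] if i + 1 < len(words) else ""
    if nxt in {"the", "an", "a"}:               # "sound the alarm"
        return "VERB"
    if nxt in {"of", "from", "was", "is", ""}:  # "the sound of music"
        return "NOUN"
    return "ADJ"                                # "a sound argument"
```

Whatever rules are chosen, each occurrence receives a single label, so an intentionally ambiguous utterance is flattened to one impoverished reading.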

Keywords: ambiguity, bioinformatics, language, meaning, metaphor, ontology, semantic web, semantics

Procedia PDF Downloads 102
90 Product Life Cycle Assessment of Generatively Designed Furniture for Interiors Using Robot Based Additive Manufacturing

Authors: Andrew Fox, Qingping Yang, Yuanhong Zhao, Tao Zhang

Abstract:

Furniture is a very significant subdivision of architecture and its inherent interior design activities. The furniture industry has developed from an artisan-driven craft industry, whose forerunners saw themselves manifested in their crafts and treasured a sense of pride in the creativity of their designs, into what is these days largely an anonymous, collective, mass-produced output. Although a very conservative industry, it has great potential for the implementation of collaborative digital technologies, allowing a reconfigured artisan experience to be reawakened in a new and exciting form. The furniture manufacturing industry, in general, has been slow to adopt new design methodologies such as rule-based generative design. This tardiness has meant the loss of potential to enhance its capabilities in producing sustainable, flexible, and mass-customizable ‘right first-time’ designs. This paper aims to demonstrate a concept methodology for the creation of alternative and inspiring aesthetic structures for robot-based additive manufacturing (RBAM). These technologies can enable the economic creation of previously unachievable structures, which traditionally would not have been commercially economic to manufacture. The integration of these technologies with the computing power of generative design provides the tools for practitioners to create concepts well beyond the insight of even the most accomplished traditional design teams. This paper addresses the problem by introducing generative design methodologies employing the Autodesk Fusion 360 platform. Examination of alternative methods for its use has the potential to significantly reduce the estimated 80% of environmental impact that is determined at the initial design phase. 
Though predominantly a design methodology, generative design combined with RBAM has the potential to leverage many lean manufacturing and quality assurance benefits, enhancing the efficiency and agility of modern furniture manufacturing. Through a case study of a furniture artifact, the results will be compared to a traditionally designed and manufactured product using the Ecochain Mobius product life cycle assessment (LCA) platform. This will highlight the benefits of both generative design and robot-based additive manufacturing from the standpoints of environmental impact and manufacturing efficiency. These step changes in design methodology and environmental assessment have the potential to revolutionise the design-to-manufacturing workflow, giving momentum to the concept of a revived pre-industrial model of manufacturing, with the global demand for a circular economy and bespoke sustainable design at its heart.

Keywords: robot, manufacturing, generative design, sustainability, circular economy, product life cycle assessment, furniture

Procedia PDF Downloads 112