Search results for: market comparison
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8345

155 The Contemporary Format of E-Learning in Teaching Foreign Languages

Authors: Nataliya G. Olkhovik

Abstract:

Nowadays, Russian higher medical education has undertaken initiatives that focus on the resources of e-learning in teaching foreign languages. Obviously, face-to-face communication in foreign languages has far more advantages in terms of effectiveness than the potential of e-learning alone. Thus, we face the necessity of strengthening the capacity of e-learning via the integration of active methods, such as student project activity, into the process of teaching foreign languages. Successful project activity of students should involve the following components: monitoring, control, methods of organizing the student's activity in foreign languages, stimulating their interest in the chosen project, approaches to self-assessment and methods of raising their self-esteem. Contemporary methodology treats the project as a specific method that activates the potential of a student's cognitive function, emotional reaction, ability to work in a team, commitment, skills of cooperation and, consequently, their readiness to verbalize ideas, thoughts and attitudes. Verbal activity in a foreign language is a complex concept that consolidates both cognitive (speech-related) capacity and individual traits and attitudes such as initiative, empathy, devotion, responsibility, etc. Once we organize the project activity by means of e-learning within the 'Foreign Language' discipline, we have to take into consideration all of the characteristics mentioned above and work out an effective way to implement it in teaching practice to boost its educational potential. We have integrated into the Moodle e-platform a project activity module consisting of the following blocks of tasks that lead students to research, cooperate, strive for leadership, pursue the goal and finally verbalize their intentions. Firstly, we introduce the project by activating students' self-directed work through the tasks of the 'Preparation of the project' phase: choose the topic and justify it; identify the problematic situation and its components; set the goals; create your team, choose the leader and distribute the roles in your team; and make a written report grounding the validity of your choices. Secondly, in the 'Planning the project' phase, we ask students to present an analysis of the problem in terms of causes, ways and methods of solution and to define the structure of their project (here students may request an oral or written presentation by submitting a claim in the e-platform, whereas the teacher decides which form of presentation to prefer). Thirdly, the students have to design the visual aids and speech samples (functional phrases, introductory words, keywords, synonyms, opposites, attributive constructions) and then, after checking, discussing and correcting them with a teacher via Moodle, present them in front of the audience. And finally, we introduce the phase of self-reflection that aims to awaken the students' inner desire to improve their verbal activity in a foreign language. As a result, by implementing the project activity in the e-platform, we try to widen the framework of a traditional foreign-language lesson by tapping the potential of students' personal traits and attitudes.

Keywords: active methods, e-learning, improving verbal activity in foreign languages, personal traits and attitudes

Procedia PDF Downloads 105
154 Adaptive Power Control of the City Bus Integrated Photovoltaic System

Authors: Piotr Kacejko, Mariusz Duk, Miroslaw Wendeker

Abstract:

This paper presents an adaptive controller to track the maximum power point of photovoltaic (PV) modules under fast irradiation changes on a city-bus roof. Photovoltaic systems have become a prominent option as an additional energy source for vehicles. The Municipal Transport Company (MPK) in Lublin has installed photovoltaic panels on the roofs of its buses. The solar panels turn solar energy into electric energy and are used to power the buses' electric equipment. This decreases the load on the buses' alternators, leading to lower fuel consumption and bringing both economic and ecological profits. A DC-DC boost converter is selected as the power conditioning unit to coordinate the operating point of the system. In addition to the conversion efficiency of a photovoltaic panel, the maximum power point tracking (MPPT) method also plays a major role in harvesting the most energy from the sun. The MPPT unit on a moving vehicle must keep tracking accuracy high in order to compensate for rapid changes of irradiation due to the dynamic motion of the vehicle. Maximum power point tracking controllers should be used to increase the efficiency and power output of solar panels under changing environmental factors. There are several different control algorithms in the literature developed for maximum power point tracking. However, the energy performance of MPPT algorithms has not been clarified for vehicle applications, which cause rapid changes of environmental factors. In this study, an adaptive MPPT algorithm is examined under real ambient conditions. PV modules are mounted on a moving city bus designed to test solar systems on a moving vehicle. Some problems of a PV system associated with a moving vehicle are addressed. The proposed algorithm uses a scanning technique to determine the maximum power delivering capacity of the panel at a given operating condition and controls the PV panel accordingly. The aim of the control algorithm was to match the impedance of the PV modules by controlling the duty cycle of the internal switch, regardless of changes in the parameters of the object of control and its outer environment. The presented algorithm was capable of reaching this aim. The structure of the adaptive controller was simplified on purpose: since such a simple controller, armed only with an ability to learn, reached the aim of control, a more complex algorithm structure can only improve the result. The presented adaptive control system of the PV system is a general solution and can be used for other types of PV systems of both high and low power. Experimental results obtained from a comparison of algorithms over a motion loop are presented and discussed. Experimental results are presented for fast changes in irradiation and partial shading conditions. The results obtained clearly show that the proposed method is simple to implement, with minimum tracking time and high tracking efficiency, proving the superiority of the proposed approach. This work has been financed by the Polish National Centre for Research and Development, PBS, under Grant Agreement No. PBS 2/A6/16/2013.
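A minimal sketch of the kind of scan-then-track, duty-cycle based MPPT loop described above; the function names, step sizes and the `read_pv_power` interface are illustrative assumptions, not the controller reported in the paper.

```python
# Hypothetical sketch of a scan-then-track MPPT loop for a boost converter.
# read_pv_power(duty) stands in for the real converter/measurement interface.

def scan_for_mpp(read_pv_power, duties):
    """Coarse scan: measure PV power over a range of duty cycles and
    return the duty cycle that delivered the maximum power."""
    best_duty, best_power = duties[0], float("-inf")
    for d in duties:
        p = read_pv_power(d)
        if p > best_power:
            best_duty, best_power = d, p
    return best_duty

def perturb_and_observe(read_pv_power, duty, step=0.005, iterations=200):
    """Fine tracking around the scanned point: keep perturbing the duty
    cycle in the direction that increases measured PV power."""
    last_power = read_pv_power(duty)
    direction = 1.0
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.05), 0.95)
        power = read_pv_power(duty)
        if power < last_power:      # power dropped: reverse the perturbation
            direction = -direction
        last_power = power
    return duty

# Example use: mpp = scan_for_mpp(read_pv_power, [i / 100 for i in range(5, 96)])
#              duty = perturb_and_observe(read_pv_power, mpp)
```

In a vehicle application such as the one above, the coarse scan would typically be re-triggered whenever irradiation changes rapidly, for instance when the bus moves between shaded and sunlit street sections.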

Keywords: adaptive control, photovoltaic energy, city bus electric load, DC-DC converter

Procedia PDF Downloads 211
153 Measuring Urban Sprawl in the Western Cape Province, South Africa: An Urban Sprawl Index for Comparative Purposes

Authors: Anele Horn, Amanda Van Eeden

Abstract:

The emphasis on the challenges posed by continued urbanisation, especially in developing countries, has meant that urban sprawl is often researched and analysed in metropolitan urban areas, but rarely in small and medium towns. Consequently, there exists no instrument for comparing the proportional extent of urban sprawl in metropolitan areas against that of small and medium towns. This research proposes an Urban Sprawl Index as a possible tool to comparatively analyse the extent of urban sprawl between cities and towns of different sizes. The index can also be used over the longer term by authorities developing spatial policy to track the success or failure of specific tools intended to curb urban sprawl. In South Africa, as elsewhere in the world, the last two decades witnessed a proliferation of legislation and spatial policies to limit urban sprawl and contain the physical expansion and development of urban areas, but measuring the successes or failures of these instruments intended to curb expansive land development has remained a largely unattainable goal, mainly as a result of the absence of an appropriate measure of proportionate comparison. As a result of the spatial political history of Apartheid, urban areas acquired a spatial form that contributed to the formation of single-core cities with far-reaching and wide-spreading peripheral development, either in the form of affluent suburbs or as a result of post-Apartheid programmes such as the Reconstruction and Development Programme (1995) which, in an attempt to address the immediate housing shortage, favoured the establishment of single-dwelling residential units for low-income communities on single plots on affordable land at the urban periphery. This invariably contributed to urban sprawl, and even though this programme has since been abandoned, the trend towards low-density residential development continues. The research area is the Western Cape Province in South Africa, which in all aspects exhibits the spatial challenges described above. In academia and the popular media, the City of Cape Town (the only metropolitan authority in the province) has received the lion's share of the focus in terms of critique of urban development and spatial planning; however, the smaller towns and cities in the Western Cape have arguably received much less public attention and were spared the naming and shaming of being unsustainable urban areas in terms of land consumption and physical expansion. The Urban Sprawl Index for the Western Cape (USIWC) put forward by this research enables local authorities in the Western Cape Province to measure the extent of urban sprawl proportionately and comparatively to other cities in the province, thereby acquiring a means of measuring the success of the spatial instruments employed to limit urban expansion and inefficient land consumption. In developing the USIWC, the research made use of satellite data for reference years 2001 and 2011 and population growth data extracted from the national census for the same base years.
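As an illustration only (the abstract does not give the USIWC formula), one common way to express sprawl comparably across settlements of different sizes is to relate the growth rate of built-up land to the growth rate of population between two census years; the sketch below assumes that definition, and the settlement names and figures are placeholders, not the study's data.

```python
import pandas as pd

# Hypothetical input: built-up area (km^2, from satellite classification) and
# population (from the national census) for each settlement in 2001 and 2011.
towns = pd.DataFrame({
    "name": ["Metro", "Town A", "Town B"],
    "area_2001": [600.0, 12.0, 25.0], "area_2011": [720.0, 16.8, 27.5],
    "pop_2001": [2.9e6, 4.0e4, 9.0e4], "pop_2011": [3.7e6, 4.6e4, 1.1e5],
})

# Sprawl index: ratio of land-consumption growth rate to population growth rate.
# Values above 1 indicate land is consumed faster than the population grows.
area_growth = (towns["area_2011"] - towns["area_2001"]) / towns["area_2001"]
pop_growth = (towns["pop_2011"] - towns["pop_2001"]) / towns["pop_2001"]
towns["sprawl_index"] = area_growth / pop_growth

print(towns[["name", "sprawl_index"]].sort_values("sprawl_index", ascending=False))
```

Because both growth rates are proportional, such a ratio can be compared between a metropolitan area and a small town, which is the comparative property the USIWC is designed to provide.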

Keywords: urban sprawl, index, Western Cape, South Africa

Procedia PDF Downloads 329
152 Assessment of Cytogenetic Damage as a Function of Radiofrequency Electromagnetic Radiations Exposure Measured by Electric Field Strength: A Gender Based Study

Authors: Ramanpreet, Gursatej Gandhi

Abstract:

Background: Dependence on the electromagnetic radiations involved in communication and information technologies has increased incredibly in the personal and professional world. Among the numerous radiation sources are fixed-site transmitters, mobile phone base stations and power lines, besides indoor devices like cordless phones, WiFi, Bluetooth, TV, radio, microwave ovens, etc. Moreover, mobile phone base stations continuously emit radiofrequency radiations (RFR) even to those not using the devices. The consistent and widespread usage of wireless devices has built up electromagnetic fields everywhere. In fact, the radiofrequency electromagnetic field (RF-EMF) has insidiously become a part of the environment and, like any contaminant, may be hazardous to health, requiring assessment. Materials and Methods: In the present study, cytogenetic damage was assessed using the Buccal Micronucleus Cytome (BMCyt) assay as a function of radiation exposure, after Institutional Ethics Committee clearance of the study and written voluntary informed consent from the participants. On a pre-designed questionnaire, general information, lifestyle patterns (diet, physical activity, smoking, drinking, use of mobile phones, internet, Wi-Fi usage, etc.), and genetic, reproductive (pedigrees) and medical histories were recorded. For this, 24-hour personal exposimeter measurements (PEM) were recorded for 60 unrelated healthy adults (40 cases residing in the vicinity of mobile phone base stations since their installation and 20 controls residing in areas with no base stations). The personal exposimeter collects information from all the sources generating EMF (TETRA, GSM, UMTS, DECT, and WLAN) as total RF-EMF uplink and downlink. Findings: The cases (n=40; 23-90 years) and the controls (n=20; 19-65 years) matched for alcohol drinking, smoking habits, and mobile and cordless phone usage. The PEM in cases (149.28 ± 8.98 mV/m) revealed significantly higher (p=0.000) electric field strength compared to the recorded value (80.40 ± 0.30 mV/m) in controls. The GSM 900 uplink (p=0.000), GSM 1800 downlink (p=0.000), UMTS (both uplink, p=0.013, and downlink, p=0.001) and DECT (p=0.000) electric field strengths were significantly elevated in the cases as compared to controls. The largest contribution to electric field strength in the cases came from GSM 1800 (52.26 ± 4.49 mV/m), followed by GSM 900 (45.69 ± 4.98 mV/m), UMTS (25.03 ± 3.33 mV/m) and DECT (18.02 ± 2.14 mV/m), and was least from WLAN (8.26 ± 2.35 mV/m). The significantly (p=0.000) higher exposure of the cases was from GSM (97.96 ± 6.97 mV/m) in comparison to UMTS, DECT, and WLAN. The frequencies of micronuclei (1.86X, p=0.007), nuclear buds (2.95X, p=0.002) and the cell death parameter (condensed chromatin cells; 1.75X, p=0.007) were significantly elevated in cases compared to controls, probably as a function of radiofrequency radiation exposure. Conclusion: In the absence of other exposure(s), any cytogenetic damage, if unrepaired, is a cause of concern as it can cause malignancy. A larger sample size with clinical assessment will prove more insightful of such an effect.
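A minimal sketch of the kind of case-control comparison reported above; the arrays are placeholders, not the study data, and the Mann-Whitney U test is one common choice when exposimeter readings are not assumed to be normally distributed.

```python
import numpy as np
from scipy import stats

# Placeholder arrays standing in for 24-h personal exposimeter readings (mV/m)
# for exposed cases and unexposed controls.
rng = np.random.default_rng(0)
cases_emf = rng.normal(150, 57, size=40)      # 40 case participants
controls_emf = rng.normal(80, 1.3, size=20)   # 20 control participants

# Compare total RF-EMF electric field strength between the two groups.
u_stat, p_value = stats.mannwhitneyu(cases_emf, controls_emf, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

# Effect sizes in this kind of study are often reported as fold changes of group means.
fold_change = cases_emf.mean() / controls_emf.mean()
print(f"Cases/controls mean ratio: {fold_change:.2f}x")
```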

Keywords: Buccal micronucleus cytome assay, cytogenetic damage, electric field strength, personal exposimeter

Procedia PDF Downloads 158
151 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death in the world. Therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pains together with changes in the ST segment and T wave of the ECG occur shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constructed, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs, the pre-inflation ECG acquired before any catheter insertion and the occlusion ECG acquired during balloon inflation, are analyzed for each patient. By using pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features for its detection are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events by using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize SVM hyperparameters by using the grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain the optimal classification performance. As a result of implementing the developed classification technique on real ECG recordings, it is shown that the proposed technique provides highly reliable detection of anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based on ECG recordings of the patients obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to provide detection of outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is used by computing the average log-likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination threshold values and numbers of ECG segments, probability of detection and probability of false alarm values are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for GMM-based classification. Moreover, the comparison between the performances of SVM- and GMM-based classification showed that SVM provides higher classification performance over ECG recordings of a considerable number of patients.
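A minimal sketch of the SVM part of such a pipeline, assuming ST/T-derived features are already arranged in a feature matrix `X` with labels `y`; the feature extraction itself is not shown, the data below are placeholders, and the parameter grid is illustrative rather than the one used in the study.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: ST/T-derived features per ECG epoch; y: 0 = pre-inflation (non-ischemic), 1 = occlusion (ischemic)
X, y = np.random.rand(150, 12), np.random.randint(0, 2, 150)   # placeholder data

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid-search over kernel type and hyperparameters with 10-fold cross-validation.
param_grid = {
    "svc__kernel": ["linear", "rbf"],
    "svc__C": [0.1, 1, 10, 100],
    "svc__gamma": ["scale", 0.01, 0.1, 1],
}
search = GridSearchCV(make_pipeline(StandardScaler(), SVC()), param_grid, cv=10)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Test accuracy:", search.best_estimator_.score(X_test, y_test))
```

In the per-patient setup described above, this search would be repeated on each patient's own pre-inflation and occlusion epochs rather than on a pooled dataset.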

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 162
150 Adapting Inclusive Residential Models to Match Universal Accessibility and Fire Protection

Authors: Patricia Huedo, Maria José Ruá, Raquel Agost-Felip

Abstract:

Ensuring sustainable development of urban environments means guaranteeing adequate environmental conditions, being resilient and meeting conditions of safety and inclusion for all people, regardless of their condition. All existing buildings should meet basic safety conditions and be equipped with safe and accessible routes, along with visual, acoustic and tactile signals to protect their users or potential visitors, regardless of whether they undergo rehabilitation or change-of-use processes. Moreover, from a social perspective, we consider the need to prioritize buildings occupied by the most vulnerable groups of people, which currently do not have specific regulations tailored to their needs. Some residential models in operation are not only outside the scope of application of the regulations in force; they also lack a project or technical data that would allow knowing the fire behavior of the construction materials. However, the difficulty and cost involved in adapting the entire building stock to current regulations can never justify the lack of safety for people. Hence, this work develops a simplified model to assess compliance with the basic safety conditions in case of fire and its compatibility with the specific accessibility needs of each user. The purpose is to support the designer in decision making, as well as to contribute to the development of a basic fire safety certification tool to be applied in inclusive residential models. This work has developed a methodology to support designers in adapting Social Services Centers, usually intended for vulnerable people. It incorporates a checklist of 9 items and information from sources or standards that designers can use to justify compliance or propose solutions. For each item, the verification system is justified, and possible sources of consultation are provided, considering the possibility of lacking technical documentation of construction systems or building materials. The procedure is based on diagnosing the degree of compliance with fire conditions of residential models used by vulnerable groups, considering the special accessibility conditions required by each user group. Through visual inspection and site surveying, the verification model can serve as a support tool, significantly streamlining and simplifying the diagnostic phase and reducing the number of tests to be requested by over 75%. To illustrate the methodology, two different buildings in the Valencian Region (Spain) have been selected. One case study is a mental health facility for residential purposes, located in a rural area on the outskirts of a small town; the other is a day care facility for individuals with intellectual disabilities, located in a medium-sized city. The comparison between the case studies allows the model to be validated under distinct conditions. Verifying compliance with a basic safety level can allow a quality seal and a public register of buildings adapted to fire regulations to be established, similarly to what is being done with other types of attributes such as energy performance.

Keywords: fire safety, inclusive housing, universal accessibility, vulnerable people

Procedia PDF Downloads 22
149 The Development of Assessment Criteria Framework for Sustainable Healthcare Buildings in China

Authors: Chenyao Shen, Jie Shen

Abstract:

The rating system provides an effective framework for assessing building environmental performance and integrating sustainable development into building and construction processes, as it can be used as a design tool for developing appropriate sustainable design strategies and determining performance measures to guide sustainable design and decision-making processes. Healthcare buildings are resource (water, energy, etc.) intensive. To maintain high-cost operations and complex medical facilities, they require a great deal of hazardous and non-hazardous materials and stringent control of environmental parameters, and they are responsible for producing polluting emissions. Compared with other types of buildings, the impact of healthcare buildings on the environment over their full life cycle is particularly large. With broad recognition among designers and operators that energy use can be reduced substantially, many countries have set up their own green rating systems for healthcare buildings. There are four main green healthcare building evaluation systems widely acknowledged in the world: the Green Guide for Health Care (GGHC), jointly organized by the United States HCWH and CMPBS in 2003; BREEAM Healthcare, issued by the Building Research Establishment (BRE) in 2008; the Green Star-Healthcare v1 tool, released by the Green Building Council of Australia (GBCA) in 2009; and LEED Healthcare 2009, released by the United States Green Building Council (USGBC) in 2011. In addition, the German Sustainable Building Council (DGNB) has also been developing the German Sustainable Building Evaluation Criteria (DGNB HC). In China, more and more scholars and policy makers have recognized the importance of assessment of sustainable development and have adapted some tools and frameworks. China's first comprehensive assessment standard for green building (the GBTs) was issued in 2006 (updated in 2014), promoting sustainability in the built environment and raising awareness of environmental issues among architects, engineers, contractors as well as the public. However, healthcare buildings were not covered by the evaluation system of the GBTs because of their complex medical procedures, strict requirements for the indoor/outdoor environment, and the energy consumption of various functional rooms. Learning from the experience of GGHC, BREEAM, and LEED HC above, China's first assessment criteria for green hospital/healthcare buildings were finally released in December 2015. Combining both quantitative and qualitative assessment criteria, the standard highlights the differences between healthcare and other public buildings in meeting the functional needs of medical facilities and special groups. This paper focuses on the assessment criteria framework for sustainable healthcare buildings, for which the comparison of different rating systems is essential. Descriptive analysis is conducted together with cross-matrix analysis to reveal rich information on green assessment criteria in a coherent manner. The research intends to determine whether the green elements for healthcare buildings in China are different from those adopted in other countries, and how to improve its assessment criteria framework.

Keywords: assessment criteria framework, green building design, healthcare building, building performance rating tool

Procedia PDF Downloads 146
148 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the meniscus's functional ability and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive methods. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in normal and injured states is carried out by using FE analyses. At first, an FE model of the human knee joint in the normal ('intact') state was constructed by using magnetic resonance (MR) tomography images and the image construction code Materialise Mimics. Next, two types of meniscal injury models with radial tears of the medial and lateral menisci were constructed. In the FE analyses, the linear elastic constitutive law was adopted for the femur and tibia bones, the visco-hyperelastic constitutive law for the articular cartilage, and the visco-anisotropic hyperelastic constitutive law for the meniscus, respectively. Material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained by our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its occurrence point varied between the intact and the two meniscal tear models. These compressive stress values can be used to establish the threshold value that causes the pathological change, for diagnostic purposes. In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model, which consists of the femur, tibia, articular cartilage and meniscus, was constructed based on MR images of the human knee joint; the image processing code Materialise Mimics was used, with tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model. The material properties of the meniscus and articular cartilage were determined by curve fitting with experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for the two radial tears of the medial and lateral menisci. In comparison with the intact knee joint, the two tear models show almost the same stress values as each other, and higher values than the intact one. It was shown that both meniscal tears induce stress localization in both the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to be a new diagnostic system to evaluate meniscal damage to the articular cartilage through mechanical functional assessment.
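A minimal sketch of the material-parameter identification step mentioned above, assuming a simple one-term incompressible hyperelastic (neo-Hookean style) uniaxial stress model fitted to experimental stress-stretch data; the model form and the data points are placeholders, not the visco-anisotropic constitutive law actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def neo_hookean_uniaxial(stretch, mu):
    """Nominal (first Piola-Kirchhoff) stress of an incompressible
    neo-Hookean solid under uniaxial tension."""
    return mu * (stretch - stretch ** -2)

# Placeholder experimental data: stretch ratio and nominal stress (MPa).
stretch = np.array([1.00, 1.02, 1.05, 1.08, 1.10, 1.15])
stress = np.array([0.00, 0.12, 0.31, 0.50, 0.63, 0.95])

(mu_fit,), cov = curve_fit(neo_hookean_uniaxial, stretch, stress, p0=[1.0])
print(f"Fitted shear modulus mu = {mu_fit:.3f} MPa "
      f"(std. err. {np.sqrt(cov[0, 0]):.3f} MPa)")
```

The same fitting loop extends naturally to multi-parameter forms (e.g. a generalized Kelvin series) by adding parameters to the model function and supplying matching initial guesses.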

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 246
147 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model

Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson

Abstract:

The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially of the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications, as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 10 m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69-0.79). This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data, from which key markers of poverty and slums, such as roofing and road quality, are discernible. It is important to note, however, that the human readers did not receive any training before the ratings, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall relating to limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship, eXplainable Artificial Intelligence, through a collaborative rather than a comparative framework.
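A minimal sketch of the rank-correlation comparison described above, assuming per-cluster wealth scores and the two sets of estimates are available as arrays; the numbers are synthetic placeholders, not the Tanzanian survey data.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder arrays: ground-truth DHS wealth scores per cluster, plus the
# estimates produced by human readers and by the deep learning model.
rng = np.random.default_rng(42)
ground_truth = rng.normal(size=608)
human_ratings = 0.3 * ground_truth + rng.normal(size=608)                 # weak signal
model_predictions = 0.8 * ground_truth + rng.normal(scale=0.5, size=608)  # strong signal

rho_human, _ = spearmanr(ground_truth, human_ratings)
rho_model, _ = spearmanr(ground_truth, model_predictions)
print(f"Human readers rank correlation: {rho_human:.2f}")
print(f"Model rank correlation:         {rho_model:.2f}")
```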

Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania

Procedia PDF Downloads 106
146 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal a similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to be able to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
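A minimal sketch of the final computational step, assuming the joint asymptotic mean and covariance of the candidate models' GIC values are already available: an upper quantile of the minimum of a jointly Gaussian vector can be approximated by Monte Carlo (the R package mvtnorm mentioned above evaluates the corresponding integrals directly; the mean vector and covariance matrix below are placeholders).

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder asymptotic mean vector and covariance matrix of the GIC values
# for four candidate models.
mean = np.array([100.0, 101.5, 102.0, 104.0])
cov = np.array([[4.0, 2.0, 1.5, 1.0],
                [2.0, 4.0, 2.0, 1.5],
                [1.5, 2.0, 4.0, 2.0],
                [1.0, 1.5, 2.0, 4.0]])

# Monte Carlo approximation of the distribution of the minimum GIC.
draws = rng.multivariate_normal(mean, cov, size=200_000)
minima = draws.min(axis=1)

# Upper quantile used as the edge of the uncertainty band.
upper = np.quantile(minima, 0.95)
print(f"95% upper quantile of the minimum GIC: {upper:.2f}")
```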

Keywords: model selection inference, generalized information criteria, post model selection, asymptotic theory

Procedia PDF Downloads 89
145 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNN) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to the training set, which generally needs to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery; this leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. We are thus allowed to use a large number of random filters at the cost of one scalar unknown per filter. The computational cost in the back-propagation procedure does not increase with the larger size of the filters, even though additional computational cost is required to compute the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between well-known CNN architectures and our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with a smaller number of unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
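A minimal sketch of the central idea, assuming a PyTorch implementation: convolution filters of several sizes are drawn at random and frozen, and only one scalar weight per filter (initialised from the filter's standard deviation) is trained. The layer sizes, filter counts and names are illustrative, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class RandomKernelBlock(nn.Module):
    """Bank of frozen random convolution filters of several sizes,
    combined through learnable per-filter scalar weights."""
    def __init__(self, in_ch, filters_per_size=8, sizes=(3, 7, 11)):
        super().__init__()
        self.convs = nn.ModuleList()
        scales = []
        for k in sizes:
            conv = nn.Conv2d(in_ch, filters_per_size, kernel_size=k,
                             padding=k // 2, bias=False)
            nn.init.normal_(conv.weight, std=1.0 / k)
            conv.weight.requires_grad_(False)          # random filter stays fixed
            self.convs.append(conv)
            # one learnable scalar per filter, initialised from the filter std
            scales.append(conv.weight.view(filters_per_size, -1).std(dim=1))
        self.scales = nn.Parameter(torch.cat(scales))  # only these are trained

    def forward(self, x):
        responses = torch.cat([conv(x) for conv in self.convs], dim=1)
        return torch.relu(responses * self.scales.view(1, -1, 1, 1))

# Example: one random-kernel block followed by a small classifier head.
model = nn.Sequential(RandomKernelBlock(3), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(24, 10))
print(model(torch.randn(2, 3, 32, 32)).shape)   # torch.Size([2, 10])
```

Because the filter weights are frozen, back-propagation only updates the scalar weights (and the classifier head), which reflects the one-unknown-per-filter cost described above.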

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 290
144 The Impact of Right to Repair Initiatives on Environmental and Financial Performance in European Consumer Electronics Firms: An Econometric Analysis

Authors: Daniel Stabler, Anne-Laure Mention, Henri Hakala, Ahmad Alaassar

Abstract:

In Europe, 2.2 billion tons of waste annually generate severe environmental damage and economic burdens, and negatively impact human health. A stark illustration of the problem is found within the consumer electronics industry, which reflects one of the most complex global waste streams. Of the 5.3 billion mobile phones discarded globally in 2022, only 17% were properly recycled. To address these pressing issues, Europe has made significant strides in developing waste management strategies, Circular Economy initiatives, and Right to Repair policies. These endeavors aim to make product repair and maintenance more accessible, extend product lifespans, reduce waste, and promote sustainable resource use. European countries have introduced Right to Repair policies, often in conjunction with extended producer responsibility legislation, repair subsidies, and consumer repair indices, to varying degrees of regulatory rigor. Changing societal trends emphasizing sustainability and environmental responsibility have driven consumer demand for more sustainable and repairable products, benefiting repair-focused consumer electronics businesses. In academic research, much of the literature in management studies has examined the European Circular Economy and the Right to Repair from firm-level perspectives. These studies frequently employ a business-model lens, emphasizing innovation and strategy frameworks. However, this study takes an institutional perspective, aiming to understand the adoption of Circular Economy and repair-focused business models within the European consumer electronics market. The concepts of the Circular Economy and the Right to Repair align with institutionalism as they reflect evolving societal norms favoring sustainability and consumer empowerment. Regulatory institutions play a pivotal role in shaping and enforcing these concepts through legislation, influencing the behavior of businesses and individuals. Compliance and enforcement mechanisms are essential for their success, compelling actors to adopt sustainable practices and consider product life extension. Over time, these mechanisms create a path for more sustainable choices, underscoring the influence of institutions and societal values on behavior and decision-making. Institutionalism, particularly 'neo-institutionalism', provides valuable insights into the factors driving the adoption of circular and repair-focused business models. Neo-institutional pressures can manifest through coercive regulatory initiatives or normative standards shaped by socio-cultural trends. The Right to Repair movement has emerged as a prominent and influential idea within academic discourse and sustainable development initiatives. Therefore, understanding how macro-level societal shifts toward the Circular Economy and the Right to Repair trigger firm-level responses is imperative. This study aims to answer a crucial question: what impact have European Right to Repair initiatives had on the financial and environmental performance of European consumer electronics companies at the firm level? A quantitative and statistical research design will be employed. The study will encompass an extensive sample of consumer electronics firms in Northern and Western Europe, analyzing their financial and environmental performance in relation to the implementation of Right to Repair mechanisms. The study's findings are expected to provide valuable insights into the broader implications of the Right to Repair and Circular Economy initiatives for the European consumer electronics industry.
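A minimal sketch of the kind of firm-level panel regression such a design could use, assuming a linear two-way fixed-effects specification estimated with statsmodels; the variable names, the tiny data frame and the choice of outcome are placeholders, not the study's actual model or dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder firm-year panel: an outcome (e.g. return on assets), a dummy for
# operating under a Right to Repair regime, and a simple size control.
panel = pd.DataFrame({
    "firm":       ["A"] * 3 + ["B"] * 3 + ["C"] * 3 + ["D"] * 3,
    "year":       [2016, 2019, 2022] * 4,
    "roa":        [0.041, 0.048, 0.052, 0.022, 0.031, 0.034,
                   0.060, 0.058, 0.055, 0.012, 0.018, 0.021],
    "r2r_regime": [0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1],
    "log_assets": [4.1, 4.2, 4.3, 3.2, 3.3, 3.4, 5.0, 5.0, 5.1, 2.8, 2.9, 2.9],
})

# Two-way fixed effects via firm and year dummies, with standard errors
# clustered at the firm level.
model = smf.ols("roa ~ r2r_regime + log_assets + C(firm) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm"]}
)
print(model.summary().tables[1])
```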

Keywords: circular economy, right to repair, institutionalism, environmental management, European Union

Procedia PDF Downloads 82
143 Analysis of Fish Preservation Methods for Traditional Fishermen Boat

Authors: Kusno Kamil, Andi Asni, Sungkono

Abstract:

According to a report of the Food and Agriculture Organization (FAO), post-harvest fish losses in Indonesia reach 30 percent; out of 170 trillion rupiahs of marine fisheries reserves, the potential loss thus reaches 51 trillion rupiahs (end-of-2016 data). This condition is caused by traditional fish catches being vulnerable to damage due to disruption of the cold chain of preservation. Physical and chemical changes in fish flesh progress rapidly, especially when the catch is exposed to scorching heat in the middle of the sea, and are exacerbated by low awareness of catch hygiene; many unclean catches which contain blood are often treated without special attention and mixed with freshly caught fish, thereby increasing the potential for faster fish spoilage. This background encourages research on preservation methods for traditional fishermen's catches, with the aim of finding the best and most affordable method and/or combination of fish preservation methods, so that fishermen can increase their fishing duration without worrying that their catch will be damaged and thereby lose economic value when they return to the beach to sell it. This goal is expected to be achieved through experimental treatments of fresh fish catches in containers with the addition of anti-bacterial copper, a liquid smoke solution, and the use of vacuum containers. Three further treatments combined the three previous treatment variables with an electrically powered cooler (temperature 0~4 °C). As control specimens, untreated fresh fish (placed in the open air and in the refrigerator) were also prepared for comparison over 1, 3, and 6 days. To test the level of freshness of the fish for each treatment, physical observations were used, complemented by tests of bacterial content in a trusted laboratory. The content of copper (Cu) in the fish meat (which is suspected of having a negative impact on consumers) was also examined on the 6th day of the experiment. The results of physical observations on the test specimens (organoleptic method) showed that preservation assisted by a cooler was still better for all treatment variables. For the specimens without cooling, the best preservation effectiveness was achieved, in order, by the addition of copper plates, the use of vacuum containers, and then liquid smoke immersion. Especially for liquid smoke, soaking for 6 days of preservation makes the fish meat soft and easy to crumble, even though it does not develop a bad odor. The visual observation was then complemented by the results of testing the amount of growth (or retardation) of putrefactive bacteria for each treatment of the test specimens within similar observation periods. Laboratory measurements show that the minimum amount of putrefactive bacteria was achieved by the preservation treatment combining the cooler with liquid smoke (sample A+), followed by the cooler only (D+), the copper layer inside the cooler (B+), and the vacuum container inside the cooler (C+), respectively. The other treatments in the open air produced a hundred times more putrefactive bacteria. In addition, the copper layer treatment contaminated the preserved fresh fish with more than a thousand times the initial amount of copper, from 0.69 to 1241.68 µg/g.

Keywords: fish, preservation, traditional, fishermen, boat

Procedia PDF Downloads 70
142 The Effect of Ionic Liquid Anion Type on the Properties of TiO2 Particles

Authors: Marta Paszkiewicz, Justyna Łuczak, Martyna Marchelek, Adriana Zaleska-Medynska

Abstract:

In recent years, photocatalytic processes have been intensively investigated for the destruction of pollutants, hydrogen evolution, disinfection of water, air and surfaces, and for the construction of self-cleaning materials (tiles, glass, fibres, etc.). Titanium dioxide (TiO2) is the most popular material used in heterogeneous photocatalysis due to its excellent properties, such as high stability, chemical inertness, non-toxicity and low cost. It is well known that the morphology and microstructure of TiO2 significantly influence its photocatalytic activity. These characteristics, as well as other physical and structural properties of photocatalysts, i.e., specific surface area or density of crystalline defects, can be controlled by the preparation route. In this regard, TiO2 particles can be obtained by sol-gel, hydrothermal and sonochemical methods, chemical vapour deposition and, alternatively, by ionothermal synthesis using ionic liquids (ILs). In TiO2 particle synthesis, ILs may play the role of a solvent, soft template, reagent, agent promoting reduction of the precursor, or particle stabilizer during the synthesis of inorganic materials. In this work, the effect of the IL anion type on the morphology and photoactivity of TiO2 is presented. The preparation of TiO2 microparticles with a spherical structure was successfully achieved by a solvothermal method, using tetra-tert-butyl orthotitanate (TBOT) as the precursor. The reaction process was assisted by the ionic liquids 1-butyl-3-methylimidazolium bromide [BMIM][Br], 1-butyl-3-methylimidazolium tetrafluoroborate [BMIM][BF4] and 1-butyl-3-methylimidazolium hexafluorophosphate [BMIM][PF6]. Various molar ratios of all ILs to TBOT (IL:TBOT) were chosen. For comparison, reference TiO2 was prepared using the same method without IL addition. Scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) surface area analysis, NCHS analysis, and FTIR spectroscopy were used to characterize the surface properties of the samples. The photocatalytic activity was investigated by means of phenol photodegradation in the aqueous phase as a model pollutant, as well as by the formation of hydroxyl radicals based on the detection of the fluorescent product of coumarin hydroxylation. The analysis results showed that the TiO2 microspheres had a spherical structure with diameters ranging from 1 to 6 µm. The TEM micrographs provided a clear view of the samples, in which the particles were comprised of inter-aggregated crystals. It could also be observed that the IL-assisted TiO2 microspheres are not hollow, which provides additional information about the possible formation mechanism. Application of the ILs results in a rise in the photocatalytic activity as well as the BET surface area of TiO2 as compared to pure TiO2. The results of the formation of 7-hydroxycoumarin indicated that the increased amount of ·OH produced at the surface of excited TiO2 for the TiO2_IL samples correlated well with the more efficient degradation of phenol. NCHS analysis showed that ionic liquids remained on the TiO2 surface, confirming the structure-directing role of these compounds.

Keywords: heterogeneous photocatalysis, IL-assisted synthesis, ionic liquids, TiO2

Procedia PDF Downloads 267
141 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania

Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea

Abstract:

A number of studies show that extreme air temperature affects mortality related to cardiovascular diseases, particularly among elderly people. In Romania, summer thermal discomfort expressed by the Universal Thermal Climate Index (UTCI) is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is also located. Urban characteristics such as high building density and reduced green areas enhance the increase of air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect is present and determines an increase of air temperature compared to surrounding areas. This increase is particularly important during heat wave periods in summer. In this context, we performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases, recorded between 2010 and 2019 in Bucharest. The temperature-mortality relationship was modeled by applying a distributed lag non-linear model (DLNM) that includes a bi-dimensional cross-basis function and flexible natural cubic spline functions with three internal knots at the 10th, 75th and 90th percentiles of the temperature distribution, for modelling both the exposure-response and lagged-response dimensions. Firstly, this analysis was applied for the present climate. Extrapolation of the exposure-response associations beyond the observed data allowed us to estimate future effects on mortality due to temperature changes under climate change scenarios and specific assumptions. We used future projections of air temperature from five numerical experiments with regional climate models included in the EURO-CORDEX initiative under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. The results of this analysis show, for RCP 8.5, an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future in comparison with the present climate (2090-2100 vs. 2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When the mortality data are stratified according to age, the ensemble-averaged increase of the heat-attributable mortality fraction for elderly people (> 75 years) in the future is even higher (6.5%). These findings reveal the necessity to carefully plan urban development in Bucharest to face the public health challenges raised by climate change. Paper Details: This work is financed by the project URCLIM, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by the Ministry of Environment, Romania, with co-funding by the European Union (Grant 690462). Part of this work performed by one of the authors has received funding from the European Union's Horizon 2020 research and innovation programme under the project EXHAUSTION, grant agreement No 820655.
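The DLNM itself is usually fitted with the dedicated R package dlnm; as an illustration only, a simplified Python analogue of the exposure-response part (a Poisson GLM of daily deaths on a spline of temperature plus a few lagged temperature terms) could look like the sketch below, with placeholder data and without the full bi-dimensional cross-basis machinery.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder daily series: mean air temperature (deg C) and cardiovascular deaths.
rng = np.random.default_rng(1)
n_days = 3650
temp = 12 + 10 * np.sin(np.arange(n_days) * 2 * np.pi / 365) + rng.normal(0, 3, n_days)
deaths = rng.poisson(20 + 0.3 * np.clip(temp - 22, 0, None) ** 2)

df = pd.DataFrame({"deaths": deaths, "temp": temp})
# Simple distributed-lag terms: temperature lagged by 1-3 days.
for lag in (1, 2, 3):
    df[f"temp_lag{lag}"] = df["temp"].shift(lag)
df = df.dropna()

# Poisson GLM with a cubic B-spline for same-day temperature plus lag terms
# (a crude stand-in for the cross-basis of a true DLNM).
model = smf.glm(
    "deaths ~ bs(temp, df=4) + temp_lag1 + temp_lag2 + temp_lag3",
    data=df, family=sm.families.Poisson(),
).fit()
print(model.summary().tables[1])
```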

Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality

Procedia PDF Downloads 128
140 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is an imaging modality used worldwide to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologist's workload by providing a preliminary opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue. It is consequently hard to automatically quantify mammographic breast density. Therefore, pre-processing is needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique incidence digital mammograms from the São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed, on the Matlab® platform, for the pre-processing of images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. Firstly, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by the active contour method, with the seed of the active contour placed on the pectoral muscle boundary found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods in relation to the area (mm²) of the segmented pectoral muscle. The statistics showed data within the 95% confidence interval, reinforcing the accuracy of the segmentation compared to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. The segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
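A minimal sketch of the processing chain described above (thresholding, a straight-line Hough transform to approximate the muscle boundary, and an active contour seeded on that line), written with scikit-image rather than the Matlab® implementation used in the work; the parameters, the orientation assumption and the helper names are illustrative.

```python
import numpy as np
from skimage import filters, transform, segmentation

def segment_pectoral(image):
    """Rough pectoral-muscle segmentation for an MLO mammogram
    (assumes the muscle appears as an oblique edge near one image corner)."""
    # 1. Threshold away background / non-biological information.
    mask = image > filters.threshold_otsu(image)

    # 2. Straight-line Hough transform on the edges of the thresholded region
    #    to approximate the pectoral muscle boundary.
    edges = filters.sobel(mask.astype(float)) > 0
    h, angles, dists = transform.hough_line(edges)
    _, best_angles, best_dists = transform.hough_line_peaks(h, angles, dists, num_peaks=1)

    # 3. Build an initial snake along the detected line (x = col, y = row in
    #    skimage's line parameterisation) and refine it with an active contour.
    rows = np.linspace(0, image.shape[0] - 1, 200)
    cols = (best_dists[0] - rows * np.sin(best_angles[0])) / np.cos(best_angles[0])
    init = np.stack([rows, np.clip(cols, 0, image.shape[1] - 1)], axis=1)
    return segmentation.active_contour(image, init, alpha=0.01, beta=1.0,
                                       boundary_condition="fixed")

def jaccard(a, b):
    """Jaccard index between two binary masks (e.g. automatic vs. manual segmentation)."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```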

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 350
139 Petrogeochemistry of Hornblende-Bearing Gabbro Intrusive, the Greater Caucasus

Authors: Giorgi Chichinadze, David Shengelia, Tamara Tsutsunava, Nikoloz Maisuradze, Giorgi Beridze

Abstract:

The Jalovchat gabbro intrusive is exposed on the northern and southern slopes of the Main Range zone of the Greater Caucasus, over an area of about 25 km². It is intruded into Precambrian crystalline schists and amphibolites and intensively metamorphoses them along the contact zone. The intrusive is represented by hornblende-bearing gabbro, gabbro-norites and norites, including thin vein bodies of gabbro-pegmatites, anorthosites and micro-gabbros. Of special note are the veins of gabbro-pegmatites with gigantic (up to 0.5 m) hornblende crystals. From this point of view, the Jalovchat gabbroid intrusive is particularly interesting and, by its unusual composition, has no analog in the Caucasus overall. A comprehensive petrologic and geochemical study of the intrusive was carried out by the authors, with the following results. Amphiboles correspond to magnesiohastingsite and magnesiohornblende. In hastingsite and hornblende, as a result of isovalent isomorphism of Fe2+ by Mg, the content of the latter has increased. On AMF and Na2O+K2O diagrams, the intrusive rocks correspond to tholeiitic basalts or basalts close to them in composition. According to the ACM-AMF double diagram, the samples are distributed in the fields of MORB and alkali cumulates. In TiO2/FeO+Fe2O3, Zr/Y-Zr and Ti-Cr/Ni diagrams and the Ti-Cr-Y triangular diagram, samples fall in the fields of island-arc and mid-oceanic basalts or along the trends reflecting mid-oceanic ridges or island arcs. The K2O/TiO2 diagram shows that these rocks belong to the normal and enriched MORB types. According to the Th/Nb/Y ratio, the Jalovchat intrusive composition corresponds to depleted mantle, but by Sm/Y-Ce/Sm to the MORB area. Th/Y and Nb/Y ratios coincide with the MORB composition, Th/Yb-Ta/Yb and La/Nb-Ti ratios correspond to N-MORB, and Rb/Y and N/Y to lower crust formations. Exceptions are the Ce/Pb-Ce and Nb/Th-Nb diagrams, which show the area of primitive mantle. Spidergrams are characterized by an almost horizontal trend, weakly expressed Eu minima and a slight depletion in light REE; similar features are characteristic of typical tholeiitic basalts. In comparison to MORB spidergrams, they are characterized by depletion in light REE. Their correlation with the spidergrams of the Jalovchat intrusive shows that the latter are more depleted, which points to the gradual depletion of the mantle in light REE over geological time. The RE and REE diagrams reveal an unexpected regularity. In particular, the petro-geochemical characteristics of the Jalovchat gabbroid intrusive predominantly correspond to MORB, which is usually an anomalous phenomenon, since in 'ophiolitic' sections magmatic formations represented mainly by gigantic prismatic hornblende-bearing gabbro and gabbro-pegmatite are not indicated. On the basis of petro-mineralogical and petro-geochemical data analysis, the authors consider that the Jalovchat intrusive belongs to the subduction geodynamic type. The MORB rock system subducted into depleted mantle rich in water, where favorable conditions for the crystallization of hornblende, and especially of its gigantic crystals, occurred. It is considered that the Jalovchat intrusive was formed in deep horizons of the Earth's crust as a result of crystallization of water-bearing Bajocian basalt magma.

Keywords: The Greater Caucasus, gabbro-pegmatite, hornblende-bearing gabbro, petrogenesis

Procedia PDF Downloads 443
138 Dynamic Thermomechanical Behavior of Adhesively Bonded Composite Joints

Authors: Sonia Sassi, Mostapha Tarfaoui, Hamza Benyahia

Abstract:

Composite materials are increasingly being used as a substitute for metallic materials in many technological applications, such as aeronautics, aerospace, marine and civil engineering. For composite materials, the thermomechanical response evolves with the strain rate. The energy balance equation for anisotropic, elastic materials includes heat source terms that govern the conversion of some of the kinetic work into heat; the remainder contributes to the stored energy driving the damage process in the composite material. In this paper, we investigate the bulk thermomechanical behavior of adhesively bonded composite assemblies to quantitatively assess the temperature rise that accompanies adiabatic deformations. In particular, adhesively bonded joints in a glass/vinylester composite are subjected to in-plane dynamic loads over a range of strain rates. The dynamic thermomechanical behavior of this material is investigated using compression Split Hopkinson Pressure Bars (SHPB) coupled with a high-speed infrared camera and a high-speed camera to measure in real time the dynamic behavior, the damage kinetics and the temperature variation in the material. The interest of the high-speed IR camera is to view in real time the evolution of heat dissipation in the material when damage occurs. However, this technique does not provide thermal values that can be correlated with the stress-strain curves of the composite, because its response time is long compared with the duration of the dynamic test. For this reason, the authors revisit the use of small thermocouples placed on the surface of the material to obtain real thermal measurements under dynamic loading. Experiments on dynamically loaded material show that the thermocouples record temperature values with a short typical rise time, resulting from the conversion of kinetic work into heat during the compression test. These results show that small thermocouples can provide an important complement to non-contact techniques such as the high-speed infrared camera. A significant temperature rise was observed in in-plane compression tests, especially at high strain rates. During the tests, it was noticed that a sudden temperature rise occurs when macroscopic damage occurs. This rise in temperature is linked to the rate of damage: the more severe the damage, the higher the localized temperature detected. This shows the strong relationship between the occurrence of damage and the induced heat dissipation. For the in-plane tests, the damage takes place more abruptly as the strain rate increases. The differences observed in the thermomechanical response in in-plane compression are explained only by the differences in the damage processes active during the compression tests. In this study, we highlighted the dependence of the thermomechanical response of bonded specimens on the strain rate. The heat dissipation of this material hence cannot be ignored and should be taken into account when defining damage models for impact loading.
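As an illustration of the energy-balance idea described above, the sketch below converts a dynamic stress-strain record into an adiabatic temperature-rise estimate, ΔT ≈ (β/ρc_p)·∫σ dε, where β is the assumed fraction of deformation work dissipated as heat (a Taylor-Quinney-type coefficient). The material properties, β value and synthetic curve are illustrative assumptions, not values reported in the study.

```python
import numpy as np

# Illustrative material properties (assumed, not from the study)
RHO = 1800.0      # density of a glass/vinylester laminate, kg/m^3
CP = 1000.0       # specific heat capacity, J/(kg*K)
BETA = 0.9        # assumed fraction of deformation work dissipated as heat

def adiabatic_temperature_rise(stress_pa, strain, beta=BETA, rho=RHO, cp=CP):
    """Estimate the adiabatic temperature rise along a stress-strain path.

    stress_pa : array of stress values (Pa)
    strain    : array of strain values (dimensionless), same length
    Returns the cumulative temperature rise (K) at each point of the path.
    """
    # Trapezoidal integration of stress over strain gives work density in J/m^3
    work_density = np.concatenate(([0.0], np.cumsum(
        0.5 * (stress_pa[1:] + stress_pa[:-1]) * np.diff(strain))))
    return beta * work_density / (rho * cp)

# Example with a synthetic (toy) dynamic compression curve
strain = np.linspace(0.0, 0.03, 100)
stress = 20e9 * strain * np.exp(-strain / 0.02)   # toy nonlinear response, Pa
dT = adiabatic_temperature_rise(stress, strain)
print(f"Estimated adiabatic temperature rise: {dT[-1]:.2f} K")
```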

Keywords: adhesively-bonded composite joints, damage, dynamic compression tests, energy balance, heat dissipation, SHPB, thermomechanical behavior

Procedia PDF Downloads 213
137 Comparison of Nutritional Status of Asthmatic vs Non-asthmatic Adults

Authors: Ayesha Mushtaq

Abstract:

Asthma is a pulmonary disease in which airway obstruction occurs due to inflammation in response to certain allergens. Breathing difficulty, cough, and dyspnea are among its symptoms. Several studies have indicated that changes in dietary routines have a significant effect on asthma. Certain food items, such as oily foods, are known to aggravate asthma symptoms, and low dietary intake of fruits and vegetables may be important in relation to asthma prevalence. The objective of this study is to assess and compare the nutritional status of asthmatic and non-asthmatic patients. The significance of this study lies in the fact that it will help nutritionists arrange a feasible dietary routine for asthmatic patients. This research was conducted at the Pulmonology Department of the Pakistan Institute of Medical Sciences (PIMS), Islamabad. About 334 million people are affected by asthma worldwide. Pakistan has a rapidly urbanizing population, and asthma cases are increasingly common. Several studies suggest an increase in the asthmatic patient population due to improper diet, and studies conducted at other institutions on similar topics have suggested a substantial difference in the nutritional status of asthmatic and non-asthmatic patients. This cross-sectional study, aimed at assessing the nutritional status of asthmatic and non-asthmatic patients, included patients aged 20-60 years attending the pulmonology clinic at PIMS. A questionnaire was developed to estimate the dietary patterns of these patients; it comprised several sections. The first section was a socio-demographic profile covering age, gender, monthly income and occupation. The second section recorded anthropometric measurements: weight, height and body mass index (BMI). The third section covered biochemical attributes; for biochemical profiling, pulmonary function testing (PFT) was performed. The fourth section assessed dietary habits using a food frequency questionnaire (FFQ) covering food habits and consumption patterns. The next section collected lifestyle data, namely the person's level of physical activity, sleep and smoking habits. Finally, all the data obtained from the study were statistically analyzed. Most of the asthma patients were female and overweight or obese. BMI was higher in asthmatic patients than in non-asthmatic ones. When nutritional values were assessed, these patients were found to be low in certain nutrients, and their diet included more junk and oily food than healthy vegetables and fruits; beverage intake was also assessed. It is evident from this study that nutritional status has a contributory effect on asthma. Patients at risk of developing asthma, or those who have already developed it, should therefore focus on their diet, maintain good eating habits and eat a healthy diet including fruits and vegetables rather than oily foods. Proper sleep may also contribute to the control of asthma.
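The anthropometric step described above reduces to a simple BMI calculation, and the reported group comparison can be illustrated with a two-sample test. The sketch below is illustrative only; the sample values, group sizes and the choice of Welch's t-test are assumptions, not the study's actual data or analysis.

```python
import numpy as np
from scipy import stats

def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

# Hypothetical example data; the study's real measurements are not reproduced here
rng = np.random.default_rng(0)
bmi_asthmatic = bmi(rng.normal(78, 10, 40), rng.normal(1.62, 0.07, 40))
bmi_control = bmi(rng.normal(68, 9, 40), rng.normal(1.64, 0.07, 40))

# Welch's t-test (does not assume equal variances) comparing mean BMI of the two groups
t_stat, p_value = stats.ttest_ind(bmi_asthmatic, bmi_control, equal_var=False)
print(f"Mean BMI asthmatic: {bmi_asthmatic.mean():.1f}, control: {bmi_control.mean():.1f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```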

Keywords: NUTRI, BMI, asthma, food

Procedia PDF Downloads 70
136 A Novel Concept of Optical Immunosensor Based on High-Affinity Recombinant Protein Binders for Tailored Target-Specific Detection

Authors: Alena Semeradtova, Marcel Stofik, Lucie Mareckova, Petr Maly, Ondrej Stanek, Jan Maly

Abstract:

Recently, novel strategies based on so-called molecular evolution were shown to be effective for producing various peptide ligand libraries with affinities to molecular targets of interest comparable to or even better than monoclonal antibodies. The major advantage of these peptide scaffolds is mainly their low molecular weight and simple structure. This study describes a new immunosensor based on high-affinity binding molecules, using a simple optical system, with human serum albumin (HSA) as a model detection target. We present a comparison of two variants of recombinant binders based on the albumin binding domain of protein G (ABD), performed on a micropatterned glass chip. Binding domains may be tailored to any specific target of interest by molecular evolution. Micropatterned glass chips were prepared using UV-photolithography on chromium-sputtered glasses. The glass surface was modified with (3-aminopropyl)triethoxysilane and biotin-PEG-acid using EDC/NHS chemistry. Two variants of high-affinity binding molecules were used to detect the target molecule. The first variant is based on the ABD domain fused with a TolA chain; this molecule is biotinylated in vivo, and each molecule contains one biotin and one ABD domain. The second variant is based on a streptavidin molecule and contains four biotin-binding sites and four ABD domains. These high-affinity molecules were immobilized on the chip surface via biotin-streptavidin chemistry. To eliminate nonspecific binding, 1% bovine serum albumin (BSA) or 6% fetal bovine serum (FBS) was used in every step. For both variants, the range of measured concentrations of fluorescently labelled HSA was 0-30 µg/ml. As a control, we performed a simultaneous assay without high-affinity binding molecules. The fluorescent signal was measured using an inverted fluorescence microscope Olympus IX 70 with a COOL LED pE 4000 as the light source, the related filters, and a Retiga 2000R camera as the detector. The fluorescent signal from non-modified areas was subtracted from the signal of the fluorescent areas. Results were presented as graphs showing the dependence of the measured grayscale value on the log-scale HSA concentration. For the TolA variant, the limit of detection (LOD) of the optical immunosensor proposed in this study is calculated to be 0.20 µg/ml for HSA detection in 1% BSA and 0.24 µg/ml in 6% FBS. In the case of the streptavidin-based molecule, it was 0.04 µg/ml and 0.07 µg/ml, respectively. The dynamic range of the immunosensor could be estimated only for the TolA variant and was calculated to be 0.49-3.75 µg/ml and 0.73-1.88 µg/ml, respectively. In the case of the streptavidin-based variant, surface saturation was not reached even at a concentration of 480 µg/ml, so the upper value of the dynamic range was not estimated; the lower value was calculated to be 0.14 µg/ml and 0.17 µg/ml, respectively. Based on the obtained results, it is clear that both variants are useful for creating the bio-recognition layer of immunosensors. For this particular system, the variant based on the streptavidin molecule is more useful for biosensing on planar glass surfaces: immunosensors based on this variant would exhibit a better limit of detection and a wider dynamic range.
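The LOD and dynamic-range figures quoted above come from calibration curves of fluorescence signal versus HSA concentration. One common way to estimate an LOD from such a curve is the 3σ criterion (blank mean plus three standard deviations of the blank, converted back to concentration through the fitted curve); the sketch below illustrates that approach on synthetic data. The four-parameter logistic model, blank statistics and all numbers are assumptions, not the calibration actually used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic calibration curve (signal vs. concentration)."""
    return bottom + (top - bottom) / (1.0 + (ec50 / np.maximum(conc, 1e-12)) ** hill)

# Synthetic calibration data (grayscale signal vs. HSA concentration in ug/ml)
conc = np.array([0.01, 0.05, 0.1, 0.5, 1, 3, 10, 30])
signal = four_pl(conc, 5, 200, 1.5, 1.2) + np.random.default_rng(1).normal(0, 3, conc.size)

params, _ = curve_fit(four_pl, conc, signal, p0=[5, 200, 1.0, 1.0], maxfev=10000)

# 3-sigma criterion: blank mean plus three standard deviations of the blank (assumed values)
blank_mean, blank_sd = 5.0, 2.0
lod_signal = blank_mean + 3 * blank_sd

# Invert the fitted curve numerically to obtain the LOD concentration
grid = np.logspace(-3, 2, 2000)
lod_conc = grid[np.argmin(np.abs(four_pl(grid, *params) - lod_signal))]
print(f"Estimated LOD: {lod_conc:.2f} ug/ml")
```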

Keywords: high affinity binding molecules, human serum albumin, optical immunosensor, protein G, UV-photolithography

Procedia PDF Downloads 368
135 Response of Subfossile Diatoms, Cladocera, and Chironomidae in Sediments of Small Ponds to Changes in Wastewater Discharges from a Zn–Pb Mine

Authors: Ewa Szarek-Gwiazda, Agata Z. Wojtal, Agnieszka Pociecha, Andrzej Kownacki, Dariusz Ciszewski

Abstract:

Mining of metal ores is one of the largest sources of heavy metals, which deteriorate aquatic systems. The response of organisms to environmental changes can be well recorded in the sediments of affected water bodies and may be reconstructed from analyses of the organisms' remains. The present study examined the response of diatom (Bacillariophyta), Cladocera, and Chironomidae communities to the impact of Zn-Pb mine water discharge, as recorded in sediment cores of small subsidence ponds on the Chechło River floodplain (Silesia–Krakow Region, southern Poland). We hypothesized different responses of these groups to high metal concentrations (Cd, Pb, Zn, and Cu). The investigated ponds were formed either during the peak of ore exploitation (DOWN) or after mining cessation (UP). Currently, the concentrations of dissolved metals in water (in µg g⁻¹) reach up to 0.53 for Cd, 7.3 for Pb, and up to 47.1 for Zn. All the sediment cores from the subsidence ponds were heavily polluted, with Cd 6.7–612 μg g⁻¹, Pb 0.1–10.2 mg g⁻¹, and Zn 0.5–23.1 mg g⁻¹. Core sediments also varied with respect to pH (5.8–7.1) and organic matter content (5.7–39.8%). The impact of high metal concentrations was expressed by the occurrence of metal-tolerant taxa such as the diatoms Nitzschia amphibia, Sellaphora nigri, and Surirella brebisonii var. kuetzingii; the cladoceran Chydorus sphaericus (which dominated in cores from all ponds); and the chironomids Chironomus and Cricotopus, especially in the DOWN ponds. Statistical analysis revealed a negative impact of metals on some taxa of diatoms and Cladocera, but among Chironomidae only on Polypedilum sp. The abundance of diatoms such as Gomphonema utae, Staurosirella pinnata and Eunotia bilunaris, and of Cladocera such as Alona, Chydorus, Graptoleberis, and Pleuroxus, decreased with increasing Pb concentration. However, the occurrence or dominance of more sensitive species of diatoms and Cladocera indicates their adaptation to higher metal loads, which was facilitated by the neutral to slightly alkaline waters. Diatom assemblages were generally resistant to Zn, Pb, Cu, and Cd pollution, as indicated by their large similarity to populations from non-contaminated waters. Comparison with reference sites clearly indicates the dominance of Achnanthidium minutissimum, Staurosira venter, and Fragilaria gracilis in the very diverse assemblages of unpolluted waters. The distribution of the Cladocera and Chironomidae taxa depended on habitat type. The DOWN ponds, with stagnant water and overgrown with macrophytes, were more suitable for cladocerans (14 taxa, higher diversity) than the UP ponds, with river water flowing through their centre and a small share of macrophytes (8 taxa). The Chironominae, mainly Chironomus and Micropsectra, were abundant in cores from the UP ponds with muddy bottoms. Conversely, the density of Orthocladiinae, especially the genus Cricotopus, was related to the organic matter content and dominated in cores from the DOWN ponds. The presence of the diatoms Nitzschia amphibia, Sellaphora nigri, and Surirella brebisonii var. kuetzingii, the cladocerans Bosmina longirostris, Chydorus sphaericus, Alona affinis, and A. rectangularis, as well as the chironomids Chironomus sp. (UP ponds) and Psectrotanypus varius (DOWN ponds), indicates the influence of water trophic status on their distribution.

Keywords: Chironomidae, Cladocera, diatoms, metals, Zn-Pb mine, sediment cores, subsidence ponds

Procedia PDF Downloads 77
134 Conceptualizing of Priorities in the Dynamics of Public Administration Contemporary Reforms

Authors: Larysa Novak-Kalyayeva, Aleksander Kuczabski, Orystlava Sydorchuk, Nataliia Fersman, Tatyana Zemlinskaia

Abstract:

The article presents the results of a creative analysis and comparison of trends in the development of the theory of public administration from the second half of the 20th century to the beginning of the 21st century. The process of conceptualizing the priorities of public administration in the dynamics of reform is examined, as shaped by factors such as globalization, integration, information and technological change, and human rights. The priorities of the social state in the concepts of the second half of the 20th century are studied, and the peculiar approaches to determining the priorities of public administration in the countries under 'Soviet dictatorship' in Central and Eastern Europe in the same period are outlined. Particular attention is paid to the priorities of public administration regarding the interaction between public power and society and to the development of conceptual foundations for the modern managerial process. It is argued that the dynamics of the formation of concepts of European governance is characterized by a sequence of priorities: from socio-economic and moral-ethical to organizational-procedural and non-hierarchical ones. The priorities of the 'welfare state' focused on a decent level of material wellbeing for the population, while the conception of the 'minimal state' emphasized individual responsibility for one's own fate under conditions of minimal state protection. Later, the emphasis was placed on horizontal ties and the redistribution of powers and competences of the 'effective state', with its developed procedures and limits of responsibility at all levels of government and in close cooperation with civil society. The priorities of the contemporary period are concentrated on human rights in the concept of 'good governance' and all subsequent ones, which recognize compliance with, provision of and protection of human rights as the absolute priority of public administration. It is shown that civilizational changes taking place under the influence of information and technological imperatives also bring about changes in priorities, a redistribution of emphases, and updated principles of managerial concepts on the basis of publicity and transparency, a departure from traditional forms of hierarchy and control in favor of interactivity and inter-sectoral interaction, and the decentralization and humanization of managerial processes. The need for permanent reorganization, establishing interaction between the different participants of public power and social relations and a balance between political forces and social interests on the basis of mutual trust and understanding, determines changes in the social, political, economic and humanitarian paradigms of public administration and their theoretical comprehension. Further studies of the theoretical foundations of modern public administration in interdisciplinary discourse, in the context of the ambiguous consequences of the globalization and integration processes of modern European state-building, would be advisable. This is especially true during periods of political transformation and economic crisis characteristic of contemporary Europe, especially for democratic transition countries.

Keywords: concepts of public administration, democratic transition countries, human rights, the priorities of public administration, theory of public administration

Procedia PDF Downloads 174
133 Reimagining Kinships: Queering the Labor of Care and Motherhood in Japan’s Rental Family Services

Authors: Maari Sugawara

Abstract:

This study investigates the constructed notion of “motherhood” and queered forms of care in contemporary Japan, focusing on rental family services. In Japan, the concept of motherhood is often equated with womanhood, reflecting a pervasive ideology that views motherhood as an essential aspect of a woman's societal role, particularly amidst economic recovery and an aging population. This study interrogates these gendered expectations by linking rental family services, particularly the role of rental mothers, to traditional caregiving roles. It critiques the gendered construction of domestic labor and aims to expand conceptions of alternative family structures and caregiving roles beyond normative frameworks. Emerging in the 1980s to provide companionship for the elderly, rental family services have evolved to meet diverse social needs, with paid actors fulfilling familial roles at various social events. Despite their growing prevalence, academic exploration of this phenomenon remains limited. This research aims to fill that gap by investigating the cultural, social, and economic factors fueling the popularity of rental family services and analyzing their implications for contemporary understandings of family dynamics and care labor in Japan. Furthermore, this study underscores the disproportionate domestic labor burden women in Japan bear, often managing time-intensive household tasks, which creates a "double burden" for those in full-time employment. Care work, including elderly and disability support, is undervalued and typically compensated at near-minimum wage levels, with women predominantly filling these low-wage roles. This gender disparity in Japan's care industry contributes to labor shortages in caregiving and childcare, highlighting broader structural inequities in the labor market. Through semi-structured qualitative interviews with fifteen rental mothers, this study investigates their experiences, motivations, role dynamics, and emotional labor. It critically examines whether the labor performed by rental family actors constitutes a subversive practice deserving of appropriate compensation. Utilizing a role-playing method, the author engages with rental mothers as if they were her own, reflecting the dynamics of compensated labor. This interaction delves into the economic and emotional aspects of constructed motherhood, facilitating a broader inquiry into the value of both productive and reproductive labor in Japan. The study also investigates the relationship between sex work and rental family services within the socio-economic landscape, recognizing the links between the welfare sector and female employment in legal sex work. Although distinct, these sectors merit joint consideration due to the commonality of male clients in both industries. This research engages with theoretical perspectives framing mobile sex work as inherently queer, directly challenging the dominance of heteronormativity. The agency exercised by sex workers complicates narratives of conformity and deviance, underscoring the need to reevaluate caregiving labor in both paid and unpaid contexts. Ultimately, this research critiques the intersection of gender, care, and labor in contemporary Japan by examining the undervaluation of traditional caregiving roles alongside the labor involved in rental family services. 
It challenges Japanese policies that equate womanhood with motherhood and explores the potential of viewing outsourced care as queered maternal and non-reproductive labor, advocating for the recognition of alternative family structures and non-reproductive forms of motherhood.

Keywords: motherhood, alternative family structures, carework, Japan, queer studies

Procedia PDF Downloads 14
132 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major contributor to non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success lies in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency to the pre-determined signature. The matrix is then correlated in real time with the pre-determined stuck-pipe signature for this field. The correlation used is a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck-pipe incidents in real time. Once this probability passes a fixed threshold defined by the user, the second component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data as a live stream, mimicking actual drilling conditions, for an onshore oil field. Pre-determined signatures had been created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Prediction of the stuck pipe problem requires a method to capture geological, geophysical and drilling data and to recognize the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
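A much-simplified sketch of the real-time component described above: live surface-drilling channels are aggregated into a window, correlated against a pre-determined stuck-pipe signature, and the correlation is mapped to a probability that triggers an alert above a user-defined threshold. The channel names, window length, threshold and the plain Pearson correlation used here are illustrative assumptions; the study's actual CFS-based correlation and WITSML ingestion are not reproduced.

```python
import numpy as np
import pandas as pd

THRESHOLD = 0.7        # user-defined alert threshold (assumed)
WINDOW = 120           # number of live samples per window (assumed)
CHANNELS = ["hookload", "torque", "rpm", "spp", "rop"]   # example surface channels

def window_probability(live_window: pd.DataFrame, signature: pd.DataFrame) -> float:
    """Correlate a window of live surface data with a pre-determined signature
    and map the mean per-channel correlation to a [0, 1] pseudo-probability."""
    corrs = []
    for ch in CHANNELS:
        c = np.corrcoef(live_window[ch], signature[ch])[0, 1]
        if not np.isnan(c):
            corrs.append(c)
    mean_corr = float(np.mean(corrs)) if corrs else 0.0
    return (mean_corr + 1.0) / 2.0       # rescale correlation [-1, 1] -> [0, 1]

def monitor(live_stream: pd.DataFrame, signature: pd.DataFrame) -> None:
    """Slide over the live stream window by window and raise threshold alerts."""
    for start in range(0, len(live_stream) - WINDOW + 1, WINDOW):
        window = live_stream.iloc[start:start + WINDOW]
        p = window_probability(window, signature)
        if p >= THRESHOLD:
            print(f"ALERT at sample {start + WINDOW}: stuck-pipe probability {p:.2f}")

# Example with synthetic data shaped like the signature
rng = np.random.default_rng(42)
signature = pd.DataFrame({ch: np.sin(np.linspace(0, 3, WINDOW)) + 0.1 * rng.normal(size=WINDOW)
                          for ch in CHANNELS})
live = pd.concat([signature + 0.2 * rng.normal(size=(WINDOW, len(CHANNELS)))] * 3,
                 ignore_index=True)
monitor(live, signature)
```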

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 229
131 Effectiveness of Participatory Ergonomic Education on Pain Due to Work Related Musculoskeletal Disorders in Food Processing Industrial Workers

Authors: Salima Bijapuri, Shweta Bhatbolan, Sejalben Patel

Abstract:

Ergonomics concerns fitting the environment and the equipment to the worker. Ergonomic principles can be employed in different dimensions of the industrial sector, and the participation of all stakeholders is key to formulating a multifaceted, comprehensive approach to lessening the burden of occupational hazards. Taking responsibility for one's own work activities, by acquiring sufficient knowledge and the capacity to influence practices and outcomes, is the basis of participatory ergonomics and speeds up the identification of workplace hazards. The study aimed to examine how participatory ergonomics can be effective in the management of work-related musculoskeletal disorders. Method: A mega kitchen was identified in a twin city of Karnataka, India. Consent was obtained, and workers were screened using observation methods. Kitchen work was structured to include different tasks: preparation, cooking, distributing and serving food, packing food to be delivered to schools, dishwashing, cleaning and maintenance of the kitchen and equipment, and receiving and storing raw material. A total of 100 workers attended an education session on participatory ergonomics and its role in implementing correct ergonomic practices, thus preventing WRMSDs. Demographic details and baseline data on musculoskeletal pain and discomfort were collected using the Nordic pain questionnaire and VAS score pre- and post-study. Monthly visits were made, and the education sessions were reiterated on each visit, reminding, correcting and problem-solving with each worker. After 9 months, with a total of 4 such education sessions, the post-education data were collected. SPSS 20 software was used to analyse the collected data. Results: The majority of workers (78%), depending on availability and feasibility, participated in the intervention workshops, which were arranged four times. The average age of the participants was 39 years; 79.49% of participants were female and 20.51% male. The Nordic Musculoskeletal Questionnaire (NMQ) showed that knee pain was the most commonly reported complaint (62%) over the previous 12 months, with a mean VAS of 6.27, followed by low back pain. Post intervention, the mean VAS score was reduced significantly to 2.38. The comparison of pre-post scores was made using the Wilcoxon matched-pairs test. On enquiry, it was found that participants had learned the importance of applying ergonomics at their workplace, which in turn enabled them to handle problems arising at their workplace on their own, with confidence. Conclusion: Participatory ergonomics proved effective with the workers of the mega kitchen, and it is a feasible and practical approach. An advantage of the study setting was that it had a sophisticated, ergonomically designed workstation; what was most needed was education and practical knowledge on how to use these stations. There was a significant reduction in VAS scores with the implementation of changes in working style, and the knowledge of ergonomics helped to decrease physical load and improve musculoskeletal health.
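The pre-post comparison reported above uses the Wilcoxon matched-pairs (signed-rank) test on VAS pain scores. A minimal sketch of that analysis is shown below; the paired scores are invented for illustration and only loosely echo the reported means (6.27 pre, 2.38 post), not the study's actual data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired VAS pain scores (0-10) for the same workers pre and post intervention
pre_vas = np.array([7, 6, 8, 5, 7, 6, 6, 7, 5, 8, 6, 7])
post_vas = np.array([3, 2, 3, 2, 2, 3, 1, 3, 2, 4, 2, 2])

# Wilcoxon matched-pairs (signed-rank) test, as used for the pre-post comparison
stat, p_value = stats.wilcoxon(pre_vas, post_vas)
print(f"Mean VAS pre: {pre_vas.mean():.2f}, post: {post_vas.mean():.2f}")
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```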

Keywords: ergonomic awareness session, mega kitchen, participatory ergonomics, work related musculoskeletal disorders

Procedia PDF Downloads 138
130 Bio-Psycho-Social Consequences and Effects in Fall-Efficacy Scale in Seniors Using Exercise Intervention of Motor Learning According to Yoga Techniques

Authors: Milada Krejci, Martin Hill, Vaclav Hosek, Dobroslava Jandova, Jiri Kajzar, Pavel Blaha

Abstract:

The paper reports effects of the exercise intervention of the research project "Basic research of balance changes in seniors", supported by the Czech Science Foundation. The objective of the presented study is to define predictors which influence the bio-psycho-social consequences and effects of balance ability in seniors aged 65 years and above. We focused on evaluating Fall-Efficacy Scale changes in seniors. The comprehensive hypothesis of the project is that motion uncertainty (dyskinesia) can negatively affect the well-being of a senior in a bio-psycho-social context. In total, 100 seniors (30 males, 70 females) from Prague and the Central Bohemian region were randomly selected and tested. The sample was divided by stratified random selection into experimental and control groups, who underwent input and output testing. For diagnostics, the methods of medical anamnesis, functional anthropological examination, the Tinetti Balance Assessment Tool, the SF-36 Health Survey and an anamnestic comparative self-assessment scale were used. The intervention method called "Life in Balance", based on yoga techniques, was applied in a four-week cycle. Results of multivariate regression were verified by repeated measures ANOVA (subject factor; phase of intervention as between-subject factor; body fluid as within-subject factor; and phase of intervention × body fluid interaction). ANOVA with repetition was performed involving the factors of subject, experimental/control group, phase of intervention (independent variable) and group × phase interaction, followed by Bonferroni multiple comparison tests with a test power of at least 0.8 at the probability level p < 0.05. In this paper, results of the first-year investigation of the three-year project are analysed. Results of the balance tests confirmed no significant difference between females and males in the pre-test. Significant improvements in balance and walking ability were observed in the experimental group, with females improving more than males (F = 128.4, p < 0.001). In the female control group, there was no significant change in the post-test, while in the female experimental group positive changes in posture and spine flexibility were found in the post-tests. It seems that even at senior age, females respond better to the intervention stimuli in balance and spine flexibility. On the basis of the analysis of the results, a significant improvement in social balance markers after the intervention can be declared in the experimental group (F = 10.5, p < 0.001). On average, the seniors took four drugs daily; the number of drugs can contribute to allergy symptoms and balance problems. It can be concluded that the static balance and walking ability of seniors according to the Tinetti Balance scale correlate significantly with the monitored psychological and social markers.
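A minimal sketch of the repeated-measures analysis described above, with phase (pre/post) as a within-subject factor and experimental/control group as a between-subject factor, followed by Bonferroni-corrected pairwise comparisons. The use of the pingouin library, the long-format layout and all numbers are illustrative assumptions; the study's full design (e.g. the body fluid factor) is not reproduced here.

```python
import numpy as np
import pandas as pd
import pingouin as pg   # provides a convenient mixed (split-plot) ANOVA

rng = np.random.default_rng(7)
n_per_group = 20

# Hypothetical balance scores: pre/post (within-subject) x experimental/control (between-subject)
rows = []
for group, post_gain in [("experimental", 4.0), ("control", 0.5)]:
    for subj in range(n_per_group):
        subj_id = f"{group}_{subj}"
        pre = rng.normal(20, 3)
        rows.append({"subject": subj_id, "group": group, "phase": "pre", "score": pre})
        rows.append({"subject": subj_id, "group": group, "phase": "post",
                     "score": pre + post_gain + rng.normal(0, 1.5)})
df = pd.DataFrame(rows)

# Mixed ANOVA: phase (within), group (between), subject as the repeated-measures unit
aov = pg.mixed_anova(data=df, dv="score", within="phase", between="group", subject="subject")
print(aov[["Source", "F", "p-unc"]])

# Bonferroni-corrected pairwise comparisons, mirroring the multiple-comparison step
posthoc = pg.pairwise_tests(data=df, dv="score", within="phase", between="group",
                            subject="subject", padjust="bonf")
print(posthoc[["Contrast", "A", "B", "p-corr"]])
```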

Keywords: exercises, balance, seniors 65+, health, mental and social balance

Procedia PDF Downloads 137
129 Ultrafiltration Process Intensification for Municipal Wastewater Reuse: Water Quality, Optimization of Operating Conditions and Fouling Management

Authors: J. Yang, M. Monnot, T. Eljaddi, L. Simonian, L. Ercolei, P. Moulin

Abstract:

The application of membrane technology to wastewater treatment has expanded rapidly under increasingly stringent legislation and environmental protection requirements. At the same time, water resources are becoming precious, and water reuse has gained popularity. In particular, ultrafiltration (UF) is a very promising technology for water reuse as it can retain organic matter, suspended solids, colloids, and microorganisms. Nevertheless, few studies dealing with the operating optimization of UF as a tertiary treatment for water reuse at semi-industrial scale appear in the literature. Therefore, this study aims to explore permeate water quality and to optimize operating parameters (maximizing productivity and minimizing irreversible fouling) through the operation of a UF pilot plant under real conditions. A fully automatic semi-industrial UF pilot plant with periodic classic backwashes (CB) and air backwashes (AB) was set up to filter the secondary effluent of an urban wastewater treatment plant (WWTP) in France. In this plant, the secondary treatment consists of a conventional activated sludge process followed by a sedimentation tank; the UF process was thus defined as a tertiary treatment and was operated at constant flux. A combination of CB and chlorinated AB was used for better fouling management. A 200 kDa hollow fiber membrane was used in the UF module, with an initial permeability (for WWTP outlet water) of 600 L·m⁻²·h⁻¹·bar⁻¹ and a total filtration surface of 9 m². Fifteen filtration conditions with different fluxes, filtration times, and air backwash frequencies were each operated for more than 40 hours to observe their hydraulic filtration performance. On comparison, the most sustainable condition was a flux of 60 L·h⁻¹·m⁻², a filtration time of 60 min, and a backwash frequency of 1 AB every 3 CBs. The optimized condition stood out from the others with a water recovery rate above 92%, better irreversible fouling control, stable permeability variation, efficient backwash reversibility (80% for CB and 150% for AB), and no chemical washing required over 40 h of filtration. For all tested conditions, the permeate water quality met the water reuse guidelines of the World Health Organization (WHO), French standards, and the regulation of the European Parliament adopted in May 2020 setting minimum requirements for water reuse in agriculture. In the permeate, total suspended solids, biochemical oxygen demand, and turbidity were reduced to < 2 mg·L⁻¹, ≤ 10 mg·L⁻¹ and < 0.5 NTU, respectively; Escherichia coli and Enterococci showed > 5 log removal, and the other required microorganism analyses were below the detection limits. Additionally, because of the COVID-19 pandemic, coronavirus SARS-CoV-2 was measured in the raw wastewater of the WWTP, the UF feed, and the UF permeate in November 2020. The raw wastewater tested positive, above the detection limit but below the quantification limit, while the UF feed and UF permeate tested negative for SARS-CoV-2 in these PCR assays. In summary, this work confirms the great interest of UF as an intensified tertiary treatment for water reuse and gives operational indications for future industrial-scale production of reclaimed water.
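The productivity figures quoted above (water recovery rate, backwash reversibility) follow from simple volume and permeability balances over a filtration/backwash cycle. The sketch below illustrates one common way of computing them; the formulas and example numbers are illustrative assumptions, not the exact definitions used in the study.

```python
def water_recovery_rate(permeate_volume_l: float, backwash_volume_l: float) -> float:
    """Net water recovery: permeate produced minus permeate reused for backwashing,
    expressed as a fraction of the permeate produced."""
    return (permeate_volume_l - backwash_volume_l) / permeate_volume_l

def backwash_reversibility(perm_before_fouling: float,
                           perm_end_of_cycle: float,
                           perm_after_backwash: float) -> float:
    """Fraction of the permeability lost during a filtration cycle that is recovered
    by the backwash (can exceed 1 if permeability is restored above its start-of-cycle
    value, e.g. after a chlorinated air backwash)."""
    lost = perm_before_fouling - perm_end_of_cycle
    recovered = perm_after_backwash - perm_end_of_cycle
    return recovered / lost

# Example: one 60-min cycle at 60 L/(h*m^2) on 9 m^2 of membrane, then a backwash
permeate = 60 * 9 * 1.0          # L produced in the cycle
backwash = 40.0                  # L of permeate consumed by the backwash (assumed)
print(f"Recovery rate: {water_recovery_rate(permeate, backwash):.1%}")
print(f"CB reversibility: {backwash_reversibility(500, 420, 484):.0%}")
```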

Keywords: semi-industrial UF pilot plant, water reuse, fouling management, coronavirus

Procedia PDF Downloads 114
128 Investigating the Thermal Comfort Properties of Mohair Fabrics

Authors: Adine Gericke, Jiri Militky, Mohanapriya Venkataraman

Abstract:

Mohair, obtained from the Angora goat, is a luxury fiber and recognized as one of the best quality natural fibers. Expanding the use of mohair into technical and functional textile products creates a need for a better understanding of how the use of mohair in fabrics will affect their thermo-physiological comfort-related properties. Despite its popularity, very little information is available on the quantification of the thermal and moisture management properties of mohair fabrics. This study investigated the effect of fiber composition and fabric structural parameters on conductive and convective heat transfer to obtain more information on the thermal comfort properties of mohair fabrics. Dry heat transfer through textiles may involve conduction through the fibrous phase, radiation through fabric interstices and convection of air within the structure. Factors that play a major role in heat transfer by conduction are fabric areal density (g/m²) and derived quantities such as cover factor and porosity. Convective heat transfer through fabrics occurs in environmental conditions where there is wind flow or where the wearer is moving (e.g. running or walking). The thermal comfort properties of mohair fibers were objectively evaluated, firstly in comparison with other textile fibers and secondly in a variety of fabric structures. Two sample sets were developed for this purpose, with fiber content, yarn structure and fabric design as the main variables. SEM and microscopic images were obtained to closely examine the physical structures of the fibers and fabrics. Thermal comfort properties such as thermal resistance and thermal conductivity, as well as fabric thickness, were measured on the well-known Alambeta test instrument, and clothing insulation (clo) was calculated from these. The thermal properties of fabrics under heat convection were evaluated using a laboratory model device developed at the Technical University of Liberec (referred to as the TP2 instrument). The effects of the different variables on fabric thermal comfort properties were analyzed statistically using TIBCO Statistica software. The results showed that fabric structural properties, specifically sample thickness, played a significant role in determining the thermal comfort properties of the fabrics tested. Regarding thermal resistance related to conductive heat flow, the effect of fiber type was not always statistically significant, probably as a result of the amount of trapped air within the fabric structure. The very low thermal conductivity of air, compared with that of the fibers, had a significant influence on the total conductivity and thermal resistance of the samples; this was confirmed by the high correlation of these factors with sample thickness. Regarding convective heat flow, the most important factor influencing the ability of the fabric to allow dry heat to move through the structure was again fabric thickness. However, it would be wrong to totally disregard the effect of fiber composition on the thermal resistance of textile fabrics. In this study, the samples containing mohair or mohair/wool were consistently thicker than the others even though weaving parameters were kept constant. This can be ascribed to the physical properties of the mohair fibers, which make them exceptionally effective at trapping air among fibers (in a yarn) as well as among yarns (inside a fabric structure). The thicker structures trap more air and so provide higher thermal insulation, but they also restrict the free flow of air that allows thermal convection.
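The conductive quantities discussed above are related by R = h / λ (thermal resistance as thickness divided by thermal conductivity), and clothing insulation is commonly expressed in clo units with 1 clo = 0.155 m²·K·W⁻¹. The sketch below applies these relations; the fabric thicknesses and conductivities are invented for illustration, not Alambeta measurements from the study.

```python
# Thermal resistance from thickness and conductivity, plus conversion to clo units.
CLO_IN_SI = 0.155          # 1 clo = 0.155 m^2*K/W

def thermal_resistance(thickness_m: float, conductivity_w_mk: float) -> float:
    """R = h / lambda, in m^2*K/W, for conduction through a flat fabric layer."""
    return thickness_m / conductivity_w_mk

def to_clo(resistance_si: float) -> float:
    """Convert a thermal resistance from m^2*K/W to clo."""
    return resistance_si / CLO_IN_SI

# Hypothetical fabrics: (name, thickness in mm, effective conductivity in W/(m*K))
fabrics = [("mohair blend", 1.8, 0.045), ("wool", 1.4, 0.044), ("cotton", 0.9, 0.055)]
for name, t_mm, lam in fabrics:
    r = thermal_resistance(t_mm / 1000.0, lam)
    print(f"{name:12s}  R = {r*1000:5.1f} x10^-3 m^2*K/W   insulation = {to_clo(r):.3f} clo")
```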

Keywords: mohair fabrics, convective heat transfer, thermal comfort properties, thermal resistance

Procedia PDF Downloads 144
127 The Effectiveness of Prenatal Breastfeeding Education on Breastfeeding Uptake Postpartum: A Systematic Review

Authors: Jennifer Kehinde, Claire O’Donnell, Annmarie Grealish

Abstract:

Introduction: Breastfeeding has been shown to provide numerous health benefits for both infants and mothers. The decision to breastfeed is influenced by physiological, psychological, and emotional factors, but the importance of equipping mothers with the knowledge necessary for successful breastfeeding practice cannot be overlooked. The decline in global breastfeeding rates can be linked to a lack of adequate breastfeeding education during the prenatal stage. This systematic review examined the effectiveness of prenatal breastfeeding education on breastfeeding uptake postpartum. Method: The review was undertaken and reported in conformity with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and was registered on the international prospective register of systematic reviews (PROSPERO: CRD42020213853). A PICO analysis (population, intervention, comparison, outcome) was undertaken to inform the choice of keywords in the search strategy and to formulate the review question, which was aimed at determining the effectiveness of prenatal breastfeeding educational programs in improving breastfeeding uptake following birth. Five databases (Cumulative Index to Nursing and Allied Health Literature, Medline, Psych INFO, and Applied Social Sciences Index and Abstracts) were systematically searched from January 2014 to July 2021 to identify eligible studies. Quality assessment and narrative synthesis were subsequently undertaken. Results: Fourteen studies were included. The 14 studies used different types of breastfeeding programs: eight used a combination of curriculum-based breastfeeding education programs, group prenatal breastfeeding counselling, and one-to-one breastfeeding educational programs, all delivered in person; four studies used web-based learning platforms to deliver breastfeeding education prenatally, delivered both online and face to face over a period of 3 weeks to 2 months, with follow-up periods ranging from 3 weeks to 6 months; one study delivered a breastfeeding educational intervention using mother-to-mother breastfeeding support groups to promote exclusive breastfeeding; and one study delivered breastfeeding education to participants based on the theory of planned behaviour. The most effective interventions were those that included both theory and hands-on demonstrations. Results showed an increase in breastfeeding uptake, breastfeeding knowledge, positive attitudes to breastfeeding, and maternal breastfeeding self-efficacy among mothers who participated in breastfeeding educational programs during prenatal care. Conclusion: Prenatal breastfeeding education increases women’s knowledge of breastfeeding. Mothers who are knowledgeable about breastfeeding and hold a positive attitude towards it tend to initiate breastfeeding and continue for longer. Findings demonstrate a general correlation between prenatal breastfeeding education and increased breastfeeding uptake postpartum. The high level of positive breastfeeding outcomes across the studies can be attributed to prenatal breastfeeding education. This review provides rigorous contemporary evidence that healthcare professionals and policymakers can apply when developing effective strategies to improve breastfeeding rates and ultimately improve the health outcomes of mothers and infants.

Keywords: breastfeeding, breastfeeding programs, breastfeeding self-efficacy, prenatal breastfeeding education

Procedia PDF Downloads 84
126 The Recommended Summary Plan for Emergency Care and Treatment (ReSPECT) Process: An Audit of Its Utilisation on a UK Tertiary Specialist Intensive Care Unit

Authors: Gokulan Vethanayakam, Daniel Aston

Abstract:

Introduction: The ReSPECT process supports healthcare professionals when making patient-centered decisions in the event of an emergency. It has been widely adopted by the NHS in England and allows patients to express thoughts and wishes about treatments and outcomes that they consider acceptable. It includes (but is not limited to) cardiopulmonary resuscitation decisions. ReSPECT conversations should ideally occur prior to ICU admission and should be documented in the eight sections of the nationally-standardised ReSPECT form. This audit evaluated the use of ReSPECT on a busy cardiothoracic ICU in an NHS Trust where established policies advocating its use exist. Methods: This audit was a retrospective review of ReSPECT forms for a sample of high-risk patients admitted to ICU at the Royal Papworth Hospital between January 2021 and March 2022. Patients all received one of the following interventions: Veno-Venous Extra-Corporeal Membrane Oxygenation (VV-ECMO) for severe respiratory failure (retrieved via the national ECMO service); cardiac or pulmonary transplantation-related surgical procedures (including organ transplants and Ventricular Assist Device (VAD) implantation); or elective non-transplant cardiac surgery. The quality of documentation on ReSPECT forms was evaluated using national standards and a graded ranking tool devised by the authors, which was used to assess narrative aspects of the forms. Quality was ranked from A (excellent) to D (poor). Results: Of 230 patients (74 VV-ECMO, 104 transplant, 52 elective non-transplant surgery), 43 (18.7%) had a ReSPECT form, and only one (0.43%) patient had a ReSPECT form completed prior to ICU admission. Of the 43 forms completed, 38 (88.4%) were completed due to the commencement of End of Life (EoL) care. No non-transplant surgical patients included in the audit had a ReSPECT form. There was documentation of balance of care (section 4a), CPR status (section 4c), capacity assessment (section 5), and patient involvement in completing the form (section 6a) on all 43 forms. Of the 34 patients assessed as lacking capacity to make decisions, only 22 (64.7%) had reasons documented. Other sections were variably completed; 29 (67.4%) forms had relevant background information included to a good standard (section 2a). Clinical guidance for the patient (section 4b) was given in 25 (58.1%), of which 11 stated the rationale that underpinned it. Seven forms (16.3%) contained information in an inappropriate section. In a comparison of ReSPECT forms completed ahead of an EoL trigger with those completed when EoL care began, there was a higher number of entries in section 3 (considering the patient’s values/fears) assessed at grades A-B in the former group (p = 0.014), suggesting higher quality. Similarly, forms from the transplant group contained higher quality information in section 3 than those from the VV-ECMO group (p = 0.0005). Conclusions: Utilisation of the ReSPECT process in high-risk patients has yet to become well established in this trust. Teams who meet patients before hospital admission for transplant or high-risk surgery should be encouraged to engage with the ReSPECT process at this point in the patient's journey. VV-ECMO retrieval teams should consider ReSPECT conversations with patients’ relatives at the time of retrieval.
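The grade comparisons above (e.g. p = 0.014 for section 3 quality) contrast ordinal A-D gradings between two groups of forms. The abstract does not name the statistical test used, so the sketch below simply illustrates one reasonable choice, a Mann-Whitney U test on numerically encoded grades, with invented data rather than the audit's actual gradings.

```python
from scipy import stats

# Map the authors' A-D quality grades to an ordinal numeric scale (A best)
GRADE_SCORE = {"A": 4, "B": 3, "C": 2, "D": 1}

# Hypothetical section-3 grades for two groups of forms (invented data)
pre_trigger_forms = ["A", "A", "B", "B", "A", "B", "C"]        # completed ahead of an EoL trigger
eol_forms = ["C", "B", "D", "C", "C", "B", "D", "C", "D"]      # completed when EoL care began

x = [GRADE_SCORE[g] for g in pre_trigger_forms]
y = [GRADE_SCORE[g] for g in eol_forms]

# Mann-Whitney U test on the ordinal scores (one plausible test; not stated in the abstract)
u_stat, p_value = stats.mannwhitneyu(x, y, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```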

Keywords: audit, critical care, end of life, ICU, ReSPECT, resuscitation

Procedia PDF Downloads 66