Search results for: performance’s Measure
525 Monitoring and Evaluation of Web-Services Quality and Medium-Term Impact on E-Government Agencies' Efficiency
Authors: A. F. Huseynov, N. T. Mardanov, J. Y. Nakhchivanski
Abstract:
This practical research aims to improve the management quality and efficiency of public administration agencies providing e-services. The monitoring system developed provides continuous review of the websites' compliance with the selected indicators, their evaluation against those indicators, and a ranking of services according to the quality criteria. The responsible departments in the government agencies were surveyed; the questionnaire covers issues of management and feedback, the e-services provided, and the application of information systems. By analyzing the main affecting factors and barriers, recommendations are given that lead to the relevant decisions to strengthen the state agencies' competencies for the management and provision of their services. Component 1. E-services monitoring system. Three separate monitoring activities are proposed to be executed in parallel: continuous tracing of e-government sites using a built-in web-monitoring program, which generates several quantitative values mostly related to the technical characteristics and performance of the websites; expert assessment of e-government sites in accordance with two general criteria (Criterion 1: technical quality of the site; Criterion 2: usability/accessibility, i.e. load, see, use), where each high-level criterion is in turn subdivided into several sub-criteria, such as the fonts and the color of the background (is it readable?), W3C coding standards, availability of robots.txt and the site map, the search engine, and the feedback/contact and security mechanisms; and an online survey of users/citizens, i.e. a small group of questions embedded in the e-service websites. The questionnaires comprise information concerning navigation, users' experience with the website (whether it was positive or negative), etc.
Automated monitoring of websites on its own cannot capture the whole evaluation process and should therefore be seen as a complement to an expert's manual web evaluation. All of the separate results were integrated to provide the complete evaluation picture. Component 2. Assessment of the agencies'/departments' efficiency in providing e-government services: the relevant indicators to evaluate the efficiency and effectiveness of e-services were identified; the survey was conducted in all the governmental organizations (ministries, committees, and agencies) that provide electronic services for citizens or businesses; the quantitative and qualitative measures cover the following sections of activity: e-governance, e-services, feedback from users, and the information systems at the agencies' disposal. Main results: 1. The software program and the set of indicators for website evaluation have been developed, and the results of pilot monitoring have been presented. 2. The evaluation of the (internal) efficiency of the e-government agencies, based on the survey results, with practical recommendations related to human potential, the information systems used, and the e-services provided.
Keywords: e-government, web-sites monitoring, survey, internal efficiency
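The abstract describes integrating the three monitoring activities into a single quality ranking but does not give the aggregation formula. A minimal sketch, assuming a weighted sum of per-activity scores on a 0-100 scale (the site names, weights, and scores below are illustrative, not data from the study):

```python
# Hypothetical aggregation: combine automated-monitoring, expert, and
# user-survey scores into one index per site, then rank descending.
def rank_services(scores, weights):
    """Combine the three monitoring scores into one index and rank sites."""
    combined = {
        site: sum(weights[k] * s[k] for k in weights)
        for site, s in scores.items()
    }
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

# Assumed per-site scores from the three monitoring activities (0-100)
scores = {
    "ministry-a.gov": {"automated": 82, "expert": 75, "survey": 68},
    "agency-b.gov":   {"automated": 64, "expert": 80, "survey": 90},
}
weights = {"automated": 0.4, "expert": 0.4, "survey": 0.2}  # assumed weights
ranking = rank_services(scores, weights)
```

Any monotone aggregation would do here; the point is only that the three parallel activities feed one comparable ranking.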
Procedia PDF Downloads 304
524 An Investigation on the Sandwich Panels with Flexible and Toughened Adhesives under Flexural Loading
Authors: Emre Kara, Şura Karakuzu, Ahmet Fatih Geylan, Metehan Demir, Kadir Koç, Halil Aykul
Abstract:
The material selection in the design of sandwich structures is a crucial aspect because of the positive or negative influence of the base materials on the mechanical properties of the entire panel. The literature shows that the selection of the skin and core materials plays a very important role in the behavior of the sandwich. Besides this, the use of the correct adhesive can make the whole structure show better mechanical results and behavior. In this way, the sandwich structures realized in the study were obtained with the combination of an aluminum foam core and three different glass fiber reinforced polymer (GFRP) skins using two different commercial adhesives, one based on flexible polyurethane and the other on toughened epoxy. Static and dynamic tests have already been applied to sandwiches with different types of adhesives. In the present work, static three-point bending tests were performed on sandwiches having an aluminum foam core with a thickness of 15 mm, skins with three different types of fabric ([0°/90°] cross-ply E-Glass Biaxial stitched, [0°/90°] cross-ply E-Glass Woven, and [0°/90°] cross-ply S-Glass Woven, all with the same thickness of 1.75 mm), and the two commercial adhesives (flexible polyurethane and toughened epoxy based) at different support span distances (L = 55, 70, 80, 125 mm), with the aim of analyzing their flexural performance. The skins used in the study were produced via the Vacuum Assisted Resin Transfer Molding (VARTM) technique and were easily bonded onto the aluminum foam core with the flexible and toughened adhesives under very low pressure using a press machine with alignment tabs matching the total thickness of the whole panel. The main results of the flexural loading are: force-displacement curves obtained after the bending tests, peak force values, absorbed energy, collapse mechanisms, adhesion quality, and the effect of the support span length and adhesive type.
The experimental results showed that the sandwiches with the epoxy-based toughened adhesive and the skins made of S-Glass Woven fabric exhibited the best adhesion quality and mechanical properties. The sandwiches with the toughened adhesive exhibited higher peak force and energy absorption values compared to the sandwiches with the flexible adhesive. Core shear mode occurred in the sandwiches with the flexible polyurethane-based adhesive through the thickness of the core, while the same mode took place in the sandwiches with the toughened epoxy-based adhesive along the length of the core. The use of these sandwich structures can lead to a weight reduction of transport vehicles, providing adequate structural strength under operating conditions.
Keywords: adhesive and adhesion, aluminum foam, bending, collapse mechanisms
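The two headline quantities of the bending tests, peak force and absorbed energy, follow directly from the force-displacement record: absorbed energy is the area under the curve. A minimal sketch with a synthetic record (the numbers below are illustrative, not the authors' data):

```python
# Peak force and absorbed energy from a three-point bending
# force-displacement record, via trapezoidal integration.
def peak_force(forces_n):
    return max(forces_n)

def absorbed_energy(displacements_mm, forces_n):
    """Area under the force-displacement curve; mm * N = mJ, returned in J."""
    energy_mj = 0.0
    for i in range(1, len(forces_n)):
        dx = displacements_mm[i] - displacements_mm[i - 1]
        energy_mj += 0.5 * (forces_n[i] + forces_n[i - 1]) * dx
    return energy_mj / 1000.0  # joules

# Synthetic record: linear rise to a 2 kN peak, then a collapse plateau
disp = [0.0, 1.0, 2.0, 3.0, 4.0]               # mm
force = [0.0, 1000.0, 2000.0, 1200.0, 800.0]   # N
```

The same integration applied to the curves of the flexible- and toughened-adhesive panels is what makes their energy-absorption values directly comparable.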
Procedia PDF Downloads 328
523 Determination of 1-Deoxynojirimycin and Phytochemical Profile from Mulberry Leaves Cultivated in Indonesia
Authors: Yasinta Ratna Esti Wulandari, Vivitri Dewi Prasasty, Adrianus Rio, Cindy Geniola
Abstract:
Mulberry is a plant that is widely cultivated around the world, mostly for the silk industry. In recent years, studies have shown that mulberry leaves have an anti-diabetic effect which comes mostly from the compound known as 1-deoxynojirimycin (DNJ). DNJ is a very potent α-glucosidase inhibitor. It decreases the degradation rate of carbohydrates in the digestive tract, leading to slower glucose absorption and significantly reducing the post-prandial glucose level. Mulberry leaves are also known as the best source of DNJ. The DNJ in mulberry leaves has since received considerable attention because of the increased number of diabetic patients and people's growing awareness of more natural treatments for diabetes. The DNJ content in mulberry leaves varies depending on the mulberry species, the leaf's age, and the plant's growth environment. A few of the mulberry varieties cultivated in Indonesia are Morus alba var. kanva-2, M. alba var. multicaulis, M. bombycis var. lembang, and M. cathayana. The lack of data concerning the phytochemicals contained in Indonesian mulberry leaves is restraining their use in the medicinal field. The aim of this study is to make full use of mulberry leaves cultivated in Indonesia as a medicinal herb in the local, national, or global community by determining the DNJ and other phytochemical contents in them. This study used eight leaf samples: the young and mature leaves of Morus alba var. kanva-2, M. alba var. multicaulis, M. bombycis var. lembang, and M. cathayana. The DNJ content was analyzed using reverse-phase high performance liquid chromatography (HPLC). The stationary phase was a silica C18 column, and the mobile phase was acetonitrile:acetic acid 0.1% (1:1) at an elution rate of 1 mL/min. Prior to HPLC analysis, the samples were derivatized with FMOC to make the DNJ detectable by a VWD detector at 254 nm.
Results showed that the DNJ content in the samples ranges from 0.07 to 2.90 mg DNJ/g leaves, with the highest content found in M. cathayana mature leaves (2.90 ± 0.57 mg DNJ/g leaves). All of the mature leaf samples were also found to contain a higher amount of DNJ than their respective young leaf samples. The phytochemicals in the leaf samples were screened using qualitative tests. Results showed that all eight leaf samples contain alkaloids, phenolics, flavonoids, tannins, and terpenes. The presence of these phytochemicals contributes to the therapeutic effect of mulberry leaves. Pyrolysis-gas chromatography-mass spectrometry (Py-GC-MS) analysis was also performed on the eight samples to determine their phytochemical content quantitatively. The pyrolysis temperature was set at 400 °C, with a capillary column (Phase Rtx-5MS, 60 m × 0.25 mm ID) as the stationary phase and helium as the mobile phase. A few of the terpenes found are known to have anticancer and antimicrobial properties. In summary, all four varieties of mulberry leaves cultivated in Indonesia contain DNJ and various phytochemicals such as alkaloids, phenolics, flavonoids, tannins, and terpenes, which are beneficial to health.
Keywords: Morus, 1-deoxynojirimycin, HPLC, Py-GC-MS
Procedia PDF Downloads 330
522 Different Types of Bismuth Selenide Nanostructures for Targeted Applications: Synthesis and Properties
Authors: Jana Andzane, Gunta Kunakova, Margarita Baitimirova, Mikelis Marnauza, Floriana Lombardi, Donats Erts
Abstract:
Bismuth selenide (Bi₂Se₃) is known as a narrow band gap semiconductor with pronounced thermoelectric (TE) and topological insulator (TI) properties. Unique TI properties offer exciting possibilities for fundamental research, such as observing the exciton condensate and Majorana fermions, as well as practical applications in spintronics and quantum information. In turn, the TE properties of this material can be applied to a wide range of thermoelectric applications, as well as to broadband photodetectors and near-infrared sensors. Nanostructuring of this material results in improvement of the TI properties due to suppression of the bulk conductivity, and enhancement of the TE properties because of increased phonon scattering at the nanoscale grains and interfaces. Regarding TE properties, the crystallographic growth direction, as well as the orientation of the nanostructures relative to the growth substrate, plays a significant role in improving the TE performance of the nanostructured material. For instance, Bi₂Se₃ layers consisting of randomly oriented nanostructures, or of a combination of them with planar nanostructures, show TE properties significantly enhanced in comparison with bulk material and purely planar Bi₂Se₃ nanostructures. In this work, a catalyst-free vapour-solid deposition technique was applied for the controlled production of different types of Bi₂Se₃ nanostructures and continuous nanostructured layers for targeted applications. For example, separated Bi₂Se₃ nanoplates, nanobelts, and nanowires can be used for investigations of TI properties, while Bi₂Se₃ layers consisting of merged planar and/or randomly oriented nanostructures are useful for applications in heat-to-power conversion devices and infrared detectors. The vapour-solid deposition was carried out using a quartz tube furnace (MTI Corp) equipped with an inert gas supply and a pressure/temperature control system.
Bi₂Se₃ nanostructures/nanostructured layers of the desired type were obtained by adjustment of the synthesis parameters (process temperature, deposition time, pressure, carrier gas flow) and selection of the deposition substrate (glass, quartz, mica, indium tin oxide, graphene, and carbon nanotubes). The morphology, structure, and composition of the obtained Bi₂Se₃ nanostructures and nanostructured layers were inspected using SEM, AFM, EDX, and HRTEM techniques, as well as a home-built experimental setup for thermoelectric measurements. It was found that introducing a temporary carrier gas flow into the process tube during the synthesis, together with the choice of deposition substrate, significantly influences the nanostructure formation mechanism. The electrical, thermoelectric, and topological insulator properties of the different types of deposited Bi₂Se₃ nanostructures and nanostructured coatings are characterized as a function of thickness and discussed.
Keywords: bismuth selenide, nanostructures, topological insulator, vapour-solid deposition
Procedia PDF Downloads 231
521 X-Ray Detector Technology Optimization in Computed Tomography
Authors: Aziz Ikhlef
Abstract:
Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of routing runs and connections required by front-illuminated diodes. In backlit diodes, the electronic noise is improved because of the reduction of the load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT, or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper will present an overview of detector technologies and image chain improvements that have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed, and crosstalk, to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise, and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 194
520 Influence of Glass Plates Different Boundary Conditions on Human Impact Resistance
Authors: Alberto Sanchidrián, José A. Parra, Jesús Alonso, Julián Pecharromán, Antonia Pacios, Consuelo Huerta
Abstract:
Glass is a commonly used material in building; there is no unique design solution, as plates with different numbers of layers and interlayers may be used. In most façades, security glazing has to be used according to its performance in the impact pendulum test. The European Standard EN 12600 establishes an impact test procedure for the classification, from the point of view of human safety, of flat plates of different thickness, using a pendulum with two tires and a 50 kg mass that impacts the plate from different heights. However, this test does not replicate the actual dimensions and boundary conditions used in building configurations, so the real stress distribution is not determined by it. The influence of different boundary conditions, such as the ones employed on construction sites, is not well taken into account when testing the behaviour of safety glazing, and there is no detailed procedure and criteria to determine the glass resistance against human impact. To reproduce the actual boundary conditions on site, when needed, the pendulum test is arranged to be used 'in situ', with no account taken of load control or stiffness, and without a standard procedure. The fracture stress of small and large glass plates fits a Weibull distribution with quite a large dispersion, so conservative values are adopted for the admissible fracture stress under static loads. In fact, tests performed for human impact give a fracture strength two or three times higher, and often without total fracture of the glass plate. Newer standards, such as DIN 18008-4, state an admissible fracture stress 2.5 times higher than the ones used for static and wind loads. Two working areas are now open: a) to define a standard for the 'in situ' test; b) to prepare a laboratory procedure that allows testing with a more realistic stress distribution.
To work on both research lines, a laboratory that allows testing medium-size specimens with different boundary conditions has been developed. A special steel frame allows reproducing the stiffness of the glass support substructure, including a rigid condition used as a reference. The dynamic behaviour of the glass plate and its support substructure has been characterized with finite element models updated with modal test results. In addition, a new portable impact machine is being used to obtain sufficient force and direction control during the impact test. An impact energy of 100 J is used. To avoid problems with broken glass plates, the tests have been done using an aluminium plate of 1000 mm × 700 mm size and 10 mm thickness supported on four sides; three different substructure stiffness conditions are used. Detailed control of the dynamic stiffness and the behaviour of the plate is done with modal tests. The repeatability of the test and the reproducibility of the results prove that a procedure to control both the stiffness of the plate and the impact level is necessary.
Keywords: glass plates, human impact test, modal test, plate boundary conditions
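The Weibull model mentioned above makes the "conservative admissible stress" idea concrete: for a two-parameter Weibull strength distribution, the design stress at a target fracture probability is the inverse CDF. A sketch with assumed parameters (the shape m and scale sigma0 below are illustrative, not values measured in the study):

```python
import math

# Two-parameter Weibull model of glass fracture strength.
def fracture_probability(stress, m, sigma0):
    """Weibull CDF: probability of fracture at a given stress (MPa)."""
    return 1.0 - math.exp(-((stress / sigma0) ** m))

def design_stress(p_target, m, sigma0):
    """Stress whose fracture probability equals p_target (inverse CDF)."""
    return sigma0 * (-math.log(1.0 - p_target)) ** (1.0 / m)

m, sigma0 = 6.0, 70.0                     # assumed shape and scale (MPa)
sigma_d = design_stress(0.01, m, sigma0)  # stress at 1% fracture probability
```

The large dispersion noted in the text corresponds to a small shape parameter m, which pushes the 1% design stress well below the scale parameter, exactly the conservatism the abstract describes for static loads.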
Procedia PDF Downloads 307
519 Promoting Class Cooperation-Competition (Coo-Petition) and Empowerment to Graduating Architecture Students through a Holistic Planning Approach in Their Thesis Proposals
Authors: Felicisimo Azagra Tejuco Jr.
Abstract:
Mentoring architecture thesis students is a very critical and exhausting task for both the adviser and the advisee. It poses the challenges of resource and time management for the candidate while demanding the best professional guidance from the mentor. The University of Santo Tomas (Manila, Philippines) is Asia's oldest university. Among its notable programs is its Architecture curriculum. Presently, the five-year Architecture program requires ten semesters of academic coursework. The last three semesters are devoted to each graduating Architecture student's thesis proposal and defense. The thesis proposal is developed and submitted for approval in the subject Research Methods for Architecture (RMA). Data gathering and initial schemes are conducted in Architectural Design (AD) 9, and are finalized and defended in AD 10. In recent years, before the pandemic, the graduating class maintained an average of 300 candidates. Students are encouraged to explore any topic of interest or relevance. Since 2019-2020, one thesis class has used a community planning approach in mentoring the class. In contrast to other sections, the first meeting of RMA has been allocated to a visioning exercise and an assessment of the class's strengths, weaknesses, opportunities, and threats (SWOT). Here, the work activities of the group have been fine-tuned to address some identified concerns while remaining aligned with the academic calendar. Occasional peer critiques complement class lectures. The course ends with the approval of the student's proposal. The final year, or last two semesters, of the graduating class is focused on the approved proposal. Unlike in the other classes, the 18 weeks of the first semester consist of regular consultations complemented by lectures from the adviser or guest speakers. Through remote peer consultations in groups of three to five, the mentor maximized each meeting, encouraging constructive criticism among the class.
At the end of the first semester, mock presentations to an external jury are conducted to check the design outputs for improvement. The final semester is spent mostly on the finalization of the plans. Feedback from the previous semester is expected to be integrated into the final outputs. Before the final deliberations, at least two technical rehearsals were conducted per group. Regardless of the outcome, an assessment of each student's performance is held as a class, and personal realizations and observations are encouraged. Through online surveys, interviews, and focused group discussions with former students, the effectiveness of the mentoring strategies was reviewed and evaluated. Initial feedback highlighted the relevance of setting a positive tone for the course, constructive criticism from peers and experts, and consciousness of deadlines as essential elements of an effective semester.
Keywords: cooperation, competition, student empowerment, class vision
Procedia PDF Downloads 78
518 Phenomena-Based Approach for Automated Generation of Process Options and Process Models
Authors: Parminder Kaur Heer, Alexei Lapkin
Abstract:
Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may be able to attain higher efficiency. However, very few PI options are generally considered. This is because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all the intensification options can be described by their enhancement. The objective of the current work is, therefore, the generation of numerous process alternatives based on phenomena, and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. E.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or can enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the combinations that are meaningless.
For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction alone, or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, which are defined by a list of phenomena for each function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example the purity of the product, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
Keywords: phenomena, process intensification, process models, process options
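The generate-and-screen step described above can be sketched directly: enumerate phenomena combinations, then discard infeasible ones with rules such as "phase change requires energy transfer". The phenomena names, the size cap, and the single rule below are illustrative assumptions; the actual methodology uses a much richer rule set.

```python
from itertools import combinations

# Illustrative phenomena pool for one function
PHENOMENA = ["reaction", "vapour-liquid equilibrium", "phase change",
             "energy transfer", "mass transfer"]

def is_feasible(combo):
    # Example screening rule from the text: phase change phenomena need
    # the co-presence of energy transfer phenomena.
    if "phase change" in combo and "energy transfer" not in combo:
        return False
    return True

def generate_options(phenomena, max_size=3):
    """All feasible phenomena combinations up to max_size."""
    options = []
    for size in range(1, max_size + 1):
        for combo in combinations(phenomena, size):
            if is_feasible(combo):
                options.append(combo)
    return options

options = generate_options(PHENOMENA)
```

Encoding each surviving combination as a 0/1 vector over the phenomena list gives exactly the binary form the abstract says is passed to the model generation algorithm.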
Procedia PDF Downloads 232
517 Use of Progressive Feedback for Improving Team Skills and Fair Marking of Group Tasks
Authors: Shaleeza Sohail
Abstract:
Self- and peer evaluations are among the main components of almost all group assignments and projects in higher education institutes. These evaluations give students an opportunity to better understand the learning outcomes of the assignment and/or project. A number of online systems have been developed for this purpose that provide automated assessment and feedback on students' contributions in a group environment, based on self and peer evaluations. All these systems lack a progressive aspect to this assessment and feedback, which is the most crucial factor for ongoing improvement and life-long learning. In addition, a number of assignments and projects are designed so that smaller or initial assessment components lead to a final assignment or project. In such cases, the evaluation and feedback may provide students an insight into their performance as a group member for a particular component after the submission; ideally, it should also create an opportunity to improve for the next assessment component. The Self and Peer Progressive Assessment and Feedback System encourages students to perform better in the next assessment by providing a comparative analysis of the individual's contribution score on an ongoing basis. Hence, the student sees the change in their own contribution scores during the complete project, based on the smaller assessment components. The Self-Assessment Factor is calculated as an indicator of how close the self-perception of the student's own contribution is to the contribution perceived by the other members of the group. The Peer-Assessment Factor is calculated to compare the perceived contribution of one student to the average value of the group. Our system also provides a Group Coherence Factor, which shows collectively how group members contribute to the final submission. This feedback is provided for students and teachers to visualize the consistency of members' contributions as perceived by the group.
Teachers can use these factors to judge the individual contributions of the group members to the combined tasks and to allocate marks/grades accordingly. This factor is shown to students for all groups undertaking the same assessment, so the group members can comparatively analyze the efficiency of their group with respect to other groups. Our system gives instructors the flexibility to generate their own customized criteria for self and peer evaluations based on the requirements of the assignment. Students evaluate their own and other group members' contributions on a scale from significantly higher to significantly lower. The preliminary testing of the prototype system was done with a set of predefined cases to show explicitly the relation of the system's feedback factors to the case studies. The results show that such progressive feedback to students can be used to motivate self-improvement and enhanced team skills. The comparative group coherence can promote a better understanding of the group dynamics in order to improve team unity and a fair division of team tasks.
Keywords: effective group work, improvement of team skills, progressive feedback, self and peer assessment system
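The abstract names the Self-Assessment and Peer-Assessment Factors but does not publish their formulas, so the definitions below are plausible assumptions for illustration only: closeness of self-score to the peers' mean for the first, and ratio of peer-perceived contribution to the group mean for the second, with ratings on an assumed 0-10 scale.

```python
# Hypothetical factor definitions (the system's actual formulas are not
# given in the abstract).
def self_assessment_factor(self_score, peer_scores):
    """1.0 when self-perception matches the peers' mean exactly; lower
    as the gap grows (scores assumed on a 0-10 scale)."""
    peer_mean = sum(peer_scores) / len(peer_scores)
    return 1.0 - abs(self_score - peer_mean) / 10.0

def peer_assessment_factor(peer_scores, group_mean):
    """Member's peer-perceived contribution relative to the group mean;
    > 1.0 means peers rate this member above the group average."""
    return (sum(peer_scores) / len(peer_scores)) / group_mean

# One member: rated themselves 9; peers rated them 7 and 8; group mean 6.
saf = self_assessment_factor(9.0, [7.0, 8.0])
paf = peer_assessment_factor([7.0, 8.0], group_mean=6.0)
```

Tracking these two numbers across successive assessment components is what gives the feedback its progressive character: a student can watch the self/peer gap shrink over the project.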
Procedia PDF Downloads 187
516 Effect of 12 Weeks Pedometer-Based Workplace Program on Inflammation and Arterial Stiffness in Young Men with Cardiovascular Risks
Authors: Norsuhana Omar, Amilia Aminuddin, Zaiton Zakaria, Raifana Rosa Mohamad Sattar, Kalaivani Chellappan, Mohd Alauddin Mohd Ali, Norizam Salamt, Zanariyah Asmawi, Norliza Saari, Aini Farzana Zulkefli, Nor Anita Megat Mohd. Nordin
Abstract:
Inflammation plays an important role in the pathogenesis of vascular dysfunction leading to arterial stiffness. Pulse wave velocity (PWV) and augmentation index (AI), as tools for the assessment of vascular damage, are widely used and have been shown to predict cardiovascular disease (CVD). C-reactive protein (CRP) is a marker of inflammation. Several studies have noted that regular exercise is associated with reduced arterial stiffness. The lack of exercise among Malaysians and the increasing CVD morbidity and mortality among young men are of concern. In Malaysia, data on workplace exercise intervention are scarce. A programme was designed to enable subjects to increase their level of walking as part of their daily work routine, self-monitored using pedometers. The aim of this study was to evaluate the reduction of inflammation, measured by CRP, and the improvement of arterial stiffness, measured by carotid-femoral PWV (PWVCF) and AI. A total of 70 young men (20-40 years) who were sedentary, achieving fewer than 5,000 steps/day in casual walking, and had 2 or more cardiovascular risk factors were recruited at the Institute of Vocational Skills for Youth (IKBN Hulu Langat). Subjects were randomly assigned to a control group (CG) (n=34; no change in walking) and a pedometer group (PG) (n=36; minimum target: 8,000 steps/day). CRP was measured by an immunological method, while PWVCF and AI were measured using a Vicorder. All parameters were measured at baseline and after 12 weeks. Data analysis was conducted using the Statistical Package for the Social Sciences Version 22 (SPSS Inc., Chicago, IL, USA). At post-intervention, the CG step counts were similar (4983 ± 366 vs 5697 ± 407 steps/day). The PG increased their step count from 4996 ± 805 to 10,128 ± 511 steps/day (P<0.001). The PG showed significant improvement in anthropometric variables and lipids (time and group effect, p<0.001).
For the vascular assessment, the PG showed significant decreases (time and group effect, p<0.001) in PWV (7.21 ± 0.83 to 6.42 ± 0.89 m/s), AI (11.88 ± 6.25 to 8.83 ± 3.7 %), and CRP (pre = 2.28 ± 3.09, post = 1.08 ± 1.37 mg/L). However, no changes were seen in the CG. In conclusion, a pedometer-based walking programme may be an effective strategy for promoting increased daily physical activity, which reduces cardiovascular risk markers and thus improves cardiovascular health in terms of inflammation and arterial stiffness. This community intervention for health maintenance has the potential to establish walking as an exercise and a vascular fitness index as the performance measuring tool.
Keywords: arterial stiffness, exercise, inflammation, pedometer
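The relative improvements implied by the reported pre/post means in the pedometer group can be checked directly from the abstract's figures (a roughly 11% drop in PWV, 26% in AI, and 53% in CRP):

```python
# Percent reductions computed from the pre/post means reported above
# for the pedometer group.
def percent_reduction(pre, post):
    return 100.0 * (pre - post) / pre

pwv_drop = percent_reduction(7.21, 6.42)   # carotid-femoral PWV, m/s
ai_drop  = percent_reduction(11.88, 8.83)  # augmentation index, %
crp_drop = percent_reduction(2.28, 1.08)   # C-reactive protein, mg/L
```

Note these are changes in group means only; the within-subject effect sizes would need the paired data behind the reported p-values.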
Procedia PDF Downloads 354515 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs
Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.
Abstract:
Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL be the absolute tool for data classification? All current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity; in doing so, one obtains an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 hyperparameters used in the Neurops. By varying these 2 hyperparameters, we obtain a 2D matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR and XOR. The total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels.
The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the GSE22513 public data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison on several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata. Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification
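The core construction described above is a probability criterion evaluated over a grid of two hyperparameter settings and mapped to grey levels. A minimal sketch of that mapping follows; `toy_nic` is a hypothetical stand-in for a real NIC, which the abstract does not specify:

```python
import math

def nic_image(nic_fn, p1_values, p2_values):
    """Grey-level image for one variable: each pixel is the probability a NIC
    assigns at one (hyperparameter-1, hyperparameter-2) setting, scaled 0-255."""
    return [[int(round(nic_fn(p1, p2) * 255)) for p2 in p2_values]
            for p1 in p1_values]

def toy_nic(p1, p2):
    """Hypothetical stand-in for a real NIC: any map to a probability works."""
    return 1.0 / (1.0 + math.exp(-(p1 - p2)))

grid = [-3.0 + 6.0 * i / 63 for i in range(64)]
img = nic_image(toy_nic, grid, grid)
```

In the full method, ten such matrices (one per NIC) and their AND/OR/XOR combinations would be tiled into the final per-variable image fed to the CNN.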
Procedia PDF Downloads 125514 Association of Body Composition Parameters with Lower Limb Strength and Upper Limb Functional Capacity in Quilombola Remnants
Authors: Leonardo Costa Pereira, Frederico Santos Santana, Mauro Karnikowski, Luís Sinésio Silva Neto, Aline Oliveira Gomes, Marisete Peralta Safons, Margô Gomes De Oliveira Karnikowski
Abstract:
In Brazil, projections of population aging follow world projections: the birth rate tends to be surpassed by the mortality rate around the year 2045. Historically, the population of Brazilian blacks suffered for several centuries from the oppression of dominant classes. One group of blacks stands out in relation to territorial, historical and social aspects: for centuries they isolated themselves in small communities in order to maintain their freedom and culture. The isolation of the Quilombola communities affected both their socioeconomic conditions and their health. Thus, the objective of the present study is to verify the association of body composition parameters with lower and upper limb strength and functional capacity in Quilombola remnants. The research was approved by the ethics committee (approval no. 1,771,159). Anthropometric evaluations of hip and waist circumference, body mass and height were performed. To assess body composition, the relationship between height and body mass (BM) was used to generate the body mass index (BMI), and a dual-energy X-ray absorptiometry (DEXA) scan was performed. The Timed Up and Go (TUG) test was used to evaluate functional capacity, and a maximum repetition test (1MR) for knee extension and handgrip (HG) was applied for strength analysis. Statistical analysis was performed using the statistical package SPSS 22.0. The Shapiro-Wilk normality test was performed, and Pearson or Spearman tests were adopted for the correlations as appropriate. The sample (n = 18) was composed of 66.7% female individuals with a mean age of 66.07 ± 8.95 years. The sample's body fat percentage (%BF) (35.65 ± 10.73) exceeded the recommendations for the age group, as did the anthropometric parameters of hip (90.91 ± 8.44 cm) and waist circumference (80.37 ± 17.5 cm).
The relationship between height (1.55 ± 0.1 m) and body mass (63.44 ± 11.25 kg) generated a BMI of 24.16 ± 7.09 kg/m2, which was considered normal. The TUG performance was 10.71 ± 1.85 s. In the 1MR test, 46.67 ± 13.06 kg was obtained, and in the HG, 23.93 ± 7.96 kgf. The correlation analyses were characterized by a high frequency of significant correlations for the height, dominant arm mass (DAM), %BF, 1MR and HG variables. Correlations were observed between HG and BM (r = 0.67, p = 0.005), height (r = 0.51, p = 0.004) and DAM (r = 0.55, p = 0.026). The strength of the lower limbs correlated with BM (r = 0.69, p = 0.003), height (r = 0.62, p = 0.01) and DAM (r = 0.772, p = 0.001). We conclude that the simple spatial relationship between mass and height is not the only influence on predictive parameters of strength or functionality; assessment of body composition is also important. For this population, height seems to be a good predictor of strength and body composition. Keywords: African Continental Ancestry Group, body composition, functional capacity, strength
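The Pearson coefficients reported above follow the standard product-moment formula, sketched below; the handgrip and body-mass numbers are illustrative only, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Illustrative numbers only (kgf vs kg), not the Quilombola sample:
handgrip = [18, 22, 25, 30, 33]
body_mass = [52, 58, 60, 70, 74]
r = pearson_r(handgrip, body_mass)
```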
Procedia PDF Downloads 276513 The Symbolic Power of the IMF: Looking through Argentina’s New Period of Indebtedness
Authors: German Ricci
Abstract:
The research aims to analyse the symbolic power of the International Monetary Fund (IMF) in its relationship with a borrowing country, drawing upon Pierre Bourdieu's Field Theory. This theory of power, typical of constructivist structuralism, has seen little use in international relations; selecting this perspective thus offers a new understanding of how the IMF's power operates and is structured. The IMF conducts periodic economic reviews in which the staff evaluates the Government's performance. It also offers "last instance" loans when private external credit is not accessible. This relationship generates great expectations among financial agents because the IMF's statements indicate the capacity of the nation-state to meet its payment obligations (or not). Therefore, it is argued that the IMF is a legitimate actor for financial agents concerned about a government facing an economic crisis, both for the effects of its immediate economic contribution through loans and for the promotion of adjustment programs, which help guarantee the payment of the external debt. This legitimacy implies a symbolic power relationship in addition to the already known economic power relationship. Obtaining the IMF's consent implies that the government partially puts its political-economic decisions into play, since monetary policy must be agreed upon with the Fund. This has consequences at the local level. First, it implies that the debtor state must establish a daily relationship with the Fund, and this everyday interaction influences how officials and policymakers internalize the meaning of political management. On the other hand, if the Government obtains the IMF's seal of approval, the State will again be in a position to re-enter the financial market and take on new debt to face its external debt. This means that private creditors' chances of collecting the debt increase, and they grant credit again.
Thus, it is argued that the borrowing country submits to the relationship with the IMF in search of the latter's economic and symbolic capital. Access to this symbolic capital has objective and subjective repercussions at the national level that tend to reproduce the relevance of the financial market and legitimize the IMF's intervention during economic crises. The paper has Argentina as its case study, given its historical relationship with the IMF and the relevance of the current indebtedness period, which remains largely unexplored. Argentina's economy is characterized by recurrent financial crises, and it is the country to which the Fund has lent the most in its entire history, more than three times as much as the second, Egypt. In addition, Argentina is currently the country that owes the most to the Fund, after receiving the largest loan ever granted by the IMF in 2018 and a new agreement in 2022. Although the historically strong association with the Fund culminated in the most acute economic and social crisis in the country's contemporary history, producing an unprecedented political and institutional crisis in 2001, Argentina still recognized the IMF as the only way out during economic crises. Keywords: IMF, field theory, symbolic power, Argentina, Bourdieu
Procedia PDF Downloads 71512 Cultural Heritage, Urban Planning and the Smart City in Indian Context
Authors: Paritosh Goel
Abstract:
The conservation of historic buildings and historic centres has in recent years become fully encompassed in the planning of built-up areas and their management following climate change. The approach of the world of restoration, in the Indian context of integrated urban regeneration and its strategic potential for a smarter, more sustainable and socially inclusive urban development, introduces the theme of sustainability for urban transformations in general (historical centres and otherwise). From this viewpoint, it envisages, as a primary objective, a real "green, ecological or environmental" requalification of the city through interventions within the main categories of sustainability: mobility, energy efficiency, use of renewable energy sources, urban metabolism (waste, water, territory, etc.) and the natural environment. With this, the concept of a "resilient city" is also introduced: a city that can adapt, through progressive transformations, to situations of change that may not be predictable, a behaviour that the historical city has always been able to express. Urban planning, on the other hand, has increasingly focused on analyses oriented towards the taxonomic description of social/economic and perceptive parameters. It is connected with human behaviour, mobility and the characterization of the consumption of resources, in terms of quantity even before quality, to inform the city design process, which for ancient fabrics mainly affects the public space, also in its social dimension.
An exact definition of the term "smart city" is still essentially elusive, since we can attribute three dimensions to the term: a) that of a virtual city, evolved on the basis of digital and web networks; b) that of a physical construction determined by urban planning based on infrastructural innovation, which in the case of historic centres implies regeneration that stimulates and sometimes changes the existing fabric; c) that of a political and social/economic project guided by a dynamic process that provides for new behaviours and requirements of the city communities and orients the future planning of cities, also through participation in their management. This paper is a preliminary research into the connections between these three dimensions applied to the specific case of the fabric of ancient cities, with the aim of obtaining a scientific theory and methodology to apply to the regeneration of Indian historical centres. If contextualized with the heritage of the city, the smart city scheme can be an initiative that provides a transdisciplinary approach between various research networks (natural sciences, socio-economic sciences and humanities, technological disciplines, digital infrastructures), united in order to improve the design, livability and understanding of the urban environment and to achieve high historical/cultural performance levels. Keywords: historical cities regeneration, sustainable restoration, urban planning, smart cities, cultural heritage development strategies
Procedia PDF Downloads 281511 Carbon Footprint of Educational Establishments: The Case of the University of Alicante
Authors: Maria R. Mula-Molina, Juan A. Ferriz-Papi
Abstract:
Environmental concerns are gaining increasingly high priority in the sustainability agenda of educational establishments. This is important not only for their environmental performance as organizations in their own right, but also to present a model for their students. Universities also play an important role in research and innovative solutions for measuring, analyzing and reducing the environmental impacts of different activities. The assessment and decision-making process during the activity of educational establishments is linked to the application of robust indicators. The carbon footprint is a developing sustainability indicator that helps understand the direct impact on climate change, but it is not easy to implement: a large number of factors are involved, which increases its complexity, such as different simultaneous uses (research, lecturing, administration), different users (students, staff) and different levels of activity (lecturing, exam and holiday periods). The aim of this research is to develop a simplified methodology for calculating and comparing carbon emissions per user on a university campus, considering the two main aspects of carbon accounting: building operations and transport. Different methodologies applied on other Spanish university campuses are analyzed and compared to obtain a final proposal to be developed in this type of establishment. First, the building operation calculation considers the different uses and energy sources consumed. Second, for the transport calculation, the different users and working hours are considered separately, as well as their origins and travel preferences. For each transport mode, a different conversion factor is used depending on the carbon emissions produced. The final result is obtained as an average of the carbon emissions produced per user. A case study is applied to the University of Alicante campus in San Vicente del Raspeig (Spain), where the carbon footprint is calculated.
While building operation consumption is known per building and month, this is not the case for transport: only one survey of users' transport habits was conducted, in 2009/2010, so no evolution of results can be shown in this case. In addition, building operations are not split per use, as building services are not monitored separately. These results are analyzed in depth considering all factors and limitations, and are compared to estimations for other campuses. Finally, the application of the presented methodology is also studied. The recommendations concluded in this study aim to enhance carbon emission monitoring and control; a Carbon Action Plan is then a primary solution to be developed. On the other hand, the application developed on the University of Alicante campus can not only further enhance the methodology itself, but also make its adoption by other educational establishments more readily possible, with a considerable degree of flexibility to cater for their specific requirements. Keywords: building operations, built environment, carbon footprint, climate change, transport
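The per-user accounting described above combines building energy with mode-specific transport conversion factors. A minimal sketch, with placeholder emission factors and invented totals (not the University of Alicante's official values):

```python
# Assumed, illustrative emission factors (kg CO2 per unit), not official data.
ELECTRICITY_KG_CO2_PER_KWH = 0.25
TRANSPORT_KG_CO2_PER_KM = {"car": 0.19, "bus": 0.09, "bike": 0.0, "walk": 0.0}

def campus_footprint_per_user(building_kwh, trips, n_users):
    """Total emissions (building operations + commuting) averaged per user.
    `trips` is a list of (mode, km_per_year) pairs aggregated over all users."""
    building = building_kwh * ELECTRICITY_KG_CO2_PER_KWH
    transport = sum(TRANSPORT_KG_CO2_PER_KM[mode] * km for mode, km in trips)
    return (building + transport) / n_users

per_user = campus_footprint_per_user(
    building_kwh=1_000_000,
    trips=[("car", 500_000), ("bus", 300_000), ("bike", 50_000)],
    n_users=20_000,
)
```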
Procedia PDF Downloads 295510 Hygro-Thermal Modelling of Timber Decks
Authors: Stefania Fortino, Petr Hradil, Timo Avikainen
Abstract:
Timber bridges have an excellent environmental performance, are economical and relatively easy to build, and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. Moisture content accumulated in wood over long periods, in combination with certain temperatures, may create conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability and loading capacity of timber bridges. Therefore, the monitoring of the moisture content in wood is important for the durability of the material and of the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, monitoring can be assisted by numerical modelling to get more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer: the diffusion of water vapour in the pores, the sorption of bound water and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are separate, and the coupling between them is defined through a sorption rate. Furthermore, an average of the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found very suitable for studying moisture transport in uncoated and coated stress-laminated timber decks.
Compared to previous works, the hygro-thermal fluxes on the external surfaces include the influence of the absorbed solar radiation over time; consequently, the temperatures on the surfaces exposed to the sun are higher, which affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of the moisture content, temperature and relative humidity in a volume of the timber deck. As a case study, hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of the Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are in good agreement with the measurements. The proposed model, used to assist monitoring, can contribute to reducing the maintenance costs of bridges as well as the cost of instrumentation, and to increasing safety. Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM
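The sorption-rate coupling between the vapour and bound-water phases mentioned above can be illustrated with a toy relaxation step; the isotherm shape, rate constant and time step below are invented placeholders, not calibrated wood data or the authors' model:

```python
def equilibrium_mc(rh):
    """Toy sorption isotherm: equilibrium moisture content (kg/kg) versus
    relative humidity in [0, 1]. A placeholder, not a calibrated wood isotherm."""
    return 0.30 * rh / (1.0 + 2.0 * rh)

def bound_water_step(w_b, rh, sorption_rate, dt):
    """One explicit time step of the bound-water sorption equation
    dw_b/dt = c * (w_eq(RH) - w_b), the coupling term between the vapour
    and bound-water phases in a multi-phase moisture model."""
    return w_b + dt * sorption_rate * (equilibrium_mc(rh) - w_b)

# Relax an initially dry cell wall toward equilibrium at RH = 80 %:
w = 0.02
for _ in range(1000):
    w = bound_water_step(w, rh=0.8, sorption_rate=1e-3, dt=60.0)
```

In the full FEM model this coupling term would appear in every element, alongside the vapour and bound-water diffusion fluxes and the solar-radiation boundary condition.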
Procedia PDF Downloads 175509 Assessing the Impact of Physical Inactivity on Dialysis Adequacy and Functional Health in Peritoneal Dialysis Patients
Authors: Mohammad Ali Tabibi, Farzad Nazemi, Nasrin Salimian
Abstract:
Background: Peritoneal dialysis (PD) is a prevalent renal replacement therapy for patients with end-stage renal disease. Despite its benefits, PD patients often experience reduced physical activity and physical function, which can negatively impact dialysis adequacy and overall health outcomes. Despite the known benefits of maintaining physical activity in chronic disease management, the specific interplay between physical inactivity, physical function, and dialysis adequacy in PD patients remains underexplored. Understanding this relationship is essential for developing targeted interventions to enhance patient care and outcomes in this vulnerable population. This study aims to assess the impact of physical inactivity on dialysis adequacy and functional health in PD patients. Methods: This cross-sectional study included 135 peritoneal dialysis patients from multiple dialysis centers. Physical inactivity was measured using the International Physical Activity Questionnaire (IPAQ), while physical function was assessed using the Short Physical Performance Battery (SPPB). Dialysis adequacy was evaluated using the Kt/V ratio. Additional variables such as demographic data, comorbidities, and laboratory parameters were collected to control for potential confounders. Statistical analyses were performed to determine the relationships between physical inactivity, physical function, and dialysis adequacy. Results: The study cohort comprised 70 males and 65 females with a mean age of 55.4 ± 13.2 years. A significant proportion of the patients (65%) were categorized as physically inactive based on IPAQ scores. Inactive patients demonstrated significantly lower SPPB scores (mean 6.2 ± 2.1) compared to their more active counterparts (mean 8.5 ± 1.8, p < 0.001). Dialysis adequacy, as measured by Kt/V, was found to be suboptimal (Kt/V < 1.7) in 48% of the patients. 
There was a significant positive correlation between physical function scores and Kt/V values (r = 0.45, p < 0.01), indicating that better physical function is associated with higher dialysis adequacy, and a significant negative correlation between physical inactivity and physical function (r = -0.55, p < 0.01). Additionally, physically inactive patients had lower Kt/V ratios compared to their active counterparts (1.3 ± 0.3 vs. 1.8 ± 0.4, p < 0.05). Multivariate regression analysis revealed that physical inactivity was an independent predictor of reduced dialysis adequacy (β = -0.32, p < 0.01) and poorer physical function (β = -0.41, p < 0.01) after adjusting for age, sex, comorbidities and dialysis vintage. Conclusion: This study underscores the critical role of physical activity and physical function in maintaining adequate dialysis in peritoneal dialysis patients. The findings suggest that interventions aimed at increasing physical activity and improving physical function may enhance dialysis adequacy and overall health outcomes in this population. Further research is warranted to explore the mechanisms underlying these associations and to develop and evaluate exercise programs tailored for PD patients. Keywords: inactivity, physical function, peritoneal dialysis, dialysis adequacy
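The weekly Kt/V adequacy measure used above can be sketched as follows. This simplified version ignores residual renal clearance, and the input numbers are invented for illustration, not patient data:

```python
def weekly_kt_v(dialysate_urea_mmol_l, drain_volume_l, plasma_urea_mmol_l,
                total_body_water_l):
    """Peritoneal Kt/V scaled to a week: daily urea clearance (dialysate-to-
    plasma urea ratio times drained volume) normalised by the urea
    distribution volume. Simplified sketch; full adequacy calculations also
    include residual renal clearance."""
    daily_clearance_l = dialysate_urea_mmol_l * drain_volume_l / plasma_urea_mmol_l
    return 7.0 * daily_clearance_l / total_body_water_l

def adequate(kt_v, target=1.7):
    """Adequacy against the weekly Kt/V threshold used in the study."""
    return kt_v >= target

# Illustrative patient: suboptimal, like 48% of the cohort described above.
ktv = weekly_kt_v(dialysate_urea_mmol_l=15.0, drain_volume_l=10.0,
                  plasma_urea_mmol_l=20.0, total_body_water_l=35.0)
```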
Procedia PDF Downloads 35508 An Automated Magnetic Dispersive Solid-Phase Extraction Method for Detection of Cocaine in Human Urine
Authors: Feiyu Yang, Chunfang Ni, Rong Wang, Yun Zou, Wenbin Liu, Chenggong Zhang, Fenjin Sun, Chun Wang
Abstract:
Cocaine is the most frequently used illegal drug globally, with the global annual prevalence of cocaine use ranging from 0.3% to 0.4% of the adult population aged 15-64 years. The growing consumption of abused cocaine and the associated drug crime are a great concern; urine sample testing has therefore become an important noninvasive sampling method, as cocaine and its metabolites (COCs) are usually present in urine in high concentrations and with relatively long detection windows. However, direct analysis of urine samples is not feasible, because the complex urine matrix often causes low sensitivity and selectivity of the determination. On the other hand, the presence of low doses of analytes in urine makes an extraction and pretreatment step important before determination; especially in group drug-taking cases, the pretreatment step becomes tedious and time-consuming. Developing a sensitive, rapid and high-throughput method for the detection of COCs in the human body is therefore indispensable for law enforcement officers, treatment specialists and health officials. In this work, a new automated magnetic dispersive solid-phase extraction (MDSPE) sampling method followed by high performance liquid chromatography-mass spectrometry (HPLC-MS) was developed for quantitative enrichment of COCs from human urine, using prepared magnetic nanoparticles as adsorbents. The nanoparticles were prepared by silanizing magnetic Fe3O4 nanoparticles and modifying them with divinylbenzene and vinylpyrrolidone, which confers the ability for specific adsorption of COCs. This kind of magnetic particle facilitates the pretreatment steps through electromagnetically controlled extraction, achieving full automation. The proposed device significantly improved sample preparation efficiency, processing 32 samples in one batch within 40 min.
Optimization of the preparation procedure for the magnetic nanoparticles was explored, and the performance of the magnetic nanoparticles was characterized by scanning electron microscopy, vibrating sample magnetometry and infrared spectroscopy. Several analytical parameters were studied, including the amount of particles, adsorption time, elution solvent, and extraction and desorption kinetics, and the proposed method was verified. The limits of detection for cocaine and its metabolites were 0.09-1.1 ng/mL, with recoveries ranging from 75.1 to 105.7%. Compared to traditional sampling methods, this method is time-saving and environmentally friendly. It was confirmed that the proposed automated method is a highly effective approach for trace analyses of cocaine and cocaine metabolites in human urine. Keywords: automatic magnetic dispersive solid-phase extraction, cocaine detection, magnetic nanoparticles, urine sample testing
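Recovery and limit-of-detection figures like those reported above are conventionally computed as below. The numbers are illustrative, not the study's raw data, and the 3-sigma LOD criterion is a common convention the abstract does not explicitly state:

```python
def recovery_percent(measured_conc, spiked_conc):
    """Extraction recovery: measured analyte relative to the spiked amount."""
    return 100.0 * measured_conc / spiked_conc

def lod_3sigma(blank_signal_sd, calibration_slope):
    """Limit of detection via the common 3-sigma rule: three times the blank
    signal's standard deviation divided by the calibration slope."""
    return 3.0 * blank_signal_sd / calibration_slope

rec = recovery_percent(9.4, 10.0)  # a 10 ng/mL spike read back as 9.4 ng/mL
lod = lod_3sigma(0.03, 1.0)        # illustrative blank SD and unit slope
```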
Procedia PDF Downloads 204507 Effect of Supplementation of Hay with Noug Seed Cake (Guizotia abyssinica), Wheat Bran and Their Mixtures on Feed Utilization, Digestibility and Live Weight Change in Farta Sheep
Authors: Fentie Bishaw Wagayie
Abstract:
This study was carried out with the objective of studying the response of Farta sheep in feed intake and live weight change when fed hay supplemented with noug seed cake (NSC), wheat bran (WB) and their mixtures. A 7-day digestibility trial and a 90-day feeding trial were conducted using 25 intact male Farta sheep with a mean initial live weight of 16.83 ± 0.169 kg. The experimental animals were arranged randomly into five blocks based on initial live weight, and the five treatments were assigned randomly to each animal in a block. The five dietary treatments comprised grass hay fed ad libitum (T1), grass hay ad libitum + 300 g DM WB (T2), grass hay ad libitum + 300 g DM of a 67% WB : 33% NSC mixture (T3), grass hay ad libitum + 300 g DM of a 67% NSC : 33% WB mixture (T4) and 300 g DM/head/day NSC (T5). Common salt and water were offered ad libitum. The supplements were offered twice daily at 0800 and 1600 hours, and the experimental sheep were kept in individual pens. Supplementation with NSC, WB and their mixtures significantly increased the total dry matter (DM) intake (665.84-788 g/head/day, p < 0.01) and crude protein (CP) intake (p < 0.001). Unsupplemented sheep consumed significantly more (p < 0.01) grass hay DM (540.5 g/head/day) than the supplemented treatments (365.8-488 g/head/day), except T2. Among supplemented sheep, T5 had a significantly higher (p < 0.001) CP intake (99.98 g/head/day) than the others (85.52-90.2 g/head/day). Supplementation significantly improved (p < 0.001) the digestibility of CP (66.61-78.9%), but there was no significant effect (p > 0.05) on DM, OM, NDF and ADF digestibility between supplemented and control treatments. The very low CP digestibility (11.55%) observed for the basal diet (grass hay) indicated that feeding grass hay alone could not provide nutrients even for the maintenance requirement of growing sheep.
Significant final and daily live weight gains (p < 0.001), in the range of 70.11-82.44 g/head/day, were observed in supplemented Farta sheep, while unsupplemented sheep lost 9.11 g/head/day. Numerically, among the supplemented treatments, sheep supplemented with the higher proportion of NSC in T4 (201 g NSC + 99 g WB) gained more weight than the rest, though the difference was not statistically significant (p > 0.05). The absence of a statistical difference in daily body weight gain among all supplemented sheep indicated that supplementation with NSC, WB and their mixtures had a similar potential to provide nutrients. Generally, supplementation of the basal grass hay diet with NSC, WB and their mixtures improved the feed conversion ratio, total DM intake, CP intake and CP digestibility, and it also improved growth performance, with a similar trend for all supplemented Farta sheep over the control group. Therefore, from a biological point of view, to attain the required slaughter body weight within a short growing period, sheep producers can use all the supplement types depending upon their local availability, in the order of priority T4, T5, T3 and T2. However, based on partial budget analysis, supplementation with 300 g DM/head/day NSC (T5) can be recommended as profitable for producers with no capital limitation, whereas T4 supplementation (201 g NSC + 99 g WB DM/day) is recommended when capital is scarce. Keywords: weight gain, supplement, Farta sheep, hay as basal diet
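The feed conversion ratio mentioned above is simply intake over gain. A sketch using the upper-bound figures reported in the abstract (pairing these two particular numbers is illustrative only, not a per-treatment result):

```python
def feed_conversion_ratio(dm_intake_g_per_day, weight_gain_g_per_day):
    """Grams of dry matter consumed per gram of live weight gained;
    lower values mean more efficient conversion."""
    return dm_intake_g_per_day / weight_gain_g_per_day

# 788 g/head/day total DM intake against an 82.44 g/head/day gain:
fcr = feed_conversion_ratio(788.0, 82.44)
```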
Procedia PDF Downloads 63506 Melt-Electrospun Polypropylene Fabrics Functionalized with TiO2 Nanoparticles for Effective Photocatalytic Decolorization
Authors: Z. Karahaliloğlu, C. Hacker, M. Demirbilek, G. Seide, E. B. Denkbaş, T. Gries
Abstract:
Currently, the textile industry plays an important role in the world economy, especially in developing countries. Dyes and pigments used in the textile industry are significant pollutants; most are azo dyes, which have a chromophore (-N=N-) in their structure. There are many methods for the removal of dyes from wastewater, such as chemical coagulation, flocculation, precipitation and ozonation, but these methods have numerous disadvantages, and alternative methods are needed for wastewater decolorization. Titanium-mediated photodegradation has generally been used due to the non-toxic, insoluble, inexpensive and highly reactive properties of the titanium dioxide semiconductor (TiO2). Melt electrospinning is an attractive manufacturing process for thin fiber production from polypropylene (PP). PP fibers have been widely used in filtration due to their unique properties, such as hydrophobicity, good mechanical strength, chemical resistance and low-cost production. In this study, we aimed to investigate the effect of titanium nanoparticle localization and amine modification on dye degradation, and we evaluated the applicability of the prepared chemically activated composite and pristine fabrics for a novel treatment of dyeing wastewater. A photocatalyzer material was prepared from titanium dioxide nanoparticles (nTi) and PP by a melt-electrospinning technique, and the electrospinning parameters of pristine PP and PP/nTi nanocomposite fabrics were optimized. Before functionalization with nTi, the surface of the fabrics was activated using glutaraldehyde (GA) and polyethyleneimine to promote dye degradation. Pristine PP and PP/nTi nanocomposite melt-electrospun fabrics were characterized using scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). Methyl orange (MO) was used as a model compound for the decolorization experiments.
The photocatalytic performance of nTi-loaded pristine and nanocomposite melt-electrospun filters was investigated by varying the initial dye concentration (10, 20 and 40 mg/L). nTi-PP composite fabrics were successfully processed into a uniform, fibrous network of beadless fibers with diameters of 800 ± 0.4 nm. The process parameters were a voltage of 30 kV, a working distance of 5 cm, thermocouple and hot-coil temperatures of 260-300 ºC and a flow rate of 0.07 mL/h. SEM results indicated that the TiO2 nanoparticles were deposited uniformly on the nanofibers, and XPS results confirmed the presence of titanium nanoparticles and the generation of amine groups after modification. According to the photocatalytic decolorization test results, the nTi-loaded GA-treated pristine and nTi-PP nanocomposite fabric filters have superior properties, with over 90% decolorization efficiency for the GA-treated pristine and nTi-PP composite PP fabrics. In this work, melt-electrospun PP fabrics surface-functionalized with nTi were prepared as photocatalyzers for wastewater treatment. The results showed that melt-electrospun nTi-loaded GA-treated composite and pristine PP fabrics have great potential for use as photocatalytic filters for the decolorization of wastewater and thus warrant further investigation. Keywords: titanium oxide nanoparticles, polypropylene, melt-electrospinning
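Decolorization efficiencies like the >90% quoted above follow from the initial and residual dye concentrations. A minimal sketch with illustrative readings (not the study's measurements):

```python
def decolorization_efficiency(c0, ct):
    """Percent dye removed, from the initial (c0) and residual (ct)
    concentration or absorbance readings."""
    return 100.0 * (c0 - ct) / c0

# An initial 20 mg/L MO solution left at 1.6 mg/L corresponds to 92% removal.
eff = decolorization_efficiency(20.0, 1.6)
```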
Procedia PDF Downloads 267
505 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder
Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi
Abstract:
With the changing lifestyle and environment around us, the prevalence of critical and incurable diseases has proliferated. One such condition is neurological disorder, which is rampant among the old-age population and is increasing at an unstoppable rate. Most neurological disorder patients suffer from some movement disorder affecting the movement of their body parts. Tremor is the most common movement disorder prevalent in such patients, affecting the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson’s disease patients, but a tremor can also be a pure tremor (essential tremor). Patients suffering from tremor face enormous trouble in performing daily activities and always need a caretaker for assistance. In clinics, the assessment of tremor is done through a manual clinical rating task such as the Unified Parkinson’s Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also affirmed the challenge of differentiating a Parkinsonian tremor from a pure tremor, which is essential for providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that continuously checks their health condition, coordinating with clinicians and caretakers for early diagnosis and assistance in performing daily activities. In our research, we focus on developing a system for automatic classification of tremor that can accurately differentiate a pure tremor from a Parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in a neuro-clinic to assess the upper-wrist movement of patients suffering from pure (essential) tremor and Parkinsonian tremor using a wearable accelerometer-based device. 
Four tasks were designed in accordance with the Unified Parkinson’s Disease motor rating scale, which is used to assess rest, postural, intentional and action tremor in such patients. Various features, such as time-frequency domain, wavelet-based and fast-Fourier-transform-based cross-correlation features, were extracted from the tri-axial signal and used as the input feature vector space for different supervised and unsupervised learning tools for quantification of tremor severity. A minimum-covariance maximum-correlation energy comparison index was also developed and used as the input feature for various classification tools for distinguishing the PT and ET tremor types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, and superior performance was achieved using K-nearest neighbors and Support Vector Machine classifiers. Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor
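The pipeline described above (features extracted from an accelerometer trace, then a K-nearest-neighbors vote) can be sketched as follows. The sampling rate, the synthetic sinusoidal "tremor" signals, and the frequency bands (Parkinsonian tremor roughly 4-6 Hz, essential tremor somewhat higher) are illustrative assumptions, not the study's data or exact feature set:

```python
import numpy as np

FS = 100  # sampling rate in Hz (assumed)

def features(signal):
    """RMS amplitude and dominant frequency of a 1-D accelerometer trace."""
    rms = np.sqrt(np.mean(signal ** 2))
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    return np.array([rms, freqs[np.argmax(spectrum)]])

def knn_predict(x, train_X, train_y, k=3):
    """Minimal k-nearest-neighbors majority vote (Euclidean distance)."""
    d = np.linalg.norm(train_X - x, axis=1)
    labels, counts = np.unique(train_y[np.argsort(d)[:k]], return_counts=True)
    return labels[np.argmax(counts)]

# Synthetic training data: tremor bursts modeled as noisy sinusoids.
rng = np.random.default_rng(0)
t = np.arange(0, 5, 1.0 / FS)
def burst(freq, amp=1.0):
    return amp * np.sin(2 * np.pi * freq * t) + 0.05 * rng.standard_normal(len(t))

train_X = np.array([features(burst(f)) for f in (4, 5, 6, 8, 9, 10)])
train_y = np.array(["PT", "PT", "PT", "ET", "ET", "ET"])

print(knn_predict(features(burst(4.5)), train_X, train_y))  # prints "PT"
```

Real recordings would replace the synthetic bursts, and the feature vector would include the wavelet and cross-correlation features named above.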
Procedia PDF Downloads 154
504 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate
Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim
Abstract:
Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks in a long analysis time. The asymmetry of the peak may cause an incorrect calculation of the concentration of the sample. Furthermore, the analysis time is unacceptable, especially regarding the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy and efficient method for quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. This method was optimized in terms of peak symmetry using the surface area graphic as the Design of Experiments (DoE) and the tailing factor (TF) as an indicator for the Design Space (DS). The reference method used was that described in USP 37 for the quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on the QbD concepts. The DS was created with the TF (in a range between 0.98 and 1.2) in order to demonstrate the ideal analytical conditions. Changes were made in the composition of the USP mobile phase (USP-MP): USP-MP: methanol (90:10 v/v, 80:20 v/v and 70:30 v/v), in the flow rate (0.8, 1.0 and 1.2 mL/min) and in the oven temperature (30, 35, and 40 ºC). The USP method allowed the quantification of the drug only over a long run time (40-50 minutes). In addition, the method uses a high flow rate (1.5 mL/min), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable only if the drug were not a racemic mixture, since the co-elution of the isomers can lead to unreliable peak integration. 
Therefore, the optimization was suggested in order to reduce the analysis time, aiming at better peak resolution and TF. For the optimized method, analysis of the surface-response plot made it possible to confirm the ideal analytical condition: 45 °C, 0.8 mL/min and 80:20 USP-MP: methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak, showing a TF value of 1.17. This promotes good co-elution of the isomers of HCQ, ensuring an accurate quantification of the raw material as a racemic mixture. This method also proved to be approximately 18 times faster than the reference method, using a lower flow rate and thus further reducing solvent consumption and, consequently, the analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding the retention time and, especially, the peak resolution. The higher resolution of the chromatogram peaks supports the implementation of the method for quantification of the drug as a racemic mixture, not requiring the separation of isomers. Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic
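The 3³ full factorial design and the TF-bounded design space can be sketched as follows. The tailing-factor function below is a hypothetical linear response used only for illustration; in the study the response surface comes from measured chromatograms, not from a formula:

```python
from itertools import product

# Factor levels from the 3^3 design described above
methanol_pct = [10, 20, 30]      # % methanol in the USP-MP:methanol mobile phase
flow_ml_min = [0.8, 1.0, 1.2]    # flow rate, mL/min
temp_c = [30, 35, 40]            # oven temperature, deg C

runs = list(product(methanol_pct, flow_ml_min, temp_c))
assert len(runs) == 27  # 3^3 full factorial

# Hypothetical linear TF model, for illustration only.
def tailing_factor(meoh, flow, temp):
    return 1.78 - 0.02 * meoh - 0.3 * (flow - 0.8) - 0.004 * (temp - 30)

# Design space: runs whose predicted TF falls in the target 0.98-1.2 band
design_space = [(m, f, T) for (m, f, T) in runs
                if 0.98 <= tailing_factor(m, f, T) <= 1.2]
print(len(design_space))
```

With real data, the model would be fitted to the measured TF at the 27 runs, and the design space would be the region of factor settings whose fitted TF stays inside the 0.98-1.2 band.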
Procedia PDF Downloads 639
503 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley
Authors: Sajana Suwal, Ganesh R. Nhemafuki
Abstract:
Evaluation of ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated to local geological and geotechnical conditions. It is evident from past earthquakes (e.g., 1906 San Francisco, USA; 1923 Kanto, Japan) that local geology has a strong influence on the amplitude and duration of ground motions. Since then, significant studies have been conducted on ground motion amplification, revealing the importance of the influence of local geology on ground response. Observations from damaging earthquakes (e.g., Niigata and San Francisco, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L’Aquila, 2009) revealed that non-uniform damage patterns, particularly in soft fluvio-lacustrine deposits, are due to local amplification of seismic ground motion. Non-uniform damage patterns were also observed in the Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to the modification of earthquake ground motion parameters. In this study, site effects resulting from amplification by the soft soil of Kathmandu are presented. A large amount of subsoil data was collected and used for defining an appropriate subsoil model for the Kathmandu Valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response is performed using four strong ground motions for six sites of the Kathmandu Valley. In general, one-dimensional (1D) site-response analysis involves the excitation of a soil profile using the horizontal component and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show that there is no significant deviation between equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%. 
Overall, it is clearly observed from the results that the non-linear site response model performs better than the equivalent linear model. However, significant deviation between the two models results from other influencing factors, such as the assumptions made in 1D site response analysis and the lack of accurate values for the shear wave velocity and nonlinear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times higher for the non-linear analysis than for the equivalent linear analysis. Hence, the nonlinear behavior of soil underscores the urgent need to study the dynamic characteristics of the soft soil deposit so as to specifically represent site-specific design spectra for the Kathmandu Valley, enabling structures resilient to future damaging earthquakes. Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response
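As background to the 1D analyses above, the linear amplification of a single uniform damped soil layer over rigid bedrock has a well-known closed form. The sketch below uses an illustrative soft-soil profile; the thickness, shear-wave velocity and damping are assumptions, not a Kathmandu site model:

```python
import math

def layer_amplification(f, H, vs, damping):
    """Amplification of a uniform damped soil layer over rigid bedrock:
    |A(f)| ~ 1 / sqrt(cos^2(kH) + (damping * kH)^2), where kH = 2*pi*f*H/vs."""
    kH = 2.0 * math.pi * f * H / vs
    return 1.0 / math.sqrt(math.cos(kH) ** 2 + (damping * kH) ** 2)

# Illustrative soft-soil layer (not a Kathmandu site model):
H, vs, xi = 30.0, 200.0, 0.05   # thickness (m), shear-wave velocity (m/s), damping
f0 = vs / (4.0 * H)              # fundamental frequency = Vs / 4H
print(round(f0, 2), round(layer_amplification(f0, H, vs, xi), 1))
```

At the fundamental frequency the amplification approaches 1/(xi*pi/2), which is why low-damping soft layers amplify ground motion so strongly; equivalent linear and nonlinear analyses refine this picture by letting stiffness and damping vary with strain.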
Procedia PDF Downloads 291
502 Effects of a School-Based Mindfulness Intervention on Stress and Emotions on Students Enrolled in an Independent School
Authors: Tracie Catlett
Abstract:
Students enrolled in high-achieving schools are under tremendous pressure to perform at high levels inside and outside the classroom. Achievement pressure is a prevalent source of stress for students enrolled in high-achieving schools, and female students in particular experience a higher frequency and higher levels of stress compared to their male peers. The practice of mindfulness in a school setting is one tool that has been linked to improved self-regulation of emotions, increased positive emotions, and stress reduction. A mixed-methods randomized pretest-posttest no-treatment control trial evaluated the effects of a six-session mindfulness intervention taught during a regularly scheduled life-skills period in an independent day school, one type of high-achieving school. Twenty-nine students in Grades 10 and 11 were randomized by class: Grade 11 students formed the intervention group (n = 14) and Grade 10 students the control group (n = 15). Findings from the study produced mixed results. There was no evidence that the mindfulness program reduced participants’ stress levels and negative emotions. In fact, contrary to what was expected, students in the intervention group reported higher levels of stress and increased negative emotions at posttreatment compared to pretreatment. Neither the within-group nor the between-groups changes in stress level were statistically significant, p > .05, and the between-groups effect size was small, d = .2. The study found evidence that the mindfulness program may have had a positive impact on students’ ability to regulate their emotions. The within-group comparison and the between-groups comparison at posttreatment found that students in the mindfulness course experienced statistically significant improvement in their ability to regulate their emotions, p = .009 < .05 and p = .034 < .05, respectively. 
The between-groups effect size was medium, d = .7, suggesting that the positive differences in emotion regulation difficulties were substantial and have practical implications. The analysis of gender differences as they relate to stress and emotions revealed that female students perceive higher levels of stress and report experiencing stress more often than males. There were no gender differences when analyzing the sources of stress experienced by the student participants. Both females and males experience regular achievement pressures related to their school performance and worry about their future, college acceptance, grades, and parental expectations. Females reported an increased awareness of their stress and actively engaged in practicing mindfulness to manage it. Students in the treatment group expressed that the practice of mindfulness resulted in feelings of relaxation and calmness. Keywords: achievement pressure, adolescents, emotion regulation, emotions, high-achieving schools, independent schools, mindfulness, negative affect, positive affect, stress
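The reported effect sizes (d = .2, d = .7) follow Cohen's d with a pooled standard deviation. A minimal sketch; the group means and SDs below are hypothetical numbers chosen only to reproduce a d of .7, not the study's data:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d between two groups, using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical group summaries with the study's group sizes (n = 14 and 15):
d = cohens_d(mean1=12.0, sd1=4.0, n1=14, mean2=9.2, sd2=4.0, n2=15)
print(round(d, 2))  # prints 0.7
```

By the usual conventions, d around .2 is a small effect and d around .7 a medium-to-large one, matching the interpretation given above.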
Procedia PDF Downloads 71
501 The Impact of Online Learning on Visual Learners
Authors: Ani Demetrashvili
Abstract:
As online learning continues to reshape the landscape of education, questions arise regarding its efficacy for diverse learning styles, particularly for visual learners. This abstract delves into the impact of online learning on visual learners, exploring how digital mediums influence their educational experience and how educational platforms can be optimized to cater to their needs. Visual learners comprise a significant portion of the student population, characterized by their preference for visual aids such as diagrams, charts, and videos to comprehend and retain information. Traditional classroom settings often struggle to accommodate these learners adequately, relying heavily on auditory and written forms of instruction. The advent of online learning presents both opportunities and challenges in addressing the needs of visual learners. Online learning platforms offer a plethora of multimedia resources, including interactive simulations, virtual labs, and video lectures, which align closely with the preferences of visual learners. These platforms have the potential to enhance engagement, comprehension, and retention by presenting information in visually stimulating formats. However, the effectiveness of online learning for visual learners hinges on various factors, including the design of learning materials, user interface, and instructional strategies. Research into the impact of online learning on visual learners encompasses a multidisciplinary approach, drawing from fields such as cognitive psychology, education, and human-computer interaction. Studies employ qualitative and quantitative methods to assess visual learners' preferences, cognitive processes, and learning outcomes in online environments. Surveys, interviews, and observational studies provide insights into learners' preferences for specific types of multimedia content and interactive features. 
Cognitive tasks, such as memory recall and concept mapping, shed light on the cognitive mechanisms underlying learning in digital settings. Eye-tracking studies offer valuable data on attentional patterns and information processing during online learning activities. The findings from research on the impact of online learning on visual learners have significant implications for educational practice and technology design. Educators and instructional designers can use insights from this research to create more engaging and effective learning materials for visual learners. Strategies such as incorporating visual cues, providing interactive activities, and scaffolding complex concepts with multimedia resources can enhance the learning experience for visual learners in online environments. Moreover, online learning platforms can leverage the findings to improve their user interface and features, making them more accessible and inclusive for visual learners. Customization options, adaptive learning algorithms, and personalized recommendations based on learners’ preferences and performance can enhance the usability and effectiveness of online platforms for visual learners. Keywords: online learning, visual learners, digital education, technology in learning
Procedia PDF Downloads 38
500 Genetically Modified Fuel-Ethanol Industrial Yeast Strains as Biocontrol Agents
Authors: Patrícia Branco, Catarina Prista, Helena Albergaria
Abstract:
Industrial fuel-ethanol fermentations are carried out under non-sterile conditions, which favors the development of microbial contaminants, leading to huge economic losses. Wild yeasts such as Brettanomyces bruxellensis and lactic acid bacteria are the main contaminants of industrial bioethanol fermentation, affecting Saccharomyces cerevisiae performance and decreasing ethanol yields and productivity. In order to control microbial contamination, the fuel-ethanol industry uses different treatments, including acid washing and antibiotics. However, these control measures carry environmental risks such as acid toxicity and the rise of antibiotic-resistant bacteria. Therefore, it is crucial to develop and apply less toxic and more environmentally friendly biocontrol methods. In the present study, an industrial fuel-ethanol starter, S. cerevisiae Ethanol-Red, was genetically modified to over-express antimicrobial peptides (AMPs) with activity against fuel-ethanol microbial contaminants and evaluated regarding its biocontrol effect during mixed-culture alcoholic fermentations artificially contaminated with B. bruxellensis. To achieve this goal, the S. cerevisiae Ethanol-Red strain was transformed with a plasmid containing the AMP-codifying genes, i.e., partial sequences of TDH1 (925-963 bp) and TDH2/3 (925-963 bp), and a geneticin resistance marker. The biocontrol effect of these genetically modified strains was evaluated against B. bruxellensis and compared with the antagonistic effect exerted by the strain modified with an empty plasmid (without the AMP-codifying genes) and by the non-modified strain S. cerevisiae Ethanol-Red. For that purpose, mixed-culture alcoholic fermentations were performed in synthetic must using the modified S. cerevisiae Ethanol-Red strains together with B. bruxellensis. Single-culture fermentations of B. bruxellensis strains were also performed as a negative control for the antagonistic effect exerted by the S. cerevisiae strains. 
The results clearly showed an improved biocontrol effect of the genetically modified strains against B. bruxellensis when compared with the Ethanol-Red strain carrying the empty plasmid (without the AMP-codifying genes) and with the non-modified Ethanol-Red strain. In mixed-culture fermentation with the modified S. cerevisiae strain, B. bruxellensis culturability decreased from 5×10⁴ CFU/mL on day 0 to less than 1 CFU/mL on day 10, while in single culture B. bruxellensis increased its culturability from 6×10⁴ to 1×10⁶ CFU/mL in the first 6 days and kept this value until day 10. Moreover, the modified Ethanol-Red strain exhibited an enhanced antagonistic effect against B. bruxellensis when compared with that induced by the non-modified Ethanol-Red strain. Indeed, the culturability loss of B. bruxellensis after 10 days of fermentation with the modified Ethanol-Red strain was 98.7% and 100% higher than that which occurred in fermentations performed with the non-modified Ethanol-Red strain and the empty-plasmid modified strain, respectively. Therefore, one can conclude that the genetically modified S. cerevisiae strain obtained in the present work may be a valuable solution for the mitigation of microbial contamination in fuel-ethanol fermentations, representing a much safer and more environmentally friendly preservation strategy than the antimicrobial treatments (acid washing and antibiotics) currently applied in the fuel-ethanol industry. Keywords: antimicrobial peptides, fuel-ethanol microbial contaminations, fuel-ethanol fermentation, biocontrol agents, genetically-modified yeasts
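The culturability drop from 5×10⁴ to below 1 CFU/mL corresponds to a log reduction of more than 4.7 orders of magnitude; a quick check, where treating 1 CFU/mL as the detection limit is an assumption that gives a conservative lower bound:

```python
import math

def log_reduction(cfu_initial, cfu_final):
    """Log10 reduction in viable counts (CFU/mL)."""
    return math.log10(cfu_initial / cfu_final)

# Counts from the abstract: 5x10^4 CFU/mL on day 0, below 1 CFU/mL on day 10;
# 1 CFU/mL is used here as the detection limit.
print(round(log_reduction(5e4, 1.0), 1))  # prints 4.7 (i.e., > 4.7-log kill)
```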
Procedia PDF Downloads 99
499 Hypersensitivity Reactions Following Intravenous Administration of Contrast Medium
Authors: Joanna Cydejko, Paulina Mika
Abstract:
Hypersensitivity reactions are side effects of medications that resemble an allergic reaction. Anaphylaxis is a generalized, severe allergic reaction of the body caused by exposure to a specific agent at a dose tolerated by a healthy body. The most common causes of anaphylaxis are food (about 70%), Hymenoptera venoms (22%), and medications (7%); despite detailed diagnostics, in 1% of people the cause of the anaphylactic reaction is not identified. Contrast media are anaphylactic agents with an unknown mechanism: hypersensitivity reactions can occur through both immunological and non-immunological mechanisms. Symptoms of anaphylaxis occur within a few seconds to several minutes after exposure to the allergen. Contrast agents are chemical compounds that make it possible to visualize or improve the visibility of anatomical structures. In computed tomography diagnostics, the preparations currently used are derivatives of the triiodinated benzene ring. Their pharmacokinetic and pharmacodynamic properties, i.e., osmolality, viscosity, low chemotoxicity and high hydrophilicity, contribute to better tolerance of the substance by the patient's body. In MRI diagnostics, macrocyclic gadolinium contrast agents are administered during examinations. The aim of this study is to present results on the number and severity of anaphylactic reactions that occurred in patients of all age groups undergoing diagnostic imaging with intravenous administration of contrast agents: non-ionic iodinated agents in CT and macrocyclic gadolinium agents in MRI. A retrospective assessment of the number of adverse reactions after contrast administration was carried out on the basis of data from the Department of Radiology of the University Clinical Center in Gdańsk, and it was assessed whether their different physicochemical properties had an impact on the incidence of acute complications. 
Adverse reactions were divided according to the severity of the patient's condition and the diagnostic method used in a given patient. Complications following the administration of a contrast medium in the form of acute anaphylaxis accounted for less than 0.5% of all diagnostic procedures performed with the use of a contrast agent. In the analyzed period from January to December 2022, 34,053 CT scans and 15,279 MRI examinations with the use of contrast medium were performed. The total number of acute complications was 21, of which 17 were complications of iodine-based contrast agents and 5 of gadolinium preparations. The introduction of state-of-the-art contrast formulations was an important step toward improving the safety and tolerability of contrast agents used in imaging. Currently, contrast agents administered to patients are considered to be among the best-tolerated preparations used in medicine. However, like any drug, they can be responsible for the occurrence of adverse reactions resulting from their toxic effects. The increase in the number of imaging tests performed with the use of contrast agents has a direct impact on the number of adverse events associated with their administration. Despite the low risk of anaphylaxis, this risk should not be marginalized. The growing exposure associated with the mass performance of radiological procedures using contrast agents necessitates knowledge of the rules of conduct in the event of symptoms of hypersensitivity to these preparations. Keywords: anaphylaxis, contrast medium, diagnostics, medical imaging
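The stated rate (acute anaphylaxis in well under 0.5% of contrast-enhanced procedures) can be checked directly from the reported counts:

```python
# Counts reported above for January-December 2022
ct_exams = 34053       # contrast-enhanced CT scans
mri_exams = 15279      # contrast-enhanced MRI examinations
acute_reactions = 21   # total acute complications

total_exams = ct_exams + mri_exams
rate_pct = acute_reactions / total_exams * 100
print(round(rate_pct, 3))  # prints 0.043, i.e. about 0.04%, well below 0.5%
```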
Procedia PDF Downloads 62
498 Implementation of Active Recovery at Immediate, 12 and 24 Hours Post-Training in Young Soccer Players
Authors: C. Villamizar, M. Serrato
Abstract:
In the pursuit of athletic performance, physical training plays a fundamental role; it is defined by the loads imposed on the physiological and musculoskeletal systems of the human body, as determined by the intensity and duration of exercise. Given the physical demands of both training and competition, the training process must maintain an optimal relationship with post-effort recovery, favoring the process of overcompensation, which aims to facilitate the return and rise of energy potential as well as protein synthesis in different tissues, allowing muscle function to return to baseline, pre-exercise states. If this recovery process is not performed or is not allowed to occur properly, the result is an increased state of fatigue. Active recovery is one of the strategies implemented in sport for returning to pre-exercise physiological states. However, there are some assumptions regarding adverse effects, such as the possibility of increasing the degradation of muscle glycogen and thus delaying its resynthesis. It is therefore necessary to investigate the effects of active recovery applied at different times after the effort. The aim of this study was to determine the effects of post-effort active recovery performed at three different times: immediately, at 12 hours, and at 24 hours, on the biochemical marker creatine kinase in youth soccer players. A randomized controlled trial with allocation to three groups was performed: A, active recovery immediately after the effort; B, active recovery performed 12 hours after the effort; C, active recovery performed 24 hours after the effort. This study included 27 subjects belonging to a Colombian soccer team of the second division. Vital signs, weight, height, BMI, percentage of muscle mass, percentage of fat mass, and personal and family medical history were assessed. Velocity, explosive force, and creatine kinase (CK) in blood were tested before and after the interventions. 
The SAFT90 protocol (Soccer-specific Aerobic Field Test) was applied to participants to generate fatigue. CK samples were taken one hour before the application of the fatigue test, one hour after the fatigue protocol, and 48 hours after the initial CK sample. The mean age was 18.5 ± 1.1 years. Improvements in jumping and speed recovery were seen in the 3 groups (p < 0.05), but no statistically significant differences between groups were observed after recovery. In all participants, there was a significant increase in CK after applying SAFT90 in all groups (median 103.1-111.1). The CK measurement after 48 hours reflects recovery in all groups; however, group C showed a decline below baseline levels of -55.5 (-96.3 / -20.4), which is a significant finding. Other research has shown that CK does not return quickly to baseline, but our study shows that active recovery favors the clearance of CK and, moreover, that performing recovery 24 hours after the effort generates higher clearance of this biomarker. Keywords: active recuperation, creatine phosphokinase, post training, young soccer players
Procedia PDF Downloads 160
497 Development and Experimental Evaluation of a Semiactive Friction Damper
Authors: Juan S. Mantilla, Peter Thomson
Abstract:
Seismic events may result in discomfort for building occupants, structural damage, or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing construction costs and design forces. Structural control systems arise as an alternative to reduce these dynamic responses. Commonly used control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage that they are optimal only for a range of sliding forces, and outside that range their efficiency decreases. This implies that each passive friction damper is designed, built and commercialized for a specific sliding/clamping force at which the damper shifts from a locked state to a slip state, where it dissipates energy through friction. The risk of a variation in the efficiency of the device with the sliding force is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In this case, the expected forces in the building can change and thus considerably reduce the efficiency of the damper (which is designed for a specific sliding force). It is also evident that when a seismic event occurs, the forces in each floor vary over time, which means that the damper's efficiency is not optimal at all times. Semi-active friction devices adapt their sliding force, trying to maintain motion in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost semi-active variable friction damper (SAVFD), in reduced scale, to reduce vibrations of structures subject to earthquakes. 
The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor which is controlled with (3) an Arduino board and acquires accelerations or displacements from (4) sensors in the immediately upper and lower floors, and (5) a power supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed. The SAVFD and the structure were experimentally characterized. A numerical model of the structure and the SAVFD was developed based on the dynamic characterization. Decentralized control algorithms were modeled and later tested experimentally in shaking table tests using earthquake and frequency chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations in comparison to the uncontrolled structure. Keywords: earthquake response, friction damper, semiactive control, shaking table
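A decentralized semiactive friction law of the kind described above can be sketched as a simple on-off rule. This is one common illustrative strategy (clamp while the storey is loading, release while it unloads so the damper keeps slipping instead of locking); it is not necessarily the algorithm implemented on the Arduino in this work, and the force values are arbitrary:

```python
def savfd_command(drift, drift_rate, f_max=100.0, f_min=5.0):
    """On-off clamping command for a semiactive variable friction damper.
    drift: interstorey relative displacement (m), from the upper/lower sensors.
    drift_rate: interstorey relative velocity (m/s).
    Returns the commanded clamping force (N): clamp hard while the storey
    moves away from equilibrium (friction then dissipates energy), release
    otherwise. Thresholds and forces are illustrative assumptions."""
    if drift * drift_rate > 0.0:
        return f_max   # loading phase: maximum clamping force
    return f_min       # unloading phase: minimal clamping, keep slipping

# Loading phase (drift and drift rate share sign): clamp
print(savfd_command(0.01, 0.2))    # prints 100.0
# Unloading phase (storey returning toward equilibrium): release
print(savfd_command(0.01, -0.2))   # prints 5.0
```

In a decentralized scheme, each damper evaluates a rule like this from its own local sensor pair only, which is what makes the controller cheap and robust to the loss of a single channel.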
Procedia PDF Downloads 378
496 Managing Climate Change: Vulnerability Reduction or Resilience Building
Authors: Md Kamrul Hassan
Abstract:
Adaptation interventions are the common response to managing the vulnerabilities of climate change. The nature of an adaptation intervention depends on the degree of vulnerability and the capacity of a society. Coping interventions can take the form of hard adaptation – utilising technologies and capital goods like dykes, embankments and seawalls – and/or soft adaptation – engaging knowledge and information sharing, capacity building, policy and strategy development, and innovation. Hard adaptation is quite capital intensive but provides immediate relief from climate change vulnerabilities. This type of adaptation is not real development, as the investment cannot improve the performance of a social or ecological system but merely maintains the status quo, and it often leads to maladaptation in the long term. Maladaptation creates a two-way loss for a society: the interventions bring further vulnerability on top of the existing vulnerability, and additional investment is needed to get rid of the consequences of the interventions. Hard adaptation is popular with vulnerable groups, but it focuses so much on the immediate solution that it often ignores environmental issues and the future risks of climate change. On the other hand, soft adaptation is education oriented: vulnerable groups learn how to live with climate change impacts. Soft adaptation interventions build the capacity of vulnerable groups through training, innovation, and support, which can enhance the resilience of a system. In consideration of long-term sustainability, soft adaptation can contribute more to resilience than hard adaptation. Taking a developing society as the study context, this study aims to investigate and understand the effectiveness of the adaptation interventions of the coastal community of the Sundarbans mangrove forest in Bangladesh. 
Applying semi-structured interviews with a range of Sundarbans stakeholders, including community residents, tourism demand- and supply-side stakeholders, and conservation and management agencies (e.g., government, NGOs and international agencies), together with document analysis, this paper reports several key insights regarding climate change adaptation. Firstly, while adaptation interventions may offer a short- to medium-term solution to climate change vulnerabilities, interventions need to be revised for long-term sustainability. Secondly, soft adaptation offers advantages in terms of resilience in a rapidly changing environment, as it is flexible and dynamic. Thirdly, there is a challenge in communicating with and educating vulnerable groups so that they understand more about the future effects of hard adaptation interventions (and the potential for maladaptation). Fourthly, hard adaptation can be used if the interventions do not degrade the environmental balance and if the investment in the interventions does not exceed their economic benefit. Overall, the goal of an adaptation intervention should be to enhance the resilience of a social or ecological system so that the system can withstand present vulnerabilities and future risks. In order to be sustainable, adaptation interventions should be designed in such a way that they can address the vulnerabilities and risks of climate change over a long-term timeframe. Keywords: adaptation, climate change, maladaptation, resilience, Sundarbans, sustainability, vulnerability
Procedia PDF Downloads 194