Search results for: iterative calculation
347 Simulation of GAG-Analogue Biomimetics for Intervertebral Disc Repair
Authors: Dafna Knani, Sarit S. Sivan
Abstract:
Aggrecan, one of the main components of the intervertebral disc (IVD), belongs to the family of proteoglycans (PGs), which are composed of glycosaminoglycan (GAG) chains covalently attached to a core protein. Its primary function is to maintain tissue hydration, and hence disc height, under the high loads imposed by muscle activity and body weight. Significant PG loss is one of the first indications of disc degeneration. A possible solution to recover disc function is to inject a synthetic hydrogel into the joint cavity, thus mimicking the role of PGs. One of the hydrogels proposed is a GAG analogue, based on sulfate-containing polymers, which are responsible for hydration in disc tissue. In the present work, we used molecular dynamics (MD) to study the effect of hydrogel crosslinking (type and degree) on the swelling behavior of the suggested GAG-analogue biomimetics by calculating the cohesive energy density (CED), the solubility parameter, the enthalpy of mixing (ΔEmix) and the interactions between the molecules in pure form and as a mixture with water. The simulation results showed that hydrophobicity plays an important role in the swelling of the hydrogel, as indicated by the linear correlation observed between the solubility parameter values of the copolymers and the crosslinker weight ratio (w/w); this correlation was found useful in predicting the amount of PEGDA needed for the desirable hydration behavior of (CS)₄-peptide. Enthalpy of mixing calculations showed that all the GAG analogues, (CS)₄ and (CS)₄-peptide, are water-soluble; radial distribution function analysis revealed that they form interactions with water molecules, which is important for the hydration process. To conclude, our simulation results, beyond supporting the experimental data, can be used as a predictive tool in the future development of biomaterials, such as disc replacements.
Keywords: molecular dynamics, proteoglycans, enthalpy of mixing, swelling
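As a rough illustration of the quantities named in this abstract: the solubility parameter follows from the cohesive energy density as δ = √CED, and a volume-fraction mixing rule gives the enthalpy of mixing. The sketch below uses hypothetical CED values and an assumed mixing rule, not the study's MD output.

```python
# Minimal sketch: solubility parameter and enthalpy of mixing from
# cohesive energy density (CED). All CED values are hypothetical
# placeholders, not results from the study's MD simulations.
import math

def solubility_parameter(ced_j_per_m3):
    """delta = sqrt(CED); CED in J/m^3 yields delta in Pa^0.5."""
    return math.sqrt(ced_j_per_m3)

def enthalpy_of_mixing(phi_a, ced_a, ced_b, ced_mix):
    """Volume-fraction mixing rule (per unit volume):
    dE_mix = phi_A*CED_A + phi_B*CED_B - CED_mix."""
    phi_b = 1.0 - phi_a
    return phi_a * ced_a + phi_b * ced_b - ced_mix

ced_polymer, ced_water, ced_mixture = 4.2e8, 2.3e9, 2.2e9  # J/m^3, hypothetical

delta_p = solubility_parameter(ced_polymer) / 1e3   # MPa^0.5
delta_w = solubility_parameter(ced_water) / 1e3     # MPa^0.5
de_mix = enthalpy_of_mixing(0.1, ced_polymer, ced_water, ced_mixture)

print(f"delta_polymer = {delta_p:.1f} MPa^0.5, delta_water = {delta_w:.1f} MPa^0.5")
print("water-soluble tendency" if de_mix < 0 else "poor miscibility")
```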
346 Descriptive Analysis of the Database of Poliomyelitis Surveillance System in Mauritania from 2012-2019
Authors: B. Baba Ahmed, P. Yanogo, B. Djibryl. N. Medas
Abstract:
Introduction: Polio is a highly contagious viral infection, with children under 5 years of age being the most affected. It is a public health emergency of international concern. Polio surveillance in Mauritania has been ongoing since 1998, and the country achieved "polio free" status in 2007. Our objective was to analyse the epidemiological surveillance database of poliomyelitis in Mauritania from 2012-2019. Method: A cross-sectional descriptive analysis of the poliomyelitis database was carried out in Mauritania for 2012-2019. Exhaustive sampling was applied to all suspected polio cases recorded in the database from 2012-2019. This study used EPI-INFO 7.4 for frequency calculation for qualitative variables, and mean and standard deviation for quantitative variables. Results: We found 459 suspected cases of polio over the study period, with an average rate of acute non-polio flaccid paralysis of 25.4 cases/100,000 children under 15 years of age. The age group 0-6 years represented 75.2%. Males constituted 50.2% and females 49.78%, giving a M/F ratio of 1. Among the 422 observations, the average age was 4 years +/- 3.38. Four regions, TIRIS-ZEMMOUR, INCHIRI, TAGANT and NOUAKCHOTT OUEST, recorded the lowest percentages of notifications (3.28%, 3.93%, 4.37% and 4.8%, respectively). 99.34% [98.09-99.78] of cases presented acute flaccid paralysis, and 56.77% [52.19-61.23] had limb asymmetry. We showed that 82.93% [79.21-86.10] had fever. We found that 89.5% of suspected polio cases were investigated within 48 hours, and 88.39% of suspected cases had two adequate samples taken 48 hours apart and within 14 days after the onset of symptoms. Only 30.95% of samples arrived at the referral laboratory within 72 hours. Conclusion: This study has shown that Mauritania has achieved the objectives for most of the quantitative performance indicators of polio surveillance. It has also shown low notification of cases in the northern and central regions of the country, and a problem with the transport of samples to the laboratory.
Keywords: analysis, database, Epi-Info, polio
345 Validation of the Linear Trend Estimation Technique for Prediction of Average Water and Sewerage Charge Rate Prices in the Czech Republic
Authors: Aneta Oblouková, Eva Vítková
Abstract:
The article deals with the issue of water and sewerage charge rate prices in the Czech Republic. The research is specifically focused on the analysis of the development of the average prices of the water and sewerage charge rate in the Czech Republic in the years 1994-2021 and on the validation of the chosen methodology for predicting the development of these average prices. The research is based on data collection; the data were obtained from the Czech Statistical Office. The aim of the paper is to validate the relevance of the mathematical linear trend estimation technique for calculating predicted average prices of water and sewerage charge rates. The real values of the average prices in the years 1994-2018 were obtained from the Czech Statistical Office and converted into a mathematical equation. The same type of real data was obtained from the Czech Statistical Office for the years 2019-2021. Predictions of the average prices for the years 2019-2021 were then calculated using the chosen method, the linear trend estimation technique. The values obtained from the Czech Statistical Office and the values calculated using the chosen methodology were subsequently compared. The research result is a validation of the chosen mathematical technique as suitable for this research.
Keywords: Czech Republic, linear trend estimation, price prediction, water and sewerage charge rate
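The validated technique amounts to a least-squares line fitted on the 1994-2018 data and extrapolated to 2019-2021. A minimal sketch follows; the synthetic price series stands in for the Czech Statistical Office data and only the fitting and extrapolation mechanics are shown.

```python
# Minimal sketch of linear trend estimation for price prediction.
# The synthetic series below stands in for the Czech Statistical
# Office data.
import numpy as np

years = np.arange(1994, 2019)                       # fitting period 1994-2018
rng = np.random.default_rng(0)
prices = 10.0 + 2.1 * (years - 1994) + rng.normal(0.0, 1.5, years.size)

b, a = np.polyfit(years, prices, 1)                 # least-squares line: price = a + b*year

for y in (2019, 2020, 2021):                        # validation years, as in the study
    print(f"{y}: predicted average price = {a + b * y:.2f}")
```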
344 Embracing the Uniqueness and Potential of Each Child: Moving Theory to Practice
Authors: Joy Chadwick
Abstract:
This Study of Teaching and Learning (SoTL) research focused on the experiences of teacher candidates involved in an inclusive education methods course within a four-year direct entry Bachelor of Education program. The placement of this course within the final fourteen-week practicum semester is designed to facilitate deeper theory-practice connections between effective inclusive pedagogical knowledge and the real life of classroom teaching. The course focuses on supporting teacher candidates to understand that effective instruction within an inclusive classroom context must be intentional, responsive, and relational. Diversity is situated not as exceptional but rather as expected. This interpretive qualitative study involved the analysis of twenty-nine teacher candidate reflective journals and six individual teacher candidate semi-structured interviews. The journal entries were completed at the start and at the end of the semester, with the intent of having teacher candidates reflect on their beliefs about what it means to be an effective inclusive educator and how the course and practicum experiences impacted their understanding and approaches to teaching in inclusive classrooms. The semi-structured interviews provided further depth and context to the journal data. The journals and interview transcripts were coded and themed using NVivo software. The findings suggest that instructional frameworks such as universal design for learning (UDL), differentiated instruction (DI), response to intervention (RTI), social emotional learning (SEL), and self-regulation supported teacher candidates' abilities to meet the needs of their students more effectively. Course content that focused on specific exceptionalities also supported teacher candidates to be proactive rather than reactive when responding to student learning challenges. Teacher candidates also articulated the importance of reframing their perspective about students in challenging moments and that seeing the individual worth of each child was integral to their approach to teaching. A persisting question for teacher educators is what pedagogical knowledge and understanding is most relevant in supporting future teachers to be effective at planning for and embracing the diversity of student needs within classrooms today. This research directs us to consider the critical importance of addressing the personal attributes and mindsets of teacher candidates regarding children, as well as considering instructional frameworks, when designing coursework. Further, the alignment of an inclusive education course with a teaching practicum allows for an iterative approach to learning: the practical application of course concepts while teaching in a practicum allows for a deeper understanding of instructional frameworks, thus enhancing the confidence of teacher candidates. The research findings have implications for teacher education programs as connected to inclusive education methods courses, practicum experiences, and overall teacher education program design.
Keywords: inclusion, inclusive education, pre-service teacher education, practicum experiences, teacher education
343 Understanding the Semantic Network of Tourism Studies in Taiwan by Using Bibliometrics Analysis
Authors: Chun-Min Lin, Yuh-Jen Wu, Ching-Ting Chung
Abstract:
The formulation of tourism policies requires objective academic research and evidence as support, especially research from local academia. Taiwan is a small island, and its economic growth relies heavily on tourism revenue; the Taiwanese government has been devoted to the promotion of the tourism industry over the past few decades. Scientific research outcomes by Taiwanese scholars may help lay the foundations for drafting future tourism policy by the government. In this study, a total of 120 full journal articles published between 2008 and 2016 in the Journal of Tourism and Leisure Studies (JTSL) were examined to explore the trend of tourism research in Taiwan. JTSL is one of the most important Taiwanese journals in the tourism discipline; it focuses on tourism-related issues and uses traditional Chinese as the study language. The method of co-word analysis from bibliometrics was employed for semantic analysis in this study. When analyzing Chinese words and phrases, word segmentation is a crucial step: it must be carried out initially and precisely in order to obtain meaningful words or word chunks for further frequency calculation. A word segmentation system based on an N-gram algorithm was developed in this study to conduct the semantic analysis, and the 100 groups of meaningful phrases with the highest recurrence rates were located. Subsequently, co-word analysis was employed for semantic classification. The results showed that the themes of tourism research in Taiwan in recent years cover tourism education, environmental protection, hotel management, information technology, and senior tourism. The results give insight into the related issues and serve as a reference for tourism-related policy making and follow-up research.
Keywords: bibliometrics, co-word analysis, word segmentation, tourism research, policy
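A minimal sketch of the N-gram candidate extraction and frequency counting that underlies such a word segmentation step is shown below; the two corpus strings are illustrative stand-ins for JTSL article text, and the exact filtering rules of the study's system are not reproduced.

```python
# Minimal sketch: character N-gram extraction and frequency counting,
# the first step of N-gram-based Chinese word segmentation. The corpus
# lines are illustrative, not JTSL article text.
from collections import Counter

def ngrams(text, n):
    """All contiguous character n-grams of a string."""
    return (text[i:i + n] for i in range(len(text) - n + 1))

corpus = ["觀光教育與環境保護研究", "旅館管理與資訊科技應用"]  # illustrative
counts = Counter()
for doc in corpus:
    for n in (2, 3, 4):                       # bigram to 4-gram candidates
        counts.update(ngrams(doc, n))

top_chunks = counts.most_common(100)          # highest-recurrence word chunks
print(top_chunks[:5])
```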
342 Calculation of Secondary Neutron Dose Equivalent in Proton Therapy of Thyroid Gland Using FLUKA Code
Authors: M. R. Akbari, M. Sadeghi, R. Faghihi, M. A. Mosleh-Shirazi, A. R. Khorrami-Moghadam
Abstract:
Proton radiotherapy (PRT) is becoming an established treatment modality for cancer. Localized tumors such as undifferentiated thyroid tumors are insufficiently handled by conventional radiotherapy, while protons offer the prospect of increasing the tumor dose without exceeding the tolerance of the surrounding healthy tissues. In spite of the relatively high advantage of delivering a localized radiation dose to the tumor region, in proton therapy secondary neutron production can contribute significantly to the integral dose and lessen the advantages of this modality compared to conventional radiotherapy techniques. Furthermore, neutrons have a high quality factor; therefore, even a small physical dose can cause considerable biological effects. Measuring this neutron dose is a critical step in the prediction of secondary cancer incidence. FLUKA Monte Carlo simulations have previously been used to evaluate the dose due to secondaries in proton therapy. In this study, after validating the simulated proton beam range in a water phantom against the CSDA range from NIST for the studied proton energy range (34-54 MeV), proton therapy of thyroid gland cancer was simulated using the FLUKA code. The secondary neutron dose equivalent of organs and tissues beyond the target volume caused by 34 and 54 MeV proton interactions was calculated in order to evaluate secondary cancer incidence. A multilayer cylindrical neck phantom comprising all the layers of neck tissues, with a proton beam impinging normally on the phantom, was simulated. The trachea (together with the larynx) had the greatest dose equivalent (1.24×10⁻¹ and 1.45 pSv per primary proton for 34 and 54 MeV, respectively) among the simulated tissues beyond the target volume in the neck region.
Keywords: FLUKA code, neutron dose equivalent, proton therapy, thyroid gland
341 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods
Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin
Abstract:
Nowadays, the data center industry faces strong challenges in increasing speed and data processing capacity while keeping its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of this kind of facility use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting existing ones would be a great advance for this industry. The installation of a temperature sensor matrix distributed through the structure of each server would provide the information required to obtain an instantaneous temperature profile inside it. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high and expensive. Therefore, less intrusive techniques are employed, in which each point characterizing the server temperature profile is obtained by solving differential equations through simulation, simplifying data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the resulting Burgers' equation are the key to obtaining results of greater or lesser accuracy, according to the characteristic truncation error.
Keywords: Burgers' equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile
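A minimal sketch of the kind of discretization named above is given for the 1D viscous Burgers' equation u_t + u·u_x = ν·u_xx, using a backward (upwind) difference for the convective term and a central difference for the diffusive term; the grid, viscosity and initial condition are illustrative, not the data-center model.

```python
# Minimal sketch: explicit finite-difference solution of the 1D viscous
# Burgers' equation u_t + u*u_x = nu*u_xx. Backward (upwind) difference
# for u*u_x, central difference for u_xx. Parameters are illustrative.
import numpy as np

nx, nt = 101, 500
dx, dt, nu = 2.0 / (nx - 1), 5e-4, 0.07
u = np.ones(nx)
u[int(0.5 / dx):int(1.0 / dx) + 1] = 2.0      # step initial condition

for _ in range(nt):
    un = u.copy()
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])             # backward difference
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))  # central difference

print(f"u range after {nt} steps: [{u.min():.3f}, {u.max():.3f}]")
```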
340 Pretherapy Initial Dosimetry Results in Prostate Cancer Radionuclide Therapy with Lu-177-PSMA-DOTA-617
Authors: M. Abuqebitah, H. Tanyildizi, N. Yeyin, I. Cavdar, M. Demir, L. Kabasakal
Abstract:
Aim: Targeted radionuclide therapy (TRT) is an increasingly used treatment modality for a wide range of cancers. Dosimetry is highly required either to plan treatment or to ascertain the absorbed dose delivered to critical organs during treatment. Methods and Materials: The study comprised 7 patients suffering from prostate cancer with progressive disease, candidates for Lu-177-PSMA-DOTA-617 therapy following PSMA-PET/CT imaging. An activity of 5.2±0.3 mCi was intravenously injected. To evaluate the bone marrow absorbed dose, 2 cc blood samples were withdrawn at short intervals (3, 15, 30, 60, 180 minutes) after injection. Furthermore, whole body scans were performed using a scintillation gamma camera at 4, 24, 48, and 120 hours after injection, and in order to quantify the activity taken up in the body, kidneys, liver, right parotid, and left parotid, the geometric mean of anterior and posterior counts was determined through ROI analysis; background subtraction and attenuation correction were then applied using the patients' PSMA-PET/CT images, taking into consideration organ thickness, body thickness, and Hounsfield units from the CT scan. The OLINDA/EXM dosimetry program was used for curve fitting, residence time calculation, and absorbed dose calculations. Findings: The absorbed doses of bone marrow, left kidney, right kidney, liver, left parotid, right parotid, and total body were 1.28±0.52, 32.36±16.36, 32.7±13.68, 10.35±3.45, 38.67±21.29, 37.55±19.77, and 2.25±0.95 mGy/mCi, respectively. Conclusion: Our first results indicate that Lu-177-PSMA-DOTA-617 is a safe and reliable therapy, as no complications were seen. On the other hand, the observable variation in the absorbed dose of the critical organs among patients necessitates a patient-specific dosimetry approach to protect body organs, particularly the highly exposed kidneys and parotid glands.
Keywords: Lu-177-PSMA, prostate cancer, radionuclide therapy
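The residence-time step mentioned above (performed in OLINDA/EXM) reduces to integrating a sampled time-activity curve and adding an exponential tail. A minimal sketch with illustrative, non-patient numbers:

```python
# Minimal sketch: organ residence time from a sampled time-activity
# curve, via trapezoidal integration plus a mono-exponential tail fitted
# to the last two scans. Data points are illustrative, not patient data.
import numpy as np

t_h = np.array([4.0, 24.0, 48.0, 120.0])       # scan times (h)
frac = np.array([0.30, 0.18, 0.10, 0.03])      # fraction of injected activity in the organ

lam = -np.polyfit(t_h[-2:], np.log(frac[-2:]), 1)[0]       # tail decay constant (1/h)
auc = np.sum((frac[1:] + frac[:-1]) / 2 * np.diff(t_h))    # trapezoidal area under curve
residence_time = auc + frac[-1] / lam                      # add analytic tail (h)
print(f"residence time ~ {residence_time:.1f} h")
```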
339 Syndecan-1 as Regulator of Ischemic-Reperfusion Damage Limitation in Experiment
Authors: M. E. Kolpakova, A. A. Jakovleva, L. S. Poliakova, H. El Amghari, S. Soliman, D. R. Faizullina, V. V. Sharoyko
Abstract:
Brain neuroplasticity is associated with blood-brain barrier vascular endothelial proteoglycans and post-stroke microglial activation. The study of the mechanisms by which remote ischemic postconditioning (RC) limits reperfusion injury is of interest due to the effects on functional recovery after cerebral ischemia. The goal of the study is to assess the role of syndecan-1 (SDC-1) in the restriction of ischemic-reperfusion injury in a middle cerebral artery occlusion model in rats using an RC protocol. Randomized controlled trials were conducted. Ischemia was performed by middle cerebral artery occlusion (MCAo) according to Belayev L. (1996) on male Wistar rats (n=87) weighing 250 ± 50 g under general anesthesia (Zoletil 100 and Xylazine 2%). The difference in syndecan-1 (SDC-1) concentration between plasma samples of sham-operated animals and animals with brain ischemia was 30% (30 min MCAo: 41.4* ± 1.3 ng/ml). The SDC-1 concentration in plasma samples of animals with ischemia plus the RC protocol was 112% (30 min MCAo + RC: 67.8** ± 5.8 ng/ml). Calculation of infarction volume in the ischemia group revealed brain injury of 31.97 ± 2.5%; the infarction volume was 13.6 ± 1.3% in the 30 min MCAo + RC group. Tissue swelling in the 30 min MCAo + RC group was 16 ± 2.1%, versus 47 ± 3.3% in the 30 min MCAo group. Correlation analysis showed a high direct correlation between infarct area and muscle strength in the right forelimb (correlation coefficient 0.72) in the 30 min MCAo + RC group, and a very high inverse correlation between infarct area and capillary blood flow in the same group (p < 0.01; r = -0.98). We believe the SDC-1 molecule in blood plasma may act as a potential messenger in the mechanisms restricting ischemic-reperfusion injury. This leads to the infarct-limiting effect of remote ischemic postconditioning and early functional recovery.
Keywords: ischemia, MCAo, remote ischemic postconditioning, syndecan-1
338 Stability Design by Geometrical Nonlinear Analysis Using Equivalent Geometric Imperfections
Authors: S. Fominow, C. Dobert
Abstract:
The present article describes research dealing with the development of equivalent geometric imperfections for the stability design of steel members considering lateral-torsional buckling. The application of these equivalent imperfections takes into account the stiffness-reducing effects of inelasticity and residual stresses, which lead to a reduction of the load-carrying capacity of slender members and structures. This allows the application of a simplified design method performed in three steps: application of equivalent geometric imperfections, determination of internal forces using geometrically non-linear analysis (GNIA), and verification of the cross-section resistance at the most unfavourable location. All three verification steps are closely related and influence the results. The derivation of the equivalent imperfections was carried out in several steps. First, reference lateral-torsional buckling resistances for various rolled I-sections, slenderness grades, load shapes and steel grades were determined, either with geometrically and materially non-linear analysis with geometric imperfections and residual stresses (GMNIA) or, for standard cases, based on the equivalent member method. With the aim of obtaining lateral-torsional buckling resistances identical to the reference resistances from the application of the design method, the required sizes of the equivalent imperfections were derived. For this purpose, a program based on the FEM method was developed. Based on these results, several proposals for the specification of equivalent geometric imperfections have been developed, differing in the shape of the applied equivalent geometric imperfection, the model of the cross-sectional resistance, and the steel grade. The proposed design methods allow a wide range of applications and a reliable calculation of lateral-torsional buckling resistances, as comparisons between the calculated resistances and the reference resistances have shown.
Keywords: equivalent geometric imperfections, GMNIA, lateral-torsional buckling, non-linear finite element analysis
337 Alternative Method of Determining Seismic Loads on Buildings Without Response Spectrum Application
Authors: Razmik Atabekyan, V. Atabekyan
Abstract:
This article discusses a new alternative method for determining seismic loads on buildings, based on the resistance of structures to vibration deformations. The basic principles for determining seismic loads by the spectral method were developed in the 1940s-50s and have since been improved in pursuit of true assessments of seismic effects. The basis of the existing methods for determining seismic loads is the response spectrum, or the dynamicity coefficient β (norms of the RF), neither of which is definitively established. To this day there is no single, universal method for determining seismic loads, and when the norms of different countries are applied, significant discrepancies between the results are obtained. On the other hand, the results of macroseismic surveys of strong earthquakes contradict the principle of calculation based on accelerations: it is well known that on soft soils destruction increases (mainly due to large displacements) even though accelerations decrease. Obviously, seismic actions are transmitted to the building through the foundation, but paradoxically, the existing methods do not even include foundation data; meanwhile, the acceleration of the foundation of a building can differ several times from the acceleration of the ground. During earthquakes each building has its own peculiarities of behavior, depending on the interaction between the soil and the foundations, their dynamic characteristics and many other factors. In this paper we consider a new, alternative method of determining the seismic loads on buildings without the use of a response spectrum. The main conclusions are: 1) Seismic loads are determined at the foundation level, which leads to redistribution and reduction of seismic loads on structures. 2) The proposed method is universal and allows determination of the seismic loads without the use of a response spectrum or any implicit coefficients. 3) Important factors can be taken into account, such as the strength characteristics of the soils, the size of the foundation, the angle of incidence of the seismic ray and others. 4) Existing methods can adequately determine the seismic loads on buildings only for the first mode of vibration, under average soil conditions.
Keywords: seismic loads, response spectrum, dynamic characteristics of buildings, momentum
336 Econophysical Approach on Predictability of Financial Crisis: The 2001 Crisis of Turkey and Argentina Case
Authors: Arzu K. Kamberli, Tolga Ulusoy
Abstract:
Technological developments and the resulting global communication have made the 21st century an era in which large amounts of capital can be moved from one end of the world to the other at the push of a button. As a result, capital inflows have accelerated, bringing with them crisis contagion. Given irrational human behavior, financial crises, whose effects are felt across the whole world, have become a basic problem for countries and have increased researchers' interest in the causes of crises and the periods in which they occur. The complex nature of financial crises, which cannot be explained linearly, has therefore been taken up by the new discipline of econophysics. As is known, although financial crises have prediction mechanisms, there is no definite information. In this context, this study uses the concept of the electric field from electrostatics to develop an early econophysical approach to global financial crises. The aim is to define a model that can act before financial crises, identify financial fragility at an earlier stage, and help public- and private-sector actors, policy makers and economists with an econophysical approach. The 2001 Turkey crisis was assessed with data from the Turkish Central Bank covering 1992 to 2007; for the 2001 Argentina crisis, data were taken from the IMF and the Central Bank of Argentina from 1997 to 2007. As an econophysical method, an analogy is drawn between Gauss's law, used in the calculation of the electric field, and the forecasting of financial crises. Taking advantage of this analogy, which is based on currency movements and money mobility, the concept of Φ (financial flux) is adopted for pre-warning of crises. The Φ (financial flux) values obtained from the formula were analyzed with Matlab software, and in this context the Φ values were confirmed to give pre-warning of the 2001 crises in Turkey and Argentina.
Keywords: econophysics, financial crisis, Gauss's law, physics
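For reference, the physical side of the analogy is Gauss's law; the financial counterpart written below is schematic only, since the abstract does not spell out the authors' exact mapping of field and charge onto currency movements.

```latex
% Gauss's law (physics side of the analogy):
\Phi_E \;=\; \oint_S \mathbf{E}\cdot d\mathbf{A} \;=\; \frac{Q_{\mathrm{enc}}}{\varepsilon_0}
% Schematic financial counterpart (assumed form, not the authors' formula):
\Phi_{\mathrm{fin}} \;\sim\; \oint_S \mathbf{J}_{\mathrm{capital}}\cdot d\mathbf{A}
```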
335 Wildfire-Related Debris-Flow and Flooding Using 2-D Hydrologic Model
Authors: Cheong Hyeon Oh, Dongho Nam, Byungsik Kim
Abstract:
Due to recent climate change, flood damage caused by local floods and typhoons has occurred frequently, and the incidence and intensity of wildfires have greatly increased due to higher temperatures and changes in precipitation patterns. Wildfires cause primary damage, such as loss of forest resources, as well as secondary disasters, such as landslides, floods, and debris flow. In many countries around the world, damage and economic losses arise from this secondary damage as well as from the direct effects of forest fires. Therefore, in this study, the rainfall-runoff model S-RAT was used for the areas of Gangneung and Goseong affected by the wildfires of April 2019, in which the stability of vegetation and soil was destroyed. Rainfall data from Typhoon Rusa were used in the S-RAT model, and flood discharge was calculated according to changes in land cover before and after the wildfire damage. The results showed that flood discharge increased significantly due to the land cover changes. As an increase in flood discharge increases the likelihood of debris flow and the extent of damage, the debris flow height and range were calculated before and after the forest fire using RAMMS. The analysis showed that the height and extent of damage increased after the wildfire, but the result was underestimated owing to the characteristics of the DEM and the use of maximum flood discharge in the RAMMS model. This research was supported by a grant (2017-MOIS31-004) from the Fundamental Technology Development Program for Extreme Disaster Response funded by the Korean Ministry of Interior and Safety (MOIS). This work was also financially supported by the Ministry of the Interior and Safety under the 'Human resource development project in disaster management'.
Keywords: wildfire, debris flow, land cover, rainfall-runoff model S-RAT, RAMMS, height
334 The Impact of Land Cover Change on Stream Discharges and Water Resources in Luvuvhu River Catchment, Vhembe District, Limpopo Province, South Africa
Authors: P. M. Kundu, L. R. Singo, J. O. Odiyo
Abstract:
The Luvuvhu River catchment in South Africa experiences floods resulting from heavy rainfall, with intensities exceeding 15 mm per hour, associated with the Inter-Tropical Convergence Zone (ITCZ). The generation of runoff is triggered by the rainfall intensity and soil moisture status. In this study, remote sensing and GIS techniques were used to analyze the hydrologic response to land cover changes. Runoff was calculated as the product of the net precipitation and a curve number coefficient. It was then routed using the Muskingum-Cunge method, with a diffusive wave transfer model that enabled the calculation of response functions between start and end points. Flood frequency analysis was carried out using theoretical probability distributions. Spatial data on land cover were obtained from multi-temporal Landsat images, while data on rainfall, soil type, runoff and stream discharges were obtained by direct measurements in the field and from the Department of Water. A digital elevation model was generated from contour maps available at http://www.ngi.gov.za. The results showed that land cover changes had negatively impacted the hydrology of the catchment. Peak discharges in the whole catchment were noted to have increased by at least 17% over the period, while flood volumes increased by at least 11% over the same period. The flood time to peak showed a decreasing trend, in the range of 0.5 to 1 hour, over the years. The synergy between remotely sensed digital data and GIS for land surface analysis and modeling was realized, and it was therefore concluded that hydrologic modeling has potential for determining the influence of changes in land cover on the hydrologic response of the catchment.
Keywords: catchment, digital elevation model, hydrological model, routing, runoff
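A minimal sketch of Muskingum-type routing of a runoff hydrograph is shown below; the Muskingum-Cunge variant used in the study additionally derives K and X from channel properties, whereas here they are simply assumed, and the hydrograph is illustrative.

```python
# Minimal sketch: Muskingum routing of an inflow hydrograph through one
# reach. K (travel time) and X (weighting factor) are assumed here; the
# Muskingum-Cunge method of the study derives them from channel data.
def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom
    out = [inflow[0]]
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out

hydrograph = [10, 35, 96, 163, 204, 210, 160, 110, 70, 40, 22, 14, 10]  # m^3/s, illustrative
print([round(q, 1) for q in muskingum_route(hydrograph)])
```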
333 The Types of Annuities with Flexible Premium
Authors: Deniz Ünal Özpalamutcu, Burcu Altman
Abstract:
Actuarial science uses mathematics, statistics and financial knowledge when analyzing the financial impacts of uncertainty, risk, insurance and pension-related issues. In other words, it deals with the likelihood of potential risks, their financial impacts and, especially, the financial measures against them. Handling these measures requires long-term payments and investments, so it is inevitable to plan periodic payments at equal time intervals, considering also the changing value of money over time. Such a series of payments made at specific intervals of time is called an annuity, or rant. In the literature, rants are classified based on their start and end dates, start times, payment times, and payment amounts or frequency. The classification of rants based on payment amounts distinguishes constant, descending and ascending payment methods. The literature on handling such annuities is very limited, yet in daily life, and especially in today's world where economic issues have gained prominence, it is crucial to use variable annuity methods in line with the demands of customers. In this study, the types of annuities with flexible payments are discussed; in other words, we focus on annuities in which the payment of each period is obtained by increasing the previous period's payment by a certain percentage. While studying this problem, formulas were created for both start-of-period and end-of-period payments, for cash value and accumulated value. The problem of annuities (rants) in which each period's payment is increased over the previous period's payment at a rate r was analyzed; the cash value and accumulated value of this problem were studied separately for period-start and period-end payments, and their relations were expressed by formulas.
Keywords: actuaria, annuity, flexible payment, rant
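A minimal sketch of the core quantity described above, the value of an annuity whose payment grows by a fixed rate each period, is given below for period-end payments; the formulas are the standard growing-annuity results and the numbers are illustrative.

```python
# Minimal sketch: present and accumulated value of an annuity whose
# payments are p1, p1*(1+g), ..., p1*(1+g)^(n-1) at period ends,
# valued at interest rate i. Standard growing-annuity formulas.
def growing_annuity_pv(p1, i, g, n):
    if abs(i - g) < 1e-12:                    # limiting case i == g
        return n * p1 / (1.0 + i)
    return p1 * (1.0 - ((1.0 + g) / (1.0 + i)) ** n) / (i - g)

def growing_annuity_av(p1, i, g, n):
    """Accumulated value at time n of the same payment stream."""
    return growing_annuity_pv(p1, i, g, n) * (1.0 + i) ** n

print(round(growing_annuity_pv(1000, 0.08, 0.03, 10), 2))   # ~7548.4
print(round(growing_annuity_av(1000, 0.08, 0.03, 10), 2))
```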
332 Capacity Enhancement for Agricultural Workers in Mangosteen Product
Authors: Cholpassorn Sitthiwarongchai, Chutikarn Sriviboon
Abstract:
The two primary objectives of this research were (1) to examine the current knowledge and actual circumstances of agricultural workers regarding mangosteen product processing; and (2) to analyze and evaluate ways to develop the capacity for mangosteen product processing. The population of this study was 15,125 people who work in the agricultural sector, in this context mangosteen production, in the eastern part of Thailand, comprising Chantaburi Province, Rayong Province, Trad Province and Pracheenburi Province. The sample size, based on Yamane's calculation with 95% reliability, was therefore 392 samples. A mixed method was employed, including a questionnaire and a focus group discussion following the Connoisseurship Model, in order to collect quantitative and qualitative data. Key informants were used in the focus group, including agricultural business owners, academics in agro-food processing, local academics, local community development staff, the OTOP subcommittee, and representatives of agro-processing industry professional organizations. The study found that the majority of the respondents agreed at a high level (on a five-point rating scale) with most of the variables of knowledge management in agro-food processing. Regarding the current knowledge and actual circumstances of the agricultural workforce in mangosteen product processing, the respondents mostly agreed at a high level with the 7 variables established. A guideline for developing the body of knowledge to enhance the capacity of agricultural workers in mangosteen product processing was delivered in the focus group discussion. The discussion finally contributed to the idea of producing manuals for mangosteen product processing methods, with 4 products chosen: (1) mangosteen soap, (2) mangosteen juice, (3) mangosteen toffee, and (4) mangosteen preserves or jam.
Keywords: capacity enhancement, agricultural workers, mangosteen product processing, marketing management
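For reference, Yamane's sample size formula is n = N / (1 + N·e²); the sketch below applies it to the stated population. The small difference from the reported 392 may reflect rounding up or an adjustment for expected non-response, which the abstract does not detail.

```python
# Yamane's formula n = N / (1 + N * e^2), with e = 0.05 for 95% reliability.
import math

def yamane(N, e=0.05):
    return math.ceil(N / (1.0 + N * e ** 2))

print(yamane(15125))   # ~390 for the 15,125-person population
```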
331 Interacting with Multi-Scale Structures of Online Political Debates by Visualizing Phylomemies
Authors: Quentin Lobbe, David Chavalarias, Alexandre Delanoe
Abstract:
The ICT revolution has given birth to an unprecedented world of digital traces and has impacted a wide range of knowledge-driven domains such as science, education and policy making. Nowadays, we are fueled daily by unlimited flows of articles, blogs, messages, tweets, etc. The internet itself can thus be considered an unsteady hyper-textual environment where websites emerge and expand every day. But there are structures inside knowledge: a given text can always be studied in relation to others or in light of a specific socio-cultural context. By way of their textual traces, human beings are calling to each other: hypertext citations, retweets, vocabulary similarity, etc. We are in fact the architects of a giant web of elements of knowledge whose structures and shapes convey their own information. The global shapes of these digital traces represent a source of collective knowledge, and the question of their visualization remains an open challenge. How can we explore, browse and interact with such shapes? In order to navigate across these growing constellations of words and texts, interdisciplinary innovations are emerging at the crossroads of the social and computational sciences. In particular, complex systems approaches now make it possible to reconstruct the hidden structures of textual knowledge by means of multi-scale objects of research such as semantic maps and phylomemies. Phylomemy reconstruction is a generic method related to the co-word analysis framework. Phylomemies aim to reveal the temporal dynamics of large corpora of textual content by performing inter-temporal matching on extracted knowledge domains in order to identify their conceptual lineages. This study addresses the question of visualizing the global shapes of online political discussions related to the French presidential and legislative elections of 2017. We aim to build phylomemies on top of a dedicated collection of thousands of French political tweets enriched with archived contemporary news web articles; our goal is to reconstruct the temporal evolution of the online debates fueled by each political community during the elections. To that end, we introduce an iterative data exploration methodology implemented and tested within the free software Gargantext, where we combine synchronic and diachronic axes of visualization to reveal the dynamics of our corpora of tweets and web pages as well as their inner syntagmatic and paradigmatic relationships. In doing so, we aim to provide researchers with innovative methodological means to explore online semantic landscapes in a collaborative and reflective way.
Keywords: online political debate, French election, hyper-text, phylomemy
330 Use of FWD in Determination of Bonding Condition of Semi-Rigid Asphalt Pavement
Authors: Nonde Lushinga, Jiang Xin, Danstan Chiponde, Lawrence P. Mutale
Abstract:
In this paper, a falling weight deflectometer (FWD) was used to determine the bonding condition of a newly constructed semi-rigid base pavement. Using the Evercalc back-calculation computer program, it was possible to quickly and accurately determine the structural condition of the pavement system from FWD test data. The bonding condition of the pavement layers was determined from shear stresses and strains (relative horizontal displacements) at the interfaces of the pavement layers, calculated with the BISAR 3.0 pavement computer program. Thus, by using non-linear layered elastic theory, a pavement structure is analysed in the same way as other civil engineering structures. From non-destructive FWD testing, the bonding condition of the pavement layers was quantified from the soundly based principles of Goodman's constitutive model, thereby producing the shear reaction modulus (Ks), which gives an indication of the bonding state of the pavement layers. Furthermore, the Tack coat Failure Ratio (TFR), which has long been used in pavement evaluation in the USA, was also used in the study in order to validate the results. According to research [39], the interface between two asphalt layers is assessed using the Tack coat Failure Ratio (TFR), the ratio of the stiffness of the top asphalt layer over the stiffness of the second asphalt layer (E1/E2) in a slipped pavement. TFR gives an indication of the strength of the tack coat, which is the main determinant of interlayer slipping. The criterion is that if the interface is in a state of full bond, TFR is greater than or equal to 1, while a TFR of 0 means full slip. The results of the calculations showed that the TFR value was 1.81, which re-affirmed that the pavement under study was in a state of full bond, because the value was greater than 1. It was concluded that the FWD can be used to determine the bonding condition of existing and newly constructed pavements.
Keywords: falling weight deflectometer (FWD), back-calculation, semi-rigid base pavement, shear reaction modulus
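A minimal sketch of the TFR check described above follows; the layer moduli are illustrative values chosen to reproduce the reported ratio of 1.81, not the study's back-calculated output.

```python
# Minimal sketch: Tack coat Failure Ratio TFR = E1/E2 from back-calculated
# layer moduli. Criterion: TFR >= 1 indicates full bond, TFR = 0 full slip.
# Moduli are illustrative, chosen to reproduce the reported TFR of 1.81.
def bonding_condition(e1_mpa, e2_mpa):
    tfr = e1_mpa / e2_mpa
    if tfr >= 1.0:
        return f"TFR = {tfr:.2f}: full bond"
    if tfr == 0.0:
        return "TFR = 0: full slip"
    return f"TFR = {tfr:.2f}: partial bond"

print(bonding_condition(3620.0, 2000.0))   # TFR = 1.81
```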
329 Marine Fishing and Climate Change: A Chinese Perspective on Fisheries Economic Development and Greenhouse Gas Emissions
Authors: Yidan Xu, Pim Martens, Thomas Krafft
Abstract:
Marine fishing, an energy-intensive activity, directly emits greenhouse gases through fuel combustion, making it a significant contributor to oceanic greenhouse gas (GHG) emissions and worsening climate change. China is the world's second-largest economy and the top emitter of GHG emissions, and it carries a significant energy conservation and emission reduction burden. However, increasing GHG emissions from marine fishing are an easily overlooked but essential issue in China. This study offers a diverse perspective by integrating total carbon emissions, carbon intensity, and per capita carbon emissions as indicators in the calculation and discussion. To better understand the relationship between GHG emissions and the gross marine fishery product (GFP) and its influencing factors in Chinese marine fishing, a comprehensive framework is developed by combining the environmental Kuznets curve, the Tapio elasticity index, and a decomposition model. The results indicated that: (1) GHG emissions increased from 16.479 to 18.601 million tons over 2001-2020, with trawlers and gillnetters the main sources among fishing operations. (2) Total carbon emissions (TC) and CI presented the same decline as GHG emissions, while per capita carbon emissions (PC) displayed an uptrend. (3) GHG emissions and the gross marine fishery product (GFP) presented an inverted U-shaped relationship in China; the turning point came in the 13th Five-Year Plan period (2016-2020). (4) Most provinces strongly decoupled GFP and CI, but PC and TC need more effort to decouple. (5) GHG emissions were driven upward by industry structure, though carbon intensity and industry scale aided GHG emission reduction. (6) Compared with TC and PC, CI was relatively affected by COVID-19 in 2020; the rise in fish and seafood prices during COVID-19 boosted the GFP.
Keywords: marine fishing economy, greenhouse gas emission, fishery management, green development
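A minimal sketch of the Tapio elasticity used in the framework above: e = (ΔC/C) / (ΔGFP/GFP), with a simplified decoupling classification. The emission and GFP figures are illustrative, not the study's data.

```python
# Minimal sketch: Tapio decoupling elasticity between emissions C and
# gross marine fishery product GFP. Values are illustrative.
def tapio_elasticity(c0, c1, g0, g1):
    return ((c1 - c0) / c0) / ((g1 - g0) / g0)

def classify(e, dc, dg):
    # simplified Tapio states for a growing GFP (dg > 0)
    if dg > 0 and dc < 0:
        return "strong decoupling"
    if dg > 0 and 0 <= e < 0.8:
        return "weak decoupling"
    return "coupling / no decoupling"

c0, c1 = 17.0, 16.4        # emissions, Mt CO2-eq (illustrative)
g0, g1 = 180.0, 210.0      # GFP (illustrative units)
e = tapio_elasticity(c0, c1, g0, g1)
print(f"e = {e:.2f} -> {classify(e, c1 - c0, g1 - g0)}")
```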
328 Dynamic Test for Stability of Columns in Sway Mode
Authors: Elia Efraim, Boris Blostotsky
Abstract:
Testing of columns in sway mode is performed in order to determine the maximum allowable load, limited by plastic deformations of the columns or their end connections, and the critical load, limited by column stability. The motivation to determine an accurate value of the critical force arises from its uses: the critical load is the maximum allowable load for a given column configuration and can be used as a criterion of perfection; it is used in calculations prescribed by standards for the design of structural elements under the combined action of compression and bending; and it is used for verification of theoretical stability analyses for various end conditions of columns. In the present work, a new non-destructive method for the determination of a column's critical buckling load in sway mode is proposed. The method allows measurements to be performed during tests under loads that exceed the column's critical load without loss of stability. The possibility of such loading is achieved by the structure of the loading system, which is built as a frame with a rigid girder; one of the columns is the tested column and the other is an additional two-hinged strut. Loading of the frame is carried out by a flexible traction element attached to the girder. The load applied to the tested column can reach values that exceed the critical load through the choice of the parameters of the traction element and the additional strut. The system's lateral stiffness and the column's critical load are obtained by the dynamic method. The experiment planning and the comparison between the experimental and theoretical values were performed based on the developed dependency of the lateral stiffness of the system on the vertical load, taking into account the semi-rigid connections of the column's ends. Agreement between the obtained results was established. The method can be used for testing real full-size columns in industrial conditions.
Keywords: buckling, columns, dynamic method, end-fixity factor, sway mode
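One common reading of such a dynamic method (an assumption here, since the abstract does not state the form of the stiffness-load dependency) is that the measured lateral stiffness falls approximately linearly with the vertical load, so the critical load can be estimated by extrapolating the fitted line to zero stiffness:

```python
# Sketch under an assumed linear K(P): fit lateral stiffness K, obtained
# from vibration tests at several vertical loads P, and extrapolate to
# K = 0 to estimate the critical load. Measured pairs are illustrative.
import numpy as np

P = np.array([0.0, 10.0, 20.0, 30.0])      # applied vertical loads (kN)
K = np.array([52.0, 39.5, 26.8, 14.1])     # lateral stiffness (kN/m) from frequency tests

slope, intercept = np.polyfit(P, K, 1)
P_cr = -intercept / slope                  # load at which stiffness vanishes
print(f"estimated critical load ~ {P_cr:.1f} kN")
```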
327 Computational Aided Approach for Strut and Tie Model for Non-Flexural Elements
Authors: Mihaja Razafimbelo, Guillaume Herve-Secourgeon, Fabrice Gatuingt, Marina Bottoni, Tulio Honorio-De-Faria
Abstract:
The challenge of this research is to provide engineers with a robust, semi-automatic method for calculating optimal reinforcement for massive structural elements. In the absence of such a digital post-processing tool, design office engineers make intensive use of plate modelling, for which automatic post-processing is available; plate models in massive areas, however, produce conservative results. In addition, the theoretical foundations of automatic post-processing tools for reinforcement are those of reinforced concrete beam sections. As long as there is no suitable alternative for the automatic post-processing of plates, optimal modelling and a significant improvement in the constructability of massive areas cannot be expected. The strut-and-tie method is commonly used in civil engineering, but its result remains very subjective to the calculating engineer. The tool developed here will support engineers in their choice of structure. The method implemented consists of defining a ground structure built on the basis of the principal stresses resulting from an elastic analysis of the structure, and then optimizing this structure according to the fully stressed design method. The first results yield a coherent initial network of connecting struts and ties, consistent with the cases encountered in the literature. The evolution of the tool will then make it possible to adapt the obtained latticework to the cracking states resulting from the loads applied during the life of the structure, whether cyclic or dynamic. In addition, under the constructability constraint, a final reinforcement layout with an orthogonal arrangement and regulated spacing will be implemented in the tool.
Keywords: strut and tie, optimization, reinforcement, massive structure
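A minimal sketch of one fully-stressed-design resizing step on a ground structure is given below; the member forces, the allowable stress and the minimum retained area are illustrative assumptions, and in the actual tool the forces would be recomputed by finite element analysis at each iteration.

```python
# Minimal sketch: one fully-stressed-design step. Each member area is
# resized so its stress |N|/A reaches the allowable value; a small
# minimum area keeps pruned members available for later re-analysis.
def fsd_resize(forces_n, sigma_allow_mpa=435.0, a_min_mm2=1e-3):
    return [max(abs(n) / sigma_allow_mpa, a_min_mm2) for n in forces_n]

forces = [52000.0, -8000.0, 31000.0]    # member forces (N) from an elastic analysis, illustrative
areas = fsd_resize(forces)              # areas in mm^2; re-analyze and repeat until converged
print([round(a, 1) for a in areas])
```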
326 Value Index, a Novel Decision Making Approach for Waste Load Allocation
Authors: E. Feizi Ashtiani, S. Jamshidi, M.H Niksokhan, A. Feizi Ashtiani
Abstract:
Waste load allocation (WLA) policies may use multi-objective optimization methods to find the most appropriate and sustainable solutions. These usually intend to simultaneously minimize two criteria, total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, need to be minimized as well, more binary optimizations must be introduced through different scenarios. In order to reduce the calculation steps, this study presents the value index as an innovative decision-making approach. Since the value index contains both the environmental violation and the treatment costs, it can be maximized simultaneously with the equity index. This implies that the definition of different scenarios for environmental violations is no longer required; furthermore, the solution is not necessarily the point with minimized total costs or environmental violations. This idea is tested on the Haraz River, in the north of Iran. Here, the dissolved oxygen (DO) level of the river is simulated by the Streeter-Phelps equation in MATLAB software. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. First, the trade-off curves of TC-EV and TC-inequity are plotted separately, as in the conventional approach. In the second, the value-equity curve is derived. The comparative results show that the solutions are in a similar range of inequity with lower total costs; this is due to the freedom in environmental violation attained by the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This reduces the process of achieving the best solutions and may suggest better classifications for scenario definition. It is also concluded that decision makers would do better to focus on the value index, weighting its contents to find the most sustainable alternatives based on their requirements.
Keywords: waste load allocation (WLA), value index, multi-objective particle swarm optimization (MOPSO), Haraz River, equity
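For reference, the Streeter-Phelps dissolved-oxygen sag used in the simulation has the closed form D(t) = kd·L0/(ka - kd)·(e^(-kd·t) - e^(-ka·t)) + D0·e^(-ka·t); the sketch below evaluates it with illustrative rate constants, not the Haraz River calibration.

```python
# Minimal sketch: Streeter-Phelps DO sag curve. Rate constants, initial
# BOD and deficit are illustrative, not the Haraz River calibration.
import numpy as np

kd, ka = 0.35, 0.70          # deoxygenation / reaeration rates (1/day)
L0, D0 = 12.0, 1.0           # initial BOD and DO deficit (mg/L)
DO_sat = 9.0                 # saturation DO (mg/L)

t = np.linspace(0.0, 10.0, 101)   # travel time (days)
D = kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)
DO = DO_sat - D
print(f"minimum DO = {DO.min():.2f} mg/L at t = {t[DO.argmin()]:.1f} d")
```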
325 The Effect of Artificial Intelligence on Electric Machines and Welding
Authors: Mina Malak Zakaria Henin
Abstract:
The finite element analysis of magnetic fields in electromagnetic devices shows that machine cores experience different flux patterns, consisting of alternating and rotating fields. The rotating fields are generated in configurations varying between circular and elliptical, with different ratios between the major and minor axes of the flux locus. Experimental measurements on electrical steel under different flux patterns reveal different magnetic losses in the samples under test. Therefore, electric machines require special attention during the core loss calculation process in order to take the flux patterns into account. In this study, a circular rotational single sheet tester is employed to measure the core losses in an electrical steel sample of M36G29. The sample was exposed to alternating fields, circular fields, and elliptical fields with axis ratios of 0.2, 0.4, 0.6 and 0.8. The measured data were applied to a 6-4 switched reluctance motor at three frequencies of interest to industry: 60 Hz, 400 Hz, and 1 kHz. The results reveal the high margin of error that can arise in the loss calculations if the flux pattern issue is overlooked. The error in different parts of the machine associated with neglecting the flux patterns may be around 50%, 10%, and 2% at 60 Hz, 400 Hz, and 1 kHz, respectively. Future work will focus on the optimization of the machine's geometrical shape, which has a primary effect on the flux pattern, in order to decrease the magnetic losses in machine cores.
Keywords: converters, electric machines, MEA (more electric aircraft), PES (power electronics systems), synchronous machine, vector control, multi-machine/multi-inverter, matrix inverter, railway traction, alternating core losses, finite element analysis, rotational core losses
324 A Serious Game to Upgrade the Learning of Organizational Skills in Nursing Schools
Authors: Benoit Landi, Hervé Pingaud, Jean-Benoit Culie, Michel Galaup
Abstract:
Serious games have been widely disseminated in the field of digital learning. They have proved their utility in improving skills through virtual environments that simulate the field in which new competencies have to be developed and assessed. This paper describes how we created CLONE, a serious game whose purpose is to help nurses create an efficient work plan in a hospital care unit. In CLONE, the number of patients to take care of is similar to the reality of the job, going far beyond what is currently practiced in nursing school classrooms. This similarity with the operational field increases proportionally the number of activities to be scheduled. Moreover, the team is very often composed of regular nurses and nurse assistants who must share the work in accordance with regulatory obligations. Therefore, on the one hand, building a short-term plan is a complex task with a large amount of data to deal with, and on the other hand, good clinical practices have to be systematically applied. We present how the reference planning was defined by addressing an optimization problem formulation using the expertise of teachers. This formulation ensures the gameplay feasibility of the scenario that was produced and enhanced throughout the game design process. It was also crucial to steer players toward a specific gaming strategy. As one of our most important learning outcomes is a clear understanding of the workload concept, its factual calculation for each caregiver over time and its inclusion in the nurse's reasoning during plan elaboration are focal points. We demonstrate how to modify the game scenario to create a digital environment in which these somewhat abstract principles can be understood and applied. Finally, we report on an experience with a pilot involving a thousand undergraduate nursing students.
Keywords: care planning, workload, game design, hospital nurse, organizational skills, digital learning, serious game
323 Numerical Investigation into Capture Efficiency of Fibrous Filters
Authors: Jayotpaul Chaudhuri, Lutz Goedeke, Torsten Hallenga, Peter Ehrhard
Abstract:
Purification of gases from aerosols or airborne particles via filters is widely applied in industry and in our daily lives. This separation, especially in the micron and submicron size range, is a necessary step to protect the environment and human health. Fibrous filters are often employed due to their low cost and high efficiency. For designing any filter, the two most important performance parameters are capture efficiency and pressure drop. Since capture efficiency is directly proportional to the pressure drop, which leads to higher operating costs, a detailed investigation of the separation mechanism is required to optimize filter design, i.e., to achieve high capture efficiency with a low pressure drop. Therefore, a two-dimensional flow simulation around a single fiber, using Ansys CFX and Matlab, is used to gain insight into the separation process. Instead of simulating a solid fiber, the present Ansys CFX model uses a fictitious domain approach for the fiber by implementing a momentum loss model. This approach was chosen to avoid creating a new mesh for different fiber sizes, thereby saving the time and effort of re-meshing. In a first step, only the flow of the continuous fluid around the fiber is simulated in Ansys CFX; the flow field data are then extracted and imported into Matlab, and the particle trajectories are calculated in a Matlab routine. This calculation is a Lagrangian, one-way coupled approach with all relevant forces acting on the particle. The key parameters for the simulation in both Ansys CFX and Matlab are the porosity ε, the diameter ratio of particle and fiber D, the fluid Reynolds number Re, the particle Reynolds number Rep, the Stokes number St, the Froude number Fr, and the density ratio of fluid and particle ρf/ρp. The simulation results were then compared to single fiber theory from the literature.
Keywords: BBO equation, capture efficiency, CFX, Matlab, fibrous filter, particle trajectory
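A minimal sketch computing the dimensionless groups listed above for one fluid-particle-fiber combination is given below; the property values are illustrative, and the Stokes number uses the usual fibrous-filtration form St = ρp·dp²·U / (18·μ·df).

```python
# Minimal sketch: dimensionless groups for single-fiber filtration.
# Property values are illustrative, not the study's simulation cases.
import math

rho_f, mu = 1.2, 1.8e-5       # air density (kg/m^3) and viscosity (Pa s)
rho_p = 1000.0                # particle density (kg/m^3)
d_p, d_f = 1.0e-6, 10.0e-6    # particle and fiber diameters (m)
U = 0.1                       # face velocity (m/s)
g = 9.81

Re = rho_f * U * d_f / mu                     # fluid Reynolds number (fiber-based)
Re_p = rho_f * U * d_p / mu                   # particle Reynolds number
St = rho_p * d_p**2 * U / (18.0 * mu * d_f)   # Stokes number
Fr = U / math.sqrt(g * d_f)                   # Froude number
D = d_p / d_f                                 # diameter ratio
print(f"Re={Re:.3f}, Rep={Re_p:.4f}, St={St:.3f}, Fr={Fr:.1f}, D={D:.2f}")
```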
322 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion
Authors: Omran M. Kenshel, Alan J. O'Connor
Abstract:
Estimating the service life of reinforced concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers relied largely on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods of estimation are needed to aid in making reliable and more accurate predictions of the service life of RC structures, in order to direct funds to the bridges found to be the most critical. The criticality of a structure can be considered either from the structural capacity (i.e. ultimate limit state) or from the serviceability viewpoint, whichever is adopted; this paper considers the service life of the structure only from the structural capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, a probabilistic approach is most suited. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the reliability (i.e. probability of failure) of the structure under consideration. In this paper the authors used their own experimental data for the correlation length (CL) of the most important deterioration parameters. The CL is a parameter of the correlation function (CF) by which the spatial fluctuation of a given deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load-carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability (SV) is only evident if the reliability of the structure is governed by flexural failure rather than by shear failure.
Keywords: chloride-induced corrosion, Monte Carlo simulation, reinforced concrete, spatial variability
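A minimal sketch of the Monte Carlo reliability estimate described above: sample a resistance R and a load effect S, and count realizations with R - S < 0. The distributions and parameters are illustrative, not the girder's; in the study, R would additionally degrade with chloride-induced corrosion over time.

```python
# Minimal sketch: Monte Carlo estimate of the probability of failure
# Pf = P(R - S < 0). Distributions and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
R = rng.lognormal(mean=np.log(900.0), sigma=0.12, size=n)   # capacity (kN m)
S = rng.normal(loc=550.0, scale=80.0, size=n)               # load effect (kN m)

pf = np.mean(R - S < 0.0)
print(f"Pf ~ {pf:.2e}")
```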
Procedia PDF Downloads 473321 Cost Overrun in Construction Projects
Authors: Hailu Kebede Bekele
Abstract:
Construction delays occur when project events do not take place at the expected time, due to causes related to the client, the consultant, and the contractor. Delay is a major cause of cost overrun, which leads to poor project efficiency. The difference between the cost at completion and the originally estimated cost is known as cost overrun. Cost overruns are not simple issues that can be neglected; more attention should be given to preventing organizations from failing and financial expenses from escalating. The reasons raised in different studies show that the problem may arise in construction projects due to errors in budgeting, unfavorable weather conditions, inefficient machinery, and extravagance. This study focuses on mega projects, whose pace can significantly change the cost overrun calculation. Fifteen mega projects were identified to study the cost overrun problem on site. The contractor, the consultant, and the client are the principal stakeholders in mega projects, and 20 people from each sector were selected to participate in the investigation of current mega construction projects. The main objective of the study is to prioritize the major causes of the cost overrun problem. The methodology employed is qualitative, rating the causes of construction project cost overrun; interviews, open-ended and closed-ended questionnaires, group discussions, and rating methods were used. The results show that design mistakes, shortage of skilled labor, payment delays, old equipment, poor scheduling, weather conditions, transportation, inflation, order variations, market price fluctuations, and people's attitudes and philosophies are the principal causes of cost overrun that degrade project performance. Organizations should follow the scheduled activities to keep the project moving forward positively. Keywords: cost overrun, delay, mega projects, design
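For the quantitative side of such a study, the overrun itself and the ranking of rated causes are simple calculations. The sketch below, with invented ratings and cause names, shows both; the Relative Importance Index used for ranking is a common choice in questionnaire-based construction studies, though this abstract does not state which index the authors applied.

```python
# Cost overrun as defined above: (cost at completion - estimated cost) / estimated cost.
estimated, final = 120.0, 157.0          # project cost in millions (assumed values)
overrun_pct = 100.0 * (final - estimated) / estimated
print(f"Cost overrun: {overrun_pct:.1f}%")

# Relative Importance Index: RII = sum(ratings) / (A * N),
# where A is the highest possible rating and N the number of respondents.
A = 5
ratings = {                              # hypothetical Likert-scale responses
    "design mistakes":     [5, 4, 5, 4, 5, 3],
    "payment delays":      [4, 4, 5, 3, 4, 4],
    "market price swings": [3, 4, 3, 4, 3, 3],
    "weather conditions":  [2, 3, 2, 3, 3, 2],
}
rii = {cause: sum(r) / (A * len(r)) for cause, r in ratings.items()}
for cause, value in sorted(rii.items(), key=lambda kv: -kv[1]):
    print(f"{cause:22s} RII = {value:.2f}")
```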
Procedia PDF Downloads 62320 Teaching Academic Writing for Publication: A Liminal Threshold Experience Towards Development of Scholarly Identity
Authors: Belinda du Plooy, Ruth Albertyn, Christel Troskie-De Bruin, Ella Belcher
Abstract:
In the academy, scholarliness or intellectual craftsmanship is considered the highest level of achievement, culminating in consistent, successful publication in impactful, peer-reviewed journals and books. Scholarliness implies rigorous methods, systematic exposition, in-depth analysis and evaluation, and the highest level of critical engagement and reflexivity. However, being a scholar does not happen automatically when one becomes an academic or completes graduate studies. A graduate qualification is an indication of one's level of research competence but does not necessarily prepare one for the type of scholarly writing for publication required after a postgraduate qualification has been conferred. Scholarly writing for publication requires a high-level skillset and a specific mindset, which must be intentionally developed. The rite of passage to becoming a scholar is an iterative process with liminal spaces, thresholds, transitions, and transformations. The journey from researcher to published author is often fraught with rejection, insecurity, and disappointment, and requires resilience and tenacity from those who eventually triumph. It cannot be achieved without support, guidance, and mentorship. In this article, the authors use collective auto-ethnography (CAE) to describe the phases and types of liminality encountered during the liminal journey toward scholarship. The authors speak as long-time facilitators of Writing for Academic Publication (WfAP) capacity development events (training workshops and writing retreats) presented at South African universities. Their WfAP facilitation practice is structured around experiential learning principles that allow them to act as critical reading partners and reflective witnesses for the writer-participants of their WfAP events. They identify three essential facilitation features for effectively holding a generative, liminal, and transformational writing space for novice academic writers, enabling their safe passage through the various liminal spaces encountered during their scholarly development journey. These features are that facilitators should be agents of disruption and liminality while also guiding writers through these liminal spaces; that there should be a sense of mutual trust and respect, shared responsibility, and accountability for writers to produce publication-worthy scholarly work; and that this can only be accomplished with the continued application of high levels of sensitivity and discernment by WfAP facilitators. These are key features of successful WfAP scholarship training events, where focused, individual input triggers personal and professional transformational experiences, which in turn translate into high-quality scholarly outputs. Keywords: academic writing, liminality, scholarship, scholarliness, threshold experience, writing for publication
Procedia PDF Downloads 44319 Validation of the Formula for Air Attenuation Coefficient for Acoustic Scale Models
Authors: Katarzyna Baruch, Agata Szelag, Aleksandra Majchrzak, Tadeusz Kamisinski
Abstract:
The methodology for measuring the sound absorption coefficient in scale models is based on the ISO 354 standard. The measurement is realised indirectly: the coefficient is calculated from the reverberation time of an empty chamber as well as of the chamber with an inserted sample. It is crucial to maintain stable atmospheric conditions during both measurements. Possible differences may be corrected based on the formulas for the atmospheric attenuation coefficient α given in ISO 9613-1. Model studies require scaling particular factors in compliance with specified characteristic numbers. For absorption coefficient measurement, these are, for example, the frequency range and the value of the attenuation coefficient m. Thanks to the capabilities of modern electroacoustic transducers, it is no longer a problem to scale the frequencies, which have to be proportionally higher. However, it may be problematic to reduce the values of the attenuation coefficient, which in practice is achieved by drying the air down to a defined relative humidity. Despite the change of frequency range and relative air humidity, the ISO 9613-1 standard still allows calculation of the correction for small differences in the atmospheric conditions in the chamber between the measurements. The paper discusses a number of theoretical analyses and experimental measurements performed in order to check the consistency between the values of the attenuation coefficient calculated from the formulas given in the standard and those obtained by measurement. The authors performed measurements of reverberation time in a chamber built at 1/8 scale, in the correspondingly scaled frequency range, i.e. 800 Hz - 40 kHz, and at different values of relative air humidity (40% down to 5%). Based on the measurements, empirical values of the attenuation coefficient were calculated and compared with theoretical ones. In general, the values correspond with each other, but for high frequencies and low values of relative air humidity the differences are significant. Those discrepancies may directly influence the measured values of the sound absorption coefficient and cause errors. Therefore, the authors made an effort to determine a correction minimizing the described inaccuracy. Keywords: air absorption correction, attenuation coefficient, dimensional analysis, model study, scaled modelling
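The correction hinges on the pure-tone atmospheric attenuation formula of ISO 9613-1, which the sketch below implements; the constants are transcribed from the standard as recalled here and should be checked against the current edition before use.

```python
import math

def alpha_iso9613(f, T=293.15, hr=50.0, pa=101.325):
    """Pure-tone atmospheric attenuation coefficient alpha in dB/m (ISO 9613-1).
    f: frequency in Hz, T: temperature in K, hr: relative humidity in %,
    pa: ambient pressure in kPa."""
    pr, T0, T01 = 101.325, 293.15, 273.16
    # Molar concentration of water vapour (%), via the saturation vapour pressure
    psat = pr * 10.0 ** (-6.8346 * (T01 / T) ** 1.261 + 4.6151)
    h = hr * psat / pa
    # Relaxation frequencies of oxygen and nitrogen, Hz
    frO = (pa / pr) * (24.0 + 4.04e4 * h * (0.02 + h) / (0.391 + h))
    frN = (pa / pr) * (T / T0) ** -0.5 * (
        9.0 + 280.0 * h * math.exp(-4.170 * ((T / T0) ** (-1.0 / 3.0) - 1.0)))
    # Classical absorption plus the two vibrational relaxation terms
    return 8.686 * f ** 2 * (
        1.84e-11 * (pr / pa) * (T / T0) ** 0.5
        + (T / T0) ** -2.5 * (
            0.01275 * math.exp(-2239.1 / T) / (frO + f ** 2 / frO)
            + 0.1068 * math.exp(-3352.0 / T) / (frN + f ** 2 / frN)))

# Full-scale 5 kHz at 40% RH vs. the 1/8-scale frequency 40 kHz, humid and dried air;
# m is the power attenuation coefficient of ISO 354, m = alpha / (10 * lg e).
for f, rh in [(5_000, 40.0), (40_000, 40.0), (40_000, 5.0)]:
    a = alpha_iso9613(f, hr=rh)
    print(f"f = {f:>6d} Hz, RH = {rh:4.1f}%: alpha = {a:.4f} dB/m, m = {a / 4.343:.4f} 1/m")
```

Comparing the three printed values shows why drying the air matters: scaling the frequency by 8 raises the attenuation far more than the characteristic numbers allow, and only a reduced humidity brings m back toward its properly scaled value.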
Procedia PDF Downloads 421318 Scoping Review of Biological Age Measurement Composed of Biomarkers
Authors: Diego Alejandro Espíndola-Fernández, Ana María Posada-Cano, Dagnóvar Aristizábal-Ocampo, Jaime Alberto Gallo-Villegas
Abstract:
Background: With the increase in life expectancy, aging has become the subject of frequent research, and multiple strategies have therefore been proposed to quantify the advance of the years based on the known physiology of human senescence. For several decades, attempts have been made to characterize these changes through the concept of biological age, which aims to integrate, in a measure of time, structural or functional variation captured by biomarkers, in comparison with simple chronological age. The objective of this scoping review is to deepen the updated concept of a biological age measure composed of biomarkers in the general population and to summarize recent evidence in order to identify gaps and priorities for future research. Methods: A scoping review was conducted according to the five-phase methodology developed by Arksey and O'Malley, through a search of five bibliographic databases up to February 2021. Original articles were included, with no time or language limit, if they described a biological age composed of at least two biomarkers in people over 18 years of age. Results: 674 articles were identified, of which 105 were assessed for eligibility and 65 were included with information on the measurement of a biological age composed of biomarkers. Articles dating from 1974 onward and of 15 nationalities were found, most of them observational studies, in which clinical or paraclinical biomarkers were used, and 11 different methods for the calculation of composite biological age were reported. The outcomes reported were the relationship with the same measured biomarkers, specified risk factors, comorbidities, physical or cognitive functionality, and mortality. Conclusions: The concept of a biological age composed of biomarkers has evolved since the 1970s, and multiple methods for its quantification have been described through the combination of different clinical and paraclinical variables from observational studies. Future research should weigh population characteristics and the choice of biomarkers against the proposed outcomes, to improve the understanding of aging variables and direct effective strategies for a proper approach. Keywords: biological age, biological aging, aging, senescence, biomarker
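Among the many calculation methods such reviews catalogue, one of the simplest is the multiple-linear-regression approach: regress chronological age on a panel of biomarkers and read the fitted value as the composite biological age. The sketch below illustrates it on synthetic data; the biomarkers, their age trends, and the cohort are all invented, and this is not the method of any particular study included in the review.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic cohort: chronological age plus two invented biomarkers drifting with age
chrono_age = rng.uniform(20, 80, n)
sbp = 100 + 0.6 * chrono_age + rng.normal(0, 10, n)      # systolic blood pressure
hba1c = 4.5 + 0.02 * chrono_age + rng.normal(0, 0.4, n)  # glycated hemoglobin

# Multiple linear regression of chronological age on the biomarker panel;
# the fitted value for each subject is taken as their composite biological age.
X = np.column_stack([np.ones(n), sbp, hba1c])
beta, *_ = np.linalg.lstsq(X, chrono_age, rcond=None)
bio_age = X @ beta

# A positive gap (biological minus chronological age) suggests accelerated aging
gap = bio_age - chrono_age
print(f"coefficients: {np.round(beta, 3)}")
print(f"subject 0: chrono {chrono_age[0]:.1f} yr, "
      f"biological {bio_age[0]:.1f} yr, gap {gap[0]:+.1f} yr")
```

More elaborate methods the literature describes, such as the Klemera-Doubal estimator or principal-component composites, differ mainly in how the biomarker information is weighted and combined.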
Procedia PDF Downloads 186