Search results for: boundary element model
369 Design of Ultra-Light and Ultra-Stiff Lattice Structure for Performance Improvement of Robotic Knee Exoskeleton
Authors: Bing Chen, Xiang Ni, Eric Li
Abstract:
With population ageing, the number of patients suffering from chronic diseases is increasing, and stroke has a particularly high incidence among the elderly. In addition, there is a gradual increase in the number of patients with orthopedic or neurological conditions such as spinal cord injuries, nerve injuries, and other knee injuries. These diseases are chronic, with high recurrence and complication rates, and normal walking is difficult for such patients. Robotic knee exoskeletons have therefore been developed for individuals with knee impairments. However, currently available robotic knee exoskeletons are generally heavy, which makes them uncomfortable to wear, causes wearing fatigue, shortens the wearing time, and reduces the efficiency of the exoskeleton. Lightweight materials, such as carbon fiber and titanium alloy, have been used in the development of robotic knee exoskeletons, but they increase the cost. This paper presents the design of a new ultra-light and ultra-stiff truss-type lattice structure. The lattice structures are arranged in a fan shape, which fits well with circular arc surfaces such as circular holes, and can be utilized in the design of rods, brackets, and other parts of a robotic knee exoskeleton to reduce weight. The metamaterial is formed by the continuous arrangement and combination of small truss unit cells, varying the strut cross-section diameter, geometrical size, and relative density of each unit cell. It can be fabricated quickly through additive manufacturing techniques such as metal 3D printing. Because the truss unit cell is small, machined parts of the robotic knee exoskeleton, such as connectors, rods, and bearing brackets, can be filled and replaced using a gradient arrangement and non-uniform distribution.
Under the condition of satisfying the mechanical requirements of the robotic knee exoskeleton, the weight of the exoskeleton is reduced; hence, the patient's wearing fatigue is reduced and the wearing time is increased. Thus, the efficiency, wearing comfort, and safety of the exoskeleton can be improved. In this paper, a brief description of the hardware design of the prototype robotic knee exoskeleton is first presented. Next, the design of the ultra-light and ultra-stiff truss-type lattice structures is proposed, and the mechanical analysis of the single unit cell is performed by establishing a theoretical model. Additionally, simulations are performed to evaluate the maximum stress-bearing capacity and compressive performance of uniform and gradient arrangements of the cells. Finally, static analyses are performed for the cell-filled rod and the unmodified rod, respectively, and the simulation results demonstrate the effectiveness and feasibility of the designed ultra-light and ultra-stiff truss-type lattice structures. In future studies, experiments will be conducted to further evaluate the performance of the designed lattice structures.
Keywords: additive manufacturing, lattice structures, metamaterial, robotic knee exoskeleton
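The abstract's claim that stiffness can be retained while weight drops follows from classic lattice scaling laws. As a minimal illustrative sketch (not the authors' model), assuming a generic strut lattice with a topology-dependent prefactor C and Gibson-Ashby stiffness scaling:

```python
import math

def relative_density(d, L, C=2.0):
    """Relative density of a strut lattice unit cell, rho_bar ~= C*(d/L)**2.

    d is the strut diameter, L the cell edge length; C is a
    topology-dependent geometric prefactor (assumed value here).
    """
    return C * (d / L) ** 2

def stiffness_scaling(rho_bar, E_solid, stretch_dominated=True):
    """Gibson-Ashby scaling: E ~ E_s*rho_bar for stretch-dominated trusses
    versus E ~ E_s*rho_bar**2 for bending-dominated foams."""
    exponent = 1 if stretch_dominated else 2
    return E_solid * rho_bar ** exponent

# Example: Ti-6Al-4V solid modulus ~110 GPa, 1 mm struts in a 5 mm cell
rho = relative_density(d=1.0, L=5.0)          # 2.0 * 0.04 = 0.08
E_truss = stiffness_scaling(rho, 110e9)       # 8% relative density -> ~8.8 GPa
E_foam = stiffness_scaling(rho, 110e9, stretch_dominated=False)
print(rho, E_truss, E_foam)
```

The linear (rather than quadratic) density exponent is what makes stretch-dominated truss lattices "ultra-stiff" for their weight compared with stochastic foams.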
Procedia PDF Downloads 104

368 Altering the Solid Phase Speciation of Arsenic in Paddy Soil: An Approach to Reduce Rice Grain Arsenic Uptake
Authors: Supriya Majumder, Pabitra Banik
Abstract:
The fate of arsenic (As) in the soil-plant environment is a critical emerging issue with threatening implications for human health; the dynamics of As among soil solid components largely determine its potential availability for plant uptake. In the present study, we introduced an improved Sequential Extraction Procedure (SEP) to identify the solid-phase speciation of As in paddy soil under variable soil environmental conditions during two consecutive seasons of rice cultivation. We coupled gradients of water management practices with fertilizer amendments to assess the changes in the partitioning of As through a field experiment during the monsoon and post-monsoon seasons using two rice cultivars. Water management regimes were varied based on the method of rice cultivation: conventional (waterlogged) versus the System of Rice Intensification, SRI (saturated). Fertilizer amendments comprised the nutrient treatments absolute control, NPK-RD, NPK-RD + calcium silicate, NPK-RD + ferrous sulfate, farmyard manure (FYM), FYM + calcium silicate, FYM + ferrous sulfate, vermicompost (VC), VC + calcium silicate, and VC + ferrous sulfate. After harvest, soil samples were sequentially extracted to estimate the partitioning of As among the different fractions: exchangeable (F1), specifically sorbed (F2), As bound to amorphous Fe oxides (F3), crystalline Fe oxides (F4), organic matter (F5), and the residual phase (F6). Results showed that the major proportions of As were found in F3, F4, and F6, whereas F1 exhibited the lowest proportion of total soil As. Among the nutrient-treatment-mediated changes in As fractions, the application of organic manure and ferrous sulfate significantly restricted the release of As from the exchangeable phase.
Meanwhile, conventional practice produced a much higher release of As from F1 than SRI, which may substantially increase the environmental risk. In contrast, SRI practice retained a significantly higher proportion of As in the F2, F3, and F4 phases, resulting in restricted mobilization of As. This was reflected in rice grain As bioavailability: grain As concentration was reduced by 33% and 55% under SRI relative to conventional treatment (p < 0.05) during the monsoon and post-monsoon seasons, respectively. A prediction assay for rice grain As bioavailability based on a linear regression model was also performed. Results demonstrated that rice grain As concentration was positively correlated with As concentration in F1 and negatively correlated with F2, F3, and F4, with a satisfactory level of variation explained (p < 0.001). Finally, we conclude that F1, F2, F3, and F4 are the major soil As fractions that may govern the potential availability of As in soil, and we suggest that rice cultivation under the SRI treatment poses a particularly low risk of As availability in soil. Such detailed information may be useful for adopting management practices for rice grown in contaminated soil with respect to the environmental issues concerned.
Keywords: arsenic, fractionation, paddy soil, potential availability
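The regression of grain As on the extracted fractions can be sketched as follows on synthetic data; the concentrations and coefficients are invented purely to illustrate the reported sign pattern (positive for F1, negative for the retained fractions), not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40

# Synthetic soil-As fractions (mg/kg); values are illustrative only
F1 = rng.uniform(0.1, 1.0, n)    # exchangeable (most mobile)
F2 = rng.uniform(1.0, 5.0, n)    # specifically sorbed
F3 = rng.uniform(5.0, 20.0, n)   # bound to amorphous Fe oxides

# Assumed relationship mirroring the reported correlations:
# grain As rises with F1 and falls with the retained fractions
grain_as = 0.8 * F1 - 0.02 * F2 - 0.01 * F3 + 0.5 + rng.normal(0, 0.02, n)

# Ordinary least squares via the normal equations
X = np.column_stack([np.ones(n), F1, F2, F3])
beta, *_ = np.linalg.lstsq(X, grain_as, rcond=None)
print(beta)  # intercept, then coefficients on F1, F2, F3
```

A fit like this recovers a positive coefficient on the exchangeable fraction and negative ones on the sorbed and Fe-oxide-bound fractions, matching the qualitative conclusion of the abstract.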
Procedia PDF Downloads 122

367 The Increasing Trend in Research Among Orthopedic Residency Applicants is Significant to Matching: A Retrospective Analysis
Authors: Nickolas A. Stewart, Donald C. Hefelfinger, Garrett V. Brittain, Timothy C. Frommeyer, Adrienne Stolfi
Abstract:
Orthopedic surgery is currently considered one of the most competitive specialties to which medical students can apply for residency training. As evidenced by increasing United States Medical Licensing Examination (USMLE) scores, overall grades, and numbers of publications, presentations, and abstracts, this specialty is becoming increasingly competitive. The recent change of USMLE Step 1 scoring to pass/fail has created additional challenges for medical students planning to apply for orthopedic residency. Until now, these scores have been a tool used by residency programs to screen applicants as an initial indicator of the strength of their application. With USMLE Step 1 converting to pass/fail grading, the question remains as to what will take its place on the ERAS application. The primary objective of this study is to determine the trends in the number of research projects, abstracts, presentations, and publications among orthopedic residency applicants. Secondly, this study seeks to determine whether there is a relationship between the number of research projects, abstracts, presentations, and publications and match rates. The researchers utilized the National Resident Matching Program's Charting Outcomes in the Match between 2007 and 2022 to identify mean publication and research project numbers for allopathic and osteopathic US orthopedic surgery senior applicants. A paired t-test was performed between the mean numbers of publications and research projects of matched and unmatched applicants. Additionally, simple linear regressions within matched and unmatched applicants were used to determine the association between year and the number of abstracts, presentations, publications, and research projects.
To determine whether the increase in the number of abstracts, presentations, publications, and research projects differs significantly between matched and unmatched applicants, an analysis of covariance was used with an interaction term added to the model, which represents the test for the difference between the slopes of the two groups. The data show that from 2007 to 2022, the average number of research publications increased from 3 to 16.5 for matched orthopedic surgery applicants. The paired t-test yielded a significant p-value of 0.006 for the number of research publications between matched and unmatched applicants. In conclusion, the average number of publications for orthopedic surgery applicants significantly increased for both matched and unmatched applicants from 2007 to 2022. Moreover, this increase has accelerated in recent years, as evidenced by an increase of only 1.5 publications from 2007 to 2011 versus 5.0 publications from 2018 to 2022. The number of abstracts, presentations, and publications is a significant factor in an applicant's likelihood of successfully matching into an orthopedic residency program. With USMLE Step 1 converted to pass/fail, the researchers expect students and program directors to place increased importance on additional factors that can help applicants stand out. This study demonstrates that research will be a primary component in stratifying future orthopedic surgery applicants and suggests that the average number of research publications will continue to accelerate. Further study is required to determine whether this growth is sustainable.
Keywords: publications, orthopedic surgery, research, residency applications
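The ANCOVA-with-interaction test described above amounts to fitting a single linear model to both groups and reading the slope difference off the interaction coefficient. A minimal sketch on made-up trend numbers (not the NRMP data):

```python
import numpy as np

years = np.arange(2007, 2023)
# Illustrative trends (not the study's data): matched applicants'
# mean publication counts rise faster than unmatched applicants'
matched = 3.0 + 0.9 * (years - 2007)
unmatched = 2.0 + 0.5 * (years - 2007)

y = np.concatenate([matched, unmatched])
t = np.concatenate([years - 2007, years - 2007]).astype(float)
g = np.concatenate([np.ones_like(years), np.zeros_like(years)]).astype(float)

# ANCOVA-style model with interaction: y = b0 + b1*t + b2*g + b3*(t*g);
# b3 is exactly the matched-minus-unmatched slope difference
X = np.column_stack([np.ones_like(t), t, g, t * g])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
slope_difference = beta[3]
print(slope_difference)  # 0.4 publications/year for these made-up numbers
```

In practice one would add noise and test b3 against its standard error; here the noiseless fit simply shows that the interaction coefficient recovers the slope gap.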
Procedia PDF Downloads 130

366 Spectral Responses of the Laser Generated Coal Aerosol
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki
Abstract:
Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. Residential (domestic) combustion of coal is one of the dominant LAC sources; according to related assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its climatic significance, comprehensive investigation of the optical properties of residential coal aerosol is limited in the literature. There are many reasons for this, from the difficulties associated with controlled burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analysis and the interpretation of the measured optical data, to the analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, a recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and makes the investigation of the inherent optical properties possible. Most methodologies for spectral characterization of LAC are based on transmission measurements of filter-accumulated aerosol, or deduce absorption indirectly from parallel measurements of the scattering and extinction coefficients using free-floating sampling.
In the former, accuracy, and in the latter, sensitivity limit the applicability of these approaches. Although the scientific community agrees that aerosol-phase PhotoAcoustic Spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength nephelometer (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation
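The AAE and SAE mentioned above are Ångström exponents, obtained from the assumed power-law wavelength dependence b(λ) ∝ λ^(−α) of the absorption and scattering coefficients. A small sketch with invented coefficient values (not measured data):

```python
import math

def angstrom_exponent(coef_1, coef_2, wl_1, wl_2):
    """Angstrom exponent from coefficients measured at two wavelengths,
    assuming the power law b(lambda) ~ lambda**(-alpha)."""
    return -math.log(coef_1 / coef_2) / math.log(wl_1 / wl_2)

# Illustrative absorption coefficients (Mm^-1) at 355 nm and 532 nm
aae = angstrom_exponent(80.0, 40.0, 355.0, 532.0)
print(round(aae, 2))  # 1.71

# Sanity check: a perfect 1/lambda absorber (BC-like) gives AAE = 1
bc_like = angstrom_exponent(1 / 355.0, 1 / 532.0, 355.0, 532.0)
print(round(bc_like, 2))  # 1.0
```

With a multi-wavelength instrument, the exponent is usually obtained instead from a log-log fit across all available wavelengths; the two-point formula above is the limiting case.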
Procedia PDF Downloads 358

365 Offshore Facilities Load Out: Case Study of Jacket Superstructure Loadout by Strand Jacking Skidding Method
Authors: A. Rahim Baharudin, Nor Arinee binti Mat Saaud, Muhammad Afiq Azman, Farah Adiba A. Sani
Abstract:
Objectives: This paper shares a case study on the engineering analysis, data analysis, and real-time data comparison for qualifying the strand wires' minimum breaking load and safe working load during a loadout operation for a new project, while eliminating the risk arising from discrepancies and misalignment between COMPANY Technical Standards and Industry Standards and Practices. The paper demonstrates "Lean Construction" for the COMPANY's project by sustaining fit-for-purpose technical requirements for the loadout strand wire Factor of Safety (F.S). The case study utilizes historical engineering data from several loadout operations by skidding methods from different projects. It also qualifies the skidding wires' minimum breaking load and safe working load for future loadout operations of substructures and other facilities. Methods: Engineering analysis and comparison of data were performed with reference to international standards and internal COMPANY standard requirements. Data were taken from nine (9) previous projects for both topside and jacket facilities executed at several local fabrication yards, where loadout was conducted by three (3) different service providers, with emphasis on four (4) basic elements: i) Industry Standards for Loadout Engineering and Operation Reference: the COMPANY internal standard referred to the superseded documents DNV-OS-H201 and DNV/GL 0013/ND. DNV/GL 0013/ND and DNVGL-ST-N001 do not mention any requirement for a strand wire F.S of 4.0 for skidding/pulling operations. ii) Reference to past Loadout Engineering and Execution Packages: reference was made to projects delivered by three (3) major offshore facilities operators. The strand wire F.S observed ranged from 2.0 MBL (min) to 2.5 MBL (max); no loadout operation using the requirement of 4.0 MBL was sighted in the reference.
iii) Strand Jack Equipment Manufacturer Datasheet Reference: referring to strand jack equipment datasheets from different loadout service providers, the designed F.S for the equipment also ranges between 2.0 and 2.5. Eight (8) strand jack datasheet models were referred to, ranging from 15 Mt to 850 Mt capacity; no designed F.S of 4.0 was observed. iv) Site Monitoring of Actual Loadout Data and Parameters: the maximum load on a strand wire was captured during the 2nd breakout, i.e., during the static condition, at 12.9 Mt per strand wire (67.9% utilization). The maximum load on a strand wire in dynamic conditions, during Step 8 and Step 12, was 9.4 Mt per strand wire (49.5% utilization). Conclusion: This analysis and study demonstrated that the strand wires supplied by the service provider were technically sufficient in terms of strength, and the engineering analysis showed that the minimum breaking load and safe working load utilized and calculated for the projects were satisfactory and operated safely. It is recommended from this study that the COMPANY's technical requirements be revised for future projects.
Keywords: construction, load out, minimum breaking load, safe working load, strand jacking, skidding
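The utilization and factor-of-safety figures above follow from simple ratios of line load, safe working load (SWL), and minimum breaking load (MBL). In the sketch below, the ~19 Mt SWL per strand is back-calculated from the reported utilizations and is therefore an assumption, not a value stated in the paper:

```python
def utilization(line_load, swl):
    """Fraction of the safe working load taken up by the measured line load."""
    return line_load / swl

def required_mbl(swl, factor_of_safety):
    """Minimum breaking load needed for a given SWL and factor of safety."""
    return swl * factor_of_safety

# An SWL of ~19 Mt per strand wire reproduces both reported utilizations
swl = 19.0
static_util = utilization(12.9, swl)    # ~0.679 (67.9%, 2nd breakout)
dynamic_util = utilization(9.4, swl)    # ~0.495 (49.5%, Steps 8/12)

mbl_fs2 = required_mbl(swl, 2.0)        # 38 Mt at the industry-typical F.S = 2.0
mbl_fs4 = required_mbl(swl, 4.0)        # 76 Mt at the questioned F.S = 4.0
print(static_util, dynamic_util, mbl_fs2, mbl_fs4)
```

The doubling of required MBL between F.S 2.0 and 4.0 illustrates why the paper argues the 4.0 requirement is not fit for purpose relative to industry practice.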
Procedia PDF Downloads 111

364 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication
Authors: Farhan A. Alenizi
Abstract:
Digital watermarking has evolved in the past years as an important means for data authentication and ownership protection. Image and video watermarking are well established in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and videos, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; it can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has proven useful for hiding data. However, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A blind 3D mesh watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object.
An optimal method is developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process, and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimization approaches were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they showed robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing
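The core geometric step, measuring vertex norms from the object's center and perturbing their variance, can be sketched as follows; the one-bit variance-scaling embedder below is a simplified stand-in for the paper's scheme (which uses per-bin statistics), not the actual algorithm:

```python
import numpy as np

def center_and_norms(vertices):
    """Vertex norms measured from the object's center (mean vertex)."""
    center = vertices.mean(axis=0)
    norms = np.linalg.norm(vertices - center, axis=1)
    return center, norms

def embed_bit_by_variance(norms, bit, strength=0.05):
    """Toy embedding: scale the spread of the vertex norms up or down
    around their mean to encode one bit; variance changes by scale**2."""
    mean = norms.mean()
    scale = 1.0 + strength if bit else 1.0 - strength
    return mean + (norms - mean) * scale

rng = np.random.default_rng(1)
verts = rng.normal(size=(500, 3))        # stand-in point cloud, not a real mesh
_, norms = center_and_norms(verts)

marked = embed_bit_by_variance(norms, bit=1)
decoded = int(marked.var() > norms.var())  # blind detector: compare variances
print(decoded)  # 1
```

A real detector would not have the original norms available; blindness in the paper's sense comes from comparing against the statistics expected for unmarked meshes, which this toy comparison only gestures at.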
Procedia PDF Downloads 159

363 Healthcare Utilization and Costs of Specific Obesity Related Health Conditions in Alberta, Canada
Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach
Abstract:
Obesity-related health conditions impose a substantial economic burden on payers due to increased healthcare use. Estimates of healthcare resource use and costs associated with obesity-related comorbidities are needed to inform policies and interventions targeting these conditions. Methods: Adults living with obesity were identified (a procedure-related body mass index code for class 2/3 obesity between 2012 and 2019 in Alberta, Canada; excluding those with bariatric surgery), and outcomes were compared over 1-year (2019/2020) between those who had and did not have specific obesity-related comorbidities. The probability of using a healthcare service (based on the odds ratio of a zero [OR-zero] cost) was compared; 95% confidence intervals (CI) were reported. Logistic regression and a generalized linear model with log link and gamma distribution were used for total healthcare cost comparisons ($CDN); cost ratios and estimated cost differences (95% CI) were reported. Potential socio-demographic and clinical confounders were adjusted for, and incremental cost differences were representative of a referent case. Results: A total of 220,190 adults living with obesity were included; 44% had hypertension, 25% had osteoarthritis, 24% had type-2 diabetes, 17% had cardiovascular disease, 12% had insulin resistance, 9% had chronic back pain, and 4% of females had polycystic ovarian syndrome (PCOS). 
The probability of hospitalization, ED visits, and ambulatory care visits was higher in those with each of the following obesity-related comorbidities versus those without: chronic back pain (hospitalization: 1.8-times [OR-zero: 0.57 [0.55/0.59]] / ED visit: 1.9-times [OR-zero: 0.54 [0.53/0.56]] / ambulatory care visit: 2.4-times [OR-zero: 0.41 [0.40/0.43]]), cardiovascular disease (2.7-times [OR-zero: 0.37 [0.36/0.38]] / 1.9-times [OR-zero: 0.52 [0.51/0.53]] / 2.8-times [OR-zero: 0.36 [0.35/0.36]]), osteoarthritis (2.0-times [OR-zero: 0.51 [0.50/0.53]] / 1.4-times [OR-zero: 0.74 [0.73/0.76]] / 2.5-times [OR-zero: 0.40 [0.40/0.41]]), type-2 diabetes (1.9-times [OR-zero: 0.54 [0.52/0.55]] / 1.4-times [OR-zero: 0.72 [0.70/0.73]] / 2.1-times [OR-zero: 0.47 [0.46/0.47]]), hypertension (1.8-times [OR-zero: 0.56 [0.54/0.57]] / 1.3-times [OR-zero: 0.79 [0.77/0.80]] / 2.2-times [OR-zero: 0.46 [0.45/0.47]]), PCOS (not significant / 1.2-times [OR-zero: 0.83 [0.79/0.88]] / not significant), and insulin resistance (1.1-times [OR-zero: 0.88 [0.84/0.91]] / 1.1-times [OR-zero: 0.92 [0.89/0.94]] / 1.8-times [OR-zero: 0.56 [0.54/0.57]]). After fully adjusting for potential confounders, the total healthcare cost ratio was higher in those with each of the following obesity-related comorbidities versus those without: chronic back pain (1.54-times [1.51/1.56]), cardiovascular disease (1.45-times [1.43/1.47]), osteoarthritis (1.36-times [1.35/1.38]), type-2 diabetes (1.30-times [1.28/1.31]), hypertension (1.27-times [1.26/1.28]), PCOS (1.08-times [1.05/1.11]), and insulin resistance (1.03-times [1.01/1.04]). Conclusions: Adults with obesity who have specific obesity-related health conditions have a higher probability of healthcare use and incur greater costs than those without these comorbidities; incremental costs are larger when other obesity-related health conditions are not adjusted for.
In a specific referent case, hypertension was costliest (44% had this condition, with an additional annual cost of $715 [$678/$753]). If these findings hold for the Canadian population, hypertension in persons with obesity represents an estimated additional annual healthcare cost of $2.5 billion among adults living with obesity (based on an adult obesity rate of 26%). Results of this study can inform decision making on investment in interventions that are effective in treating obesity and its complications.
Keywords: administrative data, healthcare cost, obesity-related comorbidities, real world evidence
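The $2.5 billion extrapolation can be reproduced as back-of-envelope arithmetic, assuming a Canadian adult population of roughly 30 million (a figure the abstract does not state):

```python
# Reconstruction of the reported national extrapolation; the population
# figure is an assumption, the other inputs come from the abstract
adult_population = 30_000_000      # assumed Canadian adult population
obesity_rate = 0.26                # adult obesity rate used in the abstract
hypertension_share = 0.44          # share of adults with obesity who have hypertension
extra_cost_per_person = 715        # incremental annual cost ($CDN, referent case)

total = adult_population * obesity_rate * hypertension_share * extra_cost_per_person
print(f"${total / 1e9:.2f} billion")  # $2.45 billion, i.e. ~$2.5B as reported
```

The fact that the product lands within rounding of the stated $2.5 billion suggests a population input of this order underlies the abstract's figure.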
Procedia PDF Downloads 147

362 Valuing Cultural Ecosystem Services of Natural Treatment Systems Using Crowdsourced Data
Authors: Andrea Ghermandi
Abstract:
Natural treatment systems such as constructed wetlands and waste stabilization ponds are increasingly used to treat water and wastewater from a variety of sources, including stormwater and polluted surface water. The provision of ancillary benefits in the form of cultural ecosystem services makes these systems unique among water and wastewater treatment technologies and greatly contributes to determining their potential role in promoting sustainable water management practices. A quantitative analysis of these benefits, however, has been lacking in the literature. Here, a critical assessment of the recreational and educational benefits of natural treatment systems is provided, which combines observed public use from a survey of managers and operators with estimated public use obtained using geotagged photos from social media as a proxy for visitation rates. Geographic Information Systems (GIS) are used to characterize the spatial boundaries of 273 natural treatment systems worldwide. These boundaries are used as input for the Application Programming Interfaces (APIs) of two popular photo-sharing websites (Flickr and Panoramio) in order to derive the number of photo-user-days, i.e., the number of yearly visits by individual photo users to each site. The adequacy and predictive power of four univariate calibration models using the crowdsourced data as a proxy for visitation are evaluated. A high correlation is found between photo-user-days and observed annual visitors (Pearson's r = 0.811; p-value < 0.001; N = 62). Standardized Major Axis (SMA) regression is found to outperform Ordinary Least Squares regression and count data models in terms of predictive power insofar as standard verification statistics are concerned, such as the root mean square error of prediction (RMSEP), the mean absolute error of prediction (MAEP), the reduction of error (RE), and the coefficient of efficiency (CE).
The SMA regression model is used to estimate the intensity of public use in all 273 natural treatment systems. System type, influent water quality, and area are found to statistically affect public use, consistent with a priori expectations. Publicly available information on the home locations of the sampled visitors is derived from their social media profiles and used to infer the distance they are willing to travel to visit the natural treatment systems in the database. This information is analyzed using the travel cost method to derive monetary estimates of the recreational benefits of the investigated natural treatment systems. Overall, the findings confirm the opportunities arising from an integrated design and management of natural treatment systems, which combines the objectives of water quality enhancement and provision of cultural ecosystem services through public use in a multi-functional approach, compatibly with the need to protect public health.
Keywords: constructed wetlands, cultural ecosystem services, ecological engineering, waste stabilization ponds
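Standardized Major Axis regression differs from OLS in that its slope is the ratio of the two standard deviations, signed by the correlation. A minimal sketch of the SMA fit and two of the verification statistics, run on synthetic calibration data rather than the study's 62 sites:

```python
import numpy as np

def sma_fit(x, y):
    """Standardized Major Axis regression: slope = sign(r) * sd(y)/sd(x)."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

def rmsep(obs, pred):
    """Root mean square error of prediction."""
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def coefficient_of_efficiency(obs, pred):
    """CE: 1 - SSE / total sum of squares (1 = perfect prediction)."""
    return float(1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2))

# Illustrative calibration: photo-user-days vs observed annual visitors;
# the linear relation and noise level below are assumptions
rng = np.random.default_rng(2)
pud = rng.uniform(5, 200, 62)                     # photo-user-days per site
visitors = 150 * pud + rng.normal(0, 500, 62)     # observed annual visitors

slope, intercept = sma_fit(pud, visitors)
pred = slope * pud + intercept
print(slope, rmsep(visitors, pred), coefficient_of_efficiency(visitors, pred))
```

Unlike OLS, SMA treats both variables as measured with error, which is the usual argument for it in calibration settings where the proxy (photo-user-days) is itself noisy.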
Procedia PDF Downloads 179

361 Process of Production of an Artisanal Brewery in a City in the North of the State of Mato Grosso, Brazil
Authors: Ana Paula S. Horodenski, Priscila Pelegrini, Salli Baggenstoss
Abstract:
The artisanal brewing industry serves a specific market with diversified production and has been gaining ground nationally, including in the Amazon region. This growth is due to more demanding consumers with diversified tastes who want to try new types of beer and enjoy products with new aromas and flavors, in contrast to what is widely distributed by the big industrial brands. Thus, through qualitative research methods, the study aimed to investigate how the production of a craft brewery in a city in the north of the State of Mato Grosso (Brazil) is managed, providing knowledge of production processes and strategies in this industry. With the efficient use of resources, it is possible to obtain the necessary quality, achieve better performance and differentiation for the company, and analyze the best management model. The research is descriptive with a qualitative approach, conducted through a case study. For data collection, a semi-structured interview was elaborated, covering: characterization of the microbrewery, the artisanal beer production process, and the company's supply chain management. Production processes were also observed during technical visits. The study found that the artisanal brewery develops preventive maintenance strategies for inputs, machines, and equipment so that the quality of the product and the production process is achieved. It was observed that the distance from supply centers means that process and supply chain management must be carried out with a longer planning horizon so that delivery of the final product is satisfactory. The brewery's production process is composed of machines and equipment that allow control and quality of the product; the manager states that the available equipment meets demand for the industry's productive capacity and its consumer market.
This study also highlights one of the challenges for the development of small breweries facing the market giants: legislation, which classifies microbreweries as producers of alcoholic beverages. This causes the micro and small business segment to be taxed like large producers, which have advantages in purchasing large batches of raw materials and receive tax incentives because they are large employers and taxpayers. It was possible to observe that the supply chain management system relies on spreadsheets and notes that are kept manually, which could be simplified with a computer program to streamline procedures and reduce the risks and failures of the manual process. The control of waste and effluents produced by the industry is outsourced and meets the needs. Finally, the results showed that the industry uses preventive maintenance as a production strategy, which allows better conditions for the production and quality of artisanal beer. Quality is directly related to the satisfaction of the final consumer and is prized and pursued throughout the production process, with the selection of better inputs, the effectiveness of the production processes, and the relationship with commercial partners.
Keywords: artisanal brewery, production management, production processes, supply chain
Procedia PDF Downloads 119

360 A Comparative Study on the Influencing Factors of Urban Residential Land Prices Among Regions
Authors: Guo Bingkun
Abstract:
With the rapid development of China's social economy and the continuous improvement of the urbanization level, people's living standards have undergone tremendous changes, and more and more people are gathering in cities. The demand for urban residents' housing has been greatly released in the past decade. The demand for housing and the construction land required for urban development have brought huge pressure to urban operations, and land prices have also risen rapidly in the short term. On the other hand, comparing the eastern and western regions of China, there are also great differences in urban socioeconomics and land prices among the eastern, central, and western regions. Judging from current overall market development, after more than ten years of housing market reform, the quality of housing and land use efficiency in Chinese cities have been greatly improved. However, the contradiction between land demand for urban socio-economic development and land supply, especially for urban residential land, has not been effectively alleviated. Since land is closely linked to all aspects of society, changes in land prices are affected by many complex factors. Therefore, this paper studies the factors that may affect urban residential land prices, compares them among eastern, central, and western cities, and finds the main factors that determine the level of urban residential land prices. The paper provides guidance for urban managers in formulating land policies and alleviating land supply and demand pressures, offers ideas for improving urban planning, and promotes the improvement of urban management. The research focuses on residential land prices. Generally, the indicators for measuring land prices mainly include benchmark land prices, land price level values, and parcel land prices.
However, considering the requirements of data continuity and representativeness, this paper chooses to use residential land price level values to reflect the status of urban residential land prices. First, based on the existing research at home and abroad, the paper considers the two aspects of land supply and demand and, based on basic theoretical analysis, identifies factors that may affect urban residential land prices, such as urban expansion, taxation, land reserves, population, and land benefits, and correspondingly selects representative indicators. Second, using conventional econometric analysis methods, we established a model of the factors affecting urban residential land prices, quantitatively analyzed the relationship between the influencing factors and residential land prices and the intensity of their effects, and compared the similarities and differences in their impacts on urban residential land prices among the eastern, central and western regions. The research results show that the main factors affecting China's urban residential land prices are urban expansion, land use efficiency, taxation, population size, and residents' consumption. Furthermore, the main reason for the difference in residential land prices among the eastern, central and western regions lies in differences in urban expansion patterns, industrial structures, urban carrying capacity and real estate development investment.
Keywords: urban housing, urban planning, housing prices, comparative study
Procedia PDF Downloads 48
359 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers
Authors: B. Neethu, Diptesh Das
Abstract:
The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures under earthquake excitation involves numerous challenges, such as proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters and noisy measurements. These problems need to be tackled in order to design and develop controllers that perform efficiently in such complex systems. A sliding mode control algorithm is adopted in the present study because of its inherent stability and distinguished robustness to system parameter variation and external disturbances, which allow it to accommodate uncertainty and imprecision better than many other algorithms. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage required to be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force. The function of the voltage controller is to command the damper to produce the desired force. The clipped optimal algorithm is used to find the command voltage supplied to the MR damper, which is regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active control that can effectively control the responses of the bridge under real earthquake ground motions.
A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of MR dampers is studied by analytical simulations in which the bridge is subjected to real earthquake records. In this regard, it may also be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, in order to study the robustness of the controller, its performance has been investigated for fourteen different earthquake ground motion records. The earthquakes are chosen in such a way that all possible characteristic variations can be accommodated. Of these fourteen earthquakes, seven are near-field and seven are far-field. These earthquakes are further divided by frequency content into low-frequency, medium-frequency, and high-frequency earthquakes. The responses of the controlled bridge are compared with the responses of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding mode based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing a stable and robust performance for all the earthquakes.
Keywords: bridge, semi active control, sliding mode control, MR damper
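The voltage stage of the two-level scheme described above can be stated compactly. The sketch below is a generic illustration of the clipped-optimal law as it is commonly formulated in the semi-active control literature, not code from this study; the function name and the maximum-voltage value are ours.

```python
def clipped_optimal_voltage(f_desired, f_measured, v_max):
    """Clipped-optimal law for an MR damper: apply maximum voltage when
    the measured damper force is smaller in magnitude than the desired
    (sliding-mode) force and acts in the same direction; otherwise the
    voltage is set to zero."""
    if (f_desired - f_measured) * f_measured > 0:
        return v_max
    return 0.0

# Desired force exceeds the measured force in the same direction -> full voltage
print(clipped_optimal_voltage(100.0, 50.0, 5.0))  # 5.0
# Measured force already exceeds the desired force -> voltage switched off
print(clipped_optimal_voltage(50.0, 100.0, 5.0))  # 0.0
```

Because the damper force cannot be commanded directly, only the voltage can be switched, which is what makes the scheme semi-active rather than fully active.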
Procedia PDF Downloads 123
358 The Association between Attachment Styles, Satisfaction of Life, Alexithymia, and Psychological Resilience: The Mediational Role of Self-Esteem
Authors: Zahide Tepeli Temiz, Itir Tari Comert
Abstract:
Attachment patterns based on early emotional interactions between infant and primary caregiver continue to be influential in adult life, in terms of the mental health and behaviors of individuals. Several studies reveal that, beyond shaping attachment styles, infant-caregiver relationships influence affect regulation, coping with stressful and negative situations, general life satisfaction, and self-image in adulthood. The present study aims to examine the relationships between university students' attachment styles and their self-esteem, alexithymic features, life satisfaction, and level of resilience. In line with this aim, the hypothesis that attachment styles (anxious and avoidant) predict life satisfaction, self-esteem, alexithymia, and psychological resilience was tested. Additionally, in this study Structural Equation Modeling (SEM) was conducted to investigate the mediational role of self-esteem in the relationship between attachment styles and alexithymia, life satisfaction, and resilience. This model was examined with path analysis. The sample of the research consists of 425 university students from several regions of Turkey. The participants who signed the informed consent completed the Demographic Information Form, Experiences in Close Relationships-Revised, Rosenberg Self-Esteem Scale, The Satisfaction with Life Scale, Toronto Alexithymia Scale, and Resilience Scale for Adults. According to the results, the anxious and avoidant dimensions of insecure attachment predicted the self-esteem score and alexithymia in a positive direction. On the other hand, these dimensions of attachment predicted life satisfaction in a negative direction. The results of linear regression analysis indicated that anxious and avoidant attachment styles did not predict resilience. This result does not support the theory and research indicating a relationship between attachment style and psychological resilience.
The results of path analysis revealed the mediational role of self-esteem in the relation between anxious and avoidant attachment styles and life satisfaction. In addition, the SEM analysis indicated an indirect effect of attachment styles on alexithymia and resilience besides their direct effect. These findings support the hypothesis of this research concerning the mediating role of self-esteem. Attachment theorists suggest that early attachment experiences, including supportive and responsive family interactions, have an effect on resilience to harmful situations in adult life, on the ability to identify, describe, and regulate emotions, and also on general satisfaction with life. Several studies examining the relationships between attachment styles and life satisfaction, alexithymia, and psychological resilience draw attention to the mediational role of self-esteem. The results of this study support the theory that attachment patterns, through the mediation of self-image, influence the emotional, cognitive, and behavioral regulation of the person throughout adulthood. Therefore, it is thought that any intervention intended to improve the attachment relationship will increase self-esteem, life satisfaction, and resilience levels on the one side and decrease alexithymic features on the other.
Keywords: alexithymia, anxious attachment, avoidant attachment, life satisfaction, path analysis, resilience, self-esteem, structural equation
Procedia PDF Downloads 194
357 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping
Authors: Emily Rowe
Abstract:
Introduction: Balance assessments can be used to help evaluate a person’s risk of falls, determine causes of balance deficits and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the body centre of mass (COM) kinematics during pre-initiation. Based on this, the potential to use COM velocity just prior to foot off and foot placement error as outcome measures of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measure foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants, each of whom attended two sessions. The trial task was to step onto one of 4 targets (2 for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data were collected using 3D motion capture and a combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, 2-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated measures ANOVAs were carried out.
Results: It was found that foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability given the nature of the stepping task and brings into question the suitability of average foot placement error as an outcome measure. Additionally, the results suggest the pressure mat is not a valid measure of this parameter, since it was statistically significantly different from and much less precise than the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure for anteroposterior and mediolateral COM velocities (AP velocity: p=0.000, ML velocity targets 1 to 4: p=0.734, 0.001, 0.000 & 0.376). However, it is thought that with further development, the validity of the COM velocity measure could be improved. Possible options which could be investigated include whether there is an effect of inertial sensor placement with respect to pelvic marker placement, and implementing more complex methods of data processing to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement errors. The inertial sensors have the potential for measuring COM velocity; however, further development work is needed.
Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables
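The mean-rating (k = 2), absolute-agreement, two-way mixed-effects ICC used above can be computed from the usual ANOVA mean squares. The following pure-Python sketch implements the standard ICC(A,k) formula from the agreement literature; the function name and the toy data are ours, not from the study.

```python
def icc_a_k(data):
    """ICC(A,k): two-way model, absolute agreement, average of k sessions.
    `data` is a list of n subjects, each a list of k session scores."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # Mean squares for subjects (rows), sessions (columns) and residual
    msr = k * sum((r - grand) ** 2 for r in row_means) / (n - 1)
    msc = n * sum((c - grand) ** 2 for c in col_means) / (k - 1)
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

# Perfect test-retest agreement across two sessions gives an ICC of 1
print(icc_a_k([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))  # 1.0
```

With noisy but consistent repeat scores the value falls just below 1, which is the behaviour the reliability analysis above relies on.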
Procedia PDF Downloads 151
356 Effectiveness of Imagery Compared with Exercise Training on Hip Abductor Strength and EMG Production in Healthy Adults
Authors: Majid Manawer Alenezi, Gavin Lawrence, Hans-Peter Kubis
Abstract:
Imagery training could be an important treatment for muscle function improvements in patients whose exercise training is limited by pain or other adverse symptoms. However, recent studies are mostly limited to small muscle groups and are often contradictory. Moreover, a possible bilateral transfer effect of imagery training has not been examined. We, therefore, investigated the effectiveness of unilateral imagery training in comparison with exercise training on hip abductor muscle strength and EMG. Additionally, both limbs were assessed to investigate bilateral transfer effects. Healthy individuals took part in an imagery or exercise training intervention for two weeks and were assessed pre- and post-training. Participants (n=30), after randomization into an imagery and an exercise group, trained 5 times a week under supervision, with additional self-performed training on the weekends. The training consisted of performing, or imagining, 5 maximal isometric hip abductor contractions (= one set), repeating the set 7 times. All measurements and training sessions were performed lying on the side on a dynamometer table. The imagery script combined kinesthetic and visual imagery with an internal perspective to produce imagined maximal hip abduction contractions. The exercise group performed the same number of tasks but physically executed the maximal hip abductor contractions. Maximal hip abduction strength and EMG amplitudes were measured for the right and left limbs pre- and post-training. Additionally, handgrip strength and right shoulder abduction (strength and EMG) were measured. Using mixed model ANOVA (strength measures) and Wilcoxon tests (EMGs), the data revealed a significant increase in hip abductor strength production in the imagery group on the trained right limb (~6%). However, this was not observed in the exercise group.
Additionally, the left hip abduction strength (not used for training) did not show a main effect of time; however, there was a significant interaction of group and time, revealing that strength increased in the imagery group while it remained constant in the exercise group. EMG recordings supported the strength findings, showing significant elevation of EMG amplitudes after imagery training on the right and left sides, while the exercise training group did not show any changes. Moreover, measures of handgrip strength and shoulder abduction showed no effects over time and no interactions in either group. The experiments showed that imagery training is a suitable method for effectively increasing functional parameters of larger limb muscles (strength and EMG), which were enhanced on both sides (trained and untrained), confirming a bilateral transfer effect. Indeed, exercise training did not reveal any increases in the parameters above, i.e., no functional improvements. The healthy individuals tested might not easily achieve benefits from exercise training within the time tested. However, it is evident that imagery training is effective in increasing the central motor command towards the muscles and that the effect seems to be segmental (no increase in handgrip strength and shoulder abduction parameters) and affects both sides (trained and untrained). In conclusion, imagery training was effective in producing functional improvements in limb muscles and produced a bilateral transfer on strength and EMG measures.
Keywords: imagery, exercise, physiotherapy, motor imagery
Procedia PDF Downloads 232
355 Shocks and Flows - Employing a Difference-In-Difference Setup to Assess How Conflicts and Other Grievances Affect the Gender and Age Composition of Refugee Flows towards Europe
Authors: Christian Bruss, Simona Gamba, Davide Azzolini, Federico Podestà
Abstract:
In this paper, the authors assess the impact of different political and environmental shocks on the size and on the age and gender composition of asylum-related migration flows to Europe. With this paper, the authors contribute to the literature by looking at the impact of different political and environmental shocks on the gender and age composition of migration flows in addition to the size of these flows. Conflicting theories predict different outcomes concerning the relationship between political and environmental shocks and the composition of migration flows. Analyzing the relationship between the causes of migration and the composition of migration flows could yield more insights into the mechanisms behind migration decisions. In addition, this research may contribute to better informing the national authorities in charge of receiving these migrants, as women, children and the elderly require different assistance than young men. To be prepared to offer the correct services, the relevant institutions have to be aware of changes in composition depending on the shock in question. The authors analyze the effect of different types of shocks on the number and the gender and age composition of first-time asylum seekers originating from 154 sending countries. Among the political shocks, the authors consider: violence between combatants, violence against civilians, infringement of political rights and civil liberties, and state terror. Concerning environmental shocks, natural disasters (such as droughts, floods, epidemics, etc.) have been included. The data on asylum seekers applying to any of the 32 Schengen Area countries between 2008 and 2015 are on a monthly basis.
Data on asylum applications come from Eurostat, while data on shocks are retrieved from various sources: georeferenced conflict data come from the Uppsala Conflict Data Program (UCDP), data on natural disasters from the Centre for Research on the Epidemiology of Disasters (CRED), data on civil liberties and political rights from Freedom House, data on state terror from the Political Terror Scale (PTS), GDP and population data from the World Bank, and georeferenced population data from the Socioeconomic Data and Applications Center (SEDAC). The authors adopt a Difference-in-Differences identification strategy, exploiting the different timing of several kinds of shocks across countries. The highly skewed distribution of the dependent variable is taken into account by using count data models. In particular, a Zero Inflated Negative Binomial model is adopted. Preliminary results show that different shocks - such as armed conflict and epidemics - exert weak immediate effects on asylum-related migration flows and almost non-existent effects on the gender and age composition. However, this result is certainly affected by the fact that no time lags have been introduced so far. Finding the correct time lags depends on a great many variables, not limited to distance alone. Therefore, finding the appropriate time lags is still a work in progress. Considering the ongoing refugee crisis, this topic is more important than ever. The authors hope that this research contributes to a less emotionally led debate.
Keywords: age, asylum, Europe, forced migration, gender
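The Zero Inflated Negative Binomial model used above mixes a point mass at zero (country-months that structurally generate no applications) with an ordinary negative binomial count process. A minimal sketch of the resulting probability mass function, with an illustrative shape/probability parameterization and parameter values of our choosing, not the authors' fitted model:

```python
from math import exp, lgamma, log

def zinb_pmf(k, pi, r, p):
    """Zero-inflated negative binomial pmf.
    pi   : probability of a structural zero
    r, p : NB shape and success probability, with
           NB(k) = C(k+r-1, k) * p**r * (1-p)**k  (computed via lgamma)."""
    nb = exp(lgamma(k + r) - lgamma(r) - lgamma(k + 1)
             + r * log(p) + k * log(1 - p))
    if k == 0:
        return pi + (1 - pi) * nb  # structural zeros inflate P(X = 0)
    return (1 - pi) * nb

# The probabilities over all counts sum to one (checked up to k = 500)
total = sum(zinb_pmf(k, pi=0.3, r=2.0, p=0.5) for k in range(500))
print(round(total, 6))  # 1.0
```

The zero inflation is visible directly: with pi = 0.3 the probability of observing zero applications is 0.3 plus 0.7 times the ordinary NB zero probability.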
Procedia PDF Downloads 259
354 The Efficacy of Government Strategies to Control COVID 19: Evidence from 22 High Covid Fatality Rated Countries
Authors: Imalka Wasana Rathnayaka, Rasheda Khanam, Mohammad Mafizur Rahman
Abstract:
The COVID-19 pandemic has created unprecedented challenges for both the health and economic systems of countries around the world. This study aims to evaluate the effectiveness of governments' decisions to mitigate the risks of COVID-19 and to propose policy directions to reduce its magnitude. The study is motivated by the ongoing coronavirus outbreaks and the comprehensive policy responses taken by countries to mitigate the spread of COVID-19 and reduce death rates. This study contributes to filling the knowledge gap by exploring the long-term efficacy of governments' extensive plans. The study employs a panel autoregressive distributed lag (ARDL) framework. The panels incorporate both a significant number of variables and fortnightly observations from 22 countries. The dependent variables adopted in this study are the fortnightly death rates and the rates of the spread of COVID-19. Mortality rate and infection rate data were computed based on the number of deaths and the number of new cases per 10,000 people. The explanatory variables are fortnightly values of indexes taken to investigate the efficacy of government interventions to control COVID-19. The overall government response index, stringency index, containment and health index, and economic support index were selected as explanatory variables. The study relies on the Oxford COVID-19 Government Response Tracker (OxCGRT). Following the procedures of ARDL, the study employs (i) the unit root test to check stationarity, (ii) panel cointegration, and (iii) PMG and ARDL estimation techniques. The study shows that the COVID-19 pandemic forced immediate responses from policymakers across the world to mitigate the risks of COVID-19.
Of the four types of government policy interventions, (i) stringency and (ii) economic support have been most effective: strengthening stringency and financial measures has resulted in a reduction in infection and fatality rates, while (iii) overall government responses are positively associated with deaths but negatively with infected cases. Even though this positive relationship is somewhat unexpected in the long run, the breaking of governments' social distancing norms by the public in some countries and population age demographics are possible reasons for this result. (iv) Containment and healthcare improvements reduce death rates but increase infection rates, although the effect has been smaller (in absolute value). The model implies that the implementation of containment health practices without accompanying tracing and individual-level quarantine does not work well. The policy implication is that containment health measures must be applied together with targeted, aggressive, and rapid containment to extensively reduce the number of people infected with COVID-19. Furthermore, the results demonstrate that economic support for income and debt relief has been key to suppressing COVID-19 infection and fatality rates.
Keywords: COVID-19, infection rate, death rate, government response, panel data
Procedia PDF Downloads 75
353 In vitro Evaluation of Immunogenic Properties of Oral Application of Rabies Virus Surface Glycoprotein Antigen Conjugated to Beta-Glucan Nanoparticles in a Mouse Model
Authors: Narges Bahmanyar, Masoud Ghorbani
Abstract:
Rabies is caused by several species of the genus Lyssavirus in the Rhabdoviridae family. The disease is a deadly encephalitis transmitted from warm-blooded animals to humans, and domestic and wild carnivores play the most crucial role in its transmission. The prevalence of rabies in poor areas of developing countries constantly poses a global threat to public health. According to the World Health Organization, approximately 60,000 people die yearly from rabies. Of these, 60% of deaths are related to the Middle East. Although rabies encephalitis is incurable to date, awareness of the disease and the use of vaccines are the best ways to combat it. Although effective vaccines are available, vaccine production and management to combat rabies involve high costs. The increasing prevalence and the discovery of new strains of rabies virus create the need for vaccines that are safe, effective, and as inexpensive as possible. One of the approaches considered to achieve this quality and quantity is the manufacture of recombinant rabies vaccines. Currently, livestock rabies vaccines are only inactivated or live attenuated vaccines, and the inactivation process requires careful consideration. The rabies virus contains a negatively polarized single-stranded RNA genome that encodes the five major structural genes (N, P, M, G, L) from 3' to 5'. Rabies virus glycoprotein G, the major antigen, can induce virus-neutralizing antibodies. The N antigen is another candidate for developing recombinant vaccines; however, because it lies within the RNP complex of the virus, the possibility of genetic diversity across different geographical locations is very high. Glycoprotein G is structurally and antigenically more conserved than other genes: conservation at the level of its nucleotide sequence is about 90%, and at the amino acid level it is 96%.
Recombinant vaccines, consisting of a pathogenic subunit, contain fragments of the protein or polysaccharide of the pathogen that have been carefully studied to determine which of these molecules elicits a stronger and more effective immune response. These vaccines minimize the risk of side effects by limiting the immune system's exposure to the pathogen. Such vaccines are relatively inexpensive, easy to produce, and more stable than vaccines containing whole viruses or bacteria. The problem with these vaccines is that the pathogenic subunits may elicit a weak immune response in the body or may be destroyed before they reach the immune cells, a problem that nanoparticles suitable for use as adjuvants can help overcome. Among these, biodegradable nanoparticles with functionalized surfaces are good candidates as vaccine adjuvants. In this study, we intend to use beta-glucan nanoparticles as adjuvants. The surface glycoprotein of the rabies virus (G) is responsible for recognition and binding of the virus to the target cell. This glycoprotein is the major protein in the structure of the virus and induces an antibody response in the host. In this study, we intend to use rabies virus surface glycoprotein conjugated with beta-glucan nanoparticles to produce vaccines.
Keywords: rabies, vaccines, beta glucan, nanoparticles, adjuvant, recombinant protein
Procedia PDF Downloads 15
352 Insulin Resistance in Early Postmenopausal Women Can Be Attenuated by Regular Practice of 12 Weeks of Yoga Therapy
Authors: Praveena Sinha
Abstract:
Context: Diabetes is a global public health burden, particularly affecting postmenopausal women. Insulin resistance (IR) is prevalent in this population, and it is associated with an increased risk of developing type 2 diabetes. Yoga therapy is gaining attention as a complementary intervention for diabetes due to its potential to address stress psychophysiology. This study focuses on the efficacy of a 12-week yoga practice in attenuating insulin resistance in early postmenopausal women. Research Aim: The aim of this research is to investigate the effect of a 3-month-long yoga practice on insulin resistance in early postmenopausal women. Methodology: The study used a prospective longitudinal design with 67 women within five years of menopause. Participants were divided into two groups based on their willingness to join yoga. The Yoga group (n = 37) received routine gynecological management along with an integrated yoga module, while the Non-Yoga group (n = 30) received only routine management. Insulin resistance was measured using the homeostasis model assessment of insulin resistance (HOMA-IR) method before and after the intervention. Statistical analysis was performed using GraphPad Prism Version 5 software, with statistical significance set at P < 0.05. Findings: The results indicate a decrease in serum fasting insulin levels and HOMA-IR measurements in the Yoga group, although the decrease did not reach statistical significance. In contrast, the Non-Yoga group showed a significant rise in serum fasting insulin levels and HOMA-IR measurements after 3 months, suggesting a detrimental effect on insulin resistance in these postmenopausal women. Theoretical Importance: This study provides evidence that a 12-week yoga practice can attenuate the increase in insulin resistance in early postmenopausal women.
It highlights the potential of yoga as a preventive measure against the early onset of insulin resistance and the development of type 2 diabetes mellitus. Regular yoga practice can be a valuable tool in addressing hormonal imbalances associated with early postmenopause, leading to a decrease in morbidity and mortality related to insulin resistance and type 2 diabetes mellitus in this population. Data Collection and Analysis Procedures: Data collection involved measuring serum fasting insulin levels and calculating HOMA-IR. Statistical analysis was performed using GraphPad Prism Version 5 software, and mean values with standard error of the mean were reported. The significance level was set at P < 0.05. Question Addressed: The study aimed to address whether a 3-month-long yoga practice could attenuate insulin resistance in early postmenopausal women. Conclusion: The research findings support the efficacy of a 12-week yoga practice in attenuating insulin resistance in early postmenopausal women. Regular yoga practice has the potential to prevent the early onset of insulin resistance and the development of type 2 diabetes mellitus in this population. By addressing the hormonal imbalances associated with early postmenopause, yoga could significantly decrease morbidity and mortality related to insulin resistance and type 2 diabetes mellitus in these subjects.
Keywords: postmenopause, insulin resistance, HOMA-IR, yoga, type 2 diabetes mellitus
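The HOMA-IR index used throughout the study is conventionally computed from fasting glucose and fasting insulin. A minimal sketch, assuming glucose in mg/dL (the equivalent constant for mmol/L is 22.5); the function name and sample values are illustrative, not from the study:

```python
def homa_ir(glucose_mg_dl, insulin_uu_ml):
    """Homeostasis model assessment of insulin resistance:
    fasting glucose (mg/dL) x fasting insulin (uU/mL) / 405."""
    return glucose_mg_dl * insulin_uu_ml / 405.0

# A fasting glucose of 90 mg/dL with fasting insulin of 10 uU/mL
print(round(homa_ir(90, 10), 2))  # 2.22
```

A within-group rise in this value over the 3 months, as seen in the Non-Yoga group, indicates worsening insulin resistance.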
Procedia PDF Downloads 67
351 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is an energy consumption management process aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand and then detecting the time at which each selected appliance changes its state. In order to fit with the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical software tool that uses behaviour simulation of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. Also, it facilitates the extraction of specific features used for general appliance modeling.
In addition to this, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector for the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both simulated data from LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
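The DTW matching used for appliance identification is the classic dynamic-programming recurrence over two sequences. A minimal sketch with an absolute-difference local cost; the function name and the toy power profiles are illustrative, not the authors' feature pipeline:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two numeric sequences,
    using the standard O(len(a)*len(b)) dynamic-programming recurrence."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Identical power profiles are zero distance apart, and a time-stretched
# version of the same on/off pattern also warps to zero distance
print(dtw_distance([0, 0, 5, 5, 0], [0, 0, 5, 5, 0]))  # 0.0
print(dtw_distance([0, 5, 5, 0], [0, 5, 0]))           # 0.0
```

This time-warping invariance is what makes DTW attractive for matching appliance signatures whose on/off durations vary between activations.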
Procedia PDF Downloads 75
350 Laboratory Indices in Late Childhood Obesity: The Importance of DONMA Indices
Authors: Orkide Donma, Mustafa M. Donma, Muhammet Demirkol, Murat Aydin, Tuba Gokkus, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu
Abstract:
Obesity in childhood establishes the ground for adulthood obesity. Morbid obesity in particular is an important problem for children because of associated diseases such as diabetes mellitus, cancer, and cardiovascular diseases. In this study, body mass index (BMI), body fat ratios, and anthropometric measurements and ratios were evaluated together with different laboratory indices in the evaluation of obesity in morbidly obese (MO) children. Children with nutritional problems participated in the study. Written informed consent was obtained from the parents, and the study protocol was approved by the Ethics Committee. Sixty-two MO girls aged 129.5±35.8 months and 75 MO boys aged 120.1±26.6 months were included in the scope of the study. WHO BMI percentiles for age and sex were used to assess the children, with those higher than the 99th percentile classified as morbidly obese. Anthropometric measurements of the children were recorded after their physical examination, and bio-electrical impedance analysis was performed to measure fat distribution. Anthropometric ratios, body fat ratios, Index-I and Index-II, as well as insulin sensitivity indices (ISIs), were calculated. Girls as well as boys were binary grouped according to the frequently used cut-off points: homeostasis model assessment-insulin resistance (HOMA-IR) index of <2.5 and >2.5, fasting glucose to insulin ratio (FGIR) of <6 and >6, and quantitative insulin sensitivity check index (QUICKI) of <0.33 and >0.33. They were evaluated based upon their BMIs; arms, legs, trunk, and whole body fat percentages; body fat ratios such as fat mass index (FMI), trunk-to-appendicular fat ratio (TAFR), and whole body fat ratio (WBFR); and anthropometric measures and ratios [waist-to-hip, head-to-neck, thigh-to-arm, thigh-to-ankle, height/2-to-waist, height/2-to-hip circumference (C)]. The SPSS/PASW 18 program was used for statistical analyses, and p≤0.05 was accepted as the level of statistical significance.
All of the fat percentages showed differences between below and above the specified cut-off points in girls when evaluated with HOMA-IR and QUICKI. In boys, differences were observed only in arms fat percent for HOMA-IR and legs fat percent for QUICKI (p≤0.05). FGIR was unable to detect any differences in the fat percentages of boys. Head-to-neck C was the only anthropometric ratio recommended for use with all ISIs (p≤0.001 for both girls and boys in HOMA-IR; p≤0.001 for girls and p≤0.05 for boys in FGIR and QUICKI). Indices recommended for use in both genders were Index-I, Index-II, HOMA/BMI, and log HOMA (p≤0.001). FMI was also a valuable index when evaluated with HOMA-IR and QUICKI (p≤0.001). An important point was the detection of strong significance for HOMA/BMI and log HOMA when they were also evaluated with the other indices, FGIR and QUICKI (p≤0.001). These parameters, along with Index-I, were unique at this level of significance for all children. In conclusion, well-accepted ratios or indices may not be valid for the evaluation of both genders; this study has emphasized their limitations for boys. This is particularly important for the selection of ratios and/or indices during clinical studies. Gender difference should be taken into consideration in the evaluation of the ratios or indices recommended for use within the scope of obesity studies.
Keywords: anthropometry, childhood obesity, gender, insulin sensitivity index
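The cut-off points above refer to standard insulin sensitivity indices that can be computed directly from fasting glucose and insulin. A minimal sketch using the conventional formulas (HOMA-IR = glucose × insulin / 405, with glucose in mg/dL and insulin in µU/mL; FGIR = glucose / insulin; QUICKI = 1 / (log10 insulin + log10 glucose)); the example values are hypothetical, and the study-specific DONMA Index-I and Index-II are not reproduced here:

```python
import math

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def fgir(glucose_mg_dl, insulin_uU_ml):
    """Fasting glucose to insulin ratio."""
    return glucose_mg_dl / insulin_uU_ml

def quicki(glucose_mg_dl, insulin_uU_ml):
    """Quantitative insulin sensitivity check index."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Hypothetical child: fasting glucose 90 mg/dL, fasting insulin 15 µU/mL
g, i = 90.0, 15.0
print(round(homa_ir(g, i), 2))   # 3.33 -> above the 2.5 cut-off
print(round(fgir(g, i), 1))      # 6.0
print(round(quicki(g, i), 3))    # 0.319 -> below the 0.33 cut-off
```

With these three functions, the binary grouping used in the study is just a comparison of each index against its cut-off.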
Procedia PDF Downloads 355
349 Co-Movement between Financial Assets: An Empirical Study on Effects of the Depreciation of Yen on Asia Markets
Authors: Yih-Wenn Laih
Abstract:
In recent times, the dependence and co-movement among international financial markets have become stronger than in the past, as evidenced by commentaries in the news media and the financial sections of newspapers. Studying the co-movement between returns in financial markets is an important issue for portfolio management and risk management. Understanding co-movement helps investors to identify opportunities for international portfolio management in terms of asset allocation and pricing. Since the election of the new Prime Minister, Shinzo Abe, in November 2012, the yen has weakened against the US dollar from the 80 to the 120 level. His policies, known as “Abenomics,” are intended to encourage private investment through a more aggressive mix of monetary and fiscal policy. Given the close economic relations and competition among Asia markets, it is interesting to discover the co-movement relations, affected by the depreciation of the yen, between the stock market of Japan and five major Asia stock markets: China, Hong Kong, Korea, Singapore, and Taiwan. Specifically, we measure the co-movement of stock markets between Japan and each of the five Asia stock markets in terms of rank correlation coefficients. To compute the coefficients, the return series of each stock market is first fitted by a skewed-t GARCH (generalized autoregressive conditional heteroscedasticity) model. Secondly, to measure the dependence structure between matched stock markets, we employ the symmetrized Joe-Clayton (SJC) copula to calculate the probability density function of paired skewed-t distributions. The joint probability density function is then utilized as the scoring scheme to optimize the sequence alignment by the dynamic programming method. Finally, we compute the rank correlation coefficients (Kendall's τ and Spearman's ρ) between matched stock markets based on their aligned sequences. We collect empirical data for the six stock indexes from the Taiwan Economic Journal.
The data is sampled at a daily frequency covering the period from January 1, 2013 to July 31, 2015. The empirical distributions of returns indicate fatter tails than the normal distribution; therefore, the skewed-t distribution and SJC copula are appropriate for characterizing the data. According to the computed Kendall’s τ, Korea has the strongest co-movement relation with Japan, followed by Taiwan, China, and Singapore; the weakest is Hong Kong. On the other hand, Spearman’s ρ reveals that the strength of co-movement with Japan in decreasing order is Korea, China, Taiwan, Singapore, and Hong Kong. We explore the effects of “Abenomics” on Asia stock markets by measuring the co-movement relation between Japan and five major Asia stock markets in terms of rank correlation coefficients. The matched markets are aligned by a hybrid method consisting of GARCH, copula, and sequence alignment. Empirical experiments indicate that Korea has the strongest co-movement relation with Japan, that China and Taiwan co-move with Japan more strongly than Singapore does, and that the Hong Kong market has the weakest co-movement relation with Japan.
Keywords: co-movement, depreciation of Yen, rank correlation, stock market
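The final rank-correlation step can be sketched in pure Python. The GARCH fitting, copula estimation, and sequence alignment that precede it are omitted, and the return series below are toy data, not the actual index returns:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs / total pairs (no ties)."""
    conc = disc = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    n = len(x)
    return (conc - disc) / (n * (n - 1) / 2)

def spearman_rho(x, y):
    """Spearman's rho via the rank-difference formula (assumes no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda k: v[k])
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Toy aligned daily return sequences for two markets
japan = [0.004, -0.002, 0.011, -0.007, 0.003]
korea = [0.005, -0.001, 0.009, -0.006, 0.002]
tau, rho = kendall_tau(japan, korea), spearman_rho(japan, korea)
```

Both coefficients lie in [-1, 1]; values near 1 indicate strong positive co-movement of the aligned sequences.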
Procedia PDF Downloads 229
348 Revolutionizing Product Packaging: The Impact of Transparent Graded Lanes on Ketchup and Edible Oils Containers on Consumer Behavior
Authors: Saeid Asghari
Abstract:
The growing interest in sustainability and healthy lifestyles has stimulated the development of solutions that promote mindful consumption and healthier choices. One such solution is the use of transparent graded lanes in product packaging, which enables consumers to visually track their product consumption and encourages portion control. However, the extent to which this packaging affects consumer behavior, trust, and loyalty towards a product or brand, as well as the effectiveness of messaging on the graded lanes, remains unclear. The research aims to examine the impact of transparent graded lanes on consumer behavior, trust, and loyalty towards products or brands in the context of the Janbo chain supermarket in Tehran, Iran, focusing on Ketchup and edible oils containers. A representative sample of 720 respondents is selected using quota sampling based on sex, age, and financial status. The study assesses the effect of messaging on the graded lanes in enhancing consumer recall and recognition of the product at the time of purchase, increasing repeat purchases, and fostering long-term relationships with customers. Furthermore, the potential outcomes of using transparent graded lanes, including the promotion of healthy consumption habits and the reduction of food waste, are also considered. The findings and results can inform the development of effective messaging strategies for graded lanes and suggest ways to enhance consumer engagement with product packaging. Moreover, the study's outcomes can contribute to the broader discourse on sustainable consumption and healthy lifestyles, highlighting the potential role of packaging innovations in promoting these values. We used four theories (social cognitive theory, self-perception theory, nudge theory, and marketing and consumer behavior) to examine the effect of these transparent graded lanes on consumer behavior. 
The conceptual model integrates the use of transparent graded lanes, consumer behavior, trust and loyalty, messaging, and the promotion of healthy consumption habits. The study aims to provide insights into how transparent graded lanes can promote mindful consumption, increase consumer recognition and recall of the product, and foster long-term relationships with customers. Findings suggest that the use of transparent graded lanes on ketchup and edible oil containers can have a positive impact on consumer behavior, trust, and loyalty towards a product or brand, as well as promote mindful consumption and healthier choices. The messaging on the graded lanes is also found to be effective in promoting recall and recognition of the product at the time of purchase and in encouraging repeat purchases. However, the impact of transparent graded lanes may be limited by factors such as cultural norms, personal values, and financial status. Broadly speaking, the investigation provides valuable insights into the potential benefits and challenges of using transparent graded lanes in product packaging, as well as effective strategies for promoting healthy consumption habits and building long-term relationships with customers.
Keywords: packaging, customer behavior, purchase, brand loyalty, healthy consumption
Procedia PDF Downloads 251
347 Algorithmic Obligations: Proactive Liability for AI-Generated Content and Copyright Compliance
Authors: Aleksandra Czubek
Abstract:
As AI systems increasingly shape content creation, existing copyright frameworks face significant challenges in determining liability for AI-generated outputs. Current legal discussions largely focus on who bears responsibility for infringing works, be it developers, users, or entities benefiting from AI outputs. This paper introduces a novel concept of algorithmic obligations, proposing that AI developers be subject to proactive duties that ensure their models prevent copyright infringement before it occurs. Building on principles of obligations law traditionally applied to human actors, the paper suggests a shift from reactive enforcement to proactive legal requirements. AI developers would be legally mandated to incorporate copyright-aware mechanisms within their systems, turning optional safeguards into enforceable standards. These obligations could vary in implementation across international, EU, UK, and U.S. legal frameworks, creating a multi-jurisdictional approach to copyright compliance. This paper explores how the EU’s existing copyright framework, exemplified by the Copyright Directive (2019/790), could evolve to impose a duty of foresight on AI developers, compelling them to embed mechanisms that prevent infringing outputs. By drawing parallels to GDPR’s “data protection by design,” a similar principle could be applied to copyright law, where AI models are designed to minimize copyright risks. In the UK, post-Brexit text and data mining exemptions are seen as pro-innovation but pose risks to copyright protections. This paper proposes a balanced approach, introducing algorithmic obligations to complement these exemptions. AI systems benefiting from text and data mining provisions should integrate safeguards that flag potential copyright violations in real time, ensuring both innovation and protection. In the U.S., where copyright law focuses on human-centric works, this paper suggests an evolution toward algorithmic due diligence. 
AI developers would have a duty similar to product liability, ensuring that their systems do not produce infringing outputs, even if the outputs themselves cannot be copyrighted. This framework introduces a shift from post-infringement remedies to preventive legal structures, where developers actively mitigate risks. The paper also breaks new ground by addressing obligations surrounding the training data of large language models (LLMs). Currently, training data is often treated under exceptions such as the EU’s text and data mining provisions or U.S. fair use. However, this paper proposes a proactive framework where developers are obligated to verify and document the legal status of their training data, ensuring it is licensed or otherwise cleared for use. In conclusion, this paper advocates for an obligations-centered model that shifts AI-related copyright law from reactive litigation to proactive design. By holding AI developers to a heightened standard of care, this approach aims to prevent infringement at its source, addressing both the outputs of AI systems and the training processes that underlie them.
Keywords: IP, technology, copyright, data, infringement, comparative analysis
Procedia PDF Downloads 15
346 A Practical Methodology for Evaluating Water, Sanitation and Hygiene Education and Training Programs
Authors: Brittany E. Coff, Tommy K. K. Ngai, Laura A. S. MacDonald
Abstract:
Many organizations in the Water, Sanitation and Hygiene (WASH) sector provide education and training in order to increase the effectiveness of their WASH interventions. A key challenge for these organizations is measuring how well their education and training activities contribute to WASH improvements. It is crucial for implementers to understand the returns on their education and training activities so that they can improve and make better progress toward the desired outcomes. The Centre for Affordable Water and Sanitation Technology (CAWST) has developed a methodology for evaluating education and training activities, so that organizations can understand the effectiveness of their WASH activities and improve accordingly; this paper presents CAWST's development and piloting of that evaluation methodology. CAWST developed the methodology through a series of research partnerships, followed by staged field pilots in Nepal, Peru, Ethiopia, and Haiti. During the research partnerships, CAWST collaborated with universities in the UK and Canada to review a range of available evaluation frameworks, investigate existing practices for evaluating education activities, and develop a draft methodology for evaluating education programs. The draft methodology was then piloted in three separate studies to evaluate CAWST’s, and CAWST’s partners’, WASH education programs. Each of the pilot studies evaluated education programs in different locations, with different objectives, and at different times within the project cycles. The evaluations in Nepal and Peru were conducted in 2013 and investigated the outcomes and impacts of CAWST’s WASH education services in those countries over the previous 5-10 years. In 2014, the methodology was applied to complete a rigorous evaluation of a 3-day WASH Awareness training program in Ethiopia, one year after the training had occurred.
In 2015, the methodology was applied in Haiti to complete a rapid assessment of a Community Health Promotion program, which informed the development of an improved training program. After each pilot evaluation, the methodology was reviewed and improvements were made. A key concept within the methodology is that in order for training activities to lead to improved WASH practices at the community level, it is not enough for participants to acquire new knowledge and skills; they must also apply the new skills and influence the behavior of others following the training. The steps of the methodology include: development of a Theory of Change for the education program, application of the Kirkpatrick model to develop indicators, development of data collection tools, data collection, data analysis and interpretation, and use of the findings for improvement. The methodology was applied in different ways for each pilot and was found to be practical to apply and adapt to meet the needs of each case. It was useful in gathering specific information on the outcomes of the education and training activities, and in developing recommendations for program improvement. Based on the results of the pilot studies, CAWST is developing a set of support materials to enable other WASH implementers to apply the methodology. By using this methodology, more WASH organizations will be able to understand the outcomes and impacts of their training activities, leading to higher quality education programs and improved WASH outcomes.
Keywords: education and training, capacity building, evaluation, water and sanitation
Procedia PDF Downloads 309
345 Photo-Fenton Degradation of Organic Compounds by Iron(II)-Embedded Composites
Authors: Marius Sebastian Secula, Andreea Vajda, Benoit Cagnon, Ioan Mamaliga
Abstract:
One of the most important classes of pollutants is represented by dyes. Their synthetic character and complex molecular structure make them stable and difficult to biodegrade in water, so the treatment of wastewaters containing dyes, in order to separate or degrade them, is of major importance. Various techniques have been employed to remove and/or degrade dyes in water; advanced oxidation processes (AOPs) are known to be among the most efficient towards dye degradation. The aim of this work is to investigate the efficiency of a cheap iron-impregnated activated carbon Fenton-like catalyst for degrading organic compounds in aqueous solutions. In the presented study, an anionic dye, Indigo Carmine, is considered as a model pollutant. Various AOPs are evaluated for the degradation of Indigo Carmine to establish the effect of the prepared catalyst. It was found that the Iron(II)-embedded activated carbon composite significantly enhances the degradation of Indigo Carmine. Using the wet impregnation procedure, 5 g of L27 AC material were contacted with Fe(II) solutions of an FeSO4 precursor at a theoretical iron content in the resulting composite of 1%. The L27 AC was impregnated for 3 h at 45°C, then filtered, washed several times with water and ethanol, and dried at 55°C for 24 h. Thermogravimetric analysis, Fourier transform infrared spectroscopy, X-ray diffraction, and transmission electron microscopy were employed to investigate the structure, texture, and micromorphology of the catalyst. The total iron content in the obtained composites and the iron leakage were determined spectrophotometrically using phenanthroline. Photo-catalytic tests were performed using a UV - Consulting Peschl Laboratory Reactor System.
UV light irradiation tests were carried out to determine the performance of the prepared iron-impregnated composite towards the degradation of Indigo Carmine in aqueous solution under different conditions (17 W UV lamps, with and without in-situ generation of O3; different concentrations of H2O2; different initial concentrations of Indigo Carmine; different values of pH; different doses of NH4OH enhancer). The photocatalytic tests were performed after the adsorption equilibrium had been established, and the photo-Fenton degradation of IC was tested at different values of initial concentration. The investigated process obeys pseudo-first-order kinetics. The obtained results emphasize an enhancement of Indigo Carmine degradation in the case of the heterogeneous photo-Fenton process conducted with an O3-generating UV lamp in the presence of hydrogen peroxide. Acknowledgments: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405.
Keywords: photodegradation, heterogeneous Fenton, anionic dye, carbonaceous composite, screening factorial design
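Pseudo-first-order kinetics imply ln(C0/Ct) = k_app·t, so the apparent rate constant can be recovered as the slope of that line through the origin. A minimal sketch on synthetic data (the concentrations and the assumed rate constant below are illustrative, not measured values from the study):

```python
import math

def pseudo_first_order_k(times_min, concentrations):
    """Least-squares slope (through the origin) of ln(C0/Ct) versus time."""
    c0 = concentrations[0]
    y = [math.log(c0 / c) for c in concentrations]
    num = sum(t * yi for t, yi in zip(times_min, y))
    den = sum(t * t for t in times_min)
    return num / den  # apparent rate constant k_app, 1/min

# Synthetic decay with k_app = 0.05 1/min: Ct = C0 * exp(-k t)
times = [0, 10, 20, 30, 60]
conc = [20.0 * math.exp(-0.05 * t) for t in times]
k = pseudo_first_order_k(times, conc)
```

On real irradiation data, the linearity of ln(C0/Ct) versus t is itself the check that the pseudo-first-order assumption holds.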
Procedia PDF Downloads 256
344 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens
Authors: R. Tamborrino, F. Rinaudo
Abstract:
Digital technologies offer possibilities to effectively convert historical archives into instruments of knowledge able to provide a guide for the interpretation of historical phenomena. Digital conversion and management of those documents allow the possibility to add other sources in a unique and coherent model that permits the intersection of different data able to open new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g. cadastres, censuses, etc.). The geographic localisation of that information inside cartographic supports allows for the comprehension and visualisation of specific relationships between different historical realities registering both the urban space and the peoples living there. These links that merge the different nature of data and documentation through a new organisation of the information can suggest a new interpretation of other related events. In all these kinds of analysis, the use of GIS platforms today represents the most appropriate answer. The design of the related databases is the key to realise the ad-hoc instrument to facilitate the analysis and the intersection of data of different origins. Moreover, GIS has become the digital platform where it is possible to add other kinds of data visualisation. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories realized just prior to WWI provides the opportunity to test the potentialities of GIS platforms for the analysis of urban landscape modifications during the first industrial development of the town. The inventory includes data about location, activities, and people. GIS is shaped in a creative way linking different sources and digital systems aiming to create a new type of platform conceived as an interface integrating different kinds of data visualisation. 
The data processing allows linking this information to the urban space and visualising the growth of the city at that time. The sources related to the development of the urban landscape in that period are of a different nature. The emerging necessity to build, enlarge, modify, and join different buildings to boost the industrial activities, in line with their fast development, is recorded in the official permissions delivered by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved, such as the owner, the investor, and the engineer or architect designing the industrial building. All these collected data offer the possibility, firstly, to re-build the process of change of the urban landscape by using GIS and 3D modelling technologies, thanks to access to the drawings (2D plans, sections, and elevations) that show the previous and the planned situations. Furthermore, they give access to information for different queries of the linked dataset that could be useful for different research targets, whether economic, biographical, architectural, or demographic. By superimposing a layer of the present city, the past meets the present: industrial heritage meets the contemporary town, and people meet urban history.
Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities
Procedia PDF Downloads 190
343 Measuring Enterprise Growth: Pitfalls and Implications
Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić
Abstract:
Enterprise growth is generally considered a key driver of competitiveness, employment, economic development, and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship by scholars and decision makers. The extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants, and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal, and related to a variety of factors which reflect the individual, firm, organizational, industry, or environmental determinants of growth. However, factors that affect growth are not easily captured, instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth which are used interchangeably. Differences among various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the main purpose of this paper is threefold: firstly, to compare the structure and performance of three growth prediction models based on the main growth measures (revenue, employment, and assets growth); secondly, to explore the prospects of financial indicators, set as exact, visible, standardized, and accessible variables, to serve as determinants of enterprise growth; and finally, to contribute to the understanding of the implications for research results and recommendations for growth caused by different growth measures. The models include a range of financial indicators as lagged determinants of the enterprises’ performances during 2008-2013, extracted from the national register of the financial statements of SMEs in Croatia.
The design and testing stages of the modeling used logistic regression procedures. Findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between particular predictors and growth measures is inconsistent: the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power for the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but are, unlike them, accessible, available, exact, and free of perceptual nuances in building up the model. The selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points to potential pitfalls of measuring and predicting growth. Overall, the results and the implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises
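The logistic regression step can be illustrated with a minimal gradient-descent sketch. The single lagged indicator, its values, and the binary growth labels below are toy data, not the Croatian SME dataset, and the study itself would have used standard statistical software with many indicators:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Plain stochastic gradient descent on the logistic log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the linear predictor
            for j, xj in enumerate(xi):
                w[j] -= lr * err * xj
            b -= lr * err
    return w, b

# Toy lagged financial indicator (e.g. scaled profitability) vs. growth outcome
X = [[0.1], [0.3], [0.4], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(X, y)
predict = lambda x: sigmoid(w[0] * x + b) >= 0.5
```

Fitting the same procedure three times, once per growth measure (revenue, employment, assets), is what exposes the inconsistent predictor sets the abstract reports.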
Procedia PDF Downloads 252
342 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá
Authors: Dayron Camilo Bermudez Mendoza
Abstract:
Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health, attributable to criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases, such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods like hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to invest in reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used vehicle sales websites. 
The COPERT model was utilized to determine emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and the Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, utilizing platforms like the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach for a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing the analyses on the per-square-meter property values in each city block. The results will be presented and published at the upcoming conference; the work integrates knowledge across several disciplines and culminates in a master's thesis.
Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility
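The hedonic pricing idea, regressing property value on pollution exposure so that the slope reads as an implicit price of air quality, can be sketched as a single-predictor ordinary least squares fit. All numbers below are hypothetical, and the actual model would control for many block-level covariates (location, size, age of buildings):

```python
def ols_slope_intercept(x, y):
    """Ordinary least squares for: price_per_m2 = a + b * emission_exposure."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical city blocks: pollution exposure index vs. price per square meter
exposure = [10, 20, 30, 40, 50]
price = [1500, 1400, 1300, 1200, 1100]
a, b = ols_slope_intercept(exposure, price)
# b < 0: each unit of exposure lowers the price, i.e. the implicit
# (marginal) cost of emissions capitalized into property values
```

In the full study this marginal price, scaled over affected properties, is one way to express the emissions externality in monetary terms.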
Procedia PDF Downloads 57
341 Culvert Blockage Evaluation Using Australian Rainfall And Runoff 2019
Authors: Rob Leslie, Taher Karimian
Abstract:
The blockage of cross drainage structures is a risk that needs to be understood and managed or mitigated through design. A blockage is a random event, influenced by site-specific factors, which needs to be quantified for design. Under- and overestimation of blockage can have major impacts on flood risk and on the cost associated with drainage structures. The importance of this matter is heightened for projects located within sensitive lands. It is a particularly complex problem for large linear infrastructure projects (e.g., rail corridors) located within floodplains, where blockage factors can influence flooding upstream and downstream of the infrastructure. The selection of appropriate blockage factors for hydraulic modeling has been the subject of extensive research by hydraulic engineers. This paper reviews the current Australian Rainfall and Runoff 2019 (ARR 2019) methodology for blockage assessment by applying the method to a transport corridor brownfield upgrade case study in New South Wales; the results of applying the method are also validated against asset data and maintenance records. ARR 2019, Book 6, Chapter 6 includes advice and an approach for estimating the blockage of bridges and culverts. This paper concentrates specifically on the blockage of cross drainage structures. The method has been developed to estimate the blockage level for culverts affected by sediment or debris due to flooding, and its objective is to evaluate a numerical blockage factor that can be utilized in a hydraulic assessment of cross drainage structures. The project included an assessment of over 200 cross drainage structures. In order to estimate a blockage factor for use in the hydraulic model, a process has been advanced that considers the qualitative factors (e.g., debris type, debris availability) and the site-specific hydraulic factors that influence blockage.
A site rating associated with the debris potential (i.e., availability, transportability, mobility) at each crossing was completed using the method outlined in the ARR 2019 guidelines. The hydraulic inputs (i.e., flow velocity, flow depth) and qualitative factors at each crossing were combined in a spreadsheet where the design blockage level for each cross drainage structure was determined based on the condition relating the Inlet Clear Width, L10 (the average length of the longest 10% of the debris reaching the site), and the Adjusted Debris Potential. Asset data, including site photos and maintenance records, were then reviewed and compared with the blockage assessment to check the validity of the results. The results of this assessment demonstrate that the blockage factors estimated at each crossing location using the ARR 2019 guidelines are well validated by the asset data. The primary finding of the study is that the ARR 2019 methodology is a suitable approach for culvert blockage assessment, validated here against a case study spanning a large geographical area and multiple sub-catchments. The study also found that the methodology can be effectively coded within a spreadsheet or similar analytical tool to automate its application.
Keywords: ARR 2019, blockage, culverts, methodology
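The condition-based lookup described in the abstract can be illustrated in code. This is a minimal, hedged sketch only: the function name, the debris-potential categories, and all blockage percentages below are illustrative placeholders and do not reproduce the published ARR 2019 Book 6, Chapter 6 tables.

```python
# Illustrative sketch of an ARR 2019-style culvert blockage lookup.
# All numeric blockage fractions here are hypothetical placeholders,
# NOT the values from the ARR 2019 Book 6, Chapter 6 guidance.

def design_blockage(inlet_clear_width, l10, adjusted_debris_potential):
    """Return an illustrative design blockage fraction for one crossing.

    inlet_clear_width: clear width of the culvert inlet (m)
    l10: average length of the longest 10% of debris reaching the site (m)
    adjusted_debris_potential: qualitative rating, one of 'low'|'medium'|'high'
    """
    if inlet_clear_width < l10:
        # Inlet narrower than the characteristic debris length: large
        # debris can span the opening, so blockage risk is highest.
        return {"low": 0.25, "medium": 0.50, "high": 1.00}[adjusted_debris_potential]
    # Wider inlets pass most debris: assign lower illustrative factors.
    return {"low": 0.0, "medium": 0.10, "high": 0.25}[adjusted_debris_potential]
```

Coding the condition this way, rather than hand-assigning factors per crossing, is what allows the assessment of 200+ structures to be automated in a spreadsheet or script.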
Procedia PDF Downloads 355
340 Effect of Climate Change on the Genomics of Invasiveness of the Whitefly Bemisia tabaci Species Complex by Estimating the Effective Population Size via a Coalescent Method
Authors: Samia Elfekih, Wee Tek Tay, Karl Gordon, Paul De Barro
Abstract:
Invasive species represent an increasing threat to food biosecurity, causing significant economic losses in agricultural systems. An example is the sweet potato whitefly, Bemisia tabaci, a complex of morphologically indistinguishable species causing average annual global damage estimated at US$2.4 billion. The Bemisia complex represents an interesting model for evolutionary studies because of its extensive distribution and potential for invasiveness and population expansion. Within this complex, two species, Middle East-Asia Minor 1 (MEAM1) and Mediterranean (MED), have invaded well beyond their home ranges, whereas others, such as Indian Ocean (IO) and Australia (AUS), have not. In order to understand why some Bemisia species have become invasive, genome-wide sequence scans were used to estimate population dynamics over time and relate these to climate. The Bayesian Skyline Plot (BSP) method as implemented in BEAST was used to infer historical effective population sizes. To overcome sampling bias, the populations were combined based on geographical origin. The datasets used for this analysis are genome-wide SNPs (single nucleotide polymorphisms) called separately in each of the following groups: Sub-Saharan Africa (Burkina Faso), Europe (Spain, France, Greece, and Croatia), USA (Arizona), Mediterranean-Middle East (Israel, Italy), Middle East-Central Asia (Turkmenistan, Iran), and Reunion Island. The non-invasive AUS species, endemic to Australia, was used as an outgroup. The main findings of this study show that the BSP for the Sub-Saharan African MED population differs from that observed in MED populations from the Mediterranean Basin, suggesting evolution under a different set of environmental conditions. For MED, the effective size of the African (Burkina Faso) population showed a rapid expansion ≈250,000-310,000 years ago (YA), preceded by a period of slower growth.
The European MED populations (i.e., Spain, France, Croatia, and Greece) showed a single burst of expansion at ≈160,000-200,000 YA. The MEAM1 populations from Israel and Italy and those from Iran and Turkmenistan are similar, both showing the earlier expansion at ≈250,000-300,000 YA. The single IO population lacked the later expansion but showed the earlier one. This pattern is shared with the Sub-Saharan African (Burkina Faso) MED population, suggesting that IO faced a similar history of environmental change, which seems plausible given their relatively close geographical distributions. In conclusion, populations within the invasive species MED and MEAM1 exhibited signatures of population expansion during the Pleistocene, a geological epoch marked by repeated climatic oscillations with cycles of glacial and interglacial periods, that are lacking in the non-invasive species (IO and AUS). These expansion signatures suggest that the genomes of some Bemisia species have the potential to shape their adaptability and invasiveness.
Keywords: whitefly, RADseq, invasive species, SNP, climate change
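The Bayesian Skyline Plot used above builds on the classic skyline estimator: for a coalescent interval during which i lineages persist for time t_i (in generations), the effective population size over that interval is estimated as i(i-1)t_i/2. A minimal sketch of that underlying estimator follows; note that BEAST's BSP averages over sampled genealogies and groups intervals rather than applying this formula to a single fixed tree, so this illustrates the principle only.

```python
# Classic skyline estimator (the idea underlying Bayesian Skyline Plots):
# for each coalescent interval with i extant lineages lasting t_i
# generations, estimate Ne over that interval as i*(i-1)*t_i / 2.
# Intervals are ordered from the tips toward the root of the genealogy.

def classic_skyline(intervals):
    """intervals: list of (num_lineages, interval_length_in_generations).

    Returns one Ne estimate per interval, from the present back in time.
    """
    return [i * (i - 1) * t / 2.0 for i, t in intervals]
```

Plotting these per-interval estimates against cumulative time yields the stepwise Ne trajectory in which expansions, such as those inferred for MED and MEAM1, appear as increases toward the present.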
Procedia PDF Downloads 125