Search results for: number of the processors
1261 Integrated Geophysical Surveys for Sinkhole and Subsidence Vulnerability Assessment, in the West Rand Area of Johannesburg
Authors: Ramoshweu Melvin Sethobya, Emmanuel Chirenje, Mihlali Hobo, Simon Sebothoma
Abstract:
The recent surge in residential infrastructure development around the metropolitan areas of South Africa has made thorough geotechnical assessments prior to site development necessary to ensure human and infrastructure safety. This paper appraises the success of multi-method geophysical techniques in delineating sinkhole vulnerability in a residential landscape. ERT, MASW, VES, magnetic and gravity surveys were conducted to assist in mapping sinkhole vulnerability, using an existing sinkhole as a constraint, at Venterspost town, west of Johannesburg. The combination and integration of results from the different geophysical techniques proved useful in delineating the lithologic succession around the sinkhole locality and in determining the geotechnical characteristics of each layer and its contribution to the development of sinkholes, subsidence and cavities in the vicinity of the site. The study results also assisted in determining the possible depth extension of the currently existing sinkhole and the locations of sites where other similar karstic features and sinkholes could form. Results of the ERT, VES and MASW surveys uncovered dolomitic bedrock at varying depths around the sites, exhibiting high resistivity values in the range 2500-8000 ohm.m and correspondingly high velocities in the range 1000-2400 m/s. The dolomite layer was found to be overlain by a weathered, chert-poor dolomite layer, with resistivities in the range 250-2400 ohm.m and velocities ranging from 500-600 m/s, into which the large sinkhole has collapsed/caved in. A compiled 2.5D high-resolution shear wave velocity (Vs) map of the study area was created using 2D profiles of MASW data, offering insights into the prevailing lithological setup conducive to the formation of various types of karstic features around the site.
3D magnetic models of the site highlighted regions of possible subsurface interconnection between the currently existing large sinkhole and the other subsidence feature at the site. A number of depth slices were used to detail the conditions near the sinkhole as depth increases. Gravity survey results mapped the possible formational pathways for the development of new karstic features around the site. The combination and correlation of different geophysical techniques proved useful in delineating the site's geotechnical characteristics and mapping the possible depth extent of the currently existing sinkhole.
Keywords: resistivity, magnetics, sinkhole, gravity, karst, delineation, VES
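The resistivity figures quoted above come from ERT and VES soundings. As a minimal, hypothetical illustration (not code from the study), the apparent resistivity seen by a standard Wenner four-electrode array follows directly from the electrode spacing, the injected current and the measured voltage:

```python
import math

def apparent_resistivity_wenner(spacing_m, delta_v, current_a):
    """Apparent resistivity (ohm.m) for a Wenner array:
    rho_a = 2 * pi * a * (dV / I), with equal electrode spacing a."""
    return 2.0 * math.pi * spacing_m * (delta_v / current_a)

# Hypothetical reading: a = 10 m spacing, 0.5 V measured, 0.1 A injected.
rho_a = apparent_resistivity_wenner(10.0, 0.5, 0.1)
print(round(rho_a, 1))  # ~314.2 ohm.m, within the weathered-dolomite range above
```

A reading of this kind, repeated at increasing spacings, is what a VES sounding curve is built from; the inversion to a layered resistivity model is a separate step.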
Procedia PDF Downloads 80
1260 Development of an Innovative Mobile Phone Application for Employment of Persons With Disabilities Toward the Inclusive Society
Authors: Marutani M, Kawajiri H, Usui C, Takai Y, Kawaguchi T
Abstract:
Background: To build an inclusive society, the Japanese government provides a “transition support for employment” system for Persons with Disabilities (PWDs). It is, however, difficult to provide appropriate accommodations due to their changeable health conditions. Mobile phone applications (apps) are useful for monitoring their health conditions and environments, and effective in improving reasonable accommodations for PWDs. Purpose: This study aimed to develop an app in which PWDs input their self-assessments, making their health and environmental conditions visible. To attain this goal, we investigated the items of the app as a first step. Methods: A qualitative and descriptive design was used for this study. Study participants were recruited by snowball sampling in July and August 2023. They had to have a minimum of five years of experience supporting PWDs' employment. Semi-structured interviews were conducted on their assessment of PWDs' daily activities, health conditions, and living and working environments. A verbatim transcript was created from each interview. We extracted items in three groups from each transcript: daily activities, health conditions, and living and working environments. Results: Fourteen participants were involved (average experience: 10.6 years). Based on the interviews, the three item groups were enriched. The items of daily activities were divided into fifty-five. Example items were as follows: “have meals in one's own style,” “feel like one slept well,” “wake-up time, bedtime, and mealtime are usually fixed,” “commute to the office and work without barriers.” Thirteen items of health conditions were obtained, such as “feel no anxiety,” “relieve stress,” “focus on work and training,” “have no pain,” “have the physical strength to work for one day.” The items of living and working environments were divided into fifty-two.
Example items were as follows: “have no barriers at home,” “have supportive family members,” “have time to take medication on time while at work,” “commute time is just right,” “people at work understand the symptoms,” “room temperature and humidity are just right,” “get along well with friends in my own way.” The participants also commented on the style of self-assessment input, for example that a face scale would be preferred to a number scale. Conclusion: The items enriched existing paper-based assessment items in terms of the living and working environment, because they were obtained from the perspective of PWDs. We now have to create the app and examine its usefulness with PWDs toward an inclusive society.
Keywords: occupational health, innovative tool, people with disabilities, employment
Procedia PDF Downloads 55
1259 Monitoring Potential Temblor Localities as a Supplemental Risk Control System
Authors: Mikhail Zimin, Svetlana Zimina, Maxim Zimin
Abstract:
Without question, the basic method of preventing human and material losses is the provision of adequate strength in constructions. At the same time, seismic load has a stochastic character, so there always remains some risk of earthquake forces exceeding the selected design load. This risk is very low, but the consequences of such events may be extremely serious. Also dangerous are occasional mistakes in seismic zoning, soil conditions changing before temblors, and failure to take into account hazardous natural phenomena caused by earthquakes. Besides, it is known that temblors detrimentally affect the environmental situation in the regions where they occur, resulting in panic and worsening the course of various diseases. This may lead to mistakes by personnel of hazardous production facilities, such as those for the production and distribution of gas and oil, which may provoke severe accidents. In addition, gas and oil pipelines often have long mileage and, in contrast with buildings, cross many perilous zones; this increases the risk of heavy accidents. In such cases, complex monitoring of potential earthquake localities would be relevant. Even though the number of successful real-time forecasts of earthquakes is not great, it is well in excess of what would be expected under random guessing. The time-lapse study and analysis that were performed consist of searching for seismic, biological, meteorological, and light earthquake precursors; processing such data with the help of fuzzy sets; collecting weather information; utilizing a database of terrain; and computing the risk of slope processes under a temblor in a given setting. The work was done in a real-time environment, and broadly acceptable results were obtained. Observations from already in-place seismic recording systems are used. Furthermore, a retrospective study of precursors of known earthquakes was done. The situations before the Ashkhabad, Tashkent, and Haicheng seismic events were analyzed, and reasonably good findings were obtained.
The results of earthquake forecasts can be used for predicting dangerous natural phenomena caused by temblors, such as avalanches and mudslides. They may also be utilized for the prophylaxis of some diseases and their complications. Relevant software has been developed as well. It should be emphasized that such control does not require serious financial expense and can be performed by a small group of professionals. Thus, complex monitoring of potential earthquake localities, including short-term earthquake forecasts and analysis of possible hazardous consequences of temblors, may further the safety of pipeline facilities.
Keywords: risk, earthquake, monitoring, forecast, precursor
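The abstract mentions processing precursor data with the help of fuzzy sets. A minimal sketch of that idea, with entirely hypothetical membership functions and a simple fuzzy OR aggregation (the authors' actual inference scheme is not described in detail):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: rises on [a, b], flat on [b, c],
    falls on [c, d]; returns a degree of membership in [0, 1]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def alarm_degree(memberships):
    """Combine precursor membership degrees with a fuzzy OR (max):
    any single strong precursor raises the alarm level."""
    return max(memberships)

# Hypothetical precursor readings mapped to membership degrees.
seismic = trapezoid(6.0, 2.0, 5.0, 8.0, 10.0)     # anomalous micro-seismicity
biological = trapezoid(1.0, 2.0, 5.0, 8.0, 10.0)  # no unusual animal behaviour
print(alarm_degree([seismic, biological]))         # 1.0
```

The appeal of the fuzzy formulation is that heterogeneous precursors (seismic, biological, meteorological, light phenomena) are first mapped onto a common [0, 1] scale before being aggregated.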
Procedia PDF Downloads 22
1258 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes
Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi
Abstract:
Diabetic kidney disease is one of the main microvascular complications caused by diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes. However, their interrelations could distort the effect estimation of these variables on the disease's progression. The objective of the study is to determine how the biochemical and clinical variables in people with type 2 diabetes are interrelated and what their effects on kidney disease progression are, through advanced statistical methods. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate, which helped us reduce a set of correlated biochemical variables to a smaller number of uncorrelated variables. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of the biochemical and clinical variables on the ordinal response variable (progression of kidney function), considering the proportionality assumption for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetes cohort at a polyclinic hospital of the University of Messina, Italy. The principal component analysis yielded three uncorrelated components: principal component 1, with negative loadings of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loadings of total cholesterol and low-density lipoprotein; and principal component 3, with negative loadings of high-density lipoprotein and a positive loading of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease.
For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (P < 0.001). However, this effect was inconsistent across levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent category model, and the partial continuation ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, and medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine, have a significant effect on the progression of kidney disease.
Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes
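As a hedged illustration of the cumulative odds model referred to above: under the proportional-odds assumption, a single coefficient shifts all category thresholds, and the reported odds ratio of 0.423 corresponds to a coefficient of ln(0.423). The cutpoints below are hypothetical, not the study's estimates:

```python
import math

def cumulative_probs(x, beta, cutpoints):
    """Proportional-odds (cumulative logit) model:
    P(Y <= j | x) = 1 / (1 + exp(-(alpha_j - beta * x))).
    Returns the individual category probabilities P(Y = j)."""
    cum = [1.0 / (1.0 + math.exp(-(a - beta * x))) for a in cutpoints]
    cum.append(1.0)  # P(Y <= last category) = 1
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# An odds ratio of 0.423 corresponds to beta = ln(0.423) ~ -0.86 per unit
# increase of the component score; cutpoints here are made up.
beta = math.log(0.423)
probs = cumulative_probs(x=1.0, beta=beta, cutpoints=[-1.0, 0.5, 2.0])
print([round(p, 3) for p in probs])  # four stage probabilities summing to 1
```

The partial (non-proportional) models mentioned in the abstract relax exactly this single-beta restriction by letting the coefficient vary across cutpoints.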
Procedia PDF Downloads 39
1257 Computational Analysis of Thermal Degradation in Wind Turbine Spars' Equipotential Bonding Subjected to Lightning Strikes
Authors: Antonio A. M. Laudani, Igor O. Golosnoy, Ole T. Thomsen
Abstract:
Rotor blades of large, modern wind turbines are highly susceptible to downward lightning strikes, as well as to triggering upward lightning; consequently, it is necessary to equip them with an effective lightning protection system (LPS) in order to avoid any damage. The performance of existing LPSs is affected by carbon fibre reinforced polymer (CFRP) structures, which lead to lightning-induced damage in the blades, e.g. via electrical sparks. A solution to prevent internal arcing would be to electrically bond the LPS and the composite structures so that they are at the same electric potential. Nevertheless, elevated temperatures arise at the joint interfaces because of high contact resistance, which melts and vaporises some of the epoxy resin matrix around the bonding. The high-pressure gases produced open up the bonding and can ignite thermal sparks. The objective of this paper is to predict the current density distribution and the temperature field in the adhesive joint cross-section, in order to check whether the resin pyrolysis temperature is reached and any damage is to be expected. The finite element method has been employed to solve both the current and heat transfer problems, which are considered weakly coupled. The mathematical model for the electric current includes the Maxwell-Ampere equation for the induced electric field, solved together with current conservation, while the thermal field is found from the heat diffusion equation. In this way, the current sub-model calculates the Joule heat release for a chosen bonding configuration, whereas the thermal analysis makes it possible to determine threshold values of voltage and current density that must not be exceeded in order to keep the temperature across the joint below the pyrolysis temperature, thereby preventing outgassing. In addition, it provides an indication of the minimum number of bonding points.
It is worth mentioning that the numerical procedures presented in this study can be tailored and applied to joints other than adhesive ones for wind turbine blades; for instance, they can be applied to the lightning protection of aerospace bolted joints. Furthermore, they can even be customized to predict the electromagnetic response under lightning strikes of other wind turbine systems, such as nacelle and hub components.
Keywords: carbon fibre reinforced polymer, equipotential bonding, finite element method, FEM, lightning protection system, LPS, wind turbine blades
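The paper solves the full weakly coupled current/heat problem with FEM. As a far cruder, purely illustrative stand-in, a lumped-parameter energy balance for a single bonding contact poses the same qualitative question, namely whether Joule heating drives the joint past the pyrolysis temperature; all material values below are hypothetical:

```python
def contact_temperature(current_a, resistance_ohm, h_w_per_k, mass_kg,
                        c_j_per_kg_k, t_ambient_c, t_end_s, dt_s=1e-4):
    """Lumped-parameter transient of a resistive bonding contact:
    m*c*dT/dt = I^2*R - h*(T - T_ambient), integrated with explicit Euler.
    A crude stand-in for the weakly coupled current/heat FEM problem."""
    temp = t_ambient_c
    joule_w = current_a ** 2 * resistance_ohm  # constant Joule heat release
    for _ in range(int(t_end_s / dt_s)):
        temp += dt_s * (joule_w - h_w_per_k * (temp - t_ambient_c)) / (mass_kg * c_j_per_kg_k)
    return temp

# Hypothetical values: 200 A through a 5 mOhm contact, small epoxy volume.
t_final = contact_temperature(200.0, 5e-3, 2.0, 1e-3, 1500.0, 20.0, 5.0)
pyrolysis_c = 300.0  # assumed resin pyrolysis threshold
print(round(t_final, 1), t_final > pyrolysis_c)  # approaches the ~120 C steady state
```

In this toy model the steady-state temperature is T_ambient + I²R/h, so the threshold character of the current density is already visible: doubling the current quadruples the heat release.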
Procedia PDF Downloads 164
1256 Neuronal Mechanisms of Observational Motor Learning in Mice
Authors: Yi Li, Yinan Zheng, Ya Ke, Yungwing Ho
Abstract:
Motor learning is a process that frequently happens in humans and rodents; it is defined as a relatively permanent improvement in the capability to perform a skill, achieved through practice or experience. There are many ways to learn a behavior, among them observational learning: the process of learning by watching the behaviors of others, for example, a child imitating parents, learning a new sport by watching training videos, or solving puzzles by watching the solutions. Much research explores observational learning in humans and primates; however, its neuronal mechanism, especially that of observational motor learning, remains uncertain. It is well accepted that mirror neurons are essential in the observational learning process. These neurons fire both when a primate performs a goal-directed action and when it sees someone else demonstrating the same action, i.e., they show high firing activity both while completing and while watching the behavior. Mirror neurons are assumed to mediate imitation or to play a critical and fundamental role in action understanding. They are distributed in many brain areas of primates, i.e., the posterior parietal cortex (PPC), premotor cortex (M2), and primary motor cortex (M1) of the macaque brain. However, few researchers have reported the existence of mirror neurons in rodents. To verify the existence of mirror neurons and their possible role in motor learning in rodents, we performed a customised string-pulling behavior task combined with multiple behavior analysis methods, photometry, electrophysiology recording, c-fos staining and optogenetics in healthy mice. After five days of training, the demonstrator (demo) mice showed a significantly quicker response and a shorter time to reach the string; fast, steady and accurate performance in pulling down the string; and more precise grasping of the beads.
During three days of observation, the observer mice showed more facial motions while the demo mice performed the behaviors. On the first training day, the observers needed fewer trials to find and pull the string. However, the time to find the beads and pull down the string was unchanged in the successful attempts on the first and subsequent training days, which indicates successful action understanding but failed motor learning through observation in mice. After observation, post-hoc staining revealed that c-fos expression was increased in cognition-related brain areas (medial prefrontal cortex) and motor cortices (M1, M2). In conclusion, this project indicated that observation led to a better understanding of behaviors and activated the cognitive and motor-related brain areas, which suggests the possible existence of mirror neurons in these brain areas.
Keywords: observation, motor learning, string-pulling behavior, prefrontal cortex, motor cortex, cognitive
Procedia PDF Downloads 88
1255 An Audit on the Role of Sentinel Node Biopsy in High-Risk Ductal Carcinoma in Situ and Intracystic Papillary Carcinoma
Authors: M. Sulieman, H. Arabiyat, H. Ali, K. Potiszil, I. Abbas, R. English, P. King, I. Brown, P. Drew
Abstract:
Introduction: The incidence of breast ductal carcinoma in situ (DCIS) has been increasing; it currently represents up to 20-25% of all breast carcinomas. Some aspects of DCIS management are still controversial, mainly due to the heterogeneity of its clinical presentation and of its biological and pathological characteristics. In DCIS, a histological diagnosis obtained preoperatively carries the risk of sampling error if the presence of invasive cancer is subsequently diagnosed. A mammographic extent of more than 4-5 cm and the presence of architectural distortion, focal asymmetric density or a mass on mammography are proven important risk factors for preoperative histological understaging. Intracystic papillary cancer (IPC) is a rare form of breast carcinoma. Despite being previously compared to DCIS, it has been shown to present histologically with invasion of the basement membrane and even metastasis. SLNB carries a risk of associated comorbidity that should be considered when planning surgery for DCIS and IPC. Objectives: The aim of this audit was to better define a ‘high risk’ group of patients with a pre-operative diagnosis of non-invasive cancer undergoing breast conserving surgery, who would benefit from sentinel node biopsy. Method: Retrospective data collection of all patients with ductal carcinoma in situ over 5 years. 636 patients were identified, and after exclusion criteria were applied, 394 patients were included. High risk was defined as extensive micro-calcification >40mm OR any mass-forming DCIS. IPC: a Winpath search for the term ‘papillary carcinoma’ in any breast specimen over a 5-year duration; 29 patients were included in this group. Results: DCIS: 188 patients were deemed high risk due to >40mm calcification or a mass (radiological or palpable); 61% of those had a mastectomy and 32% BCS. Overall, in that high-risk group, the proportion with invasive disease was 38%. Of those high-risk DCIS patients, 85% had a SLNB: 80% at the time of surgery and 5% at a second operation.
For the BCS patients, 42% had SLNB at the time of surgery and 13% (8 patients) at a second operation. 15 patients (7.9%) in the high-risk group had a positive SLNB, 11 having a mastectomy and 4 having BCS. IPC: the provisional diagnosis of encysted papillary carcinoma was upgraded to an invasive carcinoma on final histology in around a third of cases. This may have implications when deciding whether to offer sentinel node removal at the time of therapeutic surgery. Conclusions: We have defined a ‘high risk’ group of patients with a pre-operative diagnosis of non-invasive cancer undergoing BCS who would benefit from SLNB at the time of surgery. In patients with high-risk features, the risk of invasive disease is up to 40%, but the risk of nodal involvement is approximately 8%. The risk of morbidity from SLNB is up to about 5%, especially the risk of lymphedema.
Keywords: breast ductal carcinoma in situ (DCIS), intracystic papillary carcinoma (IPC), sentinel node biopsy (SLNB), high-risk, non-invasive, cancer disease
Procedia PDF Downloads 111
1254 Influential Factors for Consumerism in Women's Western Formal Wear: An Indian Perspective
Authors: Namrata Jain, Vishaka Karnad
Abstract:
Fashion has always fascinated people through the ages. Indian women's wear, in particular women's western formal wear, has gone through transformational phases during the past decade. An increasing number of working women, independence in deciding financial matters, media exposure and awareness of current trends have provided a different dimension to the apparel segment. With globalization and the sharing of cultures, formal women's wear in India is no longer restricted to ethnic outfits like the sari or salwar kameez. A strong western influence has been observed in the design, production and use of western formal wear by working women as consumers. The present study focuses on psychographic parameters, consumer buying preferences and their relation to the present market scenario. Qualitative and quantitative data were gathered through observation, a consumer survey and a study of brands. A questionnaire was prepared and uploaded as a Google form to gather primary data from one hundred consumer respondents. The respondent samples were drawn through snowball and purposive sampling techniques. Consumers' buying behavior is influenced by various aspects such as age group, occupation, income and personal preferences. Frequency of use, criteria for brand selection, styles of formal wear and motivating factors for the purchase of western formals by working women were the other influential factors under consideration. It was observed that higher consumption and greater popularity were indicated by women in the age group of 21-30 years. Among western formal wear, shirts and trousers were noted to be the most preferred in Mumbai. It may be noted that consumers purchased and used branded western formal wear for reasons of comfort and value for money. Past experience of using the product and price were important criteria for brand loyalty, but the need for variety lured consumers to look for other brands.
Fit of the garment was rated as the most important motivational factor when selecting products for purchase. With the advancement of women's economic status, self-reliance, and women's role and image in society, impulsive buying has increased with the rise in consumerism. There is an ever-growing demand for innovations in cuts, styles, designs, colors and fabrics. The growing fashion consciousness at the workplace has turned the women's formal wear segment into a lucrative and highly evolving market, thus providing space for new entrepreneurs to become part of this developing sector.
Keywords: buying behavior, consumerism, fashion, western formal wear
Procedia PDF Downloads 467
1253 Research on the Optimization of Satellite Mission Scheduling
Authors: Pin-Ling Yin, Dung-Ying Lin
Abstract:
Satellites play an important role in our daily lives, from monitoring the Earth's environment and providing real-time disaster imagery to predicting extreme weather events. As technology advances and demands increase, the tasks undertaken by satellites have become increasingly complex, with more stringent resource management requirements. A common challenge in satellite mission scheduling is the limited availability of resources, including onboard memory, ground station accessibility, and satellite power. In this context, efficiently scheduling and managing increasingly complex satellite missions under constrained resources has become a critical issue. The core of Satellite Onboard Activity Planning (SOAP) lies in optimizing the scheduling of the received tasks, arranging them on a timeline to form an executable onboard mission plan. This study aims to develop an optimization model that considers the various constraints involved in satellite mission scheduling, such as non-overlapping execution periods for certain types of tasks, the requirement that tasks fall within the contact range of specified types of ground stations during their execution, onboard memory capacity limits, and collaborative constraints between different types of tasks. Specifically, this research constructs a mixed-integer programming mathematical model and solves it with a commercial optimization package. However, as the problem size increases, the problem becomes more difficult to solve; therefore, a heuristic algorithm has been developed to address the challenges of using a commercial optimization package at larger scales. The goal is to effectively plan satellite missions, maximizing the total number of executable tasks while considering task priorities and ensuring that tasks can be completed as early as possible without violating feasibility constraints.
To verify the feasibility and effectiveness of the algorithm, test instances of various sizes were generated, and the results were validated through feedback from on-site users and compared against solutions obtained from a commercial optimization package. Numerical results show that the algorithm performs well under various scenarios, consistently meeting user requirements. The satellite mission scheduling algorithm proposed in this study can be flexibly extended to different types of satellite mission demands, achieving optimal resource allocation and enhancing the efficiency and effectiveness of satellite mission execution.
Keywords: mixed-integer programming, meta-heuristics, optimization, resource management, satellite mission scheduling
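A drastically simplified, hypothetical version of such a heuristic (priority-greedy placement on a single timeline with only non-overlap and time-window constraints; the paper's actual model additionally handles memory limits and ground-station visibility) might look like:

```python
def schedule(tasks):
    """tasks: list of (name, priority, duration, window_start, window_end).
    Place tasks by descending priority at the earliest feasible start time."""
    timeline = []  # list of (start, end, name), kept sorted by start

    def fits(start, duration):
        end = start + duration
        return all(end <= s or start >= e for s, e, _ in timeline)

    for name, _, dur, w0, w1 in sorted(tasks, key=lambda t: -t[1]):
        start = w0
        while start + dur <= w1:
            if fits(start, dur):
                timeline.append((start, start + dur, name))
                timeline.sort()
                break
            # jump to the end of the earliest task blocking this slot
            start = min(e for s, e, _ in timeline if s < start + dur and e > start)
    return timeline

# Hypothetical tasks: (name, priority, duration, window_start, window_end).
plan = schedule([("imaging", 3, 4, 0, 10),
                 ("downlink", 2, 4, 0, 10),
                 ("calibration", 1, 4, 0, 6)])
print(plan)  # imaging at [0,4), downlink at [4,8); calibration cannot fit
```

A real MIP formulation would instead introduce binary start-time variables and let the solver trade tasks off globally; the greedy pass above only approximates that, which is the usual price of a heuristic at scale.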
Procedia PDF Downloads 25
1252 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour
Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale
Abstract:
Mumbai being traditionally the epicenter of India's trade and commerce, the existing major ports, such as Mumbai and Jawaharlal Nehru (JN) Ports situated in the Thane estuary, are also developing their waterfront facilities. Various developments in this region over the past decades have changed the tidal flux entering/leaving the estuary. The intake at Pir-Pau faces a shortage of water owing to the advancing shoreline, while the jetty near Ulwe faces a ship scheduling problem due to the existence of shallower depths between JN Port and Ulwe Bunder. In order to solve these problems, it is inevitable to have information about tide levels over a long duration from field measurements. However, field measurement is a tedious and costly affair; artificial intelligence was therefore applied to predict water levels by training a network on the measured tide data for one lunar tidal cycle. Two-layered feed-forward Artificial Neural Networks (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), were used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi for a lunar tidal cycle (2013) were used to train, validate and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimation of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using the neural network trained with the measured tide data (2000) of Apollo and Pir-Pau.
The analysis of the measured data and the study reveal the following. The measured tidal data at Pir-Pau, Vashi and Ulwe indicate a maximum amplification of the tide by about 10-20 cm, with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai). The LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer. The tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels, allowing the operation of pumping at Pir-Pau to be planned and the ship schedule at Ulwe to be improved.
Keywords: artificial neural network, back-propagation, tide data, training algorithm
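A minimal sketch of the kind of network involved: a two-layer feed-forward net with one input (time), a tanh hidden layer and a linear output, trained here by plain gradient descent on a synthetic harmonic "tide" signal. The architecture sizes, learning rate and data are hypothetical, and the Levenberg-Marquardt training used in the study is not reproduced:

```python
import math, random

random.seed(0)

# Synthetic "tide" samples: one semi-diurnal harmonic (period ~12.42 h).
xs = [t / 24.0 for t in range(25)]                 # normalised time of day
ys = [math.sin(2 * math.pi * 24.0 * x / 12.42) for x in xs]

H = 5                                              # hidden neurons
w1 = [random.uniform(-1, 1) for _ in range(H)]     # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]     # hidden -> output weights
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

loss_before = mse()
lr = 0.05
for _ in range(500):                               # full-batch gradient descent
    gw1 = [0.0] * H; gb1 = [0.0] * H; gw2 = [0.0] * H; gb2 = 0.0
    for x, y in zip(xs, ys):
        h, out = forward(x)
        d = 2 * (out - y) / len(xs)                # dL/d(out)
        for i in range(H):
            gw2[i] += d * h[i]
            dh = d * w2[i] * (1 - h[i] ** 2)       # back-prop through tanh
            gw1[i] += dh * x
            gb1[i] += dh
        gb2 += d
    for i in range(H):
        w1[i] -= lr * gw1[i]; b1[i] -= lr * gb1[i]; w2[i] -= lr * gw2[i]
    b2 -= lr * gb2

print(loss_before > mse())  # training reduces the fit error
```

LM differs from this GD loop in that each update solves a damped least-squares subproblem using the Jacobian of the residuals, which is why it typically converges in far fewer iterations, as the abstract reports.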
Procedia PDF Downloads 483
1251 Determination of Friction and Damping Coefficients of Folded Cover Mechanism Deployed by Torsion Springs
Authors: I. Yilmaz, O. Taga, F. Kosar, O. Keles
Abstract:
In this study, the friction and damping coefficients of a folded cover mechanism were obtained from experimental studies and data. Friction and damping coefficients are among the most important inputs for a mechanism analysis: friction and damping are two effects that change the deployment time of mechanisms and their dynamic behaviors. Though recommended friction coefficient values exist in the literature, damping differs from one mechanical system to another, so the damping coefficient should be obtained from mechanism test outputs. In this study, the folded cover mechanism uses torsion springs to deploy covers from a formerly closed, folded position. Torsion springs provide the folded covers with a desirable deployment time under variable environmental conditions. Verifying every design revision with system tests would be very costly, so some decisions are taken on the basis of numerical methods. In this study, two folded covers are required to deploy simultaneously; scotch-yoke and crank-rod mechanisms were combined to achieve this. The mechanism was unlocked with a pyrotechnic bolt on the scotch-yoke disc. When the pyrotechnic bolt exploded, the torsion springs provided the rotational movement for the mechanism. A quick-motion camera recorded the dynamic behavior of the system during deployment. The dynamic model of the mechanism was built as a rigid body with Adams MBD (multi-body dynamics), with the torque values provided by the torsion springs used as an input. A well-advised range of friction and damping coefficients was defined in Adams DOE (design of experiments), and a large number of analyses were performed until the deployment time of the folded covers agreed with the test data observed in the quick-motion camera recordings; thus the deployment time of the mechanism and its dynamic behaviors were obtained. The same mechanism was tested with different torsion springs and torque values, and the outputs were compared with the numerical models.
According to this comparison, it was understood that the friction and damping coefficients obtained in this study can be used safely when studying folded objects required to deploy simultaneously. In addition to the rigid-body model generated with Adams, a finite element model of the folded mechanism was generated with Abaqus, and the outputs of the rigid-body and finite element models were compared. Finally, reasonable explanations were suggested for the differing outputs of these solution methods.
Keywords: damping, friction, pyro-technic, scotch-yoke
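The tuning loop described above (vary damping and friction until the simulated deployment time matches the camera record) can be illustrated with a hypothetical single-degree-of-freedom model of one spring-driven cover; all parameter values are made up:

```python
import math

def deployment_time(k, c, t_friction, inertia, theta_open, theta_free,
                    dt=1e-4, t_max=5.0):
    """Integrate I*theta'' = k*(theta_free - theta) - c*theta' - T_f*sign(theta')
    (a torsion spring with free angle theta_free driving the cover against
    viscous damping and a crude Coulomb friction torque) by explicit Euler.
    Return the first time the cover reaches theta_open, or None."""
    theta, omega, t = 0.0, 0.0, 0.0
    while t < t_max:
        torque = k * (theta_free - theta) - c * omega - math.copysign(t_friction, omega)
        omega += dt * torque / inertia
        theta += dt * omega
        t += dt
        if theta >= theta_open:
            return t
    return None

# Hypothetical parameters (SI units); more damping gives slower deployment.
args = dict(k=2.0, t_friction=0.1, inertia=0.01,
            theta_open=math.pi / 2, theta_free=1.2 * math.pi / 2)
fast = deployment_time(c=0.05, **args)
slow = deployment_time(c=0.50, **args)
print(fast is not None and slow is not None and fast < slow)  # True
```

Sweeping c (and t_friction) until deployment_time matches a measured value is, in miniature, the DOE-style identification described in the abstract; the real study does this against the full Adams multi-body model.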
Procedia PDF Downloads 322
1250 Pressure-Robust Approximation for the Rotational Fluid Flow Problems
Authors: Medine Demir, Volker John
Abstract:
Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially in the large-scale flows of the ocean and atmosphere, as well as in many physical and industrial settings. The Coriolis and centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems. For such applications, it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier-Stokes equations in a rotating frame have been investigated in a number of papers using the classical inf-sup stable mixed methods, like Taylor-Hood pairs, to contribute to the analysis and the accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution into the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier-Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods like Scott-Vogelius pairs. This approach may, however, require modified meshes, such as barycentric-refined grids in the case of Scott-Vogelius pairs. That strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also conflict with the solver for the linear system. 
An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods could be carried over to other types of flow problems, which will be considered in future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott-Vogelius element, the pressure-wired Stokes element, for which the inf-sup constant is independent of nearly-singular vertices.
Keywords: navier-stokes equations in a rotating frame of reference, coriolis force, pressure-robust error estimate, scott-vogelius pairs of finite element spaces
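For orientation, the system under discussion is the incompressible Navier-Stokes problem in a rotating frame; written in a generic nondimensional form (symbols chosen here for illustration, not taken from the paper), it reads:

```latex
\begin{aligned}
\partial_t \boldsymbol{u} + (\boldsymbol{u}\cdot\nabla)\boldsymbol{u}
  + 2\,\boldsymbol{\omega}\times\boldsymbol{u}
  - \nu\,\Delta\boldsymbol{u} + \nabla p &= \boldsymbol{f},\\
\nabla\cdot\boldsymbol{u} &= 0,
\end{aligned}
```

where $2\,\boldsymbol{\omega}\times\boldsymbol{u}$ is the Coriolis term and the centripetal force $\boldsymbol{\omega}\times(\boldsymbol{\omega}\times\boldsymbol{r})$, being a gradient, can be absorbed into the pressure. The point of pressure-robustness is that for divergence-free pairs such as Scott-Vogelius, the velocity error bound contains no pressure contribution weighted by an inverse power of $\nu$, in contrast to classical Taylor-Hood discretizations.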
Procedia PDF Downloads 66
1249 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research answers which transit network structure is most suitable for serving specific demand requirements in an ongoing process of urban dispersion. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips; this approach is based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transferring is essential to complete most trips. To decide which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with respect to the total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. The dispersion degree is defined in a simple way by assuming that only a central area attracts all trips: if this area is small, the mobility pattern is highly concentrated; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If the urban dispersion advances further, the introduction of more lines is no longer a good alternative; in that case, the best solution is a change of structure, from direct trips to a network based on transfers. 
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we can identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology lets us determine the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
Keywords: analytical network design model, network structure, public transport, urban dispersion
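The concentration dimension above is measured with the Gini coefficient. A minimal sketch of its computation, e.g. over trip densities per zone (using the standard sorted-rank formula; the input data here are hypothetical), might look like:

```python
def gini(values):
    """Gini coefficient of a non-negative distribution
    (0 = perfectly even, approaching 1 = fully concentrated)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = (2 * sum_i i*x_i) / (n * sum_i x_i) - (n + 1) / n, with 1-based ranks
    ranked = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * ranked / (n * total) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))    # perfectly even demand across zones -> 0
print(gini([0, 0, 0, 10]))   # all demand in one zone -> (n-1)/n
```

A city whose trip generation is spread evenly over zones scores near 0; a city where a single zone attracts everything scores near 1, placing it at opposite ends of the model's area-of-applicability map.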
Procedia PDF Downloads 230
1248 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model. 
Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness, especially when applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
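The integral boundary-layer quantities named above follow from the measured velocity profile by quadrature. A minimal sketch (trapezoidal rule over a sampled wall-normal profile; the linear test profile is only a sanity check, not PIV data):

```python
def trapz(f, x):
    """Trapezoidal integral of sampled f(x)."""
    return sum((f[i] + f[i + 1]) * (x[i + 1] - x[i]) * 0.5
               for i in range(len(x) - 1))

def boundary_layer_parameters(y, u, u_inf):
    """Displacement thickness delta*, momentum thickness theta, and
    form factor H = delta*/theta from a wall-normal profile u(y)."""
    r = [ui / u_inf for ui in u]
    delta_star = trapz([1.0 - ri for ri in r], y)        # mass-flux deficit
    theta = trapz([ri * (1.0 - ri) for ri in r], y)      # momentum deficit
    return delta_star, theta, delta_star / theta

# Sanity check with a linear profile u/U = y/delta (delta = 1):
# exact values are delta* = 1/2, theta = 1/6, H = 3.
y = [i / 2000.0 for i in range(2001)]
print(boundary_layer_parameters(y, y, 1.0))
```

In the experiment the profile u(y) would come from the HS-PIV plane, evaluated at successive streamwise stations to track the growth along the model.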
Procedia PDF Downloads 305
1247 Single-Case Experimental Design: Exploratory Pilot Study on the Feasibility and Effect of Virtual Reality for Pain and Anxiety Management During Care
Authors: Corbel Camille, Le Cerf Flora, Corveleyn Xavier
Abstract:
Introduction: Aging is a physiological phenomenon accompanied by anatomical and cognitive changes that lead to anxiety and pain. These can have significant impacts on quality of life, life expectancy, and the progression of cognitive disorders. Virtual Reality Intervention (VRI) is increasingly recognized as a non-pharmacological approach to alleviating pain and anxiety in children and young adults. However, while recent studies have explored the feasibility of applying VRI in the older population, further studies are still required to establish its benefits in various contexts. Objective: This pilot study, following international clinical trial methodology recommendations for VRI in healthcare, aims to evaluate the feasibility and effects of using VRI with a 101-year-old woman residing in a nursing home and undergoing weekly painful and anxiety-provoking wound dressing changes. Methods: Following the international recommendations, this study focused on feasibility and preliminary results. The Single-Case Experimental Design protocol consists of two distinct phases: control (Phase A) and personalized VRI (Phase B), each lasting 6 sessions. Data were collected before, during and after care, using measures of pain (Algoplus and numerical scale), anxiety (Hospital Anxiety Scale and numerical scale), VRI experience (semi-structured interview) and physiological measures. Results: The results suggest that the use of VRI is both feasible and well tolerated by the participant. VRI contributed to a decrease in pain and anxiety during care sessions, with a more significant impact on pain than on anxiety, which showed a gradual and slight decrease. Physiological data, particularly those related to stress, also indicate a reduction in physiological activity during VRI. Conclusion: This pilot study confirms the feasibility and benefits of using virtual reality in managing pain and anxiety in an older adult in a nursing home. 
In light of these results, it is essential that future studies focus on setting up randomized controlled trials (RCTs). These studies should involve a representative number of older adults to ensure generalizable data. This rigorous, controlled methodology will make it possible to assess the effectiveness of virtual reality more accurately in various care settings, to measure its impact on clinical parameters such as pain and anxiety, and to explore the long-term implications of this intervention.
Keywords: anxiety reduction, nursing home, older adult, pain management, virtual reality
Procedia PDF Downloads 64
1246 Isolation and Selection of Strains Perspective for Sewage Sludge Processing
Authors: A. Zh. Aupova, A. Ulankyzy, A. Sarsenova, A. Kussayin, Sh. Turarbek, N. Moldagulova, A. Kurmanbayev
Abstract:
One of the methods of bioconverting organic waste into environmentally friendly fertilizer is composting. Microorganisms that produce hydrolytic enzymes play a significant role in accelerating the composting of organic waste. We studied the enzymatic potential (amylase, protease, cellulase, lipase, and urease activity) of bacteria isolated from the sewage sludge of the Nur-Sultan, Rudny, and Fort-Shevchenko cities, the dacha soil of Nur-Sultan city, and freshly cut grass from the dacha, with the aim of processing organic waste and identifying active strains. Microorganisms were isolated by the enrichment culture method in liquid nutrient media, followed by inoculation on different solid media to isolate individual colonies. As a result, sixty-one microorganisms were isolated, three of which were thermophiles (DS1, DS2, and DS3). The highest numbers of isolates, twenty-one and eighteen, came from the sewage sludge of the Nur-Sultan and Rudny cities, respectively. Ten isolates were obtained from the wastewater of the sewage treatment plant in Fort-Shevchenko. From the dacha soil of Nur-Sultan city and the freshly cut grass, 9 and 5 isolates were recovered, respectively. The lipolytic, proteolytic, amylolytic, cellulolytic, ureolytic, and oil-oxidizing activities of the isolates were studied. According to the experimental results, starch hydrolysis (amylolytic activity) was found in 2 isolates, CB2/2 and CB2/1. Three isolates, CB2, CB2/1, and CB1/1, were selected for the highest ability to break down casein. Among the 61 isolated bacterial cultures, three isolates could break down fats: CB3, CBG1/1, and IL3. Seven strains had cellulolytic activity: DS1, DS2, IL3, IL5, P2, P5, and P3. Six isolates rapidly decomposed urea. Isolate P1 could break down both casein and cellulose. Isolate DS3 was a thermophile and had cellulolytic activity. 
Thus, based on the conducted studies, 15 isolates were selected as potential candidates for sewage sludge composting: CB2, CB3, CB1/1, CB2/2, CBG1/1, CB2/1, DS1, DS2, DS3, IL3, IL5, P1, P2, P5, and P3. The selected strains were identified by mass spectrometry (MALDI-TOF). Isolate CB3 was assigned to Rhodococcus rhodochrous; the two isolates CB2 and CB1/1 to Bacillus cereus; CB2/2 to Chryseobacterium arachidis; CBG1/1 to Pseudoxanthomonas sp.; CB2/1 to Bacillus megaterium; DS1 to Pediococcus acidilactici; DS2 to Paenibacillus residui; DS3 to Brevibacillus invocatus; the three strains IL3, P5, and P3 to Enterobacter cloacae; the two strains IL5 and P2 to Ochrobactrum intermedium; and P1 to Bacillus licheniformis. Hence, isolates were obtained from the wastewater of the cities of Nur-Sultan, Rudny, and Fort-Shevchenko, the dacha soil of Nur-Sultan city, and freshly cut grass from the dacha. Based on the highest enzymatic activity, 15 active isolates were selected and identified. These strains may become candidates for a biopreparation for sewage sludge processing.
Keywords: sewage sludge, composting, bacteria, enzymatic activity
Procedia PDF Downloads 102
1245 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites
Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy
Abstract:
The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site): regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which there are efficient O(n log n) algorithms for n segments. The reduction also includes preprocessing, constructing segments from the polygons' sides, and postprocessing, constructing each polygon's locus by merging the loci of its sides. This approach does not take into account two specific properties of the resulting segment sites. Firstly, these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon; the polygon is obviously included in its locus. Using these properties in the VD construction algorithm is a resource for reducing computations. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be exploited effectively. The solution is again performed via a reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons; each site is oriented so that the interior of the polygon lies to its left. The proposed algorithm constructs the VD for this set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved through the following fundamental decisions: 1. The algorithm constructs only those VD edges that lie outside the polygons. 
The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time rather than logarithmic time, as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.
Keywords: voronoi diagram, sweepline, polygon sites, fortune's algorithm, segment sites
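The "medium vertex" property can be illustrated in isolation. When a vertical sweepline reaches a vertex, the event type depends on where the two incident sides lie: both ahead (a start event), both behind (an end event), or one on each side (a medium event). A toy classifier, with no VD machinery and a made-up test polygon (it assumes no two adjacent vertices share an x-coordinate):

```python
def vertex_event_type(polygon, i):
    """Event type of vertex i when the vertical sweepline reaches it:
    'start' if both incident sides lie ahead of it, 'end' if both lie
    behind, 'medium' if one lies behind and one ahead. Medium events
    are the ones handled in O(1) by the proposed algorithm."""
    n = len(polygon)
    x = polygon[i][0]
    prev_x = polygon[(i - 1) % n][0]
    next_x = polygon[(i + 1) % n][0]
    if prev_x > x and next_x > x:
        return 'start'
    if prev_x < x and next_x < x:
        return 'end'
    return 'medium'

# A simple five-vertex polygon: exactly one leftmost and one rightmost vertex,
# so most vertices are 'medium' events.
poly = [(0, 1), (2, 0), (4, 1), (3, 3), (1, 3)]
print([vertex_event_type(poly, i) for i in range(len(poly))])
```

For a simple polygon with a single leftmost and a single rightmost vertex, only those two vertices are start/end events; every other vertex is medium, which is why processing medium events in constant time pays off.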
Procedia PDF Downloads 177
1244 Assessment of Microclimate in Abu Dhabi Neighborhoods: On the Utilization of Native Landscape in Enhancing Thermal Comfort
Authors: Maryam Al Mheiri, Khaled Al Awadi
Abstract:
Urban population is continuously increasing worldwide, and the speed at which cities urbanize creates major challenges, particularly in terms of creating sustainable urban environments. Rapid urbanization often leads to negative environmental impacts and changes in urban microclimates. Moreover, when rapid urbanization is paired with limited landscape elements, the effects on human health due to increased pollution, and on thermal comfort due to Urban Heat Island effects, are amplified. Urban Heat Island (UHI) describes the increase of temperatures in urban areas in comparison to their rural surroundings and, as we discuss in this paper, it impacts pedestrian comfort, reducing the number of walking trips and the use of public space. It is thus necessary to investigate the quality of outdoor built environments in order to improve the quality of life in cities. The main objective of this paper is to address the morphology of Emirati neighborhoods, setting a quantitative baseline by which to assess and compare the spatial characteristics and microclimate performance of existing typologies in Abu Dhabi. This morphological mapping and analysis will help in understanding the built landscape of Emirati neighborhoods in this city, whose form has changed and evolved across different periods. This will eventually help in modeling the use of different design strategies, such as landscaping, to mitigate UHI effects and enhance outdoor urban comfort. The study further examines the impact of different native plant types and species in reducing UHI effects and enhancing outdoor urban comfort, allowing the effect of increasing landscaped areas in these neighborhoods to be assessed. This study uses ENVI-met, an analytical, three-dimensional, high-resolution microclimate modeling software. 
This micro-scale urban climate model will be used to evaluate existing conditions and to generate scenarios in different residential areas, with different vegetation surfaces and landscaping, and to examine their impact on surface temperatures during summer and autumn. In parallel to these simulations, field measurements will be included to calibrate the ENVI-met model. This research therefore takes an experimental approach, using simulation software, and a case study strategy for the evaluation of a sample of residential neighborhoods. A comparison of the results of these scenarios constitutes a first step towards making recommendations about what constitutes sustainable landscapes for Abu Dhabi neighborhoods.
Keywords: landscape, microclimate, native plants, sustainable neighborhoods, thermal comfort, urban heat island
Procedia PDF Downloads 310
1243 Application of Mesenchymal Stem Cells in Diabetic Therapy
Authors: K. J. Keerthi, Vasundhara Kamineni, A. Ravi Shanker, T. Rammurthy, A. Vijaya Lakshmi, Q. Hasan
Abstract:
Pancreatic β-cells are the predominant insulin-producing cell type within the Islets of Langerhans, and insulin is the primary hormone regulating carbohydrate and fat metabolism. Apoptosis of β-cells or insufficient insulin production leads to Diabetes Mellitus (DM). Current therapy for diabetes consists of either medical management or insulin replacement, together with regular monitoring. Replacement of β-cells is an attractive treatment option for both Type-1 and Type-2 DM in view of recent work indicating that β-cell apoptosis is the common underlying cause of both types of DM. With the development of the Edmonton protocol, pancreatic β-cell allo-transplantation became possible, but it is still not considered standard of care due to the subsequent requirement of lifelong immunosuppression and the scarcity of suitable healthy organs from which to retrieve pancreatic β-cells. Fetal pancreatic cells from abortuses were developed as a possible therapeutic option for diabetes; however, this posed several ethical issues. Hence, in the present study, mesenchymal stem cells (MSCs) isolated from human umbilical cord (HUC) tissue were differentiated into insulin-producing cells. MSCs have already made their mark in the growing field of regenerative medicine, and their therapeutic worth has been validated for a number of conditions. HUC samples were collected with prior informed consent as approved by the institutional ethics committee. HUC samples (n=26) were processed using a combination of mechanical and enzymatic (collagenase-II, 100 U/ml, Gibco) methods to obtain MSCs, which were cultured in vitro in L-DMEM (low-glucose Dulbecco's Modified Eagle's Medium, Sigma, 4.5 mM glucose/L) with 10% FBS in a 5% CO2 incubator at 37°C. After reaching 80-90% confluency, the MSCs were characterized by flow cytometry and immunocytochemistry for specific cell surface antigens. The cells expressed CD90+, CD73+, CD105+, CD34-, CD45-, HLA-DR-/low and vimentin+. 
These cells were differentiated into β-cells using H-DMEM (high-glucose Dulbecco's Modified Eagle's Medium, 25 mM glucose/L, Gibco), β-mercaptoethanol (0.1 mM, Hi-Media), basic fibroblast growth factor (10 µg/L, Gibco), and nicotinamide (10 mmol/L, Hi-Media). Pancreatic β-cells were confirmed by positive dithizone staining and were found to be functionally active, as they released 8 IU/ml insulin on glucose stimulation. Isolating MSCs from the usually discarded, abundantly available HUC tissue, then expanding and differentiating them into β-cells, may be the most feasible cell therapy option for the millions of people suffering from DM globally.
Keywords: diabetes mellitus, human umbilical cord, mesenchymal stem cells, differentiation
Procedia PDF Downloads 259
1242 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model
Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis
Abstract:
Magnetic navigation of a drug inside the human vessels is a very important concept, since the drug is delivered to the desired area. Consequently, the quantity of drug required to reach therapeutic levels is reduced, while the drug concentration at targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles, where anti-tumor agents are loaded on the surface of the nanoparticles. The magnetic field required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors that influence the efficiency of magnetic nanoparticles in magnetically driven biomedical applications are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered, i.e., the magnetic force from the MRI's main static magnetic field as well as the magnetic field gradient force from the special propulsion gradient coils. The static field is responsible for the aggregation of nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces among the aggregated nanoparticles and with the wall, and the Stokes drag force on each particle, are considered; only spherical particles are used in this study. In addition, the gravitational force and the buoyancy force are included. Finally, the van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of the particles' motion. 
To determine the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field. At the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles when they are navigated by the magnetic field produced by the MRI device. Under the influence of the fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. The platform can navigate the particles along the desired trajectory with an efficiency of 80-90%. On the other hand, a small number of particles become stuck to the walls and remain there for the rest of the simulation.
Keywords: artery, drug, nanoparticles, navigation
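The optimize-simulate-evaluate loop described above can be sketched with a toy stand-in: a simple (mu + lambda) evolution strategy (in place of full CMA-ES, and of the OpenFOAM particle model) tunes a constant gradient-field vector so that a drag-dominated particle in a uniform background flow ends up at a target point. All dynamics, constants, and names here are illustrative.

```python
import random

def simulate(gradient, steps=100, dt=0.01, mobility=1.0, flow=(1.0, 0.0)):
    """Drag-dominated particle: velocity = mobility * (background flow +
    magnetic gradient force). Returns the final position from the origin."""
    x, y = 0.0, 0.0
    for _ in range(steps):
        x += dt * mobility * (flow[0] + gradient[0])
        y += dt * mobility * (flow[1] + gradient[1])
    return x, y

def cost(gradient, target=(1.0, 0.5)):
    """Distance between the particle's final position and the target point."""
    fx, fy = simulate(gradient)
    return ((fx - target[0]) ** 2 + (fy - target[1]) ** 2) ** 0.5

def evolve(generations=60, pop=20, elite=5, sigma=0.5, seed=1):
    """Toy (mu + lambda) evolution strategy standing in for CMA-ES:
    keep the best field vectors, mutate them, shrink the step size."""
    rng = random.Random(seed)
    population = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[:elite]
        children = [(p[0] + rng.gauss(0, sigma), p[1] + rng.gauss(0, sigma))
                    for p in (rng.choice(parents) for _ in range(pop - elite))]
        population = parents + children
        sigma *= 0.95  # crude step-size decay in place of CMA-ES adaptation
    return min(population, key=cost)

best = evolve()
print(best, cost(best))
```

In the actual platform, `simulate` is the full OpenFOAM run with all seven forces, `cost` is the evaluated distance from the desired trajectory, and CMA-ES additionally adapts the full covariance of the mutation distribution rather than a single scalar step size.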
Procedia PDF Downloads 107
1241 Mikrophonie I (1964) by Karlheinz Stockhausen - Between Idea and Auditory Image
Authors: Justyna Humięcka-Jakubowska
Abstract:
1. Background in music analysis. Traditionally, when we think about a composer's sketches, the chances are that we are thinking in terms of the working out of detail rather than the evolution of an overall concept. Since music is a "time art", it follows that questions of form cannot be entirely detached from considerations of time. One could say that composers tend to regard time either as a space gradually, and partly intuitively, filled, or as something to be occupied according to a specific strategy. In my opinion, one thing that sheds light on Stockhausen's compositional thinking is his frequent use of "form schemas", often a single-page representation of the entire structure of a piece. 2. Background in music technology. Sonic Visualiser is a program used to study a musical recording. It is an open source application for viewing, analysing, and annotating music audio files. It contains a number of visualisation tools designed with useful default parameters for musical analysis. Additionally, SV supports the Vamp plugin format, through which analyses such as structural segmentation are provided. 3. Aims. The aim of my paper is to show how SV may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. I want to show that "traditional" music-analytic methods do not allow one to trace the interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure. 4. Main Contribution. Stockhausen dealt with the most diverse musical problems by the most varied methods. One characteristic that he never ceased to place at the center of his thought and works was the quest for a new balance founded upon an acute connection between speculation and intuition. 
In the case of Mikrophonie I (1964) for tam-tam and 6 players, Stockhausen makes a distinction between the "connection scheme", which indicates the ground rules underlying all versions, and the form scheme, which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a "form scheme", which is what one can hear on the CD recording. In the current study, the insight into the compositional strategy chosen by Stockhausen was compared with the auditory image, that is, with the perceived musical surface. Stockhausen's musical work is analyzed both in terms of melodic/voice evolution and timbre evolution. 5. Implications. The current study shows how musical structures determine the musical surface. My general assumption is that while listening to music we can extract basic kinds of musical information from musical surfaces. It is shown that interactive strategies of musical structure analysis can offer a very fruitful way of looking directly into certain structural features of music.
Keywords: automated analysis, composer's strategy, mikrophonie I, musical surface, stockhausen
Procedia PDF Downloads 297
1240 Effects of Conversion of Indigenous Forest to Plantation Forest on the Diversity of Macro-Fungi in Kereita Forest, Kikuyu Escarpment, Kenya
Authors: Susan Mwai, Mary Muchane, Peter Wachira, Sheila Okoth, Muchai Muchane, Halima Saado
Abstract:
Tropical forests harbor a wide range of biodiversity and a richer macro-fungi diversity than temperate regions of the world. However, this biodiversity faces the threat of extinction, as forest loss is taking place before proper study and documentation of macro-fungi has been achieved. The present study was undertaken to determine the effect of converting indigenous habitat to plantation forest on macro-fungi diversity. To achieve this objective, an inventory of macro-fungi diversity was conducted within the Kereita block of the Kikuyu Escarpment forest, on the southern side of the Aberdare mountain range. The survey covered the indigenous forest and a more than 15-year-old Patula plantation forest, during the wet season (long rains, December 2014) and the dry season (short rains, May 2015). In each forest type, 15 permanent (20 m x 20 m) sampling plots distributed across three (3) forest blocks were used. Field and laboratory methods involved recording the abundance of fruiting bodies, establishing the taxonomic identity of species, and analysing diversity indices and measures in terms of species richness, density, and diversity. The R statistical program was used to analyse species diversity, and the Canoco 4.5 software to analyse species composition. A total of 224 species in 76 genera and 28 families were encountered across both forest types. The best-represented taxa belonged to the families Agaricaceae (16%), Polyporaceae (12%), Marasmiaceae, and Mycenaceae (7%), respectively. Most of the recorded macro-fungi were saprophytic, mostly colonizing litter-based (38%) and wood-based (34%) substrates, followed by soil organic-matter-dwelling species (17%). Ectomycorrhizal fungi (5%) and parasitic fungi (2%) were the least encountered. 
The data established that the indigenous forest (native ecosystem) hosts a richer macro-fungi assemblage in terms of density (2.6 individual fruit bodies/m²), species richness (8.3 species/plot) and species diversity (1.49/plot) than the plantation forest. The conversion of native forest to plantation forest also interfered with species composition, though it did not alter species diversity. Seasonality was also shown to significantly affect the diversity of macro-fungi, with 61% of the total species being present during the wet season. Based on the present findings, forested ecosystems in Kenya hold a diverse macro-fungi community that warrants conservation measures.
Keywords: diversity, indigenous forest, macro-fungi, plantation forest, season
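The per-plot measures reported above (species richness and a Shannon-type diversity index) can be derived directly from fruit-body abundance counts. A minimal Python sketch; the plot counts and function names are illustrative, not the study's actual data or R code:

```python
import math

def shannon_diversity(abundances):
    """Shannon diversity index H' = -sum(p_i * ln(p_i)) over observed species."""
    total = sum(abundances)
    return -sum((n / total) * math.log(n / total) for n in abundances if n > 0)

def species_richness(abundances):
    """Number of species with at least one recorded fruiting body."""
    return sum(1 for n in abundances if n > 0)

# Hypothetical fruit-body counts for one 20 m x 20 m plot
plot_counts = [12, 7, 3, 1, 1]
print(species_richness(plot_counts))            # 5
print(round(shannon_diversity(plot_counts), 2)) # 1.23
```

A per-plot density figure like the 2.6 fruit bodies/m² above is simply the total count divided by the 400 m² plot area.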
Procedia PDF Downloads 214
1239 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when the algorithm performed better than the best known classical algorithm for Max-Cut. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the work is often used as a benchmarking tool and as a foundation for exploring variants of QAOA. Alongside other famous algorithms like Grover's or Shor's, it highlights the potential that quantum computing holds and points to a real quantum advantage: if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search of the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. 
The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
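The population-based, gradient-free search described above can be illustrated on a purely classical Max-Cut instance. Note the hedge: the actual work evolves QAOA circuit parameters on a quantum simulator, whereas this sketch, with hypothetical population and mutation settings, shows only the EA component against the Max-Cut objective itself:

```python
import random

def cut_value(partition, edges):
    """Number of edges crossing the partition (the Max-Cut objective)."""
    return sum(1 for u, v in edges if partition[u] != partition[v])

def evolve_max_cut(n, edges, pop_size=20, generations=100, seed=0):
    """Elitist (mu + lambda) EA: keep the best half, mutate each survivor."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: cut_value(p, edges), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(n)] ^= 1  # single bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda p: cut_value(p, edges))

# 5-cycle graph: the optimal cut crosses 4 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
best = evolve_max_cut(5, edges)
print(cut_value(best, edges))
```

Because fitness evaluations of population members are independent, the inner loop is exactly the part that the paper proposes to parallelize.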
Procedia PDF Downloads 60
1238 Differences in Assessing Hand-Written and Typed Student Exams: A Corpus-Linguistic Study
Authors: Jutta Ransmayr
Abstract:
The digital age has long since arrived at Austrian schools, and both society and educationalists demand that digital means be integrated accordingly into day-to-day school routines. The Austrian school-leaving exam (A-levels) can therefore now be written either by hand or on a computer. However, the choice of writing medium (pen and paper or computer) for these 'high-stakes' examination papers raises a number of questions that until recently had not been adequately investigated and answered, such as: What effects do the different conditions of text production in the written German A-levels have on normative linguistic accuracy? How do the spelling skills shown in German A-level papers written with a pen differ from those in papers written on the computer? How is the teachers' assessment related to this? And which practical desiderata for German didactics can be derived from this? These questions were investigated in a trilateral pilot project of the Austrian Center for Digital Humanities (ACDH) of the Austrian Academy of Sciences and the University of Vienna, in cooperation with the Austrian Ministry of Education and the Council for German Orthography. A representative Austrian learner corpus, consisting of around 530 German A-level papers from all over Austria (pen- and computer-written), was set up and subjected to a quantitative (corpus-linguistic and statistical) and qualitative investigation of the graduates' spelling and punctuation performance and of the differences between pen- and computer-written papers and their assessments. Relevant studies are currently available mainly from the Anglophone world. These have shown that writing on the computer increases the motivation to write and has positive effects on text length and, in some cases, also on text quality. 
Depending on the writing situation and other technical aids, better spelling and punctuation results could also be found in the computer-written texts compared to the handwritten ones. Studies also point towards a tendency among teachers to rate handwritten texts better than computer-written texts. In this paper, the first comparable results from the German-speaking area are presented. The research shows, on the one hand, that there are significant differences between handwritten and computer-written papers with regard to performance in orthography and punctuation. On the other hand, the corpus-linguistic investigation and the subsequent statistical analysis made it clear that not only the teachers' assessments of the students' spelling performance but also their overall assessments of the exam papers vary enormously; the production medium (pen and paper or computer) also seems to play a decisive role.
Keywords: exam paper assessment, pen and paper or computer, learner corpora, linguistics
Procedia PDF Downloads 170
1237 Evaluation of Electrophoretic and Electrospray Deposition Methods for Preparing Graphene and Activated Carbon Modified Nano-Fibre Electrodes for Hydrogen/Vanadium Flow Batteries and Supercapacitors
Authors: Barun Chakrabarti, Evangelos Kalamaras, Vladimir Yufit, Xinhua Liu, Billy Wu, Nigel Brandon, C. T. John Low
Abstract:
In this work, we perform electrophoretic deposition (EPD) of activated carbon on a number of substrates to prepare symmetrical coin cells for supercapacitor applications. From several recipes involving the evaluation of solvents such as isopropyl alcohol, N-methyl-2-pyrrolidone (NMP) and acetone, binders such as polyvinylidene fluoride (PVDF), and charging agents such as magnesium chloride, we demonstrate a working route to supercapacitors that consistently achieve 100 F/g. We then adapt this EPD method to deposit reduced graphene oxide on SGL 10AA carbon paper, producing cathodic materials for testing in a hydrogen/vanadium flow battery. In addition, a self-supported hierarchical carbon nano-fibre is prepared by electrospray deposition of an iron phthalocyanine solution onto a temporary substrate, followed by carbonisation to remove heteroatoms. This process also induces a degree of nitrogen doping in the carbon nano-fibres (CNFs), which significantly improves their catalytic performance, as detailed in other publications. The CNFs are then used as catalysts by attaching them to the graphite felt electrodes facing the membrane inside an all-vanadium flow battery (Scribner cell with serpentine flow distribution channels), and efficiencies as high as 60% are noted at high current densities of 150 mA/cm². About 20 charge-discharge cycles show that the CNF catalysts consistently perform better than pristine graphite felt electrodes. Following this, we also test the CNFs as an electrocatalyst in the hydrogen/vanadium flow battery (on the cathodic side, as mentioned briefly above), facing the membrane, based upon past studies from our group. Once again, we note consistently good efficiencies of 85% and above for CNF-modified graphite felt electrodes, compared to 60% for pristine felts, at a low current density of 50 mA/cm² (over 20 charge-discharge cycles of the battery). 
From this preliminary investigation, we conclude that the CNFs may in future be used as catalysts for other systems such as vanadium/manganese, manganese/manganese and manganese/hydrogen flow batteries. We are generating data for such systems at present, and further publications are expected.
Keywords: electrospinning, carbon nano-fibres, all-vanadium redox flow battery, hydrogen-vanadium fuel cell, electrocatalysis
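The cycle efficiencies quoted above are commonly decomposed into coulombic and voltage contributions. A small sketch of that standard decomposition; the charge and voltage figures below are illustrative placeholders, not measured values from this work:

```python
def cycle_efficiencies(q_charge, q_discharge, v_charge_avg, v_discharge_avg):
    """Coulombic (CE), voltage (VE) and energy (EE = CE * VE) efficiency
    of one charge-discharge cycle of a flow battery."""
    ce = q_discharge / q_charge          # fraction of charge recovered on discharge
    ve = v_discharge_avg / v_charge_avg  # ratio of average discharge to charge voltage
    return ce, ve, ce * ve

# Hypothetical cycle: 90% of charge recovered, 1.5 V average charge,
# 1.2 V average discharge
ce, ve, ee = cycle_efficiencies(q_charge=1.0, q_discharge=0.9,
                                v_charge_avg=1.5, v_discharge_avg=1.2)
print(round(ee, 2))  # 0.72
```

Both CE and VE typically fall as current density rises, which is consistent with the lower efficiencies reported here at 150 mA/cm² than at 50 mA/cm².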
Procedia PDF Downloads 291
1236 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification
Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi
Abstract:
Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection has focused on removing irrelevant features while neglecting the possible redundancy between relevant ones. This is why some feature selection approaches use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that precedes feature ranking: after dividing the feature set into groups, these approaches rank features in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that precedes the search, coupled and combined with the search, or as an alternative that replaces the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set using a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. 
Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features is eliminated at each step. The proposed algorithm uses the clustering algorithm coupled and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix
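A simplified sketch of the three-step idea (relevance filtering, then redundancy elimination within correlation 'clusters') is given below. The thresholds, synthetic data and helper names are illustrative assumptions; the actual method uses a dedicated clustering algorithm and a separability measure rather than plain correlation:

```python
import numpy as np

def select_features(X, y, relevance_thresh=0.1, redundancy_thresh=0.9):
    """Greedy filter selection: (1) drop features weakly correlated with the
    class, (2) scan remaining features by decreasing relevance; each selected
    feature removes the features highly correlated with it (its 'cluster')."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_features)])
    candidates = sorted((j for j in range(n_features)
                         if relevance[j] >= relevance_thresh),
                        key=lambda j: -relevance[j])
    selected = []
    while candidates:
        best = candidates.pop(0)
        selected.append(best)
        # discard candidates redundant with the feature just selected
        candidates = [j for j in candidates
                      if abs(np.corrcoef(X[:, best], X[:, j])[0, 1])
                      < redundancy_thresh]
    return selected

# Synthetic check: features 0 and 1 both track the class (mutually redundant),
# feature 2 is pure noise -> exactly one of {0, 1} should survive
rng = np.random.default_rng(0)
y = rng.normal(size=1000)
X = np.column_stack([y + 0.1 * rng.normal(size=1000),
                     y + 0.1 * rng.normal(size=1000),
                     rng.normal(size=1000)])
print(select_features(X, y))
```

Removing a whole cluster per selected feature is what makes the search cost shrink at every step, as the abstract notes.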
Procedia PDF Downloads 137
1235 Variation in Wood Anatomical Properties of Acacia seyal var. seyal Tree Species Growing in Different Zones in Sudan
Authors: Hanadi Mohamed Shawgi Gamal, Ashraf Mohamed Ahmed Abdalla
Abstract:
Sudan is endowed with a great diversity of tree species; nevertheless, the utilization of wood resources has traditionally concentrated on a small number of species. Given the great variation in the climatic zones of Sudan, large variations are expected in the anatomical properties between and within species. This variation needs to be fully explored in order to suggest the best uses for each species. Modern research on wood has substantiated that the climatic conditions where a species grows have a significant effect on wood properties. Understanding the extent of this variability is important because the uses for each kind of wood are related to its characteristics; furthermore, the suitability or quality of wood for a particular purpose is determined by the variability of one or more of these characteristics. The present study examines the effect of rainfall zones on some anatomical properties of Acacia seyal var. seyal growing in Sudan. For this purpose, twenty healthy trees were sampled randomly from two zones (ten trees per zone): one zone with relatively low rainfall (273 mm annually), represented by North Kordofan state and White Nile state, and a second with relatively high rainfall (701 mm annually), represented by Blue Nile state and South Kordofan state. From each sampled tree, a stem disc (3 cm thick) was cut at 10% of stem height, and one radius was obtained from each central stem disc. Two representative samples were taken from each disc, one at a 10% distance from pith to bark and the second at 90%, in order to represent the juvenile and mature wood. The investigated anatomical properties were fiber length, fiber and vessel diameter, lumen diameter, and wall thickness, as well as cell proportions. The results of the current study reveal significant differences between zones in mature wood vessel diameter and wall thickness, as well as in juvenile wood vessel wall thickness. The higher values were detected in the drier zone. 
Significant differences were also observed in juvenile wood fiber length, diameter and wall thickness. Contrary to vessel diameter and wall thickness, fiber length, diameter and wall thickness decreased in the drier zone. No significant differences were detected in the cell proportions of juvenile and mature wood. The significant differences in some fiber and vessel dimensions suggest significant differences in wood density. From these results, Acacia seyal var. seyal seems to be well adapted to changes in rainfall and may survive in any rainfall zone.
Keywords: Acacia seyal var. seyal, anatomical properties, rainfall zones, variation
Procedia PDF Downloads 148
1234 Attitude in Academic Writing (CAAW): Corpus Compilation and Annotation
Authors: Hortènsia Curell, Ana Fernández-Montraveta
Abstract:
This paper presents the creation, development, and analysis of a corpus designed to study the presence of attitude markers and the author's stance in research articles in two different areas of linguistics (theoretical linguistics and sociolinguistics). These two disciplines are expected to behave differently in this respect, given the disparity in their discursive conventions. Attitude markers in this work are understood as the linguistic elements (adjectives, nouns and verbs) used to convey the writer's stance towards the content presented in the article; they are crucial in understanding writer-reader interaction and the writer's position. These attitude markers are divided into three broad classes: assessment, significance, and emotion. In addition to them, we also consider first-person singular and plural pronouns and possessives, modal verbs, and passive constructions, which are other linguistic elements expressing the author's stance. The corpus, the Corpus of Attitude in Academic Writing (CAAW), comprises 21 articles collected from six journals indexed in JCR. These articles were originally written in English by a single native-speaker author from the UK or USA and were published between 2022 and 2023. The total number of words in the corpus is approximately 222,400, with 106,422 from theoretical linguistics journals (Lingua, Linguistic Inquiry and Journal of Linguistics) and 116,022 from sociolinguistics journals (International Journal of the Sociology of Language, Language in Society and Journal of Sociolinguistics). Together with the corpus, we present the tool created for its compilation and storage, along with a tool for automatic annotation. The steps followed in the compilation of the corpus were as follows. First, the articles were selected according to the parameters explained above. Second, they were downloaded and converted to txt format. 
Finally, examples, direct quotes, section titles and references were eliminated, since they do not involve the author's stance. The resulting texts were the input for the annotation of the linguistic features related to stance. As for the annotation, two articles (one from each subdiscipline) were annotated manually by the two researchers. An existing list was used as a baseline, and further attitude markers were identified, together with the other elements mentioned above. Once a consensus was reached, the remaining articles were annotated automatically using the tool created for this purpose. The annotated corpus will serve as a resource for scholars working in discourse analysis (both in linguistics and in communication) and related fields, since it offers new insights into the expression of attitude. The tools created for the compilation and annotation of the corpus will be useful for studying the author's attitude and stance in articles from any academic discipline: new data can be uploaded, and the list of markers can be enlarged. Finally, the tool can be extended to other languages, which will allow cross-linguistic studies of the author's stance.
Keywords: academic writing, attitude, corpus, English
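The automatic annotation step, matching a lexicon of attitude markers against article text, can be sketched as follows. The mini-lexicon and sample sentence are illustrative assumptions, far smaller than the study's actual marker list:

```python
import re

# Illustrative mini-lexicon keyed by the three marker classes from the paper
ATTITUDE_MARKERS = {
    "assessment": ["important", "consistent", "adequate"],
    "significance": ["crucial", "essential", "remarkable"],
    "emotion": ["surprising", "unfortunate", "striking"],
}

def annotate(text):
    """Return (marker, class, character offset) tuples for every lexicon hit,
    sorted by position in the text."""
    hits = []
    for cls, markers in ATTITUDE_MARKERS.items():
        for marker in markers:
            for match in re.finditer(rf"\b{marker}\b", text, re.IGNORECASE):
                hits.append((match.group(0).lower(), cls, match.start()))
    return sorted(hits, key=lambda h: h[2])

sample = "It is crucial to note this surprising and important result."
print(annotate(sample))
```

A real pipeline would additionally need part-of-speech filtering, since many markers are ambiguous between stance-bearing and neutral uses.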
Procedia PDF Downloads 74
1233 Deep Learning for Image Correction in Sparse-View Computed Tomography
Authors: Shubham Gogri, Lucia Florescu
Abstract:
Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, CT imaging requirements include reducing the radiation dose to patients and minimizing scanning time. One solution is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data result in lower-quality reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of sparse-view CT images. A first approach employed Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the Charbonnier loss, but it was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based generator and a discriminator based on convolutional neural networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of a perceptual loss, calculated on feature vectors extracted from a VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including the Wasserstein, Charbonnier, and perceptual losses. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) between the corrected images and the ground truth. 
Furthermore, learning curves and qualitative comparisons provided additional evidence of the enhanced image quality and the network's increased stability, while pixel value intensity was preserved. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net
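The Charbonnier loss and the PSNR metric mentioned above are compact to state. A NumPy sketch under stated assumptions: the toy images and the epsilon value are illustrative, and training code would use a differentiable framework rather than NumPy:

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Smooth L1-like loss, sqrt((x - y)^2 + eps^2), averaged over pixels;
    favored for capturing fine detail while remaining differentiable at zero."""
    return float(np.mean(np.sqrt((pred - target) ** 2 + eps ** 2)))

def psnr(pred, target, data_range=1.0):
    """Peak Signal-to-Noise Ratio (dB) between a corrected image and ground truth."""
    mse = float(np.mean((pred - target) ** 2))
    return 10.0 * np.log10(data_range ** 2 / mse)

target = np.zeros((8, 8))
pred = target + 0.01  # uniform 1% error over the image
print(round(psnr(pred, target), 1))  # 40.0
```

With a uniform error of 0.01 on a unit data range, the MSE is 1e-4, giving 10 * log10(1/1e-4) = 40 dB, which illustrates why PSNR values in the 70s correspond to near-perfect reconstructions.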
Procedia PDF Downloads 161
1232 Introducing Information and Communication Technologies in Prison: A Proposal in Favor of Social Reintegration
Authors: Carmen Rocio Fernandez Diaz
Abstract:
This paper focuses on the relevance of information and communication technologies (hereinafter 'ICTs') as an essential part of day-to-day life in all societies nowadays, as they offer the scenario in which an immense number of behaviors that previously took place in the physical world are now performed. In this context, areas of reality that remain outside the so-called 'information society' are hardly imaginable. Nevertheless, it is possible to identify a domain that continues to lag behind this reality: the penitentiary system, as far as inmates' rights are concerned, since security aspects in prison have already been improved by new technologies. Introducing ICTs in prisons is still met with great resistance. The study of comparative penitentiary systems worldwide shows that most of them use ICTs only for the educational aspects of life in prison and that communications with the outside world are generally based on traditional means. These are only two examples of the huge range of activities in which ICTs can yield positive results within prisons. Those positive results have to do with the social reintegration of persons serving a prison sentence. Deprivation of liberty entails contact with the prison subculture and its harmful effects, causing, in cases of long-term sentences, the so-called phenomenon of 'prisonization'. This negative effect of imprisonment could be reduced if ICTs were used inside prisons in the different areas where they can have an impact, which are treated in this research: (1) access to information and culture, (2) basic and advanced training, (3) employment, (4) communication with the outside world, (5) treatment, and (6) leisure and entertainment. The content of all of these areas could be improved if ICTs were introduced in prison, as shown by the experience of some prisons in Belgium, the United Kingdom and the United States. 
However, resistance to introducing ICTs in prisons stems from the fact that doing so could also carry risks concerning security and the commission of new offences. Considering these risks, the scope of this paper is to offer a concrete proposal for introducing ICTs in prison while avoiding those risks. The aim is to extend the possibilities that ICTs offer to all inmates, so that they can start building a life outside that is far from delinquency, and especially to those inmates who are close to release. Reforming prisons in this sense is considered by the author an opportunity to offer inmates a progressive resettlement into life in freedom, with a higher likelihood of obeying the law and escaping recidivism. The value that new technologies would add to education, employment, communications or treatment for a person deprived of liberty constitutes a way of humanizing prisons in the 21st century.
Keywords: deprivation of freedom, information and communication technologies, imprisonment, social reintegration
Procedia PDF Downloads 165