Search results for: Dufour’s number
1252 Determination of Friction and Damping Coefficients of Folded Cover Mechanism Deployed by Torsion Springs
Authors: I. Yilmaz, O. Taga, F. Kosar, O. Keles
Abstract:
In this study, the friction and damping coefficients of a folded cover mechanism were obtained from experimental studies and data. Friction and damping coefficients are the most important inputs for a mechanism analysis: friction and damping are the two factors that govern the deployment time of a mechanism and its dynamic behavior. Although recommended friction coefficient values exist in the literature, damping varies from one mechanical system to another, so the damping coefficient should be obtained from mechanism test outputs. In this study, the folded cover mechanism uses torsion springs to deploy covers from their initially closed, folded position. Torsion springs give the folded covers the desired deployment time under variable environmental conditions. Verifying every design revision with system tests would be very costly, so some decisions are made on the basis of numerical methods. In this study, two folded covers are required to deploy simultaneously. Scotch-yoke and crank-rod mechanisms were combined to deploy the folded covers simultaneously. The mechanism was unlocked with a pyrotechnic bolt mounted on the scotch-yoke disc. When the pyrotechnic bolt fired, the torsion springs provided the rotational movement for the mechanism. A high-speed camera recorded the dynamic behavior of the system during deployment. The mechanism was modeled as a rigid body in Adams MBD (multi-body dynamics), and the torque values provided by the torsion springs were used as an input. A well-chosen range of friction and damping coefficients was defined in Adams DOE (design of experiments), and a large number of analyses were performed until the deployment time of the folded covers agreed with the test data observed in the high-speed camera recordings; thus the deployment time of the mechanism and its dynamic behavior were obtained. The same mechanism was tested with different torsion springs and torque values, and the outputs were compared with the numerical models.
According to this comparison, the friction and damping coefficients obtained in this study can be used safely when studying folded objects that are required to deploy simultaneously. In addition to the rigid-body model generated with Adams, a finite element model of the folded mechanism was generated with Abaqus, and the outputs of the rigid-body model and the finite element model were compared. Finally, reasonable solutions were suggested for the differing outputs of these solution methods.
Keywords: damping, friction, pyro-technic, scotch-yoke
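The DOE step described in this abstract is, at its core, a parameter sweep: simulate the mechanism for each candidate (friction, damping) pair and keep the pair whose deployment time best matches the camera measurement. A minimal sketch of that idea for a one-degree-of-freedom cover follows; all numerical values (spring torque, inertia, deployment angle, measured time) are illustrative assumptions, not values from the study.

```python
import itertools

def deployment_time(mu, c, torque=2.0, inertia=0.05, theta_end=1.57,
                    friction_torque=1.0, dt=1e-4, t_max=5.0):
    """Euler integration of a 1-DOF cover: I*theta'' = T - mu*Tf - c*theta'."""
    theta, omega, t = 0.0, 0.0, 0.0
    while theta < theta_end and t < t_max:
        alpha = (torque - mu * friction_torque - c * omega) / inertia
        omega += alpha * dt
        theta += omega * dt
        t += dt
    return t

def fit_coefficients(t_measured, mus, cs):
    """DOE-style grid sweep: the (mu, c) pair whose simulated time best matches the test."""
    return min(itertools.product(mus, cs),
               key=lambda p: abs(deployment_time(*p) - t_measured))

mus = [0.05, 0.10, 0.15, 0.20]
cs = [0.01, 0.05, 0.10, 0.20]
mu_fit, c_fit = fit_coefficients(0.35, mus, cs)  # 0.35 s: hypothetical camera reading
```

In the actual study the forward model is the Adams multi-body simulation rather than this one-line ODE, but the sweep-and-compare loop is the same.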
Procedia PDF Downloads 322
1251 Pressure-Robust Approximation for the Rotational Fluid Flow Problems
Authors: Medine Demir, Volker John
Abstract:
Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially in the large-scale flows considered in the ocean and atmosphere, as well as many physical and industrial applications. The Coriolis and centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems. For such applications it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier--Stokes equations in a rotating frame have been investigated in a number of papers using the classical inf-sup stable mixed methods, like Taylor--Hood pairs, to contribute to the analysis and the accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier--Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods like Scott--Vogelius pairs. This approach might, however, come with a modification of the meshes, like the use of barycentric-refined grids in the case of Scott--Vogelius pairs. This strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also be in conflict with the solver for the linear system.
An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods could be carried over to other types of flow problems, which will be considered in future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott--Vogelius element, the pressure-wired Stokes element, such that the inf-sup constant is independent of nearly-singular vertices.
Keywords: navier-stokes equations in a rotating frame of reference, coriolis force, pressure-robust error estimate, scott-vogelius pairs of finite element spaces
Procedia PDF Downloads 67
1250 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research addresses which transit network structure is most suitable to serve specific demand requirements under an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, there is the traditional answer, widespread in our cities, which develops a high number of lines to connect most origin-destination pairs by direct trips; this approach is based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transfers are essential to complete most trips. To answer which of them is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. The dispersion degree is defined in a simple way by assuming that only a central area attracts all trips: if this area is small, we have a highly concentrated mobility pattern; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability for each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when the demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If the urban dispersion advances further, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers.
The area of applicability of each network strategy is not constant; it depends on the characteristics of the demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study by relating the dispersion parameters of the model to direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, defined by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify which area of applicability the city falls into. In summary, from a strategic point of view, this methodology can determine the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
Keywords: analytical network design model, network structure, public transport, urban dispersion
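As an illustration of the concentration measure mentioned in this abstract, the Gini coefficient of a trip distribution across zones can be computed with the standard sorted-rank formula; the zone trip counts below are invented for the example, not data from the case study.

```python
def gini(values):
    """Gini coefficient of a distribution: 0 = perfectly uniform, -> 1 = concentrated."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Equivalent to mean absolute difference / (2 * mean), via the sorted-rank form.
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1)) / (n * total)

centralized = gini([100, 2, 2, 2, 2])   # trips concentrated in one central zone
dispersed = gini([20, 22, 19, 21, 18])  # trips spread evenly across zones
```

A city whose trip distribution yields a high Gini value would fall in the "radial structure" region of the applicability map, while a low value points toward the transfer-based design.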
Procedia PDF Downloads 230
1249 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model were carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model.
Conditional sampling is used to analyse the size and dynamics of the flow structures in the wake behind the train at the time of maximum velocity. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when they are applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
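The integral boundary-layer quantities named in this abstract (displacement thickness, momentum thickness, form factor) can be computed from a sampled velocity profile by trapezoidal integration. The sketch below checks itself against the textbook 1/7th-power-law turbulent profile, for which the form factor is 9/7; the profile is a generic approximation, not data from the experiment.

```python
def integral_thicknesses(ys, us, u_inf):
    """Trapezoidal displacement thickness, momentum thickness, and form factor
    from a velocity profile u(y) sampled at wall-normal positions ys."""
    delta_star = theta = 0.0
    for (y0, u0), (y1, u1) in zip(zip(ys, us), zip(ys[1:], us[1:])):
        f0, f1 = u0 / u_inf, u1 / u_inf
        dy = y1 - y0
        delta_star += 0.5 * ((1 - f0) + (1 - f1)) * dy       # integral of (1 - u/U)
        theta += 0.5 * (f0 * (1 - f0) + f1 * (1 - f1)) * dy  # integral of (u/U)(1 - u/U)
    return delta_star, theta, delta_star / theta             # H = delta*/theta

# Check against the 1/7th-power-law profile: delta* = delta/8, H = 9/7
delta = 0.05  # boundary layer thickness in metres (illustrative)
ys = [i * delta / 1000 for i in range(1001)]
us = [(y / delta) ** (1 / 7) for y in ys]
d_star, mom, H = integral_thicknesses(ys, us, 1.0)
```

In the experiment these integrals would be evaluated on the PIV-measured profiles at each streamwise station along the model.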
Procedia PDF Downloads 305
1248 Single-Case Experimental Design: Exploratory Pilot Study on the Feasibility and Effect of Virtual Reality for Pain and Anxiety Management During Care
Authors: Corbel Camille, Le Cerf Flora, Corveleyn Xavier
Abstract:
Introduction: Aging is a physiological phenomenon accompanied by anatomical and cognitive changes that can lead to anxiety and pain. These can have significant impacts on quality of life, life expectancy, and the progression of cognitive disorders. Virtual Reality Intervention (VRI) is increasingly recognized as a non-pharmacological approach to alleviating pain and anxiety in children and young adults. However, while recent studies have explored the feasibility of applying VRI in the older population, further studies are still required to establish its benefits in various contexts. Objective: This pilot study, following international clinical trial methodology recommendations for VRI in healthcare, aims to evaluate the feasibility and effects of using VRI with a 101-year-old woman residing in a nursing home who undergoes weekly painful and anxiety-provoking wound dressing changes. Methods: Following the international recommendations, this study focused on feasibility and preliminary results. The Single-Case Experimental Design protocol consists of two distinct phases: control (Phase A) and personalized VRI (Phase B), each lasting 6 sessions. Data were collected before, during, and after care, using measures of pain (Algoplus and numerical scale), anxiety (Hospital Anxiety Scale and numerical scale), VRI experience (semi-structured interview), and physiological measures. Results: The results suggest that the utilization of VRI is both feasible and well tolerated by the participant. VRI contributed to a decrease in pain and anxiety during care sessions, with a more significant impact on pain than on anxiety, which showed a gradual and slight decrease. Physiological data, particularly those related to stress, also indicate a reduction in physiological activity during VRI. Conclusion: This pilot study confirms the feasibility and benefits of using virtual reality for managing pain and anxiety in an older adult in a nursing home.
In light of these results, it is essential that future studies focus on setting up randomized controlled trials (RCTs). These studies should involve a representative number of older adults to ensure generalizable data. This rigorous, controlled methodology will enable us to assess the effectiveness of virtual reality more accurately in various care settings, measure its impact on clinical parameters such as pain and anxiety, and explore the long-term implications of this intervention.
Keywords: anxiety reduction, nursing home, older adult, pain management, virtual reality
Procedia PDF Downloads 64
1247 Isolation and Selection of Strains Perspective for Sewage Sludge Processing
Authors: A. Zh. Aupova, A. Ulankyzy, A. Sarsenova, A. Kussayin, Sh. Turarbek, N. Moldagulova, A. Kurmanbayev
Abstract:
One method for the bioconversion of organic waste into environmentally friendly fertilizer is composting. Microorganisms that produce hydrolytic enzymes play a significant role in accelerating the composting of organic waste. We studied the enzymatic potential (amylase, protease, cellulase, lipase, and urease activity) of bacteria isolated from the sewage sludge of the cities of Nur-Sultan, Rudny, and Fort-Shevchenko, from dacha soil of Nur-Sultan city, and from freshly cut grass from the dacha, with the aim of processing organic waste and identifying active strains. Microorganism isolation was carried out by the enrichment culture method on liquid nutrient media, followed by inoculation on different solid media to isolate individual colonies. As a result, sixty-one microorganisms were isolated, three of which were thermophiles (DS1, DS2, and DS3). The highest numbers of isolates, twenty-one and eighteen, were obtained from the sewage sludge of Nur-Sultan and Rudny, respectively. Ten isolates were obtained from the wastewater of the sewage treatment plant in Fort-Shevchenko. From the dacha soil of Nur-Sultan city and the freshly cut grass, 9 and 5 isolates were obtained, respectively. The lipolytic, proteolytic, amylolytic, cellulolytic, ureolytic, and oil-oxidizing activities of the isolates were studied. According to the experimental results, starch hydrolysis (amylolytic activity) was found in 2 isolates, CB2/2 and CB2/1. Three isolates, CB2, CB2/1, and CB1/1, were selected for the highest ability to break down casein. Among the 61 isolated bacterial cultures, three isolates could break down fats: CB3, CBG1/1, and IL3. Seven strains had cellulolytic activity: DS1, DS2, IL3, IL5, P2, P5, and P3. Six isolates rapidly decomposed urea. Isolate P1 could break down casein and cellulose. Isolate DS3 was a thermophile and had cellulolytic activity.
Thus, based on the conducted studies, 15 isolates were selected as having potential for sewage sludge composting: CB2, CB3, CB1/1, CB2/2, CBG1/1, CB2/1, DS1, DS2, DS3, IL3, IL5, P1, P2, P5, and P3. The selected strains were identified by mass spectrometry (MALDI-TOF). Isolate CB3 was assigned to Rhodococcus rhodochrous; two isolates, CB2 and CB1/1, to Bacillus cereus; CB2/2 to Chryseobacterium arachidis; CBG1/1 to Pseudoxanthomonas sp.; CB2/1 to Bacillus megaterium; DS1 to Pediococcus acidilactici; DS2 to Paenibacillus residui; DS3 to Brevibacillus invocatus; three strains, IL3, P5, and P3, to Enterobacter cloacae; two strains, IL5 and P2, to Ochrobactrum intermedium; and P1 to Bacillus licheniformis. Hence, 61 isolates were obtained from the wastewater of the cities of Nur-Sultan, Rudny, and Fort-Shevchenko, the dacha soil of Nur-Sultan city, and freshly cut grass from the dacha. Based on the highest enzymatic activity, 15 active isolates were selected and identified. These strains may become candidates for a biopreparation for sewage sludge processing.
Keywords: sewage sludge, composting, bacteria, enzymatic activity
Procedia PDF Downloads 102
1246 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites
Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy
Abstract:
The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site): regions consisting of the points that are closer to a given site than to all the others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. The construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which there are efficient O(n log n) algorithms for n segments. The reduction includes preprocessing, constructing the segments from the polygons' sides, and postprocessing, constructing each polygon's locus by merging the loci of its sides. This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon; the polygon is obviously included in its own locus. Using these properties in the VD construction algorithm is a way to reduce computations. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be exploited effectively. The solution is again performed by reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons; each site is given an orientation such that the interior of the polygon lies to its left. The proposed algorithm then constructs the VD of the set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved through the following fundamental solutions: 1. The algorithm constructs only those VD edges which are on the outside of the polygons.
The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time, not in logarithmic time as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm is also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.
Keywords: voronoi diagram, sweepline, polygon sites, fortune's algorithm, segment sites
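To make the notion of a polygon site's locus concrete, the membership test can be stated as a brute-force nearest-boundary check: a point belongs to the locus of the polygon whose boundary is closest. This is emphatically not the sweepline algorithm of the abstract (which achieves O(n log n) with constant-time "medium vertex" events); it is only the defining predicate, with invented example polygons.

```python
def seg_dist(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy
    if length_sq == 0:  # degenerate segment: a single point
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # Project p onto the line through a-b, clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def poly_dist(p, poly):
    """Distance from p to the boundary of a polygon given as a vertex list."""
    return min(seg_dist(p, poly[i], poly[(i + 1) % len(poly)])
               for i in range(len(poly)))

def nearest_site(p, polys):
    """Index of the polygon site whose Voronoi locus contains p (brute force)."""
    return min(range(len(polys)), key=lambda i: poly_dist(p, polys[i]))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangle = [(4, 0), (5, 0), (4.5, 1)]
```

Classifying every query point this way costs O(n) per point; the value of the sweepline construction is that it produces the locus boundaries themselves once, in O(n log n).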
Procedia PDF Downloads 177
1245 Assessment of Microclimate in Abu Dhabi Neighborhoods: On the Utilization of Native Landscape in Enhancing Thermal Comfort
Authors: Maryam Al Mheiri, Khaled Al Awadi
Abstract:
The urban population is continuously increasing worldwide, and the speed at which cities urbanize creates major challenges, particularly in terms of creating sustainable urban environments. Rapid urbanization often leads to negative environmental impacts and changes in urban microclimates. Moreover, when rapid urbanization is paired with limited landscape elements, the effects on human health, due to increased pollution, and on thermal comfort, due to Urban Heat Island effects, are amplified. Urban Heat Island (UHI) describes the increase of temperatures in urban areas in comparison to their rural surroundings and, as we discuss in this paper, it impacts pedestrian comfort, reducing the number of walking trips and the use of public space. It is thus very necessary to investigate the quality of outdoor built environments in order to improve the quality of life in cities. The main objective of this paper is to address the morphology of Emirati neighborhoods, setting a quantitative baseline by which to assess and compare the spatial characteristics and microclimate performance of existing typologies in Abu Dhabi. This morphological mapping and analysis will help in understanding the built landscape of Emirati neighborhoods in this city, whose form has changed and evolved across different periods. This will eventually help to model the use of different design strategies, such as landscaping, to mitigate UHI effects and enhance outdoor urban comfort. Further, the impact of different native plant types and species in reducing UHI effects and enhancing outdoor urban comfort will be modeled, allowing for an assessment of the effect of increasing the landscaped areas in these neighborhoods. This study uses ENVI-met, an analytical, three-dimensional, high-resolution microclimate modeling software.
This micro-scale urban climate model will be used to evaluate existing conditions and generate scenarios in different residential areas, with different vegetation surfaces and landscaping, and to examine their impact on surface temperatures during summer and autumn. In parallel to these simulations, field measurements will be included to calibrate the ENVI-met model. This research therefore takes an experimental approach, using simulation software, and a case study strategy for the evaluation of a sample of residential neighborhoods. A comparison of the results of these scenarios constitutes a first step towards making recommendations about what constitutes sustainable landscapes for Abu Dhabi neighborhoods.
Keywords: landscape, microclimate, native plants, sustainable neighborhoods, thermal comfort, urban heat island
Procedia PDF Downloads 310
1244 Application of Mesenchymal Stem Cells in Diabetic Therapy
Authors: K. J. Keerthi, Vasundhara Kamineni, A. Ravi Shanker, T. Rammurthy, A. Vijaya Lakshmi, Q. Hasan
Abstract:
Pancreatic β-cells are the predominant insulin-producing cell type within the Islets of Langerhans, and insulin is the primary hormone that regulates carbohydrate and fat metabolism. Apoptosis of β-cells or insufficient insulin production leads to Diabetes Mellitus (DM). Current therapy for diabetes consists of either medical management or insulin replacement with regular monitoring. Replacement of β-cells is an attractive treatment option for both Type-1 and Type-2 DM in view of a recent paper indicating that β-cell apoptosis is the common underlying cause of both types of DM. With the development of the Edmonton protocol, pancreatic β-cell allotransplantation became possible, but it is still not considered the standard of care due to the subsequent requirement for lifelong immunosuppression and the scarcity of suitable healthy organs from which to retrieve pancreatic β-cells. Fetal pancreatic cells from abortuses were developed as a possible therapeutic option for diabetes; however, this posed several ethical issues. Hence, in the present study, mesenchymal stem cells (MSCs) isolated from human umbilical cord (HUC) tissue were differentiated into insulin-producing cells. MSCs have already made their mark in the growing field of regenerative medicine, and their therapeutic worth has been validated for a number of conditions. HUC samples were collected with prior informed consent as approved by the institutional ethical committee. HUC samples (n=26) were processed using a combination of mechanical and enzymatic (collagenase-II, 100 U/ml, Gibco) methods to obtain MSCs, which were cultured in vitro in L-DMEM (low-glucose Dulbecco's Modified Eagle's Medium, Sigma, 4.5 mM glucose/L) with 10% FBS in a 5% CO2 incubator at 37°C. After reaching 80-90% confluency, MSCs were characterized by flow cytometry and immunocytochemistry for specific cell surface antigens. The cells expressed CD90+, CD73+, CD105+, CD34-, CD45-, HLA-DR-/low, and Vimentin+.
These cells were differentiated into β-cells by using H-DMEM (high-glucose Dulbecco's Modified Eagle's Medium, 25 mM glucose/L, Gibco), β-mercaptoethanol (0.1 mM, Hi-Media), basic fibroblast growth factor (10 µg/L, Gibco), and nicotinamide (10 mmol/L, Hi-Media). Pancreatic β-cells were confirmed by positive dithizone staining and were found to be functionally active, as they released 8 IU/ml insulin on glucose stimulation. Isolating MSCs from usually discarded, abundantly available HUC tissue, expanding them, and differentiating them into β-cells may be the most feasible cell therapy option for the millions of people suffering from DM globally.
Keywords: diabetes mellitus, human umbilical cord, mesenchymal stem cells, differentiation
Procedia PDF Downloads 259
1243 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model
Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis
Abstract:
Magnetic navigation of a drug inside the human vessels is a very important concept, since the drug is delivered to the desired area. Consequently, the quantity of drug required to reach therapeutic levels is reduced, while the drug concentration at the targeted sites is increased. Magnetic navigation of drug agents can be achieved with magnetic nanoparticles, where anti-tumor agents are loaded on the surface of the nanoparticles. The magnetic field required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors influencing the efficiency of magnetic nanoparticles in magnetically driven biomedical applications are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered: the magnetic force from the MRI's main static magnetic field, as well as the magnetic field gradient force from the special propulsion gradient coils. The static field is responsible for the aggregation of the nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces between the aggregated nanoparticles and the wall and the Stokes drag force on each particle are considered, while only spherical particles are used in this study. In addition, the gravitational force and the force due to buoyancy are included. Finally, the van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of the particles' motion.
To determine the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used in order to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field. At the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of the particles when they are navigated by the magnetic field produced by the MRI device. Under the influence of fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. The platform can navigate the particles along the desired trajectory with an efficiency of 80-90%. On the other hand, a small number of particles become stuck to the walls and remain there for the rest of the simulation.
Keywords: artery, drug, nanoparticles, navigation
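The simulate-evaluate-update loop described in this abstract can be illustrated with a toy stand-in: a drift-only particle model in place of the OpenFOAM flow solver, and a deterministic compass (pattern) search in place of CMA-ES, minimizing the mean distance between the simulated path and a desired trajectory. Everything below is a simplified assumption for illustration, not the platform's actual physics or optimizer.

```python
def simulate(gradients, steps=50, dt=0.1):
    """Toy drift model: particle velocity is proportional to the gradient field (gx, gy)."""
    gx, gy = gradients
    x, y = 0.0, 0.0
    path = []
    for _ in range(steps):
        x += gx * dt
        y += gy * dt
        path.append((x, y))
    return path

def cost(gradients, target):
    """Mean distance between the simulated path and the desired trajectory."""
    path = simulate(gradients, steps=len(target))
    return sum(((x - tx) ** 2 + (y - ty) ** 2) ** 0.5
               for (x, y), (tx, ty) in zip(path, target)) / len(target)

def compass_search(target, sigma=0.5, tol=1e-3, max_iter=200):
    """Deterministic pattern search over the gradient field: a minimal stand-in for CMA-ES."""
    best = [0.0, 0.0]
    best_c = cost(best, target)
    for _ in range(max_iter):
        improved = False
        for i in (0, 1):                    # perturb each field component in turn
            for step in (sigma, -sigma):
                cand = list(best)
                cand[i] += step
                c = cost(cand, target)
                if c < best_c:
                    best, best_c = cand, c
                    improved = True
        if not improved:
            sigma *= 0.5                    # shrink the step when stuck
            if sigma < tol:
                break
    return best, best_c

# Desired trajectory, reachable exactly with the gradient pair (1.0, 0.5)
target = [((i + 1) * 0.1, (i + 1) * 0.05) for i in range(50)]
best_g, best_cost = compass_search(target)
```

In the real platform, `cost` is an entire OpenFOAM run and the search space is the time-varying gradient coil settings, but the outer optimization loop has the same shape.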
Procedia PDF Downloads 107
1242 Mikrophonie I (1964) by Karlheinz Stockhausen - Between Idea and Auditory Image
Authors: Justyna Humięcka-Jakubowska
Abstract:
1. Background in music analysis. Traditionally, when we think about a composer's sketches, the chances are that we are thinking in terms of the working out of detail rather than the evolution of an overall concept. Since music is a 'time art', questions of form cannot be entirely detached from considerations of time. One could say that composers tend to regard time either as a space filled gradually and partly intuitively, or as something to be occupied according to a specific strategy. In my opinion, one thing that sheds light on Stockhausen's compositional thinking is his frequent use of 'form schemas', often a single-page representation of the entire structure of a piece. 2. Background in music technology. Sonic Visualiser is a program used to study a musical recording. It is an open-source application for viewing, analysing, and annotating music audio files. It contains a number of visualisation tools, which are designed with useful default parameters for musical analysis. Additionally, SV's Vamp plugin format supports analyses such as structural segmentation. 3. Aims. The aim of my paper is to show how SV may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. I want to show that 'traditional' music-analytic methods do not allow one to indicate the interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure. 4. Main Contribution. Stockhausen dealt with the most diverse musical problems by the most varied methods. A characteristic that he never ceased to place at the center of his thought and work was the quest for a new balance founded upon an acute connection between speculation and intuition.
In the case of Mikrophonie I (1964) for tam-tam and 6 players, Stockhausen makes a distinction between the 'connection scheme', which indicates the ground rules underlying all versions, and the form scheme, which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a 'form scheme', which is what one can hear on the CD recording. In the current study, the insight into the compositional strategy chosen by Stockhausen is compared with the auditory image, that is, with the perceived musical surface. Stockhausen's musical work is analyzed both in terms of melodic/voice and timbre evolution. 5. Implications. The current study shows how musical structures determine the musical surface. My general assumption is that, while listening to music, we can extract basic kinds of musical information from musical surfaces. It is shown that interactive strategies of musical structure analysis can offer a very fruitful way of looking directly into certain structural features of music.
Keywords: automated analysis, composer's strategy, mikrophonie I, musical surface, stockhausen
Procedia PDF Downloads 297
1241 Effects of Conversion of Indigenous Forest to Plantation Forest on the Diversity of Macro-Fungi in Kereita Forest, Kikuyu Escarpment, Kenya
Authors: Susan Mwai, Mary Muchane, Peter Wachira, Sheila Okoth, Muchai Muchane, Halima Saado
Abstract:
Tropical forests harbor a wide range of biodiversity and a richer macro-fungi diversity than the temperate regions of the world. However, this biodiversity faces the threat of extinction, given the rate of forest loss taking place before proper study and documentation of macro-fungi is achieved. The present study was undertaken to determine the effect of converting indigenous habitat to plantation forest on macro-fungi diversity. To achieve this objective, an inventory focusing on macro-fungi diversity was conducted within the Kereita block of the Kikuyu Escarpment forest, which lies on the southern side of the Aberdare mountain range. The inventory covered the indigenous forest and a more than 15-year-old Patula plantation forest, during the wet (long rain season, December 2014) and dry (short rain season, May 2015) seasons. In each forest type, 15 permanent (20 m x 20 m) sampling plots distributed across three (3) forest blocks were used. Field and laboratory methods involved recording the abundance of fruiting bodies, establishing the taxonomic identity of species, and analyzing diversity indices and measures in terms of species richness, density, and diversity. The R statistical program was used to analyze species diversity, and Canoco 4.5 software was used for species composition. A total of 76 genera in 28 families and 224 species were encountered across both forest types. The best-represented families were Agaricaceae (16%), Polyporaceae (12%), Marasmiaceae, and Mycenaceae (7%). Most of the recorded macro-fungi were saprophytic, mostly colonizing litter-based (38%) and wood-based (34%) substrates, followed by soil organic matter-dwelling species (17%). Ectomycorrhizal fungi (5%) and parasitic fungi (2%) were the least encountered.
The data established that the indigenous forest (native ecosystem) hosts a wider macro-fungi assemblage in terms of density (2.6 individual fruit bodies/m2), species richness (8.3 species/plot), and species diversity (1.49 at plot level) than the plantation forest. The conversion of native forest to plantation forest also altered species composition, though it did not alter species diversity. Seasonality was also shown to significantly affect the diversity of macro-fungi, with 61% of the total species present during the wet season. Based on the present findings, forested ecosystems in Kenya hold a diverse macro-fungi community, which warrants conservation measures.
Keywords: diversity, indigenous forest, macro-fungi, plantation forest, season
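The plot-level measures reported in this abstract (species richness per plot, fruit-body density, and a diversity value) can all be derived from abundance counts. The study itself used R and Canoco; the following is only a minimal Python sketch, with hypothetical counts, of how richness, density, and the Shannon diversity index are computed:

```python
import math

def richness(counts):
    """Number of species observed with at least one fruiting body."""
    return sum(1 for c in counts if c > 0)

def density(counts, area_m2):
    """Fruiting bodies per square metre of sampled plot area."""
    return sum(counts) / area_m2

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over observed species."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Hypothetical abundances of four species in one 20 m x 20 m (400 m2) plot
plot_counts = [10, 10, 10, 10]
print(richness(plot_counts))           # 4 species
print(density(plot_counts, 400.0))     # 0.1 fruit bodies per m2
print(round(shannon(plot_counts), 3))  # ln(4), about 1.386
```

With evenly distributed species, H' reaches its maximum ln(S), which is why a value near 1.49 per plot is plausible for a handful of co-dominant species.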
Procedia PDF Downloads 214
1240 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOA was first introduced in 2014, when its performance on Max-Cut graphs exceeded that of the best known classical algorithm of the time. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the original work is often used as a benchmark and a foundation for exploring QAOA variants. Alongside other famous algorithms like Grover's or Shor's, it highlights the potential that quantum computing holds. It also points to a real quantum advantage which, if the hardware continues to improve, could usher in a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when computing solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs do not rely on gradient-based or linear optimization methods for the search in the latent space, and because of this freedom from gradients, they should suffer less from barren plateaus. Secondly, given that the algorithm searches the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization.
The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that consistently beats the original QAOA or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
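The gradient-free optimization loop described above can be illustrated without quantum hardware. For a single isolated edge, the depth-1 QAOA expectation of the Max-Cut objective has a known closed form, 1/2 + (1/4)sin(4*beta)sin(gamma), which peaks at 3/4. The sketch below is not the authors' implementation: the population size, mutation scale, and the single-edge cost function are illustrative choices showing how a simple elitist (mu+lambda) evolutionary strategy maximizes that expectation over the angles (gamma, beta):

```python
import math
import random

def expected_cut(gamma, beta):
    """Closed-form depth-1 QAOA Max-Cut expectation for one isolated edge."""
    return 0.5 + 0.25 * math.sin(4 * beta) * math.sin(gamma)

def evolve(cost, pop_size=20, generations=150, sigma=0.2, seed=0):
    """Elitist (mu+lambda)-style EA: keep the best half, add Gaussian mutants."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, math.pi), rng.uniform(0, math.pi))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: cost(*p), reverse=True)
        parents = pop[: pop_size // 2]
        children = [(g + rng.gauss(0, sigma), b + rng.gauss(0, sigma))
                    for (g, b) in parents]
        pop = parents + children  # elitism: parents survive unchanged
    return max(pop, key=lambda p: cost(*p))

best_gamma, best_beta = evolve(expected_cut)
print(round(expected_cut(best_gamma, best_beta), 3))  # close to the optimum 0.75
```

No gradient of the cost is ever taken, and each generation's cost evaluations are independent, which is exactly what makes this loop parallelizable across the population.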
Procedia PDF Downloads 60
1239 Differences in Assessing Hand-Written and Typed Student Exams: A Corpus-Linguistic Study
Authors: Jutta Ransmayr
Abstract:
The digital age has long since arrived at Austrian schools, and both society and educationalists demand that digital means be integrated accordingly into day-to-day school routines. The Austrian school-leaving exam (A-levels) can therefore now be written either by hand or on a computer. However, the choice of writing medium (pen and paper or computer) for written examination papers, which are considered 'high-stakes' exams, raises a number of questions that until recently had not been adequately investigated and answered, such as: What effects do the different conditions of text production in the written German A-levels have on normative linguistic accuracy? How do the spelling skills evident in German A-level papers written with a pen differ from those in papers written on the computer? How is the teachers' assessment related to this? And which practical desiderata for German didactics can be derived from this? These questions were investigated in a trilateral pilot project of the Austrian Center for Digital Humanities (ACDH) of the Austrian Academy of Sciences and the University of Vienna, in cooperation with the Austrian Ministry of Education and the Council for German Orthography. A representative Austrian learner corpus, consisting of around 530 German A-level papers from all over Austria (pen- and computer-written), was compiled and subjected to a quantitative (corpus-linguistic and statistical) and qualitative investigation of the spelling and punctuation performance of the high school graduates and of the differences between pen- and computer-written papers and their assessments. Relevant studies are currently available mainly from the Anglophone world. These have shown that writing on the computer increases the motivation to write and has positive effects on the length of the text and, in some cases, on its quality.
Depending on the writing situation and other technical aids, better spelling and punctuation results have also been found in computer-written texts compared to handwritten ones. Studies also point towards a tendency among teachers to rate handwritten texts better than computer-written texts. In this paper, the first comparable results from the German-speaking area are presented. The research has shown, on the one hand, that there are significant differences between handwritten and computer-written papers with regard to performance in orthography and punctuation. On the other hand, the corpus-linguistic investigation and the subsequent statistical analysis made it clear that not only the teachers' assessments of the students' spelling performance but also their overall assessments of the exam papers vary enormously; the production medium (pen and paper or computer) also seems to play a decisive role.
Keywords: exam paper assessment, pen and paper or computer, learner corpora, linguistics
Procedia PDF Downloads 170
1238 Evaluation of Electrophoretic and Electrospray Deposition Methods for Preparing Graphene and Activated Carbon Modified Nano-Fibre Electrodes for Hydrogen/Vanadium Flow Batteries and Supercapacitors
Authors: Barun Chakrabarti, Evangelos Kalamaras, Vladimir Yufit, Xinhua Liu, Billy Wu, Nigel Brandon, C. T. John Low
Abstract:
In this work, we perform electrophoretic deposition (EPD) of activated carbon on a number of substrates to prepare symmetrical coin cells for supercapacitor applications. From several recipes involving the evaluation of solvents such as isopropyl alcohol, N-Methyl-2-pyrrolidone (NMP), and acetone, binders such as polyvinylidene fluoride (PVDF), and charging agents such as magnesium chloride, we identify a working method for producing supercapacitors that consistently achieve 100 F/g. We then adapt this EPD method to deposit reduced graphene oxide on SGL 10AA carbon paper to obtain cathodic materials for testing in a hydrogen/vanadium flow battery. In addition, a self-supported hierarchical carbon nano-fibre is prepared by electrospray deposition of an iron phthalocyanine solution onto a temporary substrate, followed by carbonisation to remove heteroatoms. This process also induces a degree of nitrogen doping in the carbon nano-fibres (CNFs), which improves their catalytic performance significantly, as detailed in other publications. The CNFs are then used as catalysts by attaching them to graphite felt electrodes facing the membrane inside an all-vanadium flow battery (Scribner cell using serpentine flow distribution channels), and efficiencies as high as 60% are noted at high current densities of 150 mA/cm². Around 20 charge and discharge cycles show that the CNF catalysts consistently perform better than pristine graphite felt electrodes. Following this, we also test the CNFs as an electro-catalyst in the hydrogen/vanadium flow battery (on the cathodic side, as mentioned briefly above), facing the membrane, based upon past studies from our group. Once again, we note consistently good efficiencies of 85% and above for CNF-modified graphite felt electrodes, in comparison to 60% for pristine felts, at a low current density of 50 mA/cm² (over 20 charge and discharge cycles of the battery).
From this preliminary investigation, we conclude that the CNFs may be used as catalysts for other systems, such as vanadium/manganese, manganese/manganese, and manganese/hydrogen flow batteries, in the future. We are generating data for such systems at present, and further publications are expected.
Keywords: electrospinning, carbon nano-fibres, all-vanadium redox flow battery, hydrogen-vanadium fuel cell, electrocatalysis
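The cycling efficiencies quoted above (around 60% and 85% and above) are typically round-trip energy efficiencies derived from charge/discharge data. As a minimal sketch, with hypothetical cycle values rather than the authors' data, coulombic, voltage, and energy efficiencies for one cycle can be computed as:

```python
def cycle_efficiencies(q_charge, q_discharge, v_charge_avg, v_discharge_avg):
    """Return (coulombic, voltage, energy) efficiencies for one cycle.

    q_* are charge passed (e.g. in Ah); v_*_avg are mean cell voltages (V).
    Energy efficiency equals coulombic efficiency times voltage efficiency.
    """
    ce = q_discharge / q_charge
    ve = v_discharge_avg / v_charge_avg
    return ce, ve, ce * ve

# Hypothetical cycle: 1.0 Ah in, 0.95 Ah out; 1.45 V mean charge, 1.30 V mean discharge
ce, ve, ee = cycle_efficiencies(1.0, 0.95, 1.45, 1.30)
print(round(ce, 3), round(ve, 3), round(ee, 3))
```

A catalyst that lowers overpotentials raises the voltage efficiency term, which is one route by which the CNF-modified felts could lift the overall efficiency from about 60% to above 85%.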
Procedia PDF Downloads 291
1237 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification
Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi
Abstract:
Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection has focused on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, integrated and combined with the search, or as an alternative to and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set using a class-correlation measure. Then, with a new automatic feature clustering algorithm, the feature set is divided into several feature clusters.
Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses the clustering integrated and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex, and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix
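The three steps above (relevance filtering, feature clustering, redundancy-aware sequential selection) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' algorithm: it uses absolute Pearson correlation with the class labels as the class-correlation measure, a simple correlation-threshold grouping in place of the paper's automatic clustering, and ranking by relevance in place of the separability measure:

```python
import numpy as np

def select_features(X, y, relevance_thr=0.5, cluster_thr=0.9):
    """Return indices of a relevant, non-redundant feature subset."""
    n_features = X.shape[1]
    # Step 1: relevance filter via |correlation with the class labels|
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_features)])
    relevant = [j for j in range(n_features) if relevance[j] >= relevance_thr]
    # Step 2: group strongly correlated (i.e. redundant) features into clusters
    cluster_of, clusters = {}, []
    for j in relevant:
        for c in clusters:
            if abs(np.corrcoef(X[:, j], X[:, c[0]])[0, 1]) >= cluster_thr:
                c.append(j)
                cluster_of[j] = c
                break
        else:
            c = [j]
            clusters.append(c)
            cluster_of[j] = c
    # Step 3: sequential selection; picking a feature discards its cluster mates
    selected = []
    remaining = sorted(relevant, key=lambda j: -relevance[j])
    while remaining:
        best = remaining[0]
        selected.append(best)
        remaining = [j for j in remaining if j not in cluster_of[best]]
    return selected

# Toy data: feature 0 predicts y, feature 1 duplicates it, feature 2 is irrelevant
y = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=float)
X = np.column_stack([y, 2 * y, np.array([1, 1, 0, 0, 1, 1, 0, 0], dtype=float)])
print(select_features(X, y))  # [0]: the duplicate (1) and the irrelevant (2) are dropped
```

The speed-up claimed in the abstract comes from step 3: each selection removes a whole cluster from the candidate pool, so redundant features are never evaluated again.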
Procedia PDF Downloads 137
1236 Variation in Wood Anatomical Properties of Acacia seyal var. seyal Tree Species Growing in Different Zones in Sudan
Authors: Hanadi Mohamed Shawgi Gamal, Ashraf Mohamed Ahmed Abdalla
Abstract:
Sudan is endowed with a great diversity of tree species; nevertheless, the utilization of wood resources has traditionally concentrated on a small number of species. Given the great variation in the climatic zones of Sudan, considerable variation is expected in the anatomical properties between and within species. This variation needs to be fully explored in order to suggest the best uses for each species. Modern research on wood has substantiated that the climatic conditions where a species grows have a significant effect on wood properties. Understanding the extent of this variability is important because the uses for each kind of wood are related to its characteristics; furthermore, the suitability or quality of wood for a particular purpose is determined by the variability of one or more of these characteristics. The present study demonstrates the effect of rainfall zones on some anatomical properties of Acacia seyal var. seyal growing in Sudan. For this purpose, twenty healthy trees were sampled randomly from two zones (ten trees per zone): one with relatively low rainfall (273 mm annually), represented by North Kordofan state and White Nile state, and one with relatively high rainfall (701 mm annually), represented by Blue Nile state and South Kordofan state. From each sampled tree, a stem disc (3 cm thick) was cut at 10% of stem height. One radius was obtained from each central stem disc. Two representative samples were taken from each disc, one at 10% of the distance from pith to bark and the second at 90%, in order to represent the juvenile and mature wood. The investigated anatomical properties were fiber length, fiber and vessel diameter, lumen diameter, and wall thickness, as well as cell proportions. The results of the current study reveal significant differences between zones in mature wood vessel diameter and wall thickness, as well as in juvenile wood vessel wall thickness. The higher values were detected in the drier zone.
Significant differences were also observed in juvenile wood fiber length, diameter, and wall thickness. Contrary to vessel diameter and wall thickness, fiber length, diameter, and wall thickness decreased in the drier zone. No significant differences were detected in the cell proportions of juvenile and mature wood. The significant differences in some fiber and vessel dimensions lead us to expect significant differences in wood density. From these results, Acacia seyal var. seyal seems to be well adapted to changes in rainfall and may survive in any rainfall zone.
Keywords: Acacia seyal var. seyal, anatomical properties, rainfall zones, variation
Procedia PDF Downloads 148
1235 Attitude in Academic Writing (CAAW): Corpus Compilation and Annotation
Authors: Hortènsia Curell, Ana Fernández-Montraveta
Abstract:
This paper presents the creation, development, and analysis of a corpus designed to study the presence of attitude markers and authorial stance in research articles in two different areas of linguistics (theoretical linguistics and sociolinguistics). These two disciplines are expected to behave differently in this respect, given the disparity in their discursive conventions. Attitude markers in this work are understood as the linguistic elements (adjectives, nouns, and verbs) used to convey the writer's stance towards the content presented in the article; they are crucial in understanding writer-reader interaction and the writer's position. These attitude markers are divided into three broad classes: assessment, significance, and emotion. In addition, we also consider first-person singular and plural pronouns and possessives, modal verbs, and passive constructions, which are other linguistic elements expressing the author's stance. The corpus, the Corpus of Attitude in Academic Writing (CAAW), comprises a collection of 21 articles collected from six journals indexed in the JCR. These articles were originally written in English by a single native-speaker author from the UK or USA and were published between 2022 and 2023. The total number of words in the corpus is approximately 222,400, with 106,422 from theoretical linguistics journals (Lingua, Linguistic Inquiry, and Journal of Linguistics) and 116,022 from sociolinguistics journals (International Journal of the Sociology of Language, Language in Society, and Journal of Sociolinguistics). Together with the corpus, we present the tool created for the compilation and storage of the corpus, along with a tool for automatic annotation. The steps followed in the compilation of the corpus were as follows. First, the articles were selected according to the parameters explained above. Second, they were downloaded and converted to txt format.
Finally, examples, direct quotes, section titles, and references were eliminated, since they do not involve the author's stance. The resulting texts were the input for the annotation of the linguistic features related to stance. As for the annotation, two articles (one from each subdiscipline) were annotated manually by the two researchers. An existing list was used as a baseline, and other attitude markers were identified, together with the other elements mentioned above. Once a consensus was reached, the rest of the articles were annotated automatically using the tool created for this purpose. The annotated corpus will serve as a resource for scholars working in discourse analysis (both in linguistics and communication) and related fields, since it offers new insights into the expression of attitude. The tools created for the compilation and annotation of the corpus will be useful for studying authorial attitude and stance in articles from any academic discipline: new data can be uploaded, and the list of markers can be enlarged. Finally, the tool can be extended to other languages, which will allow cross-linguistic studies of authorial stance.
Keywords: academic writing, attitude, corpus, English
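The automatic annotation step described above can be illustrated with a minimal sketch. The marker lists below are hypothetical stand-ins for the project's baseline list, and the matcher is a simple whole-word lookup rather than the project's actual tool:

```python
import re

# Hypothetical marker list: stance class -> lexical items (the real list is larger)
MARKERS = {
    "assessment": {"important", "valid", "convincing"},
    "significance": {"significant", "crucial", "central"},
    "emotion": {"surprising", "striking", "unfortunate"},
}

def annotate(text):
    """Return (token, class, character offset) for each attitude marker found."""
    hits = []
    for match in re.finditer(r"[A-Za-z']+", text):
        token = match.group().lower()
        for cls, items in MARKERS.items():
            if token in items:
                hits.append((token, cls, match.start()))
    return hits

sample = "This surprising result is crucial for a convincing account of stance."
for token, cls, offset in annotate(sample):
    print(f"{offset:3d} {cls:12s} {token}")
```

Extending the tool to another language, as the abstract proposes, would amount to swapping in a marker list for that language while keeping the matching logic unchanged.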
Procedia PDF Downloads 74
1234 Deep Learning for Image Correction in Sparse-View Computed Tomography
Authors: Shubham Gogri, Lucia Florescu
Abstract:
Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose delivered to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data result in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of sparse-view CT images. A first approach employed Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the Charbonnier loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based generator and a discriminator based on convolutional neural networks. To bolster the GAN's performance, both the Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of a perceptual loss, calculated from feature vectors extracted by a VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including the Wasserstein, Charbonnier, and perceptual losses. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) between the corrected images and the ground truth.
Furthermore, learning curves and qualitative comparisons provided additional evidence of the enhanced image quality and the network's increased stability, while pixel value intensity was preserved. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net
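Two of the quantities named above have compact definitions: the Charbonnier loss, a smooth variant of the L1 penalty, is mean(sqrt((x - y)^2 + eps^2)), and PSNR is 10 log10(MAX^2 / MSE). A minimal NumPy sketch (the epsilon value and the toy images are illustrative, not the study's settings):

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Smooth L1-like penalty; differentiable everywhere, robust to outliers."""
    return float(np.mean(np.sqrt((pred - target) ** 2 + eps ** 2)))

def psnr(pred, target, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, max_val]."""
    mse = float(np.mean((pred - target) ** 2))
    return 10.0 * np.log10(max_val ** 2 / mse)

truth = np.zeros((2, 2))
recon = np.full((2, 2), 0.1)  # uniform 0.1 error, so MSE = 0.01
print(round(float(psnr(recon, truth)), 1))  # 20.0 dB
print(charbonnier(truth, truth))            # about eps for identical images
```

Unlike plain L1, the Charbonnier term has a well-defined gradient at zero error, which is one reason it is favored for stabilizing image-restoration training.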
Procedia PDF Downloads 162
1233 Introducing Information and Communication Technologies in Prison: A Proposal in Favor of Social Reintegration
Authors: Carmen Rocio Fernandez Diaz
Abstract:
This paper focuses on the relevance of information and communication technologies (hereinafter referred to as 'ICTs') as an essential part of the day-to-day life of all societies nowadays, as they offer the scenario in which an immense number of activities that previously took place in the physical world are now performed. In this context, areas of reality that remain outside the so-called 'information society' are hardly imaginable. Nevertheless, it is possible to identify one domain that continues to lag behind this reality: the penitentiary sphere as regards inmates' rights, since security aspects in prison have already been improved by new technologies. Introducing ICTs in prisons is still a matter subject to great resistance. The study of comparative penitentiary systems worldwide shows that most of them use ICTs only for the educational aspects of life in prison and that communications with the outside world are generally based on traditional means. These are only two examples of the huge range of activities where ICTs can produce positive results within the prison. Those positive results have to do with the social reintegration of persons serving a prison sentence. Deprivation of liberty entails contact with the prison subculture and its harmful effects, causing, in the case of long-term sentences, the so-called phenomenon of 'prisonization'. This negative effect of imprisonment could be reduced if ICTs were used inside prisons in the different areas where they can have an impact, which are treated in this research: (1) access to information and culture, (2) basic and advanced training, (3) employment, (4) communication with the outside world, (5) treatment, and (6) leisure and entertainment. The content of all of these areas could be improved if ICTs were introduced in prison, as shown by the experience of some prisons in Belgium, the United Kingdom, and the United States.
However, resistance to introducing ICTs in prisons stems from the fact that they could also carry risks concerning security and the commission of new offences. Considering these risks, the scope of this paper is to offer a realistic proposal for introducing ICTs in prison while avoiding those risks. The aim is to take advantage of the possibilities that ICTs offer to all inmates in order to start building a life outside that is far from delinquency, but mainly to those inmates who are close to release. The author considers reforming prisons in this sense an opportunity to offer inmates a progressive resettlement into life in freedom, with a higher likelihood of obeying the law and escaping recidivism. The value that new technologies would add to education, employment, communications, or treatment for a person deprived of liberty constitutes a way of humanizing prisons in the 21st century.
Keywords: deprivation of freedom, information and communication technologies, imprisonment, social reintegration
Procedia PDF Downloads 165
1232 Characterization of Mycoplasma Pneumoniae Causing Exacerbation of Asthma: A Prototypical Finding from Sri Lanka
Authors: Lakmini Wijesooriya, Vicki Chalker, Jessica Day, Priyantha Perera, N. P. Sunil-Chandra
Abstract:
M. pneumoniae has been identified as an etiology of exacerbation of asthma (EOA), although viruses play the major role in EOA. M. pneumoniae infection is treated empirically with macrolides, and its antibiotic sensitivity is not determined routinely. Characterization of the organism by genotyping and determination of macrolide resistance is important epidemiologically, as it guides empiric antibiotic treatment. To date, no such characterization of M. pneumoniae has been performed in Sri Lanka. The present study describes the characterization of M. pneumoniae detected in a child with EOA following the screening of 100 children with EOA. Of the hundred children with EOA, M. pneumoniae was identified in only one child, by a real-time polymerase chain reaction (PCR) test targeting the community-acquired respiratory distress syndrome (CARDS) toxin nucleotide sequences. The M. pneumoniae identified in this patient underwent detection of macrolide resistance via conventional PCR, amplifying and sequencing the region of the 23S rDNA gene that contains the single nucleotide polymorphisms conferring resistance. Genotyping of the isolate was performed via nested multilocus sequence typing (MLST), in which eight (8) housekeeping genes (ppa, pgm, gyrB, gmk, glyA, atpA, arcC, and adk) were amplified via nested PCR, followed by gene sequencing and analysis. As per the MLST analysis, the M. pneumoniae was identified as sequence type 14 (ST14), and no mutations conferring resistance were detected. Resistance to macrolides in M. pneumoniae is an increasing problem globally. Establishing surveillance systems is the key to informing local prescriptions. In the absence of local surveillance data, antibiotics are started empirically. If the relevant microbiological samples are not obtained before antibiotic therapy, as on most occasions in children, the course of antibiotics is completed without a microbiological diagnosis. This happens more frequently in therapy for M.
pneumoniae, which is treated with a macrolide in most patients. Hence, it is important to understand the macrolide sensitivity of M. pneumoniae in the setting. The M. pneumoniae detected in the present study was macrolide-sensitive. Further studies are needed to examine a larger dataset in Sri Lanka to determine macrolide resistance levels and inform the use of macrolides in children with EOA. The MLST type varies across geographical settings, and it also provides a clue to the existence of macrolide resistance. The present study enhances the database of the global distribution of different genotypes of M. pneumoniae, as this is the first such characterization performed in Sri Lanka; larger sample numbers are needed to determine macrolide resistance levels. In summary, M. pneumoniae detected in a child with exacerbation of asthma in Sri Lanka was characterized as ST14 by MLST, and no mutations conferring resistance were detected.
Keywords: Mycoplasma pneumoniae, Sri Lanka, characterization, macrolide resistance
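In MLST, each of the housekeeping genes listed above is sequenced, assigned an allele number against a reference database, and the resulting allele profile is mapped to a sequence type. A minimal sketch of that final lookup step follows; the allele numbers and the profile table are invented for illustration, as real assignments come from the curated M. pneumoniae MLST database:

```python
# Order of loci used to form an allele profile (the study's eight MLST genes)
LOCI = ("ppa", "pgm", "gyrB", "gmk", "glyA", "atpA", "arcC", "adk")

# Hypothetical profile table: allele-number tuple -> sequence type
PROFILES = {
    (1, 1, 1, 1, 1, 1, 1, 1): "ST1",
    (2, 1, 1, 1, 2, 1, 1, 2): "ST14",
}

def sequence_type(alleles):
    """Map a dict of locus -> allele number to a sequence type, if known."""
    profile = tuple(alleles[locus] for locus in LOCI)
    return PROFILES.get(profile, "unknown")

isolate = {"ppa": 2, "pgm": 1, "gyrB": 1, "gmk": 1,
           "glyA": 2, "atpA": 1, "arcC": 1, "adk": 2}
print(sequence_type(isolate))  # ST14
```

Because the profile is an exact tuple lookup, a single novel allele at any locus yields "unknown", which in practice flags a new sequence type for submission to the database.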
Procedia PDF Downloads 186
1231 Noninvasive Technique for Measurement of Heartbeat in Zebrafish Embryos Exposed to Electromagnetic Fields at 27 GHz
Authors: Sara Ignoto, Elena M. Scalisi, Carmen Sica, Martina Contino, Greta Ferruggia, Antonio Salvaggio, Santi C. Pavone, Gino Sorbello, Loreto Di Donato, Roberta Pecoraro, Maria V. Brundo
Abstract:
The new fifth-generation technology (5G), which should favor high data-rate connections (1 Gbps) and latency lower than that of current systems (<1 ms), works on different frequency bands of the radio wave spectrum (700 MHz, 3.6-3.8 GHz, and 26.5-27.5 GHz), thus also exploiting higher frequencies than previous mobile radio generations (1G-4G). Higher frequency waves, however, have a lower capacity to propagate in free space; therefore, in order to guarantee capillary coverage of the territory for high-reliability applications, it will be necessary to install a large number of repeaters. Following the introduction of this new technology, there has been growing concern in recent years about possible harmful effects on human health, and several studies have been published using various animal models. This study aimed to observe the possible short-term effects induced by 5G millimeter waves on the heartbeat of early life stages of Danio rerio using DanioScope software (Noldus). DanioScope is a complete toolbox for measurements on zebrafish embryos and larvae. The effect of substances on the developing zebrafish embryo can be measured through a range of parameters: the earliest activity of the embryo's tail, the activity of the developing heart, the speed of blood flowing through the vein, and the lengths and diameters of body parts. Activity measurements, cardiovascular data, blood flow data, and morphometric parameters can be combined in one single tool. The software processes the obtained data and provides them in both numerical and graphical form. The experiments were performed at 27 GHz using a non-commercial high-gain pyramidal horn antenna. In accordance with OECD guidelines, exposure to 5G millimeter waves was tested by the fish embryo toxicity test within 96 hours post fertilization; observations were recorded every 24 h until the end of the short-term test (96 h).
The results showed an increased heartbeat rate in exposed embryos at 48 hpf compared with the control group, but this increase was no longer observed at 72-96 hpf. Literature data on this topic are currently scant, so these results could be useful for framing new studies and for evaluating the potential cardiotoxic effects of mobile radiofrequency. Keywords: Danio rerio, DanioScope, cardiotoxicity, millimeter waves.
Procedia PDF Downloads 164
1230 Journal Bearing with Controllable Radial Clearance, Design and Analysis
Authors: Majid Rashidi, Shahrbanoo Farkhondeh Biabnavi
Abstract:
The hydrodynamic instability phenomenon in a journal bearing may occur through a reduction in the load carried by the bearing, an increase in journal speed, a change in lubricant viscosity, or a combination of these factors. Previous research and development work aimed at overcoming the instability of journal bearings operating in the hydrodynamic lubrication regime can be categorized as follows: a) actively controlling the bearing sleeve using a piezo actuator, b) including strategically located and shaped internal grooves within the inner surface of the bearing sleeve, c) actively controlling the bearing sleeve using an electromagnetic actuator, d) actively and externally pressurizing the lubricant within a journal bearing set, and e) incorporating tilting pads within the inner surface of the bearing sleeve that assume different equilibrium angular positions in response to changes in bearing design parameters such as speed and load. This work presents an innovative design concept for a 'smart journal bearing' set to operate in a stable hydrodynamic lubrication regime despite variations in bearing speed, load, and lubricant viscosity. The proposed design allows the radial clearance to be adjusted in an attempt to maintain stable operation under conditions that would cause instability in a bearing with fixed radial clearance. The design concept allows the radial clearance to be adjusted in small increments, on the order of 0.00254 mm. This is achieved by axially moving two symmetric conical rigid cavities that are in close contact with the conically shaped outer shell of a sleeve bearing. The proposed work includes a 3D model of the bearing that depicts the structural interactions of the bearing components. The 3D model is employed to conduct finite element analyses simulating the mechanical behavior of the bearing from a structural point of view. 
The concept of controlling the radial clearance, as presented in this work, is original and has not been proposed or discussed in previous research. A typical journal bearing was analyzed under a set of design parameters, namely r = 1.27 cm (journal radius), c = 0.0254 mm (radial clearance), L = 1.27 cm (bearing length), w = 445 N (bearing load), and μ = 0.028 Pa·s (lubricant viscosity). A shaft speed of 3600 rpm was considered, and the mass supported by the bearing, m, was set to 4.38 kg. The Sommerfeld number associated with the above design parameters turns out to be S = 0.3. This combination resulted in stable bearing operation. Subsequently, the speed was postulated to increase from 3600 rpm to 7200 rpm; the bearing was found to be unstable at the new, higher speed. In order to regain stability, the radial clearance was increased from c = 0.0254 mm to 0.0358 mm. This change in radial clearance was shown to bring the bearing back to a stable operating condition. Keywords: adjustable clearance, bearing, hydrodynamic, instability, journal
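As a plausibility check, the reported Sommerfeld number can be reproduced from the stated design parameters. This sketch assumes the standard textbook definition S = (r/c)²·μN/P, with P the load divided by the projected bearing area (L × 2r); the abstract does not state which form of the number the authors used.

```python
# Recomputing the Sommerfeld number from the abstract's design parameters.
r = 1.27e-2      # journal radius, m (1.27 cm)
c = 2.54e-5      # radial clearance, m (0.0254 mm)
L = 1.27e-2      # bearing length, m
W = 445.0        # bearing load, N
mu = 0.028       # lubricant viscosity, Pa.s
N = 3600 / 60.0  # shaft speed, rev/s

P = W / (L * 2 * r)            # projected bearing pressure, Pa
S = (r / c) ** 2 * mu * N / P  # Sommerfeld number (dimensionless)
print(round(S, 2))             # ~0.30, matching the value reported above
```

With these inputs the computed value is approximately 0.30, consistent with the S = 0.3 quoted in the abstract.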
Procedia PDF Downloads 283
1229 Predictive Modelling of Curcuminoid Bioaccessibility as a Function of Food Formulation and Associated Properties
Authors: Kevin De Castro Cogle, Mirian Kubo, Maria Anastasiadi, Fady Mohareb, Claire Rossi
Abstract:
Background: The bioaccessibility of bioactive compounds is a critical determinant of the nutritional quality of food products. Despite its importance, comprehensive studies assessing how the composition of a food matrix influences the bioaccessibility of a compound of interest remain limited. This knowledge gap has prompted a growing need to investigate the intricate relationship between food matrix formulations and the bioaccessibility of bioactive compounds. One class of bioactive compounds that has attracted considerable attention is the curcuminoids. These naturally occurring phytochemicals, extracted from the roots of Curcuma longa, have gained popularity owing to their purported health benefits, and are also well known for their poor bioaccessibility. Project aim: The primary objective of this research project was to systematically assess the influence of matrix composition on the bioaccessibility of curcuminoids. Additionally, the study aimed to develop a series of predictive models for bioaccessibility, providing valuable insights for optimising the formulation of functional foods and offering more descriptive nutritional information to potential consumers. Methods: Food formulations enriched with curcuminoids were subjected to simulated in vitro digestion, and their bioaccessibility was characterised with chromatographic and spectrophotometric techniques. The resulting data served as the foundation for predictive models capable of estimating bioaccessibility from specific physicochemical properties of the food matrices. Results: One striking finding of this study was the strong correlation observed between the concentration of macronutrients within the food formulations and the bioaccessibility of curcuminoids. 
In fact, macronutrient content emerged as a highly informative explanatory variable for bioaccessibility and was used, alongside other variables, as a predictor in a Bayesian hierarchical model that predicted curcuminoid bioaccessibility accurately (optimisation performance of R² = 0.97) for the majority of cross-validated test formulations (LOOCV R² = 0.92). These preliminary results open the door to further exploration, enabling researchers to investigate a broader spectrum of food matrix types and additional properties that may influence bioaccessibility. Conclusions: This research sheds light on the intricate interplay between food matrix composition and the bioaccessibility of curcuminoids. The study lays a foundation for future investigations, offering a promising avenue for advancing our understanding of bioactive compound bioaccessibility and its implications for the food industry and informed consumer choices. Keywords: bioactive bioaccessibility, food formulation, food matrix, machine learning, probabilistic modelling
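The LOOCV R² quoted above is a generic validation statistic: each formulation is predicted by a model trained on all the others, and the predictions are scored against the held-out observations. The following sketch illustrates the computation with made-up data and a plain least-squares fit (not the study's Bayesian hierarchical model; all numbers are hypothetical).

```python
# Illustration of leave-one-out cross-validated R^2 on toy data.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loocv_r2(xs, ys):
    """Predict each point from a model trained on the others, then score R^2."""
    preds = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i+1:], ys[:i] + ys[i+1:])
        preds.append(a + b * xs[i])
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# hypothetical macronutrient content (%) vs. curcuminoid bioaccessibility (%)
fat = [2.0, 5.0, 8.0, 11.0, 14.0, 17.0]
bio = [10.1, 18.9, 31.2, 38.8, 52.0, 60.9]
print(round(loocv_r2(fat, bio), 2))  # close to 1 for this nearly linear toy set
```

A high LOOCV R² indicates the model generalises to formulations it was not fitted on, which is why the abstract reports it alongside the optimisation R².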
Procedia PDF Downloads 68
1228 The Potential for Maritime Tourism: An African Perspective
Authors: Lynn C. Jonas
Abstract:
The African continent is rich in coastal history, heritage, and culture, presenting immense potential for the development of maritime tourism. Shipping and its related components are generally associated with the maritime industry, and tourism’s link is to the various forms of nautical tourism. Activities may include cruising, yachting, visits to lighthouses, ports, and harbors, and excursions to related sites of cultural, historical, or ecological significance. Hundreds of years of explorers pursuing trade routes between Europe, Africa, and the Far East have left a string of shipwrecks along the continent’s coastal areas. These shipwrecks present diving opportunities on artificial reefs and marine heritage to be explored in various ways in maritime cultural zones. Along the South African coast, for example, six Portuguese shipwrecks highlight the Bartolomeu Dias legacy of exploration, and there are a number of warships in Tanzanian waters. Furthermore, decades of colonial rule have left the continent with an intricate cultural heritage, enmeshed with European languages and architecture and interlinked, in many instances, with the hard-won independence of littoral states. There is potential for coastal trails to be developed that follow these historical events: at one point in history, France had colonized 35 African states, and 32 African states were colonized by Britain. Countries such as Cameroon still carry the Francophone-versus-Anglophone legacy of this shift in colonizers. Further to the colonized history of the African continent, there is the uncomfortable heritage of the slave trade. To a certain extent, these coastal slave trade posts are considered attractive to a niche tourism audience; however, there is potential for education and interpretive measures to grow this into a tourism product. 
Notwithstanding these potential opportunities, there are numerous challenges to consider, such as poor maritime infrastructure and maritime security concerns, including piracy and transnational crimes such as weapons and migrant smuggling and drug and human trafficking. These and related maritime issues contribute to concerns over the porous nature of African ocean gateways, adding to security concerns for tourists. This theoretical paper considers these trends and how they may contribute to the growth and development of maritime tourism on the African continent. African perspectives on the growth potential of tourism in coastal and marine spaces are needed, particularly ones that embrace the continent's tumultuous past as part of its heritage. This has the potential to contribute to the creation of a sense of ownership of opportunities. Keywords: coastal trade routes, maritime tourism, shipwrecks, slave trade routes
Procedia PDF Downloads 19
1227 A Non-Invasive Method for Assessing the Adrenocortical Function in the Roan Antelope (Hippotragus equinus)
Authors: V. W. Kamgang, A. Van Der Goot, N. C. Bennett, A. Ganswindt
Abstract:
The roan antelope (Hippotragus equinus) is the second-largest antelope species in Africa. In recent decades, populations of roan antelope have declined drastically throughout Africa. This situation has resulted in the development of intensive breeding programmes for the species in Southern Africa, where they are popular game-ranching herbivores, with increasing numbers in captivity. Avoidance of stress is important when managing wildlife to ensure animal welfare. In this regard, a non-invasive approach to monitoring adrenocortical function as a measure of stress is preferable, since animals are not disturbed during sample collection. To date, however, such a method has not been established for the roan antelope. In this study, we validated a non-invasive technique to monitor adrenocortical function in this species. We performed an adrenocorticotropic hormone (ACTH) stimulation test at the Lapalala Wilderness reserve, South Africa, using adult captive roan antelope to determine stress-related physiological responses. Two individually housed roan antelope (a male and a female) received an intramuscular injection of Synacthen Depot (Novartis) loaded into a 3 ml syringe (Pneu-Dart) at an estimated dose of 1 IU/kg. A total of 86 faecal samples (male: 46, female: 40) were collected from 5 days before to 3 days post-injection. All samples were then lyophilised, pulverised, and extracted with 80% ethanol (0.1 g/3 ml), and the resulting faecal extracts were analysed for immunoreactive faecal glucocorticoid metabolite (fGCM) concentrations using five enzyme immunoassays (EIAs): (i) 11-oxoaetiocholanolone I (detecting 11,17-dioxoandrostanes), (ii) 11-oxoaetiocholanolone II (detecting fGCMs with a 5α-pregnane-3α-ol-11-one structure), (iii) 5α-pregnane-3β,11β,21-triol-20-one (measuring 3β,11β-diol CM), (iv) cortisol, and (v) corticosterone. 
In both animals, all EIAs detected an increase in fGCM concentrations following ACTH administration. The 11-oxoaetiocholanolone I EIA performed best, with a 20-fold increase in the male (baseline: 0.384 µg/g DW; peak: 8.585 µg/g DW) and a 17-fold increase in the female (baseline: 0.323 µg/g DW; peak: 7.276 µg/g DW), measured 17 hours and 12 hours post-administration, respectively. These results are important, as the ability to assess adrenocortical function non-invasively in roan can now serve as an essential prerequisite for evaluating the effects of stressful circumstances, such as variation in environmental conditions or reproduction, in order to improve management strategies for the conservation of this iconic antelope species. Keywords: adrenocorticotropic hormone challenge, adrenocortical function, captive breeding, non-invasive method, roan antelope
Procedia PDF Downloads 145
1226 Differences in Patient Satisfaction Observed between Female Japanese Breast Cancer Patients Who Receive Breast-Conserving Surgery or Total Mastectomy
Authors: Keiko Yamauchi, Motoyuki Nakao, Yoko Ishihara
Abstract:
The increasing number of women with breast cancer in Japan has required hospitals to provide higher-quality medicine so that patients are satisfied with the treatment they receive. However, patient satisfaction following breast cancer treatment has not been sufficiently studied. We therefore investigated the factors influencing patient satisfaction following breast cancer treatment among Japanese women who underwent either breast-conserving surgery (BCS) (n = 380) or total mastectomy (TM) (n = 247). In March 2016, we conducted a cross-sectional internet survey of Japanese women with breast cancer. We assessed the following factors: socioeconomic status, cancer-related information, the role played in medical decision-making, the degree of satisfaction with the treatments received, and regret arising from the medical decision-making process. We performed logistic regression analyses with the following dependent variables: extreme satisfaction with the treatments received, and regret regarding the medical decision-making process. For both types of surgery, the odds ratio (OR) of being extremely satisfied with the cancer treatment was significantly higher among patients who had no regrets compared with patients who had regrets. The OR also tended to be higher among patients who played the role they wanted in the medical decision-making process, compared with patients who did not. In the BCS group, the OR of being extremely satisfied with the treatment was higher if, at diagnosis, the patient's youngest child was older than 19 years, compared with patients with no children. The OR was also higher if the patient considered the stage and characteristics of her cancer to be significant. The OR of being extremely satisfied with the treatments was lower among patients who were not employed on a full-time basis and among patients who considered second medical opinions and medical expenses to be significant. 
These associations were not observed in the TM group. The OR of having regrets regarding the medical decision-making process was higher among patients who could not play the role they preferred in the decision-making process, and was also higher among patients employed on a part-time or contractual basis. For both types of surgery, the OR was higher among patients who considered a second medical opinion to be significant. Regardless of surgical type, regret regarding the medical decision-making process decreased treatment satisfaction. Patients who received breast-conserving surgery were more likely to have regrets concerning the medical decision-making process if they could not play the role they preferred in the process. In addition, factors associated with satisfaction with treatment in the BCS group but not the TM group included second medical opinions, medical expenses, employment status, and the age of the youngest child at diagnosis. Keywords: medical decision making, breast-conserving surgery, total mastectomy, Japanese
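The odds ratios reported above come from logistic regression, where each OR is the exponential of a fitted coefficient. The relationship is easiest to see in a 2x2 table; the counts below are hypothetical and are not taken from the study.

```python
# How an odds ratio relates to a logistic-regression coefficient.
import math

def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """OR from a 2x2 table: (a/b) / (c/d) = (a*d) / (b*c)."""
    return (exposed_yes * unexposed_no) / (exposed_no * unexposed_yes)

# hypothetical counts: extreme satisfaction among patients with no regret
# (90 satisfied, 30 not) vs. patients with regret (40 satisfied, 60 not)
or_no_regret = odds_ratio(90, 30, 40, 60)
print(or_no_regret)  # (90*60)/(30*40) = 4.5

# In a logistic regression with "no regret" as the predictor, the same
# quantity appears as exp(coefficient):
coef = math.log(or_no_regret)
print(round(math.exp(coef), 1))  # 4.5
```

An OR above 1 for "no regret" corresponds to higher odds of extreme satisfaction, which is the pattern the abstract reports for both surgical groups.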
Procedia PDF Downloads 147
1225 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon
Authors: Jeffrey A. Amelse
Abstract:
Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ sequestration and biomass carbon sequestration. Short reviews are provided of technologies previously proposed for reducing CO₂ emissions from fossil fuels or substituting renewable energy, to highlight their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact. It will cost money, scale-up is a huge challenge, and it will not be a complete solution; CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy-inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from continued use of fossil fuels; they will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂, but trees are a temporary solution: when they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering biomass carbon. Sequestering tree leaves is proposed as a solution. 
Unlike wood, leaves have a short Carbon Cycle time constant: they renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world’s tree leaves could get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, to discourage instead of encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂; the latter can take hundreds to thousands of years. The first step in anaerobic decomposition is hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping the landfills dry and exploiting known inhibitors of anaerobic bacteria. Keywords: carbon dioxide, net zero, sequestration, biomass, leaves
Procedia PDF Downloads 128
1224 Weed Out the Bad Seeds: The Impact of Strategic Portfolio Management on Patent Quality
Authors: A. Lefebre, M. Willekens, K. Debackere
Abstract:
Since the 1990s, patent applications have boomed, especially in the field of telecommunications. However, this increase in patent filings has been associated with an (alleged) decrease in patent quality. The plethora of low-quality patents devalues the high-quality ones, thus weakening the incentives for inventors to patent inventions. Despite the rich literature on strategic patenting, previous research has neglected the importance of patent portfolio management and its impact on patent quality. In this paper, we compare related patent portfolios with nonrelated patents and investigate whether patent quality and innovativeness differ between the two types. In the analyses, patent quality is proxied by five individual measures (number of inventors, claims, renewal years, designated states, and grant lag), which are then aggregated into a quality index. Innovativeness is proxied by two measures: the originality index and the radicalness index. Results suggest that related patent portfolios have, on average, lower patent quality than nonrelated patents, suggesting that firms use them for strategic purposes rather than for the extended protection they could offer. Even when testing the individual proxies as dependent variables, we find evidence that related patent portfolios are of lower quality than nonrelated patents, although not all results show significant coefficients. Furthermore, these proxies provide evidence of the importance of adding fixed effects to the model. Since prior research has found that such proxies are inherently flawed and never fully capture the concept of patent quality, we run the analyses with individual proxies as supplementary analyses while keeping the comprehensive index as our main model. This ensures that the results do not depend on any single proxy and allows for multiple views of the concept. 
The presence of divisional applications might be linked to the level of innovativeness of the underlying invention. It could be that the parent application is so important that firms go through the administrative burden of filing divisional applications to ensure protection of the invention and preempt competition. However, it could also be that the preemption results from divisional applications being used strategically as a backup plan and a prolonging strategy, thus negatively impacting the innovation in the portfolio. Upon testing the level of novelty and innovation in the related patent portfolios by means of the originality and radicalness indices, we find evidence of a significant negative association with related patent portfolios. The minimum innovation brought by the patents in a related patent portfolio is lower than the minimum innovation found in nonrelated portfolios, providing evidence for the second argument. Keywords: patent portfolio management, patent quality, related patent portfolios, strategic patenting
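The abstract does not state how the five proxies are aggregated into the quality index. A common approach in this literature is to standardise each proxy (z-scores) and average them, which the sketch below illustrates with made-up patent data; the direction-of-sign choice (negating grant lag so that higher always means better) is an assumption.

```python
# Hypothetical illustration of a composite patent-quality index
# built from standardised proxies.

def zscores(values):
    """Population z-scores of a list of numbers."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# hypothetical proxies per patent: inventors, claims, renewal years,
# designated states, and negated grant lag (years), so higher = better
patents = {
    "A": [3, 20, 12, 8, -2.0],
    "B": [1, 8, 4, 2, -5.5],
    "C": [5, 35, 15, 12, -1.0],
}
cols = list(zip(*patents.values()))           # one column per proxy
z_cols = [zscores(list(col)) for col in cols]  # standardise each proxy
index = {name: sum(z[i] for z in z_cols) / len(z_cols)
         for i, name in enumerate(patents)}
print(max(index, key=index.get))  # "C": highest on every proxy in this toy set
```

Averaging z-scores keeps each proxy on a comparable scale, so no single measure (e.g. claim count) dominates the index, which is the motivation the abstract gives for preferring the composite over any individual proxy.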
Procedia PDF Downloads 94
1223 Microstructural Interactions of Ag and Sc Alloying Additions during Casting and Artificial Ageing to a T6 Temper in a A356 Aluminium Alloy
Authors: Dimitrios Bakavos, Dimitrios Tsivoulas, Chaowalit Limmaneevichitr
Abstract:
Aluminium cast alloys of the Al-Si system are widely used for shape castings. Their microstructures can be further improved, on the one hand, by alloying modification and, on the other, by optimised artificial ageing. In this project, four hypoeutectic Al alloys, A356, A356+Ag, A356+Sc, and A356+Ag+Sc, have been studied. The interactions of Ag and Sc during solidification and during artificial ageing at 170°C to a T6 temper have been investigated in detail. The evolution of the eutectic microstructure is studied by thermal analysis and interrupted solidification. The ageing kinetics of the alloys have been identified by hardness measurements. The precipitate phases, their number density, and their chemical composition have been analysed by means of transmission electron microscopy (TEM) and EDS analysis. Furthermore, the effect of solution heat treatment (SHT) on the Si eutectic particles of the four alloys has been investigated by means of optical microscopy and image analysis, and the ultimate tensile strength (UTS) has been compared with that of the as-cast alloys. The results suggest that Ag additions significantly enhance the ageing kinetics of the A356 alloy. The formation of β” precipitates was kinetically accelerated, and increases of 8% and 5% in peak hardness have been observed compared with the base A356 and A356-Sc alloys, respectively. EDS analysis demonstrates that Ag is present in the β” precipitate composition. After prolonged ageing (100 hours at 170°C), A356-Ag exhibits 17% higher hardness than the other three alloys. During solidification, Sc additions change the macroscopic eutectic growth mode to the propagation of a defined eutectic front from the mold walls, opposite to the heat-flux direction. In contrast, Ag has no significant effect on the solidification mode, revealing macroscopic eutectic growth similar to the A356 base alloy. Nevertheless, the as-cast mechanical strength with Ag, Sc, and Ag+Sc additions increased by 5, 30, and 35 MPa, respectively. 
This outcome is attributed to the refinement of the eutectic Si, which is strong in the A356-Sc alloy and more pronounced when silver and scandium are combined. After SHT, however, the alloy with the highest mechanical strength is the one with Ag additions, in contrast to the as-cast condition, where the Sc and Sc+Ag alloys were the strongest. The increase in strength is mainly attributed to the dissolution of grain-boundary precipitates, the increase of solute content in the matrix, and the spheroidisation and coarsening of the eutectic Si. We can therefore conclude that, for an A356 hypoeutectic alloy, Ag additions refine the eutectic Si, an effect that improves when combined with Sc; Ag also enhances the ageing kinetics, increases the hardness, and retains its strength during prolonged artificial ageing in an Al-7Si-0.3Mg hypoeutectic alloy; finally, Sc addition is beneficial owing to refinement of the α-Al grain and modification-refinement of the eutectic Si, increasing the strength of the as-cast product. Keywords: ageing, casting, mechanical strength, precipitates
Procedia PDF Downloads 498