Search results for: dye absorption capacity
686 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems. Association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days exceeds the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, there is a need for a multi-machine Apriori algorithm. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and distributed processing of huge datasets on clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computations. Earlier we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel and targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm, ReducedAll-Apriori, on Apache Flink, comparing them with the Spark implementation.
Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined structure allows a new iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining
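As a hedged illustration (not code from the paper), the level-wise structure that makes Apriori so iterative can be sketched on a single machine as follows; on MapReduce, each `while` pass below becomes a full disk-bound job, which is exactly the cost Flink's pipelining avoids:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal level-wise Apriori sketch: each pass k joins frequent
    (k-1)-itemsets into k-candidates and rescans the data to count them."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    # Pass 1: frequent single items
    items = {i for t in transactions for i in t}
    freq = {frozenset([i]) for i in items
            if sum(i in t for t in transactions) / n >= min_support}
    all_frequent = set(freq)
    k = 2
    while freq:
        # Candidate generation: union pairs of frequent (k-1)-itemsets
        candidates = {a | b for a in freq for b in freq if len(a | b) == k}
        # Prune any candidate with an infrequent (k-1)-subset
        candidates = {c for c in candidates
                      if all(frozenset(s) in freq for s in combinations(c, k - 1))}
        # Count support with another full scan of the data
        freq = {c for c in candidates
                if sum(c <= t for t in transactions) / n >= min_support}
        all_frequent |= freq
        k += 1
    return all_frequent

# Toy usage: minimum support 0.5 over four transactions
txns = [{'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}, {'b', 'c'}]
frequent = apriori(txns, 0.5)
```

The repeated full scan in the counting step is the per-iteration I/O the abstract refers to.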
Procedia PDF Downloads 294
685 Risk Mapping of Road Traffic Incidents in Greater Kampala Metropolitan Area for Planning of Emergency Medical Services
Authors: Joseph Kimuli Balikuddembe
Abstract:
Road traffic incidents (RTIs) continue to be a serious public health and development burden around the globe. Compared to high-income countries (HICs), low- and middle-income countries (LMICs) bear the heaviest brunt of RTIs. Like other LMICs, Uganda, a country located in Eastern Africa, has been experiencing a worryingly high burden of RTIs and their associated impacts. Over the years, the highest number of all registered RTIs in Uganda has occurred in the Greater Kampala Metropolitan Area (GKMA). This places a tremendous demand on the few existing emergency medical services (EMS) to respond adequately to those affected. In this regard, the overall objective of the study was to map the risk of RTIs in the GKMA so as to help in the better planning of EMS for RTI victims. Other objectives included: (i) identifying the factors affecting the exposure, vulnerability and EMS capacity for RTI victims; (ii) identifying RTI-prone areas and estimating their associated risk factors; (iii) identifying the weaknesses and capacities which affect the EMS systems for RTIs; and (iv) determining the strategies and priority actions that can help to improve the EMS response for RTI victims in the GKMA. To achieve these objectives, a mixed methodological approach was used in four phases over approximately 15 months. It employed a systematic review based on the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines; a Delphi panel technique; retrospective data analysis; and a cross-sectional method. With Uganda progressing forward as envisaged in its 'Vision 2040', the GKMA, which is the country’s political and socioeconomic epicenter, is experiencing significant changes in terms of population growth, urbanization, infrastructure development, rapid motorization and other factors.
Unless appropriate actions are taken, these changes are likely to worsen the already alarming rate of RTIs in Uganda and, in turn, to put further pressure on the few existing EMS and facilities that render care for those affected. Therefore, road safety and injury prevention measures, which are needed to reduce the burden of RTIs, should be multifaceted in nature so that they closely correlate with the ongoing dynamics that contribute to RTIs, particularly in the GKMA and Uganda as a whole.
Keywords: emergency medical services, Kampala, risk mapping, road traffic incidents
Procedia PDF Downloads 121
684 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups
Authors: Sakshi Bhalla
Abstract:
On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded using a uniform two-part questionnaire in which they were required to (a) identify the meaning of 15 emoji placed in isolation, and (b) interpret the meaning of the same 15 emoji placed in a context-defining posting on Twitter. Responses in context were compared both with the same respondent's identification of the emoji in isolation and with the meaning originally ascribed to the emoji. Based on an analysis of these results, it was discovered that each of the five age categories uses, understands and perceives emoji differently, which could be attributed to their degree of exposure. For example, the youngest category (aged < 20) was the least accurate at correctly identifying emoji in isolation (~55%). Further, their proclivity to change their response with respect to the context was also the lowest (~31%). However, an analysis of their individual responses showed that these first-borns of social media seem to have reached a point where emoji no longer evoke their most literal meanings for them. The meaning and implication of these emoji have evolved to imply their context-derived meanings, even when placed in isolation. These trends carry forward meaningfully for the other four groups as well. In the case of the oldest category (aged > 35), however, the trends indicated inaccuracy and, therefore, a higher proclivity to change their responses. When studied as a continuum, the responses indicate that, slowly and steadily, emoji are evolving from pictograms to ideograms.
That is to suggest that they do not just indicate a one-to-one relation between a singular form and a singular meaning. In fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms to progressively more complex ideograms. This evolution within communication is parallel to and contingent on the simultaneous evolution of communication. What is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one instance of language adapting to the demands of the digital world. That it has no spoken component or explicit grammar, and lacks standardization of use and meaning, may seem, as some suggest, an impediment to qualifying it as the 'language' of the digital world. However, that kind of declarative remains a function of time, and time alone.
Keywords: communication, emoji, language, Twitter
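The two measures the study reports (identification accuracy in isolation, and the rate of changing one's reading once context is added) can be sketched as follows; the response lists are hypothetical illustrations, not the study's data:

```python
def isolation_accuracy(isolated, intended):
    """Share of emoji a respondent identifies with the intended meaning
    when the emoji are shown in isolation."""
    hits = sum(a == b for a, b in zip(isolated, intended))
    return hits / len(intended)

def context_shift_rate(isolated, in_context):
    """Share of emoji whose interpretation changes once the same emoji
    is embedded in a context-defining tweet."""
    shifts = sum(a != b for a, b in zip(isolated, in_context))
    return shifts / len(isolated)

# Hypothetical three-emoji respondent record
intended   = ['joy', 'fire', 'ok']      # meanings originally ascribed
isolated   = ['joy', 'flame', 'ok']     # respondent's reading in isolation
in_context = ['joy', 'fire', 'ok']      # respondent's reading within a tweet

acc = isolation_accuracy(isolated, intended)       # 2 of 3 correct
shift = context_shift_rate(isolated, in_context)   # 1 of 3 changed
```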
Procedia PDF Downloads 95
683 Learnings From Sri Lanka: Theorizing of Grassroots Women’s Participation in NGO Peacebuilding Activism Against Transnational and Third-World Feminist Perspectives
Authors: Piumi L. Denagamage, Vibusha Madanayake
Abstract:
At the end of a 30-year civil war in Sri Lanka in 2009, Non-Governmental Organizations (NGOs) played a prominent role in post-war development and peacebuilding. Women were major “beneficiaries” of NGO activities on socio-economic empowerment, capacity building for advocacy, and grassroots participation in activism. Undoubtedly, their contribution to Sri Lanka’s post-war transition is tremendous. As development practitioners and researchers who have worked closely with several international and national NGOs in Sri Lanka’s post-war setting, the authors, while practicing self-reflexivity, intend to theorize the grey literature produced by NGOs against the theoretical frameworks of Transnational and Third World feminisms. Using examples of the grassroots activities conducted by NGOs with war-affected women, the paper questions whether Colombo-based feminism represents the lived realities of grassroots women at the transnational level. It argues that Colombo-based feminists use their power and exposure to Western feminist approaches to portray diverse forms of oppression women face at grassroots levels, their needs for advocacy, and different modes of resistance on the ground. Many NGOs depend on international donor funding for their grassroots work, which also contributes to their utilization of Western-led knowledge. Despite their efforts to “save marginalized women from oppression,” these modes of intervention are often rejected by the public, including women at local levels. This has also resulted in the rejection of feminism entirely as a culturally rootless, alien Western ideology. The analysis draws on Transnational and Third World feminist perspectives to problematize the power relations between Western knowledge systems and the lived experiences of grassroots women in the peacebuilding process through NGO activism in Sri Lanka.
It also emphasizes that the infiltration of Western knowledge through NGOs has led grassroots women to participate only by adjusting their lived experiences to match this alien knowledge, rather than theorizing based on their own lived realities. While sharing the concern that NGOs' power to adopt Western knowledge systems is often unchecked and unmitigated, the paper underscores the importance of alternative theorizing to ensure meaningful participation of Third World women in peacebuilding.
Keywords: alternative theorizing, Colombo-based feminism, grassroots women in peacebuilding, NGO activism, transnational and third world feminisms
Procedia PDF Downloads 55
682 Empirical Superpave Mix-Design of Rubber-Modified Hot-Mix Asphalt in Railway Sub-Ballast
Authors: Fernando M. Soto, Gaetano Di Mino
Abstract:
The design of an unmodified bituminous mixture and three rubber-aggregate mixtures containing rubber aggregate added by a dry process (RUMAC) was evaluated, using an empirical-analytical approach based on experimental findings obtained in the laboratory with the volumetric mix design by gyratory compaction. A reference dense-graded bituminous sub-ballast mixture (3% air voids and 4% bitumen over the total weight of the mix) and three rubberized mixtures by dry process (1.5 to 3% rubber by total weight and 5-7% binder) were designed applying the Superpave mix-design for level 3 (high-traffic) rail lines. The railway trackbed section analyzed had a 19 cm compacted granular layer, while a thickness of 12 cm was used for the sub-ballast. In order to evaluate the effect of increasing the specimen density (as a percent of its theoretical maximum specific gravity), this article illustrates the results obtained from a comparative analysis of the influence of varying the binder-rubber percentages in the sub-ballast layer mix design. This work demonstrates that rubberized blends containing crumb and ground rubber in bituminous asphalt mixtures behave at least as well as, or better than, conventional asphalt materials. By using the same methodology of volumetric compaction, the densification curves resulting from each mixture have been studied. The purpose is to obtain an optimum empirical multiplier of the number of gyrations necessary to reach the same compaction energy as in conventional mixtures. Experimental parameters were derived by an empirical-analytical method, evaluating the results obtained from the gyratory compaction of an HMA and of rubber-aggregate blends. An extensive integrated research program has been carried out to assess the suitability of rubber-modified hot-mix asphalt mixtures as a sub-ballast layer in railway trackbeds.
Design optimization was conducted for each mixture and the volumetric properties analyzed. An improved and complete process for manufacturing, compacting and curing these blends is also provided. By adopting this compaction multiplier, called the 'beta' factor, rubber-modified mixtures are obtained with densification and workability as uniform as in the conventional mixtures. It is found that, considering the usual bearing capacity requirements in rail track, the optimal rubber content is 2% (by weight) or 3.95% (by volumetric substitution) with a binder content of 6%.
Keywords: empirical approach, rubber-asphalt, sub-ballast, Superpave mix-design
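The idea of a gyration multiplier can be illustrated with the standard semi-log densification model, %Gmm = C1 + C2·log10(N); the curve coefficients below are hypothetical, not the paper's fitted values:

```python
import math

def pct_gmm(n_gyrations, c1, c2):
    """Semi-log densification curve: %Gmm = C1 + C2 * log10(N)."""
    return c1 + c2 * math.log10(n_gyrations)

def beta_factor(n_design, ref, rub):
    """'Beta'-style gyration multiplier: how many times more gyrations the
    rubberized mix needs to match the reference mix's density at N_design.
    ref and rub are (C1, C2) coefficient pairs (illustrative values only)."""
    target = pct_gmm(n_design, *ref)            # reference density at N_design
    n_rub = 10 ** ((target - rub[0]) / rub[1])  # invert the rubber mix's curve
    return n_rub / n_design

# Hypothetical curves: rubberized mix starts 1.5 %Gmm lower, same slope
beta = beta_factor(100, ref=(88.0, 4.0), rub=(86.5, 4.0))
```

With these made-up coefficients the rubberized mix needs roughly 2.4 times the design gyrations to reach the reference density, which is the kind of empirical multiplier the abstract describes.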
Procedia PDF Downloads 368
681 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA
Authors: Marek Dosbaba
Abstract:
Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often at a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator at the SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping, saving an X-ray spectrum for each pixel or each segment. This approach allows the user to browse elemental distribution maps of all elements detectable by energy-dispersive spectroscopy. Re-evaluation of existing data for the presence of previously unconsidered elements is possible without repeating the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models.
The traditional tabular particle-by-particle or grain-by-grain export is preserved and can be customized with scripts to include user-defined particle/grain properties.
Keywords: TESCAN, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data
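As a hedged sketch of the kind of scripted post-processing an open tabular export enables (the row layout here is hypothetical, since the exported columns are user-configurable), a grain-by-grain table can be collapsed into modal mineralogy:

```python
from collections import defaultdict

def modal_mineralogy(grains):
    """Collapse a grain-by-grain export into modal mineralogy (area %).
    Each row is assumed to be a (mineral_name, area) pair -- an illustrative
    layout, not the actual TIMA export schema."""
    totals = defaultdict(float)
    for mineral, area in grains:
        totals[mineral] += area
    grand_total = sum(totals.values())
    return {m: 100.0 * a / grand_total for m, a in totals.items()}

# Toy export: three grains, two minerals
modal = modal_mineralogy([('quartz', 30.0), ('pyrite', 10.0), ('quartz', 10.0)])
```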
Procedia PDF Downloads 109
680 Cr (VI) Adsorption on Ce0.25Zr0.75O2·nH2O: Kinetics and Thermodynamics
Authors: Carlos Alberto Rivera-corredor, Angie Dayana Vargas-Ceballos, Edison Gilpavas, Izabela Dobrosz-Gómez, Miguel Ángel Gómez-García
Abstract:
Hexavalent chromium, Cr (VI), is present in the effluents of different industries such as electroplating, mining, leather tanning, etc. This compound is of great academic and industrial concern because of its toxic and carcinogenic behavior. Its discharge into water sources causes serious environmental and public health problems for animals and humans. The amount of Cr (VI) in industrial wastewaters ranges from 0.5 to 270,000 mg L-1. According to the Colombian standard for water quality (NTC-813-2010), the maximum allowed concentration of Cr (VI) in drinking water is 0.05 mg L-1. To comply with this limit, it is essential that industries treat their effluents to reduce Cr (VI) to acceptable levels. Numerous treatment methods for removing metal ions from aqueous solutions have been reported, such as reduction, ion exchange, electrodialysis, etc. Adsorption has become a promising method for the purification of metal ions in water, since it is an economic and efficient technology. Selection of the adsorbent and the kinetic and thermodynamic study of the adsorption conditions are key to the development of a suitable adsorption technology. Ce0.25Zr0.75O2·nH2O presents the highest adsorption capacity in a series of hydrated mixed oxides Ce1-xZrxO2 (x = 0, 0.25, 0.5, 0.75, 1). This work presents the kinetic and thermodynamic study of Cr (VI) adsorption on Ce0.25Zr0.75O2·nH2O. Experiments were performed under the following conditions: initial Cr (VI) concentration = 25, 50 and 100 mg L-1, pH = 2, adsorbent load = 4 g L-1, stirring time = 60 min, temperature = 20, 28 and 40 °C. The Cr (VI) concentration was estimated spectrophotometrically by the diphenylcarbazide method, monitoring the absorbance at 540 nm. The adsorption of Cr (VI) on hydrated Ce0.25Zr0.75O2·nH2O was analyzed using pseudo-first- and pseudo-second-order kinetic models. The Langmuir and Freundlich models were used to fit the experimental data.
The convergence between the experimental values and those predicted by the model, expressed as a linear regression correlation coefficient (R2), was employed as the model selection criterion. The adsorption process followed the pseudo-second-order kinetic model and obeyed the Langmuir isotherm model. The thermodynamic parameters were calculated as ΔH° = 9.04 kJ mol-1, ΔS° = 0.03 kJ mol-1 K-1 and ΔG° = -0.35 kJ mol-1, indicating the endothermic and spontaneous nature of the adsorption process, governed by physisorption interactions.
Keywords: adsorption, hexavalent chromium, kinetics, thermodynamics
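The reported parameters are mutually consistent via the Gibbs relation ΔG° = ΔH° − TΔS°: at 40 °C (313.15 K), 9.04 − 313.15 × 0.03 ≈ −0.35 kJ mol-1, matching the quoted ΔG°. A minimal check:

```python
def gibbs_free_energy(dh, ds, t):
    """Gibbs relation dG = dH - T*dS; dH in kJ/mol, dS in kJ/(mol K), T in K."""
    return dh - t * ds

# Values reported in the abstract, evaluated at the highest test temperature
dg = gibbs_free_energy(9.04, 0.03, 313.15)   # ~ -0.35 kJ/mol: spontaneous
```

The positive ΔH° marks the process as endothermic, while the TΔS° term drives ΔG° negative (spontaneous) at this temperature.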
Procedia PDF Downloads 299
679 Resilience and Urban Transformation: A Review of Recent Interventions in Europe and Turkey
Authors: Bilge Ozel
Abstract:
Cities are highly complex living organisms and are subject to continuous transformations produced by the stress that derives from changing conditions. Today metropolises are seen as the “development engines” of their countries, and accordingly they become centres of better living conditions, encouraging the demographic growth that constitutes the main driver of change. Indeed, the potential for economic advancement of cities directly reflects the economic status of their countries. The term “resilience”, which sees change as a natural process and represents the flexibility and adaptability of systems in the face of changing conditions, has become a key concept for the development of urban transformation policies. The term derives from the Latin word 'resilire' ('to bounce', 'to jump back') and refers to the ability of a system to withstand shocks and still maintain its basic characteristics. A resilient system not only survives potential risks and threats but also takes advantage of the positive outcomes of perturbations and ensures adaptation to new external conditions. Taken into the urban context, “urban resilience” delineates the capacity of cities to anticipate upcoming shocks and changes without undergoing major alterations in their functional, physical and socio-economic systems. Undoubtedly, coordinating urban systems in a “resilient” form is a multidisciplinary and complex process, as cities are multi-layered and dynamic structures. The concept of “urban transformation” was first launched in Europe just after World War II. It has been applied through different methods such as renovation, revitalization, improvement and gentrification. These methods have been in continuous advancement, acquiring new meanings and trends over the years.
With the effects of neoliberal policies in the 1980s, the concept of urban transformation became associated with economic objectives. This understanding has since evolved, taking on new orientations such as providing more social justice and environmental sustainability. The aim of this research is to identify the most frequently applied urban transformation methods in Turkey and the main reasons for their selection, and to investigate the shortcomings and limitations of urban transformation policies in the context of 'urban resilience', in comparison with European interventions. Emblematic examples, which symbolize the turning points of the recent evolution of urban transformation concepts in Europe and Turkey, are chosen and reviewed critically.
Keywords: resilience, urban dynamics, urban resilience, urban transformation
Procedia PDF Downloads 265
678 Prescription of Maintenance Fluids in the Emergency Department
Authors: Adrian Craig, Jonathan Easaw, Rose Jordan, Ben Hall
Abstract:
The prescription of intravenous fluids is a fundamental component of inpatient management, but one which often lacks thought. Fluids are a drug which, like any other, can cause harm when prescribed inappropriately or wrongly. However, it is well recognised that fluid prescribing is poorly done, especially in acute portals. The National Institute for Health and Care Excellence (NICE) recommends 1 mmol/kg of potassium, sodium, and chloride per day. With various fluid options, clinicians tend to face difficulty in choosing the most appropriate maintenance fluid, and there is a reluctance to prescribe potassium as part of an intravenous maintenance fluid regime. The aim was to prospectively audit the prescription of the first bag of intravenous maintenance fluids, the use of urea and electrolytes results to guide the choice of fluid, and the use of fluid prescription charts in the busy emergency department of a major trauma centre in Stoke-on-Trent, United Kingdom. This was undertaken over a week in early November 2016. Of those prescribed maintenance fluid, only 8.9% were prescribed a fluid appropriate for their daily electrolyte requirements. This audit has helped to further highlight the issues faced in busy emergency departments within hospitals that are stretched and lack capacity for prompt transfer to a ward. It has supported the finding of NICE that emergency admission portals such as emergency departments prescribe intravenous fluid therapy poorly. The findings have enabled simple steps to be taken to educate clinicians about their fluid of choice. These have included: posters to remind clinicians to consider the urea and electrolyte values before prescription, a suggested intravenous fluid of choice in the trust's prescription chart, and a session within the induction programme revising intravenous fluid therapy and daily electrolyte requirements.
Moving forward, once the interventions have been implemented, the data will be re-audited in six months to note any improvement in maintenance fluid choice. Alongside this, an audit of the rate of intravenous maintenance fluid therapy is proposed, to further increase patient safety by avoiding unintentional fluid overload which may cause unnecessary harm to patients within the hospital. In conclusion, prescription of maintenance fluid therapy was poor within the emergency department, and there is a great deal of opportunity for improvement. The measures listed above will therefore be implemented and the data re-audited.
Keywords: chloride, electrolyte, emergency department, emergency medicine, fluid, fluid therapy, intravenous, maintenance, major trauma, potassium, sodium, trauma
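The 1 mmol/kg/day check the audit applied can be sketched as simple arithmetic; the bag contents below are illustrative numbers, not a formulary:

```python
def daily_need(weight_kg, mmol_per_kg=1.0):
    """NICE maintenance guidance: about 1 mmol/kg/day each of Na+, K+ and Cl-."""
    return {ion: weight_kg * mmol_per_kg for ion in ('Na', 'K', 'Cl')}

def shortfall(weight_kg, bags):
    """Electrolytes a proposed bag regimen leaves short of the daily need.
    bags: list of per-bag mmol contents (hypothetical values)."""
    need = daily_need(weight_kg)
    given = {ion: sum(b.get(ion, 0) for b in bags) for ion in need}
    return {ion: need[ion] - given[ion] for ion in need if given[ion] < need[ion]}

# 70 kg patient, two hypothetical 1 L maintenance bags
gaps = shortfall(70, [{'Na': 31, 'Cl': 31}, {'Na': 31, 'Cl': 58, 'K': 27}])
```

Here the regimen covers chloride but leaves sodium and potassium short, the kind of mismatch behind the 8.9% figure.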
Procedia PDF Downloads 322
677 Assessment of Genetic Variability of Potato Genotypes for Proline Under Salt Stress Conditions
Authors: Elchin Hajiyev, Afet Memmedova Dadash, Sabina Hajiyeva, Aynur Karimova, Ramiz Aliyev
Abstract:
Although potatoes have a wide distribution range, the yield potential of varieties varies greatly depending on the region. Our country is made up of agricultural regions with very different environmental characteristics. We therefore cannot expect introduced varieties to show the same adaptation to the different conditions of our country. For this reason, varieties with high general adaptability should be used, rather than varieties with special adaptability to certain areas. Soil salinization has become a global problem. Increased salinity has a serious impact on food security by reducing plant productivity. Plants have protective mechanisms of adaptation to salt stress, such as the synthesis of physiologically active substances, antioxidant defences and resistance to the oxidation of membrane lipids. One of these substances is free proline. Our study revealed genetic variation in proline accumulation among samples exposed to stress factors. Changes in proline content under stress conditions were studied in 50 samples, with wide variation across all treatments. The amount of proline varied between 7.2 and 37.7 μM/g under salinity conditions. The lowest level was in the SF33 genotype (1.5 times more than the control (2.5 μM/g)). The highest level of proline under salt stress was in the SF45 genotype (7.25 times higher than the control (32.5 μM/g)). Our studies found that the protective system reacts differently to different stress factors. According to the results obtained for proline content, adaptation mechanisms must be activated more strongly in sensitive forms to maintain metabolism and ensure viability under stress.
At high doses of the salt stressor, a tenfold increase in proline compared to the control indicates significant damage to the plant organism as a result of stress. To prevent such damage, the antioxidant system needs to mobilize quickly and work at full capacity under adverse conditions. Increasing the dose of the salt stress factor in our study caused a greater increase in the amount of free proline in plant tissues. Considering the functions of proline as an osmoprotectant and antioxidant, it was found that the increase in its amount serves to protect the plant from the acute effects of stressors.
Keywords: genetic variability, potato, genotypes, proline, stress
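The genotype comparison rests on a simple fold-change ratio. One consistent reading of the SF45 figures (32.5 μM/g under stress, reported as 7.25 times the control) implies a control level of about 4.48 μM/g; that implied value is an inference from the abstract, not a reported datum:

```python
def fold_change(stressed, control):
    """Proline accumulation under salt stress relative to the unstressed control."""
    return stressed / control

# SF45: 32.5 uM/g under stress at 7.25x control implies control ~= 4.48 uM/g
implied_control = 32.5 / 7.25
```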
Procedia PDF Downloads 49
676 Standardizing and Achieving Protocol Objectives for ChestWall Radiotherapy Treatment Planning Process using an O-ring Linac in High-, Low- and Middle-income Countries
Authors: Milton Ixquiac, Erick Montenegro, Francisco Reynoso, Matthew Schmidt, Thomas Mazur, Tianyu Zhao, Hiram Gay, Geoffrey Hugo, Lauren Henke, Jeff Michael Michalski, Angel Velarde, Vicky de Falla, Franky Reyes, Osmar Hernandez, Edgar Aparicio Ruiz, Baozhou Sun
Abstract:
Purpose: Radiotherapy departments in low- and middle-income countries (LMICs) like Guatemala have recently introduced intensity-modulated radiotherapy (IMRT). IMRT has become the standard of care in high-income countries (HICs) due to reduced toxicity and improved outcomes in some cancers. The purpose of this work is to show the agreement between the dosimetric results shown in the dose-volume histograms (DVHs) and the objectives proposed in the adopted protocol. This is our initial experience with an O-ring Linac. Methods and Materials: An O-ring Linac was installed at our clinic in Guatemala in 2019 and has been used to treat approximately 90 patients daily with IMRT. This Linac is a completely image-guided device, since delivering each radiotherapy session requires a Mega-Voltage Cone Beam Computed Tomography (MVCBCT) scan. In each MVCBCT, the Linac delivers 9 MU, which is taken into account during planning. To start the standardization, the TG-263 nomenclature was employed, and a hypofractionated protocol was adopted to treat the chest wall, including the supraclavicular nodes, delivering 40.05 Gy in 15 fractions. The planning used 4 semi-arcs from 179 to 305 degrees. The planner must create optimization volumes for targets and organs at risk (OARs); the difficulty for the planner was the base dose due to the MVCBCT. To evaluate the planning modality, we used 30 chest wall cases. Results: The manually created plans achieve the protocol objectives. The protocol objectives are the same as those of RTOG 1005, and the DVH curves look clinically acceptable. Conclusions: Although the O-ring Linac cannot acquire kV images and the cone beam CT is created using MV energy, the dose delivered by the daily image setup process does not affect the dosimetric quality of the plans, and the dose distribution is acceptable, achieving the protocol objectives.
Keywords: hypofractionation, VMAT, chest wall, radiotherapy planning
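The bookkeeping described above reduces to simple arithmetic: 40.05 Gy over 15 fractions is 2.67 Gy per fraction, and the daily 9 MU imaging scan accumulates to 135 MU over the course. A minimal sketch (illustrative only, not the clinic's planning code):

```python
def fraction_dose(total_gy, fractions):
    """Dose per fraction for a hypofractionated prescription."""
    return total_gy / fractions

def course_imaging_mu(mu_per_scan, fractions):
    """Cumulative monitor units from the daily MVCBCT setup scans,
    which the planner folds into the plan as a base dose."""
    return mu_per_scan * fractions

dose_per_fx = fraction_dose(40.05, 15)   # 2.67 Gy per fraction
imaging_mu = course_imaging_mu(9, 15)    # 135 MU over the 15-fraction course
```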
Procedia PDF Downloads 118
675 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids
Authors: S. Gariani, I. Shyha
Abstract:
Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive generation of heat at the cutting zone and difficulties in heat dissipation caused by the relatively low thermal conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic and semi-synthetic fluids are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricity, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials and working conditions was investigated. A full factorial experimental design was employed, involving 24 tests, to evaluate the influence of process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm; the Vasco1000 cutting fluid presented performance comparable with the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds.
On the other hand, all tool tips were subjected to uniform flank wear during the whole cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size when a higher cutting speed was used.
Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions
Procedia PDF Downloads 279
674 Enhancing Access to Microfinance for Housing Provision in the Informal Sector of North East Nigeria
Authors: Wilfred Emmannuel Dzasu, Sani Usman Kunya, Inuwa Yusuf Mohammed, Moses Jonathan Gambo
Abstract:
The research aimed to investigate and identify strategies for enhancing access to microfinance for housing provision in the informal sector of North East Nigeria, with a focus on addressing the critical issue of housing poverty and the lack of access to affordable housing finance among low-income households in the informal sector. The study employed an exploratory sequential mixed-methods design, combining qualitative and quantitative data collection and analysis techniques. In the qualitative phase, 12 participants from microfinance institutions (MFIs) in four selected states (Adamawa, Bauchi, Gombe, and Taraba) were interviewed. The interviews were conducted using an interview guide with open-ended questions and were recorded with the consent of the respondents. In the quantitative phase, a survey strategy was adopted to collect data via 500 questionnaires distributed to informal sector workers (ISWs) in the study area. A total of 350 questionnaires were returned, representing a 70.0% response rate. The most preferred strategy for improving access to housing microfinance among ISWs was aggressive awareness-raising about housing financing options by MFIs, with a mean score of 4.213; the most important strategy among MFIs was close monitoring and adequate supervision of housing loan beneficiaries by MFIs, with a mean score of 4.675. The study identified several government-related strategies necessary for enhancing access to housing microfinance, including the provision of grants and subsidized intervention funds for housing, improvement of infrastructure to aid housing development, and adequate measures for checking inflation and price fluctuations of building materials.
The study also identified several MFI-related strategies necessary for enhancing access to housing microfinance, including a deliberate expansion of the capital bases of MFIs, adequate training and capacity development of MFI staff in relevant housing micro-financing skills, and the introduction of loan products that suit the incremental building needs of informal sector workers. Overall, the study highlights the need for a combination of government-related and MFI-related strategies to enhance access to microfinance for housing provision in the informal sector of North East Nigeria.
Keywords: finance, microfinance, housing, North East Nigeria
Procedia PDF Downloads 26
673 Using Hemicellulosic Liquor from Sugarcane Bagasse to Produce Second Generation Lactic Acid
Authors: Regiane A. Oliveira, Carlos E. Vaz Rossell, Rubens Maciel Filho
Abstract:
Lactic acid, besides being a valuable chemical, may be considered a platform for other chemicals. In fact, the feasibility of hemicellulosic sugars as a feedstock for the lactic acid production process may remove some of the barriers to second generation bioproducts, especially bearing in mind the 5-carbon sugars from the pre-treatment of sugarcane bagasse. With this in mind, the purpose of this study was to use the hemicellulosic liquor from sugarcane bagasse as a substrate to produce lactic acid by fermentation. To release the sugars from the hemicellulose, a pre-treatment with diluted sulfuric acid was carried out in order to obtain a xylose-rich liquor with a low concentration of fermentation-inhibiting compounds (≈ 67% xylose, ≈ 21% glucose, ≈ 10% cellobiose and arabinose, and around 1% of inhibiting compounds such as furfural, hydroxymethylfurfural and acetic acid). The hemicellulosic sugars, supplemented with 20 g/L of yeast extract, were used in a fermentation process with Lactobacillus plantarum to produce lactic acid. The pH of the fermentation was controlled at 6.00 by automatic injection of Ca(OH)2. The lactic acid concentration remained stable from the time the glucose was depleted (48 hours of fermentation), with no further production. Lactic acid production occurs with the concomitant consumption of xylose and glucose. The fermentation yield was 0.933 g lactic acid/g sugars. Moreover, no by-products were detected, which suggests that the microorganism uses homolactic fermentation to produce its own energy via the pentose-phosphate pathway. Through facultative heterofermentative metabolism, bacteria such as L. plantarum consume pentoses, but the energy efficiency for the cell is lower than during hexose consumption. This implies both slower cell growth and a reduction in lactic acid productivity compared with the use of hexoses. Also, L.
plantarum was shown to have the capacity to produce lactic acid from hemicellulosic hydrolysate without detoxification, which is very attractive in terms of robustness for an industrial process. Xylose from the non-detoxified bagasse hydrolysate is consumed, although the hydrolysate inhibitors (especially aromatic inhibitors) affect the productivity and yield of lactic acid. The use of these sugars, with no need for detoxification of the C5 liquor from hydrolyzed sugarcane bagasse, is a crucial factor for the economic viability of second generation processes. Taking this information into account, the production of second generation lactic acid using sugars from hemicellulose appears to be a good alternative for the complete utilization of the sugarcane plant, directing molasses and cellulosic carbohydrates to produce 2G-ethanol, and hemicellulosic carbohydrates to produce 2G-lactic acid.
Keywords: fermentation, lactic acid, hemicellulosic sugars, sugarcane
Procedia PDF Downloads 373
672 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable when building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows a large telescope to be fitted within a reduced volume (JWST) or into an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and on the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution. In order to reach the diffraction-limited regime in the visible, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, will be used as a wavefront sensor. In this work, we study a point source, i.e.
the Point Spread Function (PSF) of the optical system, as an input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates to a small computational burden. These results warrant further study with higher aberrations and noise.
Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
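The non-linearity between focal-plane intensity and phase aberrations mentioned in this abstract can be illustrated with a minimal Fraunhofer forward model: a circular pupil split into two segments, with a piston error on one of them. This is an illustrative sketch under simplified assumptions (monochromatic point source, two segments, pure piston) and is not the authors' simulation; the function names are ours.

```python
import numpy as np

def psf_from_piston(piston_rad, n=256, aperture_frac=0.3):
    """Focal-plane PSF of a circular pupil split into two half-disc
    segments, with a piston phase error applied to the right segment."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    pupil = np.hypot(x, y) <= aperture_frac * n / 2
    phase = np.where(x >= 0, piston_rad, 0.0)      # piston on right segment
    field = pupil * np.exp(1j * phase)
    # Fraunhofer propagation: the focal-plane field is the Fourier
    # transform of the pupil-plane field
    focal = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    psf = np.abs(focal) ** 2
    return psf / psf.sum()

def on_axis_ratio(piston_rad):
    """On-axis intensity relative to the perfectly co-phased pupil
    (for two equal segments this follows cos^2(piston/2))."""
    ref = psf_from_piston(0.0)
    aberrated = psf_from_piston(piston_rad)
    c = ref.shape[0] // 2                          # DC pixel after fftshift
    return aberrated[c, c] / ref[c, c]
```

The on-axis intensity depends on the cosine of the piston, so a half-wave error nearly extinguishes the PSF core while leaving the total flux unchanged: exactly the kind of non-linear, ambiguous intensity-to-phase mapping the convolutional network must learn to invert.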
Procedia PDF Downloads 104
671 Optimal Allocation of Battery Energy Storage Considering Stiffness Constraints
Authors: Felipe Riveros, Ricardo Alvarez, Claudia Rahmann, Rodrigo Moreno
Abstract:
Around the world, many countries have committed to the decarbonization of their electricity systems. Under this global drive, converter-interfaced generators (CIG) such as wind and photovoltaic generation appear as cornerstones for achieving these energy targets. Despite its benefits, the increasing use of CIG brings several technical challenges in power systems, especially from a stability viewpoint. Among the key differences are limited short-circuit current capacity, the inertia-less characteristic of CIG, and response times within the electromagnetic timescale. Along with the integration of CIG into the power system, one enabling technology for the energy transition towards low-carbon power systems is the battery energy storage system (BESS). Because of the flexibility that BESS provide in power system operation, their integration allows the variability and uncertainty of renewable energies to be mitigated, thus optimizing the use of existing assets and reducing operational costs. BESS can also support power system stability by injecting reactive power during faults, providing short-circuit currents, and delivering fast frequency response. However, most methodologies for sizing and allocating BESS in power systems are based on economic aspects and do not exploit the benefits that BESS can offer to system stability. In this context, this paper presents a methodology for determining the optimal allocation of BESS in weak power systems with high levels of CIG. Unlike traditional economic approaches, this methodology incorporates stability constraints in allocating BESS, aiming to mitigate instability issues arising from weak grid conditions with low short-circuit levels. The proposed methodology offers valuable insights for power system engineers and planners seeking to maintain grid stability while harnessing the benefits of renewable energy integration.
The methodology is validated on the reduced Chilean electrical system. The results show that integrating BESS with stability criteria into a power system with high levels of CIG contributes to decarbonizing and strengthening the network in a cost-effective way while sustaining system stability. This paper potentially lays the foundation for understanding the benefits of integrating BESS in electrical power systems and coordinating their placement in future converter-dominated power systems.
Keywords: battery energy storage, power system stability, system strength, weak power system
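As a toy illustration of stability-constrained siting (not the paper's actual methodology, which couples an economic allocation problem with stability constraints), one can greedily add storage units at the weakest buses until every bus meets a minimum short-circuit-ratio threshold. The assumption that each unit raises the local short-circuit ratio by a fixed amount is our gross simplification, made purely for illustration.

```python
def allocate_bess(scr, boost_per_unit, scr_min):
    """Greedy toy allocation: return the number of BESS units per bus,
    placing units wherever the short-circuit ratio (SCR) is below the
    stability threshold. Assumes each unit raises the local SCR by a
    fixed amount, which is an illustrative simplification."""
    units = []
    for s in scr:
        k = 0
        while s < scr_min:
            k += 1
            s += boost_per_unit
        units.append(k)
    return units
```

A real allocation would instead trade off capital cost, network location, and dynamic stability indices simultaneously; the sketch only conveys the idea that stability criteria, not economics alone, drive where storage is placed.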
Procedia PDF Downloads 61
670 Stems of Prunus avium: An Unexplored By-product with Great Bioactive Potential
Authors: Luís R. Silva, Fábio Jesus, Catarina Bento, Ana C. Gonçalves
Abstract:
Over the last few years, traditional medicine has gained ground at the nutritional and pharmacological levels. Natural products and their derivatives are of great importance in several drugs used in modern therapeutics, and plant-based systems continue to play an essential role in primary healthcare. Additionally, the utilization of plant parts such as leaves, stems and flowers as nutraceutical and pharmaceutical products can add high value in the natural products market, not only because of the nutritional value due to significant levels of phytochemicals, but also because of the high benefit for producers and manufacturers. Stems of Prunus avium L. are a by-product of cherry processing and have been consumed over the years as infusions and decoctions due to their bioactive properties, being used as a sedative, diuretic and draining agent and for the relief of renal stones, edema and hypertension. In this work, we prepared hydroethanolic and infusion extracts from stems of P. avium collected in the Fundão region (Portugal) and evaluated their phenolic profile by LC-DAD, antioxidant capacity, α-glucosidase inhibitory activity and protection of human erythrocytes against oxidative damage. The LC-DAD analysis allowed the identification of 19 phenolic compounds, of which catechin and 3-O-caffeoylquinic acid were the main ones. In general, the hydroethanolic extract proved to be more active than the infusion. This extract had the best antioxidant activity against the DPPH• radical (IC50 = 22.37 ± 0.28 µg/mL) and the superoxide radical (IC50 = 13.93 ± 0.30 µg/mL). Furthermore, it was the most active concerning the inhibition of hemoglobin oxidation (IC50 = 13.73 ± 0.67 µg/mL), hemolysis (IC50 = 1.49 ± 0.18 µg/mL) and lipid peroxidation (IC50 = 26.20 ± 0.38 µg/mL) in human erythrocytes. On the other hand, the infusion was more efficient towards α-glucosidase inhibition (IC50 = 3.18 ± 0.23 µg/mL) and against the nitric oxide radical (IC50 = 99.99 ± 1.89 µg/mL).
The sweet cherry sector is very important in the Fundão region (Portugal), and the opportunity to profit from the great wastes produced during cherry processing by making added-value products, such as food supplements, cannot be ignored. Our results demonstrate that P. avium stems possess remarkable antioxidant and free radical scavenging properties. It is therefore suggested that P. avium stems can be used as a natural antioxidant with high potential to prevent or slow the progress of human diseases mediated by oxidative stress.
Keywords: stems, Prunus avium, phenolic compounds, biological potential
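IC50 values such as those reported above are read off concentration-inhibition curves; a minimal sketch of the log-linear interpolation commonly used to locate the 50 % point (the data and function name here are illustrative, not the study's measurements):

```python
import numpy as np

def ic50(conc, inhibition):
    """Estimate IC50 by interpolating, on a log-concentration scale,
    between the two measured points that bracket 50 % inhibition.
    Assumes inhibition increases monotonically with concentration."""
    conc = np.asarray(conc, dtype=float)
    inh = np.asarray(inhibition, dtype=float)
    order = np.argsort(conc)
    conc, inh = conc[order], inh[order]
    i = np.searchsorted(inh, 50.0)        # first point at or above 50 %
    if i == 0 or i == len(inh):
        raise ValueError("50 % inhibition is not bracketed by the data")
    lo, hi = i - 1, i
    frac = (50.0 - inh[lo]) / (inh[hi] - inh[lo])
    logc = np.log10(conc[lo]) + frac * (np.log10(conc[hi]) - np.log10(conc[lo]))
    return 10.0 ** logc
```

Interpolating on log-concentration reflects the roughly sigmoidal shape of dose-response curves in log dose; a full analysis would fit a four-parameter logistic model instead.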
Procedia PDF Downloads 297
669 Using Nature-Based Solutions to Decarbonize Buildings in Canadian Cities
Authors: Zahra Jandaghian, Mehdi Ghobadi, Michal Bartko, Alex Hayes, Marianne Armstrong, Alexandra Thompson, Michael Lacasse
Abstract:
The Intergovernmental Panel on Climate Change (IPCC) report stated the urgent need to cut greenhouse gas emissions to avoid the adverse impacts of climatic changes. The United Nations has forecast that nearly 70 percent of people will live in urban areas by 2050, resulting in a doubling of the global building stock. Given that buildings are currently recognised as emitting 40 percent of global carbon emissions, there is an urgent incentive to decarbonize existing buildings and to build net-zero carbon buildings. Attaining net-zero carbon emissions in future communities requires action in two directions: i) reduction of emissions; and ii) removal of on-going emissions from the atmosphere once de-carbonization measures have been implemented. Nature-based solutions (NBS) have a significant role to play in achieving net-zero carbon communities, spanning both emission reductions and the removal of on-going emissions. NBS for the decarbonisation of buildings can be achieved by using green roofs and green walls, increasing vertical and horizontal vegetation on building envelopes, and by using nature-based materials that either emit less heat to the atmosphere, thus decreasing photochemical reaction rates, or store a substantial amount of carbon within their structure over the whole building service life. The NBS approach can also mitigate urban flooding and overheating, improve urban climate and air quality, and provide better living conditions for the urban population. For existing buildings, de-carbonization mostly requires retrofitting existing envelopes efficiently to use NBS techniques, whereas for future construction it involves designing new buildings with low-carbon materials as well as with the integrity and system capacity to effectively employ NBS. This paper presents the opportunities and challenges with respect to the de-carbonization of buildings using NBS, for both building retrofits and new construction.
This review documents the effectiveness of NBS in de-carbonizing Canadian buildings, identifies the missing links to implementing these techniques in cold climatic conditions, and determines a road map and immediate approaches to mitigate the adverse impacts of climate change, such as the urban heat island effect. Recommendations are drafted for possible inclusion in the Canadian building and energy codes.
Keywords: decarbonization, nature-based solutions, GHG emissions, greenery enhancement, buildings
Procedia PDF Downloads 93
668 The Ethics of Documentary Filmmaking Discuss the Ethical Considerations and Responsibilities of Documentary Filmmakers When Portraying Real-life Events and Subjects
Authors: Batatunde Kolawole
Abstract:
Documentary filmmaking stands as a distinctive medium within the cinematic realm, commanding a unique responsibility: the portrayal of real-life events and subjects. This research delves into the profound ethical considerations and responsibilities that documentary filmmakers shoulder as they embark on the quest to unveil truth and weave compelling narratives. The exploration comprises a comprehensive review of ethical frameworks and real-world case studies, illuminating the intricate web of challenges that documentarians confront. These challenges encompass an array of ethical intricacies, from securing informed consent to safeguarding privacy, maintaining unwavering objectivity, and sidestepping the snares of narrative manipulation when crafting stories from reality. Furthermore, the study dissects the contemporary ethical terrain, acknowledging the emergence of novel dilemmas in the digital age, such as deepfakes and digital alterations. Through a meticulous analysis of ethical quandaries faced by distinguished documentary filmmakers and their strategies for ethical navigation, this study offers invaluable insights into the evolving role of documentaries in molding public discourse. It underscores the indispensable significance of transparency, integrity, and an indomitable commitment to encapsulating the intricacies of reality within the realm of ethical documentary filmmaking. In a world increasingly reliant on visual narratives, an understanding of the subtle ethical dimensions of documentary filmmaking holds relevance not only for those behind the camera but also for the diverse audiences who engage with and interpret the realities unveiled on screen. This research stands as a rigorous examination of the moral compass that steers this potent form of cinematic expression.
It emphasizes the capacity of ethical documentary filmmaking to enlighten, challenge, and inspire, all while unwaveringly upholding the core principles of truthfulness and respect for the human subjects under scrutiny. Through this holistic analysis, the study illuminates the enduring significance of upholding ethical integrity while uncovering the truths that shape our world. Ethical documentary filmmaking, as exemplified by "Rape" and countless other powerful narratives, serves as a testament to the enduring potential of cinema to inform, challenge, and drive meaningful societal discourse.
Keywords: filmmaking, documentary, human rights, film
Procedia PDF Downloads 66
667 Community Observatory for Territorial Information Control and Management
Authors: A. Olivi, P. Reyes Cabrera
Abstract:
Ageing and urbanization are two of the main trends that characterize the twenty-first century. These trends are especially accelerated in the emerging countries of Asia and Latin America. Chile is one of the countries in the Latin American region where the demographic transition to ageing is becoming increasingly visible. The challenges that the new demographic scenario poses to urban administrators call for the search for innovative solutions to maximize the functional and psycho-social benefits derived from the relationship between older people and the environment in which they live. Although mobility is central to people's everyday practices and social relationships, it is not distributed equitably. On the contrary, it can be considered another factor of inequality in our cities. Older people are a group particularly sensitive and vulnerable to mobility constraints. In this context, based on the ageing-in-place strategy and following a social innovation approach within a spatial context, the "Community Observatory of Territorial Information Control and Management" project aims at the collective search for, and validation of, solutions to satisfy the specific mobility and accessibility needs of older urban people. Specifically, the Observatory intends to: i) promote the direct participation of the older population in order to generate relevant information on the territorial situation and the satisfaction of the mobility needs of this group; ii) co-create dynamic and efficient mechanisms for reporting and updating territorial information; iii) increase the capacity of the local administration to plan and manage solutions to environmental problems at the neighborhood scale. Based on a participatory mapping methodology and on the application of digital technology, the Observatory designed and developed, together with older people, a crowdsourcing platform for smartphones, called DIMEapp, for reporting environmental problems affecting mobility and accessibility.
DIMEapp has been tested at the prototype level in two neighborhoods of the city of Valparaiso. The results achieved in the testing phase have shown high potential to i) contribute to establishing coordination mechanisms between the local government and the local community; and ii) improve a local governance system that guides and regulates the allocation of goods and services destined to solve those problems.
Keywords: accessibility, ageing, city, digital technology, local governance
Procedia PDF Downloads 131
666 Effect of Locally Injected Mesenchymal Stem Cells on Bone Regeneration of Rat Calvaria Defects
Authors: Gileade P. Freitas, Helena B. Lopes, Alann T. P. Souza, Paula G. F. P. Oliveira, Adriana L. G. Almeida, Paulo G. Coelho, Marcio M. Beloti, Adalberto L. Rosa
Abstract:
Bone tissue presents a great capacity to regenerate when injured by trauma, infectious processes, or neoplasia. However, the extent of the injury may exceed the inherent tissue regeneration capability, demanding some kind of additional intervention. In this scenario, cell therapy has emerged as a promising alternative to treat challenging bone defects. This study aimed to evaluate the effect of local injection of bone marrow-derived mesenchymal stem cells (BM-MSCs) and adipose tissue-derived mesenchymal stem cells (AT-MSCs) on bone regeneration of rat calvaria defects. BM-MSCs and AT-MSCs were isolated and characterized by the expression of surface markers; cell viability was evaluated after injection through a 21G needle. Defects of 5 mm in diameter were created in the calvaria, and after two weeks a single injection of BM-MSCs, AT-MSCs or vehicle (PBS without cells; Control) was carried out. Cells were tracked by bioluminescence, and at 4 weeks post-injection bone formation was evaluated by micro-computed tomography (μCT), histology, nanoindentation, and the gene expression of bone remodeling markers. The data were evaluated by one-way analysis of variance (p≤0.05). BM-MSCs and AT-MSCs presented characteristics of mesenchymal stem cells, maintained viability after passing through a 21G needle, and remained in the defects until day 14. In general, the injection of either BM-MSCs or AT-MSCs resulted in higher bone formation compared to Control. Additionally, this bone tissue displayed an elastic modulus and hardness similar to those of pristine calvaria bone. The expression of all evaluated genes involved in bone formation was upregulated in bone tissue formed by BM-MSCs compared to AT-MSCs, while genes involved in bone resorption were upregulated in AT-MSC-formed bone. We show that cell therapy based on the local injection of BM-MSCs or AT-MSCs is effective in delivering viable cells that displayed local engraftment and induced a significant improvement in bone healing.
Despite the differences in molecular cues observed between BM-MSCs and AT-MSCs, both cell types were capable of forming bone tissue in comparable amounts and with comparable properties. These findings may drive cell therapy approaches toward the complete bone regeneration of challenging sites.
Keywords: cell therapy, mesenchymal stem cells, bone repair, cell culture
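The one-way analysis of variance used for the group comparisons above reduces to an F statistic, the between-group mean square divided by the within-group mean square; a compact sketch (illustrative only; the study itself would have used standard statistical software):

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across independent groups:
    between-group mean square divided by within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

The F value is then compared against the F distribution with (df_between, df_within) degrees of freedom to obtain the p-value checked against the 0.05 threshold.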
Procedia PDF Downloads 184
665 A Review of the Agroecological Farming System as a Viable Alternative Food Production Approach in South Africa
Authors: Michael Rudolph, Evans Muchesa, Katiya Yassim, Venkatesha Prasad
Abstract:
Input-intensive production systems characterise industrial agriculture as an unsustainable means of addressing food and nutrition security and sustainable livelihoods. There is extensive empirical evidence that supports the diversification and reorientation of industrial agriculture to incorporate the ecological practices viewed as essential for achieving balanced and productive farming systems. An agroecological farming system is a viable alternative approach that can improve food production, especially for the most vulnerable communities and households. Furthermore, substantial supporting evidence shows that such a system holds the key to increasing dietary diversity at the local level and reducing the multiple health and environmental risks stemming from industrial agriculture. This paper, therefore, aims to demonstrate the benefits of the agroecological food system through an evidence-based approach that shows how broader agricultural network structures can play a meaningful role, particularly for impoverished households in today's reality. The methodology is centered on a structured literature review analysing urban agriculture, agroecology, and food insecurity. Notably, ground-truthing, practical experience, and field observation of agroecological farming were also deployed. This paper places particular emphasis on the practical application of the agroecological approach in urban and peri-urban settings. Several evaluation reports on local and provincial initiatives clearly show that very few households engage in food gardens and urban agriculture. These households do not make use of their backyards or nearby open spaces for a number of reasons, such as stringent city by-laws, restricted access to land, little or no knowledge of innovative or alternative farming practices, and a general lack of interest.
Furthermore, limited resources such as water and energy, and a lack of capacity building and training implementation, are additional constraints hampering small-scale food gardens and farms in other settings. The agroecology systems approach is viewed as one of the key solutions to tackling these problems.
Keywords: agroecology, water-energy-food nexus, sustainable development goals, social, environmental and economic impact
Procedia PDF Downloads 113
664 Neurophysiology of Domain Specific Execution Costs of Grasping in Working Memory Phases
Authors: Rumeysa Gunduz, Dirk Koester, Thomas Schack
Abstract:
Previous behavioral studies have shown that working memory (WM) and manual actions share limited-capacity cognitive resources, which in turn results in execution costs of manual actions in WM. However, to the best of our knowledge, there is no study investigating the neurophysiology of these execution costs. The current study aims to fill this research gap by investigating the neurophysiology of the execution costs of grasping in the WM phases (encoding, maintenance, retrieval), considering the verbal and visuospatial domains of WM. A WM-grasping dual-task paradigm was implemented to examine execution costs. The baseline single task required performing the verbal or visuospatial version of a WM task. The dual task required performing the WM task embedded in a high-precision grasp-to-place task. 30 participants were tested in a 2 (single vs. dual task) x 2 (visuospatial vs. verbal WM) within-subject design. Event-related potentials (ERPs) were extracted for each WM phase separately in the single and dual tasks. Memory performance for visuospatial WM, but not for verbal WM, was significantly lower in the dual task compared to the single task. Encoding-related ERPs in the single task revealed different ERPs for verbal WM and visuospatial WM at bilateral anterior sites and a right posterior site. In the dual task, the bilateral anterior difference disappeared due to bilaterally increased anterior negativities for visuospatial WM. Maintenance-related ERPs in the dual task revealed different ERPs for verbal WM and visuospatial WM at bilateral posterior sites. There was also an anterior negativity for visuospatial WM. Retrieval-related ERPs in the single task revealed different ERPs for verbal WM and visuospatial WM at bilateral posterior sites. In the dual task, there was no difference between verbal WM and visuospatial WM. The behavioral and ERP findings suggest that the execution of grasping shares cognitive resources only with visuospatial WM, which in turn results in domain-specific execution costs.
Moreover, the ERP findings suggest unique patterns of costs in each WM phase, which supports the idea that each WM phase reflects a separate cognitive process. This study not only contributes to the understanding of the cognitive principles of manual action control, but also to the understanding of WM as an entity consisting of separate modalities and cognitive processes.
Keywords: dual task, grasping execution, neurophysiology, working memory domains, working memory phases
Procedia PDF Downloads 426
663 Relationship of Macro-Concepts in Educational Technologies
Authors: L. R. Valencia Pérez, A. Morita Alexander, Peña A. Juan Manuel, A. Lamadrid Álvarez
Abstract:
This research reflects on and identifies explanatory variables, and the relationships among them, involved in educational technology, all encompassed in four macro-concepts: cognitive inequality, economy, food and language. These give the guideline for a more detailed knowledge of educational systems, communication and equipment, physical space and teachers; all of them, interacting with each other, give rise to what is called educational technology management. These elements contribute to a very specific knowledge of communications equipment, networks and computer equipment, systems and content repositories. This is intended to establish the importance of knowing the global environment in the transfer of knowledge to poor countries, so that it does not diminish their capacity to be authentic and preserve their cultures, their languages or dialects, their hierarchies and real needs; in short, to respect the customs of the different towns, villages or cities that are intended to be reached through the use of internationally agreed professional educational technologies. The methodology used in this research is analytical-descriptive, which allows each of the variables that, in our opinion, must be taken into account to be explained, in order to achieve an optimal incorporation of educational technology in a model that gives results in the medium term. The idea is that, in an encompassing way, the concepts will be integrated with others of greater coverage until reaching macro-concepts of national coverage in each country, which are elements of conciliation in the different federal and international reforms.
At the center of the model is educational technology, which is directly related to the concepts contained in factors such as the educational system, communication and equipment, spaces and teachers, all globally immersed in the macro-concepts of cognitive inequality, economy, food and language. One of the major contributions of this article is to express this idea as an algorithm that allows the indicator to be evaluated as impartially as possible, since the other indicators are taken from internationally recognized entities, such as the OECD in the area of education systems studied, so that they are not influenced by particular political or interest-group pressures. This work opens the way for a relationship between the entities involved, at the conceptual, procedural and human-activity levels, to clearly identify the convergence of their impact on the problem of education and how this relationship can contribute to an improvement; it also shows the possibility of reaching a comprehensive education reform for all.
Keywords: relationships of macro-concepts, cognitive inequality, economics, alimentation and language
Procedia PDF Downloads 199
662 Cancer Burden and Policy Needs in the Democratic Republic of the Congo: A Descriptive Study
Authors: Jean Paul Muambangu Milambo, Peter Nyasulu, John Akudugu, Leonidas Ndayisaba, Joyce Tsoka-Gwegweni, Lebwaze Massamba Bienvenu, Mitshindo Mwambangu Chiro
Abstract:
In 2018, non-communicable diseases (NCDs) were responsible for 48% of deaths in the Democratic Republic of the Congo (DRC), with cancer contributing 5% of these deaths. There is a notable absence of cancer registries, capacity-building activities, budgets, and treatment roadmaps in the DRC. Current cancer estimates are primarily based on mathematical modeling with limited data from neighboring countries. This study aimed to assess cancer subtype prevalence in Kinshasa hospitals and compare these findings with WHO model estimates. Methods: A retrospective observational study was conducted from 2018 to 2020 at HJ Hospitals in Kinshasa. Data were collected using American Cancer Society (ACS) questionnaires and physician logs. Descriptive analysis was performed using STATA version 16 to estimate the cancer burden and provide evidence-based recommendations. Results: The chart review at HJ Hospitals in Kinshasa (2018-2020) indicates that out of 6,852 samples, approximately 11.16% were diagnosed with cancer. The distribution of cancer subtypes in this cohort was as follows: breast cancer (33.6%), prostate cancer (21.8%), colorectal cancer (9.6%), lymphoma (4.6%), and cervical cancer (4.4%). These figures are based on histopathological confirmation at the facility and may not fully represent the broader population due to potential selection biases related to geographic and financial accessibility to the hospital. In contrast, the World Health Organization (WHO) model estimates for cancer prevalence in the DRC show different proportions. According to WHO data, the distribution of cancer types is as follows: cervical cancer (15.9%), prostate cancer (15.3%), breast cancer (14.9%), liver cancer (6.8%), colorectal cancer (5.9%), and other cancers (41.2%) (WHO, 2020). Conclusion: The data indicate a rising cancer prevalence in the DRC but highlight significant gaps in clinical, biomedical, and genetic cancer data.
The establishment of a population-based cancer registry (PBCR) and a defined cancer management pathway is crucial. The current estimates are limited due to data scarcity and inconsistencies in clinical practices. There is an urgent need for multidisciplinary cancer management, integration of palliative care, and improvement in care quality based on evidence-based measures.
Keywords: cancer, risk factors, DRC, gene-environment interactions, survivors
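The headline percentages above can be cross-checked with a few lines of arithmetic. The following sketch (numbers taken from the abstract; small rounding effects are expected) converts the 11.16% diagnosis rate and the subtype shares into approximate case counts:

```python
# Reconstruct approximate case counts from the abstract's percentages.
total_samples = 6852
cancer_rate = 0.1116                                 # 11.16% of samples
cancer_cases = round(total_samples * cancer_rate)    # ~765 diagnoses

subtype_share = {                                    # percent of cancer cases
    "breast": 33.6, "prostate": 21.8, "colorectal": 9.6,
    "lymphoma": 4.6, "cervical": 4.4,
}
subtype_counts = {k: round(cancer_cases * v / 100) for k, v in subtype_share.items()}
print(cancer_cases, subtype_counts)
```

This yields roughly 765 confirmed diagnoses, of which about a third are breast cancers, consistent with the stated distribution.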
Procedia PDF Downloads 216
661 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture
Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani
Abstract:
Global warming and its impact on climate change is one of the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. While the energy sector is the primary source of CO2 emissions, Carbon Capture and Storage (CCS) is believed to be the solution for controlling them. Oxyfuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, with almost full carbon capture capability from a natural-gas-fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is performed by means of process modelling. Heat capacity variation and supercritical CO2 with gaseous admixtures are considered in a multi-zone analysis with Aspen Plus software. It is found that the heat exchanger design plays a major role in increasing the efficiency of the NetPower cycle. A pinch-point analysis is carried out to extract the composite and grand composite curves for the heat exchanger. The relationship between the cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger is also evaluated. An increase in ∆Tmin causes a decrease in the temperature of the recycled flue gases (RFG) and an overall decrease in the power required by the recycled gas compressor. The main challenge in the design of heat exchangers in power plants is the tradeoff between capital and operational costs. To achieve a lower ∆Tmin, a larger heat exchanger is required; this means a higher capital cost but leads to better heat recovery and a lower operational cost.
∆Tmin is therefore selected at the minimum point of the combined capital and operational cost curves. This study provides insight into the performance and operational conditions of the NetPower oxy-combustion cycle based on its heat exchanger design.
Keywords: carbon capture and storage, oxy-combustion, NetPower cycle, oxyturbine cycles, zero emission, heat exchanger design, supercritical carbon dioxide, oxy-fuel power plant, pinch point analysis
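The ∆Tmin selection described above is a one-dimensional cost minimisation. The sketch below illustrates the idea with purely hypothetical cost models (the coefficients and functional forms are placeholders, not the article's data): capital cost falls and operational cost rises with ∆Tmin, and the optimum sits at the minimum of their sum.

```python
import numpy as np

# Hypothetical cost models for illustration only.
def capital_cost(dt_min):
    # Smaller dT_min -> larger exchanger area -> higher capital cost.
    return 5.0e6 / dt_min

def operational_cost(dt_min):
    # Larger dT_min -> poorer heat recovery -> more recycle-compressor power.
    return 2.0e5 * dt_min

dt = np.linspace(2.0, 30.0, 281)          # candidate dT_min values, in kelvin
total = capital_cost(dt) + operational_cost(dt)
best = dt[np.argmin(total)]               # dT_min at the combined-cost minimum
print(f"cost-optimal dT_min ≈ {best:.1f} K")
```

With these placeholder models the optimum lands at 5 K; in practice both curves would come from exchanger sizing correlations and the process model's compressor duty.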
Procedia PDF Downloads 204
660 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed, medical records are in disarray, and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC) due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With this increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even simulating an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data.
It illustrates how indispensable HPC is in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits HPC brings to Big Data and computational biology, and surveys the current state of the art and future generations of HPC computing with Big Data in biology.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
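The "well-behaved polynomial-time but massive" point about all-to-all comparisons can be made concrete. The sketch below (illustrative only, not the article's code; the toy sequences and the simple Hamming distance are assumptions) fans the n·(n-1)/2 pairwise comparisons out across worker processes, which is the same shape of parallelism that HPC clusters apply at scale:

```python
from itertools import combinations
from multiprocessing import Pool

def hamming(pair):
    """Simple distance between two equal-length DNA fragments."""
    a, b = pair
    return sum(x != y for x, y in zip(a, b))

# Toy stand-ins for sequencing reads; real workloads have millions.
seqs = ["ACGTACGT", "ACGAACGT", "TCGTACGA", "ACGTTCGT"]

if __name__ == "__main__":
    pairs = list(combinations(seqs, 2))   # n*(n-1)/2 = 6 comparisons here
    with Pool() as pool:                  # spread pairs across CPU cores
        distances = pool.map(hamming, pairs)
    print(distances)
```

The comparison count grows quadratically with the number of sequences, so even this embarrassingly parallel polynomial-time kernel saturates a single machine long before the data stops growing.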
Procedia PDF Downloads 363
659 Scrutinizing the Effective Parameters on Cuttings Movement in Deviated Wells: Experimental Study
Authors: Siyamak Sarafraz, Reza Esmaeil Pour, Saeed Jamshidi, Asghar Molaei Dehkordi
Abstract:
Cuttings transport is one of the major problems in directional and extended-reach oil and gas wells. Lack of sufficient attention to this issue may bring troubles such as casing running problems, stuck pipe, excessive torque and drag, hole pack-off, bit wear, a decreased rate of penetration (ROP), increased equivalent circulation density (ECD), and logging difficulties. Since it is practically impossible to directly observe the behavior of deep wells, a test setup was designed to investigate cuttings transport phenomena. This experimental work was carried out to scrutinize the behavior of the effective variables in cuttings transport. The setup contained a 17-foot-long test section including a 3.28-foot transparent glass pipe with a 3-inch diameter, a storage tank with a 100-liter capacity, a rotating stainless-steel drill pipe with a 1.25-inch diameter, a pump to circulate the drilling fluid, a valve to adjust the flow rate, a bit, and a camera to record all events, whose footage was then converted to RGB images via the Image Processing Toolbox. After preparation of the test process, each test was performed separately, and the weights of the output particles were measured and compared with each other. Observation charts were plotted to assess the behavior of viscosity, flow rate and RPM at inclinations of 0°, 30°, 60° and 90°. RPM was explored together with the other variables, such as flow rate and viscosity, at the different angles. Also, the effect of different flow rates was investigated under directional conditions. To obtain precise results, the captured images were analyzed to determine bed thickening and particle behavior in the annulus. The results of this experimental study demonstrate that drill string rotation helps keep particles in suspension and reduces particle deposition, so that cuttings movement increased significantly. As fluid velocity was raised, laminar flow transitioned to turbulent flow in the annulus.
Increasing the flow rate in the horizontal section, combined with a lower viscosity range, was more effective and improved cuttings transport performance.
Keywords: cutting transport, directional drilling, flow rate, hole cleaning, pipe rotation
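The image-analysis step mentioned above (measuring bed thickening from camera frames) can be sketched as a simple per-column intensity threshold. Everything here is a hypothetical stand-in for the Image Processing Toolbox workflow: the threshold, the synthetic frame, and the assumption that cuttings appear as dark pixels at the bottom of the annulus are all illustrative choices, not the study's method.

```python
import numpy as np

def bed_height_fraction(frame, threshold=128):
    """Fraction of each column occupied by dark 'cuttings' pixels at the bottom,
    averaged across the frame width."""
    dark = frame < threshold              # True where cuttings are assumed
    heights = []
    for col in dark.T:                    # walk each pixel column
        h = 0
        for px in col[::-1]:              # count contiguous dark pixels upward
            if px:
                h += 1
            else:
                break
        heights.append(h / len(col))
    return float(np.mean(heights))

# Synthetic 10x10 grayscale frame: bottom 3 rows dark (bed), rest bright fluid.
frame = np.full((10, 10), 200, dtype=np.uint8)
frame[-3:, :] = 50
print(bed_height_fraction(frame))         # bed occupies ~30% of annulus height
```

Tracking this fraction frame by frame would give the bed-thickening curves that the observation charts summarise.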
Procedia PDF Downloads 284
658 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables
Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner
Abstract:
High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks, i.e., networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behavior, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT). To achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity for communication across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with RMT predictions.
The results achieved in the MIMO application compare favourably with the predictions of a parallel ongoing research effort (sponsored by NEMF21).
Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line
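The capacity statistics referred to above rest on the standard MIMO formula C = log2 det(I + (SNR/Nt)·HH†). The sketch below is illustrative only: it averages this capacity over random Rayleigh channel matrices, which here merely stand in for the scattering response of a reverberant cable network (the dimensions, SNR, and channel model are assumptions, not the paper's data).

```python
import numpy as np

rng = np.random.default_rng(0)

def mimo_capacity(H, snr):
    """Shannon capacity log2 det(I + (snr/Nt) H H^H) in bit/s/Hz."""
    nt = H.shape[1]                       # number of transmit ports
    gram = H @ H.conj().T
    eye = np.eye(H.shape[0])
    return float(np.real(np.log2(np.linalg.det(eye + (snr / nt) * gram))))

nr, nt, snr = 4, 4, 10.0                  # 4x4 link, linear SNR of 10
caps = []
for _ in range(500):                      # i.i.d. Rayleigh channel draws
    H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
    caps.append(mimo_capacity(H, snr))
print(f"mean capacity ≈ {np.mean(caps):.1f} bit/s/Hz")
```

For the cable-network analogue, H would instead be built from the measured or QG-predicted scattering between input and output ports.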
Procedia PDF Downloads 173
657 Colonization Pattern and Growth of Reintroduced Tiger (Panthera tigris) Population at Central India
Authors: M. S. Sarkar, J. A. Johnson, S. Sen, G. K. Saha, K. Ramesh
Abstract:
There is growing recognition of the several important roles played by tigers in maintaining sustainable biodiversity across diverse ecosystems in South and South-East Asia. Fewer than 3,200 individuals are left in the wild because of poaching and habitat loss. Restoring wild populations is thus an emerging and important conservation initiative, but such efforts remain challenging due to the species' elusive and solitary behavior. After the careful translocation of a few individuals, how the reintroduced animals colonize suitable habitat and achieve a stable population through reproduction is vital information for forest managers and policy makers in the species' 13 range countries. Four wild and two captive radio-collared tigers were reintroduced into Panna Tiger Reserve, Madhya Pradesh, India, during 2009-2014. We critically examined their settlement behavior and population growth over this period. Results from long-term telemetry data showed that males explored larger areas rapidly over a short time span, while females explored smaller areas over a longer period, with significantly high rates of movement in both sexes during the exploratory period. Significant differences in tiger home range sizes were observed between the exploratory and settlement periods. Although all reintroduced tigers preferred densely vegetated, undisturbed forest patches within the core area of the tiger reserve, a niche-based K-select analysis showed that individual variation in habitat selection was prominent among the reintroduced tigers. A total of 18 litters with more than 42 known cubs were born, with a low mortality rate, high maternity rate, high observed growth rate, and short generation time in both sexes. The population reached its carrying capacity in a very short time span, marking the success of this tiger conservation programme.
Our findings provide significant insights into the biology of translocated tigers, with implications for future conservation strategies that consider translocation-based recovery across the species' range countries.
Keywords: reintroduction, tiger, home range, demography
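The trajectory described above, from a handful of founders to carrying capacity in a short span, is the classic logistic pattern. The sketch below is purely illustrative: the founder count, growth rate, and carrying capacity are hypothetical placeholders, not estimates from the study.

```python
# Discrete logistic growth: a few founders approaching carrying capacity.
def logistic_step(n, r, k):
    """One breeding season with intrinsic growth rate r and carrying capacity k."""
    return n + r * n * (1 - n / k)

n, r, k = 6.0, 0.8, 60.0      # hypothetical: 6 founders, high growth rate, K = 60
for year in range(10):
    n = logistic_step(n, r, k)
print(round(n))               # the population plateaus near k within a decade
```

With a high observed growth rate and short generation time, even a small founder group saturates the habitat quickly, which is the dynamic the telemetry and litter data document.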
Procedia PDF Downloads 219