Search results for: monitoring network optimization
1212 The Characterization and Optimization of Bio-Graphene Derived From Oil Palm Shell Through Slow Pyrolysis Environment and Its Electrical Conductivity and Capacitance Performance as Electrodes Materials in Fast Charging Supercapacitor Application
Authors: Nurhafizah Md. Disa, Nurhayati Binti Abdullah, Muhammad Rabie Bin Omar
Abstract:
This research intends to address the existing knowledge gap arising from the lack of substantial studies on fabricating and characterizing bio-graphene created from Oil Palm Shell (OPS) through pre-treatment and slow pyrolysis. By fabricating bio-graphene from OPS, a novel material can be procured and used for graphene-based research. The produced bio-graphene is intended to possess the characteristic hexagonal graphene pattern and graphene properties comparable to previously fabricated graphene. The OPS will be pre-treated with zinc chloride (ZnCl₂) and iron (III) chloride (FeCl₃), after which the bio-graphene will be induced thermally by slow pyrolysis. The pyrolyzer's final temperature, heating rate, and residence time will be set at 550 °C, 5 °C/min, and 1 hour, respectively. Finally, the charred product will be washed with hydrochloric acid (HCl) to remove metal residue. The obtained bio-graphene will undergo different analyses to investigate the physicochemical properties of the two-dimensional layer of sp² hybridized carbon atoms in a hexagonal lattice structure. The analyses to be carried out are Raman spectroscopy, UV-visible spectroscopy (UV-VIS), Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM), and X-Ray Diffraction (XRD). Raman spectroscopy is used to analyze the three key peaks found in graphene, namely the D, G, and 2D peaks, which will evaluate the quality of the bio-graphene structure and the number of layers generated. To compare and strengthen the layer determination, UV-VIS may be used to corroborate the layer results and also to characterize the type of graphene procured. A clear physical image of the graphene can be obtained by TEM analysis, in order to study structural quality and layer condition, and by SEM analysis, in order to study surface quality and the repeating porosity pattern.
Lastly, XRD will be used to establish the crystallinity of the produced bio-graphene and, simultaneously, the oxygen contamination factor and thus the pristineness of the graphene. In conclusion, this study is able to obtain bio-graphene from OPS as a novel material via pre-treatment with ZnCl₂ and FeCl₃ and slow pyrolysis, and to provide a characterization analysis of bio-graphene that will be beneficial for future graphene-related applications. The characterization should yield findings similar to previous papers so as to confirm graphene quality.
Keywords: oil palm shell, bio-graphene, pre-treatment, slow pyrolysis
Procedia PDF Downloads 84
1211 Strategies for Improving and Sustaining Quality in Higher Education
Authors: Anshu Radha Aggarwal
Abstract:
Higher Education (HE) in India has experienced a series of remarkable changes over the last fifteen years as successive governments have sought to make the sector more efficient and more accountable for the investment of public funds. Rapid expansion in student numbers and pressure to widen participation among non-traditional students are key challenges facing HE. Learning outcomes can act as a benchmark for assuring quality and efficiency in HE, and they also enable universities to describe courses in an unambiguous way so as to demystify (and open up) education to a wider audience. This paper examines how learning outcomes are used in HE and evaluates the implications for curriculum design and student learning. There has been huge expansion in the field of higher education, both technical and non-technical, in India during the last two decades, and this trend is continuing. It is expected that about another 400 colleges and 300 universities will be created by the end of the 13th Plan period. This has led to many concerns about the quality of education and training of our students. Many studies have brought out the issues ailing our curricula, delivery, monitoring, and assessment. The Government of India (via MHRD, UGC, NBA, etc.) has initiated several steps to improve the quality of higher education and training, such as the National Skills Qualification Framework and making accreditation of institutions mandatory in order to receive government grants. Moreover, Outcome-Based Education and Training (OBET) has also been mandated and encouraged in teaching/learning institutions. MHRD, UGC, and NBA have made accreditation of schools, colleges, and universities mandatory w.e.f. Jan 2014. The OBET approach is learner-centric, whereas the traditional approach has been teacher-centric.
OBET is a process that involves re-orienting/restructuring the curriculum, implementation, assessment/measurement of educational goals, and achievement of higher-order learning, rather than merely clearing/passing university examinations. OBET aims to bring about these desired changes within students by increasing knowledge, developing skills, influencing attitudes, and creating a social-connect mindset. This approach has been adopted by several leading universities and institutions in advanced countries around the world. The objectives of this paper are to highlight the issues concerning quality in higher education and quality frameworks, to deliberate on the various education and training models, to explain the outcome-based education and assessment processes, to provide an understanding of the NAAC and outcome-based accreditation criteria and processes, and to share a best-practice outcome-based accreditation system and process.
Keywords: learning outcomes, curriculum development, pedagogy, outcome based education
Procedia PDF Downloads 524
1210 Cumulative Pressure Hotspot Assessment in the Red Sea and Arabian Gulf
Authors: Schröde C., Rodriguez D., Sánchez A., Abdul Malak, Churchill J., Boksmati T., Alharbi, Alsulmi H., Maghrabi S., Mowalad, Mutwalli R., Abualnaja Y.
Abstract:
Formulating a strategy for the sustainable development of the Kingdom of Saudi Arabia’s coastal and marine environment is at the core of the “Marine and Coastal Protection Assessment Study for the Kingdom of Saudi Arabia Coastline (MCEP)”, which was set up in the context of Vision 2030 by the Saudi Arabian government and aimed at providing a first comprehensive ‘Status Quo Assessment’ of the Kingdom’s marine environment to inform a sustainable development strategy and serve as a baseline for future monitoring activities. This baseline assessment relied on scientific evidence of the drivers, pressures, and their impact on the environments of the Red Sea and Arabian Gulf. A key element of the assessment was the cumulative pressure hotspot analysis developed for both national waters of the Kingdom, following the principles of the Driver-Pressure-State-Impact-Response (DPSIR) framework and using the cumulative pressure and impact assessment (CPIA) methodology. The ultimate goals of the analysis were to map and assess the main hotspots of environmental pressures and to identify priority areas for further field surveillance and for urgent management actions. The study identified maritime transport, fisheries, aquaculture, oil, gas, energy, coastal industry, coastal and maritime tourism, and urban development as the main drivers of pollution in Saudi Arabian marine waters. For each of these drivers, pressure indicators were defined to spatially assess the potential influence of the drivers on the coastal and marine environment. A list of 90 hotspot locations was identified based on the assessment. By spatial grouping, this list could be reduced to 10 hotspot areas: two in the Arabian Gulf and eight in the Red Sea. The hotspot mapping revealed clear spatial patterns of drivers, pressures, and hotspots within the marine environment of waters under KSA’s maritime jurisdiction in the Red Sea and Arabian Gulf.
The cascading assessment approach based on the DPSIR framework ensured that the root causes of the hotspot patterns, i.e., the human activities and other drivers, could be identified. The adapted CPIA methodology allowed the available data to be combined to spatially assess the cumulative pressure in a consistent manner, and to identify the most critical hotspots by determining the overlap of cumulative pressure with areas of sensitive biodiversity. Further improvements are expected from enhancing the data sources of drivers and pressure indicators, fine-tuning the decay factors and distances of the pressure indicators, and including trans-boundary pressures across the regional seas.
Keywords: Arabian Gulf, DPSIR, hotspot, Red Sea
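At its core, the cumulative pressure and impact assessment described in this abstract is a weighted spatial overlay. The following is a minimal sketch of that idea, assuming hypothetical pressure layers, weights, and a sensitivity grid; none of these names or values come from the study itself.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)  # a tiny spatial grid for illustration

# Hypothetical normalized pressure layers on a shared grid; the study's
# actual indicators, weights, and decay distances are not reproduced here.
pressures = {
    "maritime_transport": rng.random(shape),
    "fisheries": rng.random(shape),
    "urban_development": rng.random(shape),
}
weights = {"maritime_transport": 1.0, "fisheries": 0.8, "urban_development": 1.2}

# Habitat sensitivity per cell, normalized to [0, 1].
sensitivity = rng.random(shape)

# Cumulative impact: weighted sum of pressures, scaled by sensitivity.
cumulative = sensitivity * sum(weights[k] * pressures[k] for k in pressures)

# Hotspots: cells whose cumulative impact falls in the top decile.
hotspots = cumulative >= np.quantile(cumulative, 0.9)
print(int(hotspots.sum()), "hotspot cells")
```

In a real assessment, each layer would be rasterized from mapped driver locations with distance-decay functions before this overlay step.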
Procedia PDF Downloads 141
1209 Technology Road Mapping in the Fourth Industrial Revolution: A Comprehensive Analysis and Strategic Framework
Authors: Abdul Rahman Hamdan
Abstract:
The Fourth Industrial Revolution (4IR) has brought unprecedented technological advancements that have disrupted many industries worldwide. To keep up with these advances and the rapid disruption they cause, technology road mapping has emerged as one of the critical tools for organizations to leverage. Technology road mapping can guide companies to become more adaptable, to anticipate future transformation and innovation, and to avoid becoming redundant or irrelevant due to the rapid changes in technological advancement. This research paper provides a comprehensive analysis of technology road mapping within the context of the 4IR. The objective of the paper is to provide companies with practical insights and a strategic framework of technology road mapping so that they can navigate the fast-changing nature of the 4IR. This study also contributes to the understanding and practice of technology road mapping in the 4IR and, at the same time, provides organizations with the necessary tools and critical insight to navigate the 4IR transformation by leveraging technology road mapping. Based on a literature review and case studies, the study analyses key principles, methodologies, and best practices in technology road mapping and integrates them with the unique characteristics and challenges of the 4IR. The paper gives the background of the Fourth Industrial Revolution. It explores the disruptive potential of technologies in the 4IR and the critical need for technology road mapping, consisting of strategic planning and foresight, to remain competitive and relevant in the 4IR era. It also highlights the importance of technology road mapping as an organisation’s proactive approach to aligning its objectives and resources with its technology and product development in meeting the fast-evolving 4IR technological landscape.
The paper also covers the theoretical foundations of technology road mapping, examines various methodological approaches, and identifies stakeholders in the process, such as external experts, collaborative platforms, and cross-functional teams, to ensure an integrated and robust technology roadmap for the organisation. Moreover, this study presents a comprehensive framework for technology road mapping in the 4IR by incorporating key elements and processes such as technology assessment, competitive intelligence, risk analysis, and resource allocation. It provides a framework for implementing technology road mapping from strategic planning, goal setting, and technology scanning to roadmap visualisation, implementation planning, monitoring, and evaluation. In addition, the study addresses the challenges and limitations of technology road mapping in the 4IR, including gap analysis. In conclusion, the study proposes a set of practical recommendations for organizations that intend to leverage technology road mapping as a strategic tool in the 4IR to drive innovation and become competitive in current and future ecosystems.
Keywords: technology management, technology road mapping, technology transfer, technology planning
Procedia PDF Downloads 69
1208 Mapping Forest Biodiversity Using Remote Sensing and Field Data in the National Park of Tlemcen (Algeria)
Authors: Bencherif Kada
Abstract:
In forest management practice, landscape and Mediterranean forest are rarely treated as linked objects. But sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscape units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classifications, the landscape biodiversity of the National Park of Tlemcen (Algeria). The methodology used is based on ground data and on the basic processing units of object-oriented classification, namely segments, so-called image objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper Plus (ETM+) imagery is performed on image objects, not on pixels. The advantages of object-oriented classification are the full use of meaningful statistics and texture calculation, uncorrelated shape information (e.g., length-to-width ratio, direction, and area of an object), topological features (neighbor, super-object, etc.), and the close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbors method is more efficient than per-pixel classification. It simplifies the content of the image while preserving spectrally and spatially homogeneous land-cover types such as Aleppo pine stands, cork oak groves, mixed groves of cork oak, holm oak, and zen oak, mixed groves of holm oak and thuja, water bodies, dense and open shrublands of oaks, vegetable crops or orchards, herbaceous plants, and bare soils. Texture attributes seem to provide no useful information, while spatial attributes of shape and compactness seem to perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscape subunits are individualized while conserving the spatial information.
Continuously dominant dense stands over a large area were grouped into a single class, as were dense fragmented stands with clear stands. Low shrubland formations and high wooded shrublands are well individualized, but with some confusion with enclaves for the former. Overall, a visual evaluation shows that the classification reflects the actual spatial state of the study area at the landscape level.
Keywords: forest, oaks, remote sensing, biodiversity, shrublands
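As a rough illustration of the per-object k-nearest neighbors classification described in this abstract, the sketch below classifies image objects from hand-made feature vectors (two spectral means plus a shape attribute such as length-to-width ratio). The feature values and class set are invented for the example and are not the study's training data.

```python
import numpy as np

# Each row describes one segment (image object): [band-1 mean, band-2 mean,
# length-to-width ratio]. Values and labels are illustrative assumptions.
train_X = np.array([
    [0.62, 0.30, 1.2],   # Aleppo pine stand
    [0.58, 0.28, 1.1],   # Aleppo pine stand
    [0.45, 0.50, 2.0],   # cork oak grove
    [0.43, 0.52, 2.1],   # cork oak grove
    [0.20, 0.15, 1.0],   # bare soil
    [0.22, 0.12, 0.9],   # bare soil
])
train_y = np.array(["pine", "pine", "cork_oak", "cork_oak", "bare", "bare"])

def knn_predict(x, k=3):
    """Classify one object by majority vote among its k nearest neighbors."""
    d = np.linalg.norm(train_X - x, axis=1)      # Euclidean feature distance
    nearest = train_y[np.argsort(d)[:k]]         # labels of the k closest
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

print(knn_predict(np.array([0.60, 0.29, 1.15])))  # a pine-like object
```

The same vote applies unchanged whether the feature vector describes a pixel or a segment; the gain of the object-based approach is that the features are segment statistics rather than single-pixel values.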
Procedia PDF Downloads 31
1207 An Integrated Power Generation System Design Developed between Solar Energy-Assisted Dual Absorption Cycles
Authors: Asli Tiktas, Huseyin Gunerhan, Arif Hepbasli
Abstract:
Solar energy, with its abundant and clean features, is one of the prominent renewable energy sources in multigeneration energy systems where various outputs, especially power generation, are produced together. In the literature, concentrated solar energy systems, an expensive technology, are mostly used in solar power plants where medium- to high-capacity production outputs are achieved. In addition, although different methods have been developed and proposed for solar-energy-supported integrated power generation systems by different investigators, absorption technology, which is one of the key points of the present study, has been used in these studies mostly for cooling. Unlike these common uses, this study designs a system in which a flat plate solar collector (FPSC), a Rankine cycle, an absorption heat transformer (AHT), and an absorption cooling system (ACS) are integrated. The proposed system aims to produce medium- to high-capacity electricity, heating, and cooling outputs using a technique different from the literature, at lower production costs than existing systems. With the proposed integrated system design, production costs are expected to be 5-10% of the average production costs of similar-scale systems, which are 0.685 USD/kWh, 0.247 USD/kWh, and 0.342 USD/kWh for electricity, heating, and cooling, respectively. In the proposed design, this will be achieved by first increasing the outlet temperature of the AHT and FPSC system, expanding the high-temperature steam coming out of the absorber of the AHT system in the turbine to the condenser temperature of the ACS system, then directly integrating it into the evaporator of this system, and finally completing the AHT cycle. Through this proposed system, heating and cooling will be carried out by completing the AHT and ACS cycles, respectively, while power generation will be provided by the expansion in the turbine.
Using only a single generator to produce these three outputs together also saves the cost of additional boilers and the need for a separate heat source. To demonstrate that the proposed system offers a more optimal solution, the techno-economic parameters obtained from energy, exergy, economic, and environmental analyses were compared with the parameters of similar-scale systems in the literature. The design parameters of the proposed system were determined through a parametric optimization study to exceed the efficiency and effectiveness values of the compared systems and to reduce their production cost rates.
Keywords: solar energy, absorption technology, Rankine cycle, multigeneration energy system
Procedia PDF Downloads 58
1206 Optimization of Adsorptive Removal of Common Used Pesticides Water Wastewater Using Golden Activated Charcoal
Authors: Saad Mohamed Elsaid, Nabil Anwar, Mahmoud Rushdi
Abstract:
One of the reasons for the intensive use of pesticides is to protect agricultural crops and orchards from pests and agricultural worms. The period of time that pesticides stay in the soil is estimated at about 2 to 12 weeks. Perhaps the most important cause of groundwater pollution is the easy leakage of these harmful pesticides from the soil into the aquifers. This research aims to find the best way to use commercial activated charcoal treated with gold nitrate solution for removing deadly pesticides from aqueous solution by adsorption. The pesticides most used in Egypt were selected: Malathion, Methomyl, Abamectin, and Thiamethoxam. Activated charcoal doped with gold ions was prepared by applying chemical and thermal treatments to activated charcoal using gold nitrate solution. Adsorption of the studied pesticides onto activated carbon/Au occurred mainly by chemical adsorption, forming a complex with the gold metal immobilized on the activated carbon surfaces. In addition, the gold atom is considered a catalyst for cracking the pesticide molecule. Gold-activated charcoal is a low-cost material because very low concentrations of gold nitrate solution are used. The great ability of the activated charcoal to remove the selected pesticides is attributed to the positive charge of the gold ion, in addition to other active groups such as oxygen functional groups and lignin cellulose. The presence of pores of different sizes on the surface of the activated charcoal is the driving force behind the good adsorption efficiency for the removal of the pesticides under study. The surface area of the prepared char, as well as its active groups, was determined using infrared spectroscopy and scanning electron microscopy.
Factors affecting the capacity of the activated charcoal were examined in order to reach the highest adsorption capacity, such as the weight of the charcoal, the concentration of the pesticide solution, the contact time, and the pH. Experiments showed that the maximum adsorption of the selected insecticides in the batch adsorption study occurred at a contact time of 80 minutes and pH 7.70. These promising results were confirmed by equilibrium, kinetic, and thermodynamic studies of the various operating factors; the Langmuir model described the effectiveness of the adsorbent material, with adsorption capacities higher than those of most other adsorbents.
Keywords: waste water, pesticides pollution, adsorption, activated carbon
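For context, the Langmuir isotherm mentioned in this abstract relates the equilibrium concentration Ce to the adsorbed amount qe via qe = q_max·K_L·Ce / (1 + K_L·Ce). The sketch below fits the linearized form Ce/qe = Ce/q_max + 1/(K_L·q_max) to synthetic data; the q_max and K_L values are invented for illustration, not the study's measurements.

```python
import numpy as np

# Synthetic equilibrium data generated from a known Langmuir isotherm
# (q_max = 50 mg/g, K_L = 0.2 L/mg); values are illustrative only.
q_max_true, K_L_true = 50.0, 0.2
Ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])          # mg/L at equilibrium
qe = q_max_true * K_L_true * Ce / (1.0 + K_L_true * Ce)   # mg/g adsorbed

# Linearized Langmuir form: Ce/qe = Ce/q_max + 1/(K_L * q_max),
# so a straight-line fit of Ce/qe against Ce recovers both parameters.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1.0 / slope
K_L_fit = 1.0 / (intercept * q_max_fit)

print(round(q_max_fit, 1), round(K_L_fit, 2))  # recovers 50.0 and 0.2
```

With measured (Ce, qe) pairs from a batch study, the same two-parameter fit yields the maximum adsorption capacity that the abstract uses to compare adsorbents.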
Procedia PDF Downloads 74
1205 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
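As an illustration of the modeling approach this abstract describes, the sketch below trains a Random Forest regressor on synthetic project data and reads off feature importances; the feature names, data-generating process, and hyperparameters are assumptions for the example, not the study's dataset or configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic projects: cost overrun (%) driven by scope changes and
# material-delivery delays, with project size as an uninformative feature.
rng = np.random.default_rng(42)
n = 200
scope_changes = rng.integers(0, 10, n)
delivery_delay_days = rng.integers(0, 30, n)
project_size_musd = rng.uniform(1, 50, n)
X = np.column_stack([scope_changes, delivery_delay_days, project_size_musd])
y = 2.0 * scope_changes + 0.5 * delivery_delay_days + rng.normal(0, 1, n)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Feature importances highlight the dominant cost drivers, mirroring the
# abstract's point about identifying scope changes and delivery delays.
for name, imp in zip(["scope_changes", "delivery_delay", "project_size"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

On this synthetic data, the irrelevant project-size feature receives near-zero importance, which is the property the study relies on to surface key risk factors.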
Procedia PDF Downloads 40
1204 The Effect of Corporate Governance to Islamic Banking Performance Using Maqasid Index Approach in Indonesia
Authors: Audia Syafa'atur Rahman, Rozali Haron
Abstract:
The practices of Islamic banking are often more attuned to the goal of profit maximization than to obtaining ethical profit. Ethical profit is obtained from interest-free earnings and gives an impact that benefits the growth of society and the economy. Good corporate governance practices are needed to assure the sustainability of Islamic banks in achieving Maqasid Shariah, whose main purpose is boosting the well-being of people. The Maqasid Shariah performance measurement is used to measure the duties and responsibilities expected to be performed by Islamic banks. It covers not only a single dimension, such as financial measurement, but many dimensions that reflect the main purpose of Islamic banks. The implementation of good corporate governance is essential because it covers the interests of the stakeholders and facilitates effective monitoring, encouraging Islamic banks to utilize resources more efficiently in order to achieve Maqasid Shariah. This study aims to provide empirical evidence on the Maqasid performance of Islamic banks using the Maqasid performance evaluation model and to examine the influence of Shariah Supervisory Board (SSB) characteristics and board structures on Islamic banks' performance as measured by this model. Using the simple additive weighting method, the Maqasid index for all Islamic banks in Indonesia from 2012 to 2016 ranged from about 11% to 28%. Maqasid Shariah performance index values above 20% were obtained by Islamic banks such as Bank Muamalat Indonesia (BMI), Bank Panin Syariah, and Bank BRI Syariah; BMI consistently achieved above 23%. Other Islamic banks, such as Bank Victoria Syariah, Bank Jabar Banten Syariah, Bank BNI Syariah, Bank Mega Syariah, BCA Syariah, and Maybank Syariah Indonesia, show a fluctuating Maqasid performance index from year to year.
The impact of SSB characteristics and board structures is tested using random-effects generalized least squares. The findings indicate that SSB characteristics (SSB size, SSB cross membership, SSB education, and SSB reputation) and board structures (board size and board independence) play an essential role in improving the performance of Islamic banks. Specifically, the findings denote that a smaller Shariah Supervisory Board, a higher proportion of SSB cross memberships, fewer SSB members holding doctorate degrees, fewer reputable scholars, more members on the board of directors, and fewer independent non-executive directors enhance the performance of Islamic banks.
Keywords: Maqasid Shariah, corporate governance, Islamic banks, Shariah supervisory board
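The simple additive weighting step mentioned in this abstract amounts to a weighted sum of normalized performance ratios per bank. Below is a minimal sketch of that computation, with invented dimension names, weights, and ratios rather than the study's actual measurement model.

```python
# Hypothetical objective weights summing to 1 (illustrative, not the
# study's calibrated Maqasid measurement model).
weights = {"education": 0.30, "justice": 0.41, "welfare": 0.29}

# Hypothetical normalized performance ratios per bank and objective.
banks = {
    "Bank A": {"education": 0.12, "justice": 0.55, "welfare": 0.20},
    "Bank B": {"education": 0.08, "justice": 0.40, "welfare": 0.15},
}

def maqasid_index(ratios):
    """Simple additive weighting: weighted sum of the normalized ratios."""
    return sum(weights[k] * ratios[k] for k in weights)

for bank, ratios in banks.items():
    print(f"{bank}: {maqasid_index(ratios):.3f}")
```

Ranking banks then reduces to sorting on this index, which is how the abstract's 11%-28% range and per-bank comparisons are obtained.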
Procedia PDF Downloads 240
1203 Synthesis and Properties of Oxidized Corn Starch Based Wood Adhesive
Authors: Salise Oktay, Nilgun Kizilcan, Basak Bengu
Abstract:
At present, formaldehyde-based adhesives such as urea-formaldehyde (UF), melamine-formaldehyde (MF), and melamine-urea-formaldehyde (MUF) are mostly used in the wood-based panel industry because of their high reactivity, chemical versatility, and economic competitiveness. However, formaldehyde-based wood adhesives are produced from non-renewable resources, and formaldehyde is classified as a probable human carcinogen (Group B1) by the U.S. Environmental Protection Agency (EPA). Therefore, there has been growing interest in the development of environmentally friendly, economically competitive, bio-based wood adhesives that meet the requirements of the wood-based panel industry. In this study, an oxidized starch-urea wood adhesive was synthesized as a formaldehyde-free adhesive. In this scope, acid hydrolysis of corn starch was first conducted, and the acid-thinned corn starch was then oxidized using hydrogen peroxide as the oxidizer and CuSO₄ as the catalyst. Secondly, the polycondensation reaction between the oxidized starch and urea was conducted. Finally, nano-TiO₂ was added to the reaction system to strengthen the adhesive network. Solid content, viscosity, and gel time analyses of the prepared adhesive were performed to evaluate its processability. FTIR, DSC, TGA, and SEM characterization techniques were used to investigate the chemical structure, thermal properties, and morphology of the adhesive, respectively. Rheological analysis of the adhesive was also performed. In order to evaluate the quality of the oxidized corn starch-urea adhesive, particleboards were produced at laboratory scale, and the mechanical and physical properties of the boards, such as internal bond strength, modulus of rupture, modulus of elasticity, and formaldehyde emission, were investigated.
The obtained results revealed that the oxidized starch-urea adhesive was synthesized successfully and that it can be a good candidate for use in the wood-based panel industry with some further development.
Keywords: nano-TiO₂, corn starch, formaldehyde emission, wood adhesives
Procedia PDF Downloads 151
1202 Research on Innovation Service based on Science and Technology Resources in Beijing-Tianjin-Hebei
Authors: Runlian Miao, Wei Xie, Hong Zhang
Abstract:
In China, Beijing-Tianjin-Hebei is regarded as a strategically important region because it enjoys the highest levels of economic development, opening up, innovative capacity, and population. The integrated development of the Beijing-Tianjin-Hebei region has been increasingly emphasized by the government in recent years. In 2014, it became one of the national development strategies of the Chinese central government, and in 2015 the Coordinated Development Planning Compendium for the Beijing-Tianjin-Hebei Region was approved. Such decisions signify that the Beijing-Tianjin-Hebei region will lead innovation-driven economic development in China. As an essential factor in achieving national innovation-driven development and a significant part of the regional industry chain, the optimization of science and technology resource allocation will exert great influence on regional economic transformation, upgrading, and innovation-driven development. However, unbalanced distribution, poor sharing of resources, and the existence of information islands have produced differences in internal innovation capability, vitality, and efficiency, which have impeded the innovation and growth of the whole region. Against this background, integrating and vitalizing regional science and technology resources and then establishing a high-end, fast-responding, and precise innovation service system based on regional resources would be of great significance for the integrated development of the Beijing-Tianjin-Hebei region and even for addressing the problem of unbalanced and insufficient development in China. This research uses literature review and field investigation and applies related theories prevailing at home and abroad, centering on the service path of science and technology resources for innovation.
Based on the status quo and problems of the regional development of Beijing-Tianjin-Hebei, the author proposes, theoretically, to combine regional economics and new economic geography to explore solutions to the problem of low resource allocation efficiency. Further, the author proposes applying digital maps to resource management and building a platform for information co-building and sharing. Finally, the author presents the idea of establishing a specific service mode of ‘science and technology plus digital map plus intelligence research plus platform service’ and a suggestion for a co-building and sharing mechanism of 3 (Beijing, Tianjin, and Hebei) plus 11 (important cities in Hebei Province).
Keywords: Beijing-Tianjin-Hebei, science and technology resources, innovation service, digital platform
Procedia PDF Downloads 161
1201 A Comparison of qCON/qNOX to the Bispectral Index as Indices of Antinociception in Surgical Patients Undergoing General Anesthesia with Laryngeal Mask Airway
Authors: Roya Yumul, Ofelia Loani Elvir-Lazo, Sevan Komshian, Ruby Wang, Jun Tang
Abstract:
BACKGROUND: An objective means of monitoring the antinociceptive effects of perioperative medications has long been desired as a way to provide anesthesiologists with information regarding a patient’s level of antinociception and to preclude untoward autonomic responses and reflexive muscular movements from painful stimuli intraoperatively. To this end, electroencephalogram (EEG) based tools, including BIS and qCON, were designed to provide information about the depth of sedation, while qNOX was produced to inform on the degree of antinociception. The goal of this study was to compare the reliability of qCON/qNOX to BIS as specific indicators of response to nociceptive stimulation. METHODS: Sixty-two patients undergoing general anesthesia with LMA were included in this study. Institutional Review Board (IRB) approval was obtained, and informed consent was acquired prior to patient enrollment. Inclusion criteria included American Society of Anesthesiologists (ASA) class I-III, 18 to 80 years of age, and either gender. Exclusion criteria included the inability to consent. Withdrawal criteria included conversion to an endotracheal tube and EEG malfunction. BIS and qCON/qNOX electrodes were simultaneously placed on all patients prior to induction of anesthesia and were monitored throughout the case, along with other perioperative data, including patient response to noxious stimuli. All intraoperative decisions were made by the primary anesthesiologist without influence from qCON/qNOX. Student’s t-test, prediction probability (PK), and ANOVA were used to statistically compare each index’s relative ability to detect nociceptive stimuli. Twenty patients were included in the preliminary analysis. RESULTS: A comparison of overall intraoperative BIS, qCON, and qNOX indices demonstrated no significant difference between the three measures (N=62, p>0.05).
Meanwhile, index values for qNOX (62±18) were significantly higher than those for BIS (46±14) and qCON (54±19) immediately preceding patient responses to nociceptive stimulation in a preliminary analysis (N=20, p = 0.0408). Notably, MAP increased significantly in response to painful stimuli (from 74 ± 13 mm Hg at baseline to 84 ± 18 mm Hg during noxious stimuli, p = 0.032), while HR showed a non-significant upward trend (from 76 ± 12 BPM to 80 ± 13 BPM, p = 0.078). CONCLUSION: In this observational study, BIS and qCON/qNOX provided comparable information on patients’ level of sedation throughout the course of an anesthetic. Meanwhile, increases in qNOX values demonstrated a superior correlation to an imminent response to stimulation relative to all other indices.
Keywords: antinociception, BIS, general anesthesia, LMA, qCON/qNOX
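The prediction probability (PK) used in the study above can be illustrated as a pairwise concordance measure; the following is a minimal sketch in which the function name and the sample index/response values are hypothetical, not the study's data:

```python
from itertools import combinations

def prediction_probability(indicator, state):
    """Pairwise prediction probability (PK): over every pair of
    observations with different states, count the pairs where the
    indicator orders them correctly; ties in the indicator count half."""
    concordant = ties = total = 0
    for (i1, s1), (i2, s2) in combinations(zip(indicator, state), 2):
        if s1 == s2:
            continue  # only pairs with distinct states are informative
        total += 1
        if (i1 - i2) * (s1 - s2) > 0:
            concordant += 1
        elif i1 == i2:
            ties += 1
    return (concordant + 0.5 * ties) / total

# hypothetical index values and binary response states (1 = responded)
bis = [46, 40, 38, 60, 44, 58]
responded = [1, 0, 1, 1, 0, 1]
pk = prediction_probability(bis, responded)  # 6 of 8 informative pairs concordant
```

A PK of 1.0 would mean the index always orders responders above non-responders; 0.5 means it is no better than chance.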
Procedia PDF Downloads 137
1200 Theoretical Discussion on the Classification of Risks in Supply Chain Management
Authors: Liane Marcia Freitas Silva, Fernando Augusto Silva Marins, Maria Silene Alexandre Leite
Abstract:
The adoption of a network structure, as in supply chains, increases the dependence between companies and, by consequence, their vulnerability. Environmental disasters, sociopolitical and economic events, and the dynamics of supply chains themselves raise the uncertainty of their operation, favoring the occurrence of events that can disrupt operations and produce other undesired consequences. Supply chains are thus exposed to various risks that can influence the profitability of the companies involved, and several previous studies have proposed risk classification models in order to categorize risks and manage them. The objective of this paper is to analyze and discuss thirty of these risk classification models by means of a theoretical survey. The research method adopted for the analysis and discussion comprises three phases: the identification of the types of risks proposed in each of the thirty models, their grouping by equivalent concepts associated with their definitions, and the analysis of these risk groups, evaluating their similarities and differences. After these analyses, it was possible to conclude that there are, in fact, more than thirty risk types identified in the supply chain literature, but some of them are identical despite the distinct terms used to characterize them, because researchers adopt different criteria for risk classification. In short, some types of risks are identified as risk sources for supply chains, such as demand risk, environmental risk, and safety risk. Other types of risks are identified by the consequences they can generate for supply chains, such as reputation risk, asset depreciation risk, and competitive risk. These results reflect the disagreements among researchers on risk classification, mainly about what constitutes a risk event and what constitutes a consequence of risk occurrence.
An additional study is under development to clarify how risks are generated and which characteristics of supply chain components lead to the occurrence of risk.
Keywords: risks classification, survey, supply chain management, theoretical discussion
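The grouping phase of the survey method described above can be sketched programmatically; the equivalence mapping and model names below are hypothetical illustrations, not the paper's actual thirty models:

```python
# hypothetical equivalence mapping used to merge risk labels that
# different authors use for the same underlying concept
EQUIVALENT = {
    "demand risk": "demand risk",
    "market risk": "demand risk",
    "environmental risk": "environmental risk",
    "natural disaster risk": "environmental risk",
    "reputation risk": "reputation risk",
    "image risk": "reputation risk",
}

def group_risks(model_risks):
    """Phase 2 of the survey method: map each risk type proposed by a
    model to a canonical label, recording which models mention it."""
    groups = {}
    for model, risks in model_risks.items():
        for risk in risks:
            canonical = EQUIVALENT.get(risk, risk)
            groups.setdefault(canonical, set()).add(model)
    return groups

# two hypothetical classification models from the literature
models = {"model A": ["demand risk", "image risk"],
          "model B": ["market risk", "environmental risk"]}
groups = group_risks(models)  # "demand risk" now covers both models
```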
Procedia PDF Downloads 633
1199 Effect of Term of Preparation on Performance of Cool Chamber Stored White Poplar Hardwood Cuttings in Nursery
Authors: Branislav Kovačević, Andrej Pilipović, Zoran Novčić, Marina Milović, Lazar Kesić, Milan Drekić, Saša Pekeč, Leopold Poljaković Pajnik, Saša Orlović
Abstract:
Poplars are among the most important tree species used for phytoremediation in the northern hemisphere. They can be used either as direct “cleaners” of contaminated soils or as buffer zones preventing the spread of the contaminant plume to the surrounding environment. In order to produce appropriate planting material for this purpose, there is a long process of breeding the most favorable candidates. Although poplar propagation technology has been evolving for decades, white poplar nursery production, as well as the establishment of short-rotation coppice plantations, still considerably depends on the survival of hardwood cuttings. This is why easy rooting is among the most desirable properties in white poplar breeding. On the other hand, there are many opportunities to optimize technological procedures to meet the demands of a particular genotype (clonal technology). In this study, the effect of the term of hardwood cutting preparation on the survival and further growth of rooted cuttings of four white poplar clones was tested in nursery conditions. There were three terms of cutting preparation: the beginning of February (2nd Feb 2023), the beginning of March (3rd Mar 2023), and the end of March (21st Mar 2023), which is regarded as the standard term. The cuttings were stored in a cool chamber at 2±2°C. All cuttings were planted on the same date (11th Apr 2023), in soil prepared with rotary tillage, and then cultivated by usual nursery procedures. According to the results obtained after bud set (29th Sept 2023), there were significant differences in the survival and growth of rooted cuttings between the examined terms of cutting preparation. Also, there were significant differences in the reaction of the examined clones to the terms of cutting preparation.
Overall, the best results were obtained with cuttings prepared at the first term (2nd Feb 2023) (survival rate of 39.4%), while performance after the two later preparation terms was significantly poorer (20.5% after the second and 16.5% after the third term). These results stress the significance of dormancy preservation in cuttings of the examined white poplar clones for their survival, which could be especially important in the context of climate change. Differences in the clones’ reaction to the term of cutting preparation suggest the necessity of adjusting the technology to the needs of a particular clone, i.e., the design of clone-specific technology.
Keywords: rooting, Populus alba, nursery, clonal technology
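The significance of the differences in survival rates across preparation terms can be illustrated with a chi-square test of homogeneity; the per-term sample sizes below are hypothetical (only the survival percentages come from the abstract):

```python
import math

def chi_square_homogeneity(successes, totals):
    """Chi-square test that survival proportions are equal across three
    groups; returns (statistic, p_value). The p-value uses the closed
    form exp(-x/2), which is exact for df = 2 (i.e., three groups)."""
    assert len(successes) == len(totals) == 3
    pooled = sum(successes) / sum(totals)
    chi2 = 0.0
    for s, n in zip(successes, totals):
        exp_s, exp_f = n * pooled, n * (1 - pooled)
        chi2 += (s - exp_s) ** 2 / exp_s + ((n - s) - exp_f) ** 2 / exp_f
    return chi2, math.exp(-chi2 / 2)  # df = k - 1 = 2

# hypothetical counts: survivors out of 200 cuttings per preparation term,
# matching the reported 39.5%/20.5%/16.5% survival rates
chi2, p = chi_square_homogeneity([79, 41, 33], [200, 200, 200])
```

With samples of this size, the three proportions differ far beyond the 0.05 significance level, consistent with the study's finding.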
Procedia PDF Downloads 65
1198 Dendroremediation of a Defunct Lead Acid Battery Recycling Site
Authors: Alejandro Ruiz-Olivares, M. del Carmen González-Chávez, Rogelio Carrillo-González, Martha Reyes-Ramos, Javier Suárez Espinosa
Abstract:
Use of automobiles has increased and, proportionally, the demand for the batteries that power them. When a battery is aged, all of its materials are reused through lead acid battery recycling (LABR). Importation of used lead acid batteries into Mexico has increased in recent years since many recycling factories have been established in the country. Inadequate disposal of LABR wastes left soil severely polluted with Pb, Cu, and salts (Na⁺, SO₄²⁻, PO₄³⁻). Soil organic amendments may contribute essential nutrients and sequester (scavenge) metals to allow plant establishment. The objective of this research was to revegetate a former lead-acid battery recycling site with the aid of organic amendments. Seven tree species (Acacia farnesiana, Casuarina equisetifolia, Cupressus lusitanica, Eucalyptus obliqua, Fraxinus excelsior, Prosopis laevigata and Pinus greggii) and two organic amendments (vermicompost and a vermicompost + sawdust mixture) were tested for phytoremediation of a defunct LABR site. Plants were irrigated during the dry season. Soils were monitored during the experiment: available metals, salt concentrations, and their spatial pattern in soil were analyzed. Plant species and amendments were compared through analysis of covariance and longitudinal analysis. High concentrations of extractable (DTPA-TEA-CaCl₂) metals (up to 15,685 mg kg⁻¹ for Pb and 478 mg kg⁻¹ for Cu) and soluble salts (292 mg kg⁻¹ for PO₄³⁻ and 23,578 mg kg⁻¹ for SO₄²⁻) were found in the soil three and six months after setting up the experiment. Lead and Cu concentrations were depleted in the rhizosphere after amendment addition. The spatial pattern of PO₄³⁻, SO₄²⁻, and DTPA-extractable Pb and Cu changed slightly through time. In spite of the extreme soil conditions, the planted species A. farnesiana, E. obliqua, C. equisetifolia and F. excelsior had 100% survival. Available metals and salts affected each species differently.
In addition, a negative effect on growth due to Pb accumulated in shoots was observed only in C. lusitanica. Many specimens accumulated high concentrations of Pb (>1,000 mg kg⁻¹) in shoots. C. equisetifolia and C. lusitanica had the best growth rates. Based on the results, all the evaluated species may be useful for revegetation of Pb-polluted soils. Besides their use in phytoremediation, some ecosystem services can be obtained from the woodland, such as encouraging wildlife, wood production, and carbon sequestration. Further research should be conducted to analyze these services.
Keywords: heavy metals, inadequate disposal, organic amendments, phytoremediation with trees
Procedia PDF Downloads 285
1197 Shift from Distance to In-Person Learning of Indigenous People’s Schools during the COVID 19 Pandemic: Gains and Challenges
Authors: May B. Eclar, Romeo M. Alip, Ailyn C. Eay, Jennifer M. Alip, Michelle A. Mejica, Eloy C. Eclar
Abstract:
The COVID-19 pandemic has significantly changed the educational landscape of the Philippines. The groups most affected by these changes are the poor and those living in Geographically Isolated and Depressed Areas (GIDA), such as the Indigenous Peoples (IP). This was heavily experienced by the ten IP schools in Zambales, a province in the country. With this in mind, plus other factors relative to safety, the Schools Division of Zambales selected these ten schools for the pilot implementation of in-person classes two (2) years after the country-wide school closures. This study aimed to explore the lived experiences of the school heads of the first ten Indigenous People’s (IP) schools that shifted from distance learning to limited in-person learning, including the challenges met and the coping mechanisms they set to overcome them. The study is linked to experiential learning theory, as it focuses on the idea that the best way to learn is through experience. It made use of qualitative research, specifically phenomenology. All ten school heads of the IP schools were chosen as participants. Participants then underwent semi-structured interviews, both individual and focus group discussions, for triangulation. Data were analyzed through thematic analysis. The study found that most IP schools did not struggle to convince parents to send their children back to school, as the parents downplayed the pandemic threat due to their geographical location. The parents struggled the most during modular learning, since many of them are illiterate, too old to teach their children, busy with their lands, or have too many children to teach. Moreover, there is a meager vaccination rate in the ten barangays where the schools are located because of local beliefs.
In terms of financial needs, school heads did not find it difficult, even though funding was needed to adjust the schools to the new normal, because of the financial support coming from the central office. Technical assistance was also provided to the schools by division personnel. Teachers likewise welcomed the idea of shifting back to in-person classes; minor challenges were met but were solved immediately through various mechanisms. Learning losses were evident, since most learners struggled with essential reading, writing, and counting skills. Although the community has positively received the conduct of in-person classes, the challenges these IP schools had been experiencing pre-pandemic were also exacerbated by the school closures. It is therefore recommended that constant monitoring and provision of support continue in order to solve the other challenges the ten IP schools are still experiencing with in-person classes.
Keywords: in-person learning, indigenous peoples, phenomenology, Philippines
Procedia PDF Downloads 110
1196 The Phenomenon of the Seawater Intrusion with Fresh Groundwater in the Arab Region
Authors: Kassem Natouf, Ihab Jnad
Abstract:
In coastal aquifers, the interface between fresh groundwater and salty seawater may shift inland, reaching coastal wells and increasing the salinity of the water they pump, putting them out of service. Many Arab coastal sites suffer from this phenomenon due to increased pumping of coastal groundwater. This research aims to prepare a comprehensive study describing the common characteristics of seawater intrusion into coastal freshwater aquifers in the Arab region, along with its general and specific causes and negative effects, in a way that contributes to overcoming the phenomenon and to exchanging expertise between Arab countries in studying and analyzing it. This research also aims to build geographic and relational databases for the data, information, and studies available in Arab countries about seawater intrusion, so as to provide what is necessary for managing groundwater resources on Arab coasts, including studying the effects of climate change on these resources and helping decision-makers develop executive programs to overcome seawater intrusion. The research relied on a methodology of analysis and comparison: the available information and data about the phenomenon in the Arab region were collected; the collected information and data were then studied and analyzed, and the causes of the phenomenon in each case, its results, and solutions for prevention were stated. Finally, the different cases were compared, the common causes, results, and treatment methods were deduced, and a technical report summarizing the findings was prepared.
To overcome the phenomenon of seawater intrusion with fresh groundwater: (1) It is necessary to develop efforts to monitor the quantity and quality of groundwater on the coasts and to develop mathematical models to predict the impact of climate change, sea level rise, and human activities on coastal groundwater. (2) Over-pumping of coastal aquifers is an important cause of seawater intrusion. To mitigate this problem, Arab countries should reduce groundwater pumping and promote rainwater harvesting, surface irrigation, and water recycling practices. (3) Artificial recharge of coastal groundwater with various forms of water, whether fresh or treated, is a promising technology to mitigate the effects of seawater intrusion.
Keywords: coastal aquifers, seawater intrusion, fresh groundwater, salinity increase, Arab region, groundwater management, climate change effects, sustainable water practices, over-pumping, artificial recharge, monitoring and modeling, data databases, groundwater resources, negative effects, comparative analysis, technical report, water scarcity, groundwater quality, decision-making, environmental impact, agricultural practices
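The inland shift of the interface under over-pumping can be illustrated with the classical Ghyben-Herzberg approximation; this is a simplified sharp-interface sketch (illustrative heads and standard densities), not a substitute for the mathematical models recommended above:

```python
def ghyben_herzberg_depth(head_m, rho_fresh=1000.0, rho_sea=1025.0):
    """Ghyben-Herzberg approximation: depth (below sea level) of the
    freshwater-seawater interface for a given freshwater head above
    sea level, z = h * rho_f / (rho_s - rho_f). With typical densities
    the ratio is ~40, so every metre of head lost to pumping raises
    the interface by roughly 40 m."""
    return head_m * rho_fresh / (rho_sea - rho_fresh)

# a 0.5 m water-table drawdown (e.g. from over-pumping) lifts the
# interface by roughly 20 m toward the screens of coastal wells
before = ghyben_herzberg_depth(2.0)   # interface ~80 m below sea level
after = ghyben_herzberg_depth(1.5)    # interface ~60 m below sea level
```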
Procedia PDF Downloads 38
1195 Roundabout Implementation Analyses Based on Traffic Microsimulation Model
Authors: Sanja Šurdonja, Aleksandra Deluka-Tibljaš, Mirna Klobučar, Irena Ištoka Otković
Abstract:
Roundabouts are a common choice when reconstructing an intersection, whether to improve its capacity or traffic safety, especially in urban conditions. Regulations for the design of roundabouts are often related to driving culture, the tradition of using this type of intersection, etc. Individual values in the regulation are usually recommended within a wide range (this is the case in Croatian regulation), and the final design of a roundabout largely depends on the designer's experience and his/her choice of design elements. Therefore, before-after analyses are a good way to monitor the performance of roundabouts and possibly improve the recommendations of the regulation. This paper presents a comprehensive before-after analysis of a roundabout on the country road network near Rijeka, Croatia. The analysis is based on a thorough collection of traffic data (operating speeds and traffic load) and design element data, both before and after the reconstruction into a roundabout. At the chosen location, the roundabout solution aimed to improve capacity and traffic safety, so the paper analyzed the collected data to see whether the roundabout achieved the expected effect. A traffic microsimulation model (VISSIM) of the roundabout was created based on the collected real-world data, and the influence of increased traffic load, different traffic structures, and the selected design elements on the capacity of the roundabout was analyzed. Also, through the analysis of operating speeds and potential conflicts using the Surrogate Safety Assessment Model (SSAM), the traffic safety effect of the roundabout was analyzed. The results of this research show the practical value of before-after analysis as an indicator of roundabout effectiveness at a specific location.
The application of a microsimulation model provides a practical method for analyzing intersection functionality from a capacity and safety perspective under present and changed traffic and design conditions.
Keywords: before-after analysis, operating speed, capacity, design
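The conflict detection that SSAM performs on simulated trajectories can be sketched with the time-to-collision (TTC) indicator; the gap and speed values below are hypothetical, and 1.5 s is the commonly used default threshold rather than a value from this study:

```python
def time_to_collision(gap_m, v_follow, v_lead):
    """TTC for a car-following pair: time until collision if both
    speeds stay constant; infinite when the follower is not closing."""
    closing = v_follow - v_lead
    return gap_m / closing if closing > 0 else float("inf")

def is_conflict(gap_m, v_follow, v_lead, ttc_threshold=1.5):
    # SSAM-style rule: flag a conflict when TTC drops below the
    # threshold (1.5 s is the commonly used default)
    return time_to_collision(gap_m, v_follow, v_lead) < ttc_threshold

# hypothetical trajectory sample: 12 m gap, follower at 14 m/s,
# leader at 4 m/s -> TTC = 1.2 s, flagged as a conflict
conflict = is_conflict(12.0, 14.0, 4.0)
```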
Procedia PDF Downloads 24
1194 Dynamic Two-Way FSI Simulation for a Blade of a Small Wind Turbine
Authors: Alberto Jiménez-Vargas, Manuel de Jesús Palacios-Gallegos, Miguel Ángel Hernández-López, Rafael Campos-Amezcua, Julio Cesar Solís-Sanchez
Abstract:
An optimal wind turbine blade design must be capable of capturing as much energy as possible from the wind source available at the area of interest. Often, an optimal design implies large quantities of material and complicated processes that make the wind turbine more expensive and, therefore, less cost-effective. In the construction and installation of a wind turbine, the blades may account for up to 20% of the overall price, and they are especially important because they are part of the rotor system that transmits the energy from the wind to the power train and where the static and dynamic design loads for the whole wind turbine are produced. The aim of this work is the development of a blade fluid-structure interaction (FSI) simulation that allows the identification of the major damage zones during normal production conditions, so that better design and optimization decisions can be taken. The simulation is a dynamic case, since we have a time-history wind velocity as the inlet condition instead of a constant wind velocity. The process begins with the free-to-use software NuMAD (NREL), used to model the blade and assign material properties; the 3D model is then exported to the ANSYS Workbench platform where, before setting up the FSI system, a modal analysis is performed to identify natural frequencies and mode shapes. The FSI analysis is carried out with the two-way technique, which begins with a CFD simulation to obtain the pressure distribution on the blade surface; these results are used as the boundary condition for the FEA simulation to obtain the deformation levels for the first time-step. For the second time-step, the CFD simulation is reconfigured automatically with the next time-step inlet wind velocity and the deformation results from the previous time-step. The analysis continues this iterative cycle, solving time-step by time-step until the entire load case is completed.
This work is part of a set of projects managed by a national consortium called “CEMIE-Eólico” (Mexican Center in Wind Energy Research), created to strengthen technological and scientific capacities, promote the training of specialized human resources, and link academia with the private sector in national territory. The analysis belongs to the design of a rotor system for a 5 kW wind turbine intended to be installed at the Isthmus of Tehuantepec, Oaxaca, Mexico.
Keywords: blade, dynamic, FSI, wind turbine
Procedia PDF Downloads 482
1193 LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc
Authors: Víctor D. Rodríguez, Edith P. Estupiñán, Juan C. Martínez
Abstract:
The evolution of mobile broadband technologies has increased download rates for users of current services. The evaluation of technical parameters at the link level is of vital importance to validate the quality and veracity of the connection, thus avoiding large losses of data, time, and productivity. Some of these failures may occur between the eNodeB (Evolved Node B) and the user equipment (UE), so the link between the end device and the base station must be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that work stably for data and, on devices with that feature, VoIP (Voice over IP). This research presents a technical analysis of the connection and channeling processes between the UE and the eNodeB using the TAC (Tracking Area Code) variables, together with an analysis of performance variables (throughput, Signal to Interference and Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, where two operators were evaluated (Operator 1 and Operator 2). Once the data were obtained, an analysis of the variables was performed, determining that the data obtained in the transmission modes vary depending on the BLER (Block Error Rate), throughput, and SNR (Signal-to-Noise Ratio) parameters. For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the two operators work on different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz) like Operator 2, is reassigning traffic to a lower band, AWS (1700 MHz); the difference in signal quality with respect to Operator 2, and the difference in the transmission modes determined by the eNodeB for Operator 1, are remarkable.
Keywords: BLER, LTE, network, QualiPoc, SNR
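The link between SINR and achievable throughput analyzed above can be illustrated with a Shannon-capacity estimate; this is an idealized upper bound (real LTE rates are lower due to overhead and the discrete set of transmission modes), and the bandwidth and SINR values below are illustrative:

```python
import math

def sinr_db_to_linear(sinr_db):
    """Convert an SINR expressed in dB to a linear power ratio."""
    return 10 ** (sinr_db / 10)

def shannon_throughput_mbps(bandwidth_mhz, sinr_db):
    """Upper-bound throughput from bandwidth and SINR via the Shannon
    capacity C = B * log2(1 + SINR); actual LTE links stay below this
    because of control overhead and MCS/transmission-mode granularity."""
    return bandwidth_mhz * math.log2(1 + sinr_db_to_linear(sinr_db))

# illustrative values for a 20 MHz carrier (e.g. Band 7 at 2600 MHz)
good = shannon_throughput_mbps(20, 20)   # high SINR: ~133 Mbps bound
poor = shannon_throughput_mbps(20, 0)    # SINR = 0 dB: 20 Mbps bound
```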
Procedia PDF Downloads 115
1192 Flexible PVC Based Nanocomposites With the Incorporation of Electric and Magnetic Nanofillers for the Shielding Against EMI and Thermal Imaging Signals
Authors: H. M. Fayzan Shakir, Khadija Zubair, Tingkai Zhao
Abstract:
Electromagnetic (EM) waves are now widely used. Cell phone signals, WiFi, wireless telecommunications, and many other services use EM waves, which in turn create EM pollution. EM pollution can have serious effects on both human health and nearby electronic devices: EM waves have electric and magnetic components that disturb the flow of charged particles in both the human nervous system and electronic circuits, so the shielding of both humans and electronic devices is a prime concern today. EM waves can cause headaches, anxiety, suicidal tendencies and depression, nausea, fatigue, and loss of libido in humans, and malfunctioning in electronic devices. Polyaniline (PANI) and polypyrrole (PPY) were successfully synthesized by chemical polymerization using ammonium persulfate and DBSNa, respectively, as oxidants. Barium ferrites (BaFe) were also prepared using the co-precipitation method and calcined at 1050 °C for 8 h. Nanocomposite thin films with various combinations and compositions of polyvinylchloride (PVC), PANI, PPY, and BaFe were prepared. X-ray diffraction was first used to confirm the successful fabrication of all nanofillers, a particle size analyzer to measure the exact size, and scanning electron microscopy (SEM) for the shape. According to electromagnetic interference (EMI) theory, electrical conductivity is the prime property required for EMI shielding. The four-probe technique was then used to evaluate the DC conductivity of all samples. Samples with high concentrations of PPY and PANI exhibit remarkably increased electrical conductivity due to the formation of an interconnected network structure inside the PVC matrix, which is also confirmed by SEM analysis. Less than 1% transmission was observed in the whole NIR region (700 nm – 2500 nm), and an EMI shielding effectiveness below -80 dB was observed in the microwave region (0.1 GHz to 20 GHz).
Keywords: nanocomposites, polymers, EMI shielding, thermal imaging
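The reported transmission and shielding figures are related by the usual logarithmic definition of shielding effectiveness; a minimal sketch (the sign convention follows the abstract, where a more negative dB figure means stronger shielding):

```python
import math

def shielding_effectiveness_db(transmitted_fraction):
    """Shielding effectiveness in dB from the fraction of incident
    power transmitted: SE = -10 * log10(P_t / P_i). A 1% transmission
    corresponds to 20 dB; the -80 dB reported above corresponds to
    only 1e-8 of the incident power passing through."""
    return -10 * math.log10(transmitted_fraction)

se_1pct = shielding_effectiveness_db(0.01)   # 20 dB for 1% transmission
frac_80db = 10 ** (-80 / 10)                 # power fraction at 80 dB SE
```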
Procedia PDF Downloads 106
1191 Evaluation of Surface Roughness Condition Using App Roadroid
Authors: Diego de Almeida Pereira
Abstract:
The roughness index of a road is considered the most important parameter of pavement quality, as it is closely related to the comfort and safety of road users. This condition can be established by means of a functional evaluation of pavement surface deviations, measured by the International Roughness Index (IRI), an index that emerged from the international pavement evaluation coordinated by the World Bank; in Brazil, the current limit value for the acceptance of roads is 2.7 m/km. This work makes use of the e.IRI parameter, obtained with the Roadroid app for smartphones running the Android operating system. This application was chosen for its practical user interaction, its own cloud data storage, and the support given to universities around the world. Data were collected once a month for six months, beginning in March 2018, the rainy season, which worsens road conditions and offered the opportunity to follow both the damage and the quality of the interventions performed. About 350 kilometers of sections of four federal highways were analyzed, BR-020, BR-040, BR-060 and BR-070, which connect the Federal District (where Brasília is located) and its surroundings; these were chosen for their economic and touristic importance, two of them under federal administration and the other two under private concession. Like much of the road network, the analyzed stretches are surfaced with hot mix asphalt (HMA). Thus, the present research contrasts the comfort and safety conditions of the roads under private concession, on which users pay a fee to the concessionaires to travel on a road that should meet minimum usage requirements, with the quality of service offered on the roads under Federal Government jurisdiction.
Finally, data collected by the National Department of Transport Infrastructure (DNIT) by means of a laser profilometer are contrasted with data obtained by Roadroid, checking the app’s applicability, practicality, and cost-effectiveness in light of its limitations.
Keywords: Roadroid, international roughness index, Brazilian roads, pavement
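A comparison against the Brazilian acceptance limit cited above can be sketched as a simple classification of measured sections; the kilometre posts and e.IRI readings below are hypothetical:

```python
IRI_LIMIT_M_PER_KM = 2.7  # Brazilian road-acceptance limit cited above

def classify_sections(iri_by_km):
    """Split measured sections into those meeting the acceptance
    limit and those needing intervention."""
    ok = {km: v for km, v in iri_by_km.items() if v <= IRI_LIMIT_M_PER_KM}
    bad = {km: v for km, v in iri_by_km.items() if v > IRI_LIMIT_M_PER_KM}
    return ok, bad

# hypothetical e.IRI readings (m/km) per kilometre post on one highway
readings = {"km 10": 2.1, "km 11": 3.4, "km 12": 2.6, "km 13": 4.0}
ok, bad = classify_sections(readings)  # km 11 and km 13 exceed the limit
```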
Procedia PDF Downloads 85
1190 Characterization and Correlation of Neurodegeneration and Biological Markers of Model Mice with Traumatic Brain Injury and Alzheimer's Disease
Authors: J. DeBoard, R. Dietrich, J. Hughes, K. Yurko, G. Harms
Abstract:
Alzheimer’s disease (AD) is a predominant type of dementia and is likely a major cause of neural network impairment. The pathogenesis of this neurodegenerative disorder has yet to be fully elucidated. There are currently no known cures for the disease, and the best hope is to detect it early enough to impede its progress. Beyond age and genetics, another prevalent risk factor for AD might be traumatic brain injury (TBI), which has similar neurodegenerative hallmarks. Our research focuses on obtaining information and methods to predict when neurodegenerative effects might occur at a clinical level by observing events at a cellular and molecular level in model mice. First, we introduce our evidence that brain damage can be observed via brain imaging prior to the noticeable loss of neuromuscular control in mouse models of AD. We then show our evidence that some blood biomarkers might serve as early predictors of AD in the same model mice. We were thus interested to see whether we might be able to predict which mice would show long-term neurodegenerative effects due to differing degrees of TBI, and what level of TBI causes further damage and earlier death in the AD model mice. Upon application of TBIs via an apparatus designed to induce extremely mild to mild TBIs, wild-type (WT) mice and AD mouse models were tested for cognition, neuromuscular control, olfactory ability, blood biomarkers, and brain imaging. Experiments are still in progress, and more results are therefore forthcoming. Preliminary data suggest that neuromotor control and olfactory function diminish for both AD and WT mice after the administration of five consecutive mild TBIs. Also, seizure activity increases significantly for both AD and WT mice after the five-TBI treatment.
If future data support these findings, important implications about the effect of TBI on those at risk for AD might be possible.
Keywords: Alzheimer's disease, blood biomarker, neurodegeneration, neuromuscular control, olfaction, traumatic brain injury
Procedia PDF Downloads 141
1189 Detailed Analysis of Mechanism of Crude Oil and Surfactant Emulsion
Authors: Riddhiman Sherlekar, Umang Paladia, Rachit Desai, Yash Patel
Abstract:
A number of surfactants that exhibit ultra-low interfacial tension and excellent microemulsion phase behavior with crude oils of low to medium gravity are not sufficiently soluble at optimum salinity to produce stable aqueous solutions. Such solutions often show phase separation after a few days at reservoir temperature, which defeats the purpose, since this time is short compared to the residence time of a surfactant flood in a reservoir. The addition of polymer often exacerbates the problem, although the poor stability of the surfactant at high salinity remains the pivotal issue. Surfactants such as SDS and CTAB with large hydrophobes produce the lowest IFT but are often not sufficiently water soluble at the desired salinity; hydrophilic co-solvents and/or co-surfactants are needed to make the surfactant-polymer solution stable there. This study contrasts the effect of the addition of a co-solvent on the stability of a surfactant-oil emulsion; the idea is to use a co-surfactant to increase emulsion stability. Stability is enhanced by the creation of a microemulsion, which is verified both visually and with a particle size analyzer at varying concentrations of salinity, surfactant, and co-surfactant. The laboratory method is described in detail to permit readers to reproduce all results. The stability of the oil-water emulsion is observed with respect to time, temperature, salinity of the brine, and concentration of the surfactant. The nonionic surfactant TX-100, when used as a co-surfactant, increases the stability of the oil-water emulsion. The stability of the prepared emulsion is checked by observing the particle size distribution: for a stable emulsion, the peak of the volume% versus particle size curve should occur at a particle size of 5-50 nm, while for an unstable emulsion larger particles are observed.
UV-visible spectroscopy is also used to visualize the fraction of oil that plays an important role in the formation of micelles in a stable emulsion. This matters because the study will help decide the applicability of the surfactant-based EOR method for a reservoir containing a specific type of crude, and the use of a nonionic co-surfactant would also increase the efficiency of surfactant EOR. With the decline in oil discoveries during the last decades, it is believed that EOR technologies will play a key role in meeting the energy demand in the years to come. Taking this into consideration, the work focuses on the optimization of secondary recovery (water flooding) with the help of surfactants and/or co-surfactants by creating the desired conditions in the reservoir.
Keywords: co-surfactant, enhanced oil recovery, micro-emulsion, surfactant flooding
Procedia PDF Downloads 252
1188 Commercial Winding for Superconducting Cables and Magnets
Authors: Glenn Auld Knierim
Abstract:
Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of products. Today's HTS materials are mature and commercially promising but require manufacturing attention. Given the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to limit the stress that can crack the fragile ceramic superconductor (SC) layer and destroy the SC properties. Damage potential is highest during peak operations, where winding stress magnifies operational stress. Another challenge is that operational parameters, such as magnetic field alignment, affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets focus on simple pancake configurations; HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed-power cables. Robotic production is required for transposition: periodically swapping cable conductors and placing them into precise positions, which achieves the minimized reactance required by power utilities. A full-transposition SC cable, in theory, has no transmission length limits for AC and variable transient operation due to no resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets.
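The full-transposition idea above, in which each conductor is periodically swapped so that it occupies every position for an equal share of the cable length, can be sketched as a simple cyclic-rotation schedule. This is an illustrative model under that assumption only; the abstract does not describe the actual winding sequence used by the Infinity Physics machines.

```python
def transposition_schedule(n_conductors, n_sections):
    """positions[s][c] = position of conductor c in cable section s.

    A simple cyclic rotation: in every section each conductor shifts one
    position, so over n_conductors consecutive sections each conductor
    occupies every position exactly once (the full-transposition condition).
    """
    return [
        [(c + s) % n_conductors for c in range(n_conductors)]
        for s in range(n_sections)
    ]

sched = transposition_schedule(4, 8)
for c in range(4):
    visited = {sched[s][c] for s in range(8)}
    print(c, sorted(visited))  # each conductor visits all four positions
```

Because every conductor spends equal length in every position, the loop impedances seen by the conductors are balanced, which is the mechanism behind the minimized reactance the abstract refers to.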
Robotic winding machines combine mechanical and process design, specialized sensing and observer systems, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, to shape previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.
Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable
Procedia PDF Downloads 140
1187 Innovating Electronics Engineering for Smart Materials Marketing
Authors: Muhammad Awais Kiani
Abstract:
The field of electronics engineering plays a vital role in the marketing of smart materials. Smart materials are innovative, adaptive materials that can respond to external stimuli, such as temperature, light, or pressure, in order to enhance performance or functionality. As the demand for smart materials continues to grow, it is crucial to understand how electronics engineering can contribute to their marketing strategies. This abstract presents an overview of the role of electronics engineering in the marketing of smart materials. It explores the various ways in which electronics engineering enables the development and integration of smart features within materials, enhancing their marketability. Firstly, electronics engineering facilitates the design and development of sensing and actuating systems for smart materials. These systems enable the detection and response to external stimuli, providing valuable data and feedback to users. By integrating sensors and actuators into materials, their functionality and performance can be significantly enhanced, making them more appealing to potential customers. Secondly, electronics engineering enables the creation of smart materials with wireless communication capabilities. By incorporating wireless technologies such as Bluetooth or Wi-Fi, smart materials can seamlessly interact with other devices, providing real-time data and enabling remote control and monitoring. This connectivity enhances the marketability of smart materials by offering convenience, efficiency, and improved user experience. Furthermore, electronics engineering plays a crucial role in power management for smart materials. Implementing energy-efficient systems and power harvesting techniques ensures that smart materials can operate autonomously for extended periods. This aspect not only increases their market appeal but also reduces the need for constant maintenance or battery replacements, thus enhancing customer satisfaction. 
Lastly, electronics engineering contributes to the marketing of smart materials through innovative user interfaces and intuitive control mechanisms. By designing user-friendly interfaces and integrating advanced control systems, smart materials become more accessible to a broader range of users. Clear and intuitive controls enhance the user experience and encourage wider adoption of smart materials in various industries. In conclusion, electronics engineering significantly influences the marketing of smart materials by enabling the design of sensing and actuating systems, wireless connectivity, efficient power management, and user-friendly interfaces. The integration of electronics engineering principles enhances the functionality, performance, and marketability of smart materials, making them more adaptable to the growing demand for innovative and connected materials in diverse industries.
Keywords: electronics engineering, smart materials, marketing, power management
Procedia PDF Downloads 59
1186 Exploring Faculty Attitudes about Grades and Alternative Approaches to Grading: Pilot Study
Authors: Scott Snyder
Abstract:
Grading approaches in higher education have not changed meaningfully in over 100 years. While there is variation in the types of grades assigned across countries, most use approaches based on simple ordinal scales (e.g., letter grades). While grades are generally viewed as an indication of a student's performance, challenges arise regarding the clarity, validity, and reliability of letter grades. Research about grading in higher education has primarily focused on grade inflation, student attitudes toward grading, impacts of grades, and benefits of plus-minus letter grade systems. Little research is available about alternative approaches to grading, the varying approaches used by faculty within and across colleges, and faculty attitudes toward grades and alternative approaches to grading. To begin to address these gaps, a survey was conducted of faculty in a sample of departments at three diverse colleges in a southeastern state in the US. The survey focused on faculty experiences with and attitudes toward grading, the degree to which faculty innovate in teaching and grading practices, and faculty interest in alternatives to the point-system approach to grading. Responses were received from 104 instructors (21% response rate). The majority reported that teaching accounted for 50% or more of their academic duties. Almost all (92%) of respondents reported using point and percentage systems for their grading. While all respondents agreed that grades should reflect the degree to which objectives were mastered, half indicated that grades should also reflect effort or improvement. Over 60% felt that grades should be predictive of success in subsequent courses or real-life applications. Most respondents disagreed that grades should compare students to other students. About 42% worried about their own grade inflation and grade inflation in their college.
Only 17% disagreed that grades mean different things depending on the instructor, while 75% thought it would be good if there were agreement. Less than 50% of respondents felt that grades were directly useful for identifying students who should or should not continue, identifying strengths and weaknesses, predicting which students will be most successful, or contributing to program monitoring of student progress. Instructors were less willing to modify assessment than they were to modify instruction and curriculum. Most respondents (76%) were interested in learning about alternative approaches to grading (e.g., specifications grading). The factors most associated with willingness to adopt a new grading approach were clarity to students and simplicity of adoption. Follow-up studies are underway to investigate implementations of alternative grading approaches, expand the study to universities and departments not involved in the initial study, examine student attitudes about alternative approaches, and refine the survey's measure of attitude toward adopting alternative grading practices. Workshops on the challenges of using percentage and point systems for determining grades and on alternative approaches to grading are being offered.
Keywords: alternative approaches to grading, grades, higher education, letter grades
Procedia PDF Downloads 96
1185 Building a Framework for Digital Emergency Response System for Aged, Long Term Care and Chronic Disease Patients in Asia Pacific Region
Authors: Nadeem Yousuf Khan
Abstract:
This paper proposes the formation of a digital emergency response system (dERS) for aged, long-term care, and chronic disease settings in the post-COVID healthcare ecosystem, focusing on the Asia Pacific market, where the aging population is increasing significantly. It focuses on the use of digital technologies such as wearables, the global positioning system (GPS), and mobile applications to build an integrated care system for older adults with co-morbidities and other chronic diseases. The paper presents a conceptual framework of a connected digital health ecosystem that not only provides proactive care to registered patients but also prevents harm from sudden events such as strokes by alerting and treating patients in a digitally connected and coordinated manner. A detailed review of existing digital health technologies such as wearables, GPS, and mobile apps was conducted in the context of the new post-COVID healthcare paradigm, along with a detailed literature review on digital health policies and usability. Many research papers address applications of digital health, but very few discuss the formation of a new framework for a connected digital ecosystem for the aged care population, which is increasing around the globe. The author proposes a connected digital emergency response system whereby all registered patients (chronic disease and aged/long-term care) will be connected to the proposed dERS. In the proposed ecosystem, patients will be provided with a tracking wristband and a mobile app through which the control room will monitor mobility and vitals such as atrial fibrillation (AF), blood sugar, blood pressure, and other vital signs. In addition, a fall-detection alert will add value to this system.
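The vitals-monitoring step described above can be sketched as a simple threshold check that emits alert reasons for the control room. All thresholds, field names, and rules below are illustrative assumptions for the sketch, not part of the proposed dERS specification or clinical guidance.

```python
# Minimal sketch of threshold-based alerting, assuming illustrative
# vital-sign ranges and field names (not clinical reference values).

ALERT_RULES = {
    "heart_rate_bpm": (40, 130),       # out-of-range rhythm may warrant AF review
    "systolic_bp_mmhg": (90, 180),
    "blood_sugar_mmol_l": (3.9, 11.1),
}

def check_vitals(vitals, fall_detected=False):
    """Return a list of alert reasons for one patient reading."""
    alerts = []
    if fall_detected:
        alerts.append("fall_detected")
    for key, (lo, hi) in ALERT_RULES.items():
        value = vitals.get(key)
        if value is not None and not (lo <= value <= hi):
            alerts.append(f"{key}_out_of_range")
    return alerts

reading = {"heart_rate_bpm": 150, "systolic_bp_mmhg": 120}
print(check_vitals(reading))  # ['heart_rate_bpm_out_of_range']
```

In the proposed ecosystem, a non-empty alert list would be what the 24/7 control room escalates to the connected hospital and ambulance providers.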
In case of any variation in the vitals, an alert is sent to the dERS 24/7, and dERS clinical staff immediately escalate that alert to the connected hospital and the ambulance service providers, and the patient is escorted to the nearest connected tertiary care hospital. By the time the patient reaches the hospital, the dERS team is ready to take appropriate clinical action to save the patient's life. Stroke or myocardial infarction patients can be saved from disaster if they have timely access to emergency healthcare. This dERS will play an effective role in saving the lives of aged patients and patients with chronic co-morbidities.
Keywords: aged care, atrial fibrillation, digital health, digital emergency response system, digital technology
Procedia PDF Downloads 122
1184 A Critical Reflection of Ableist Methodologies: Approaching Interviews and Go-Along Interviews
Authors: Hana Porkertová, Pavel Doboš
Abstract:
Based on a research project studying the experience of visually disabled people with urban space in the Czech Republic, this conference contribution discusses the limits of social-science methodologies used in sociology and human geography. It draws on actor-network theory, assuming that science does not describe reality but produces it. Methodology connects theory, research questions, ways to answer them (methods), and results. A research design utilizing ableist methodologies can produce ableist realities. Therefore, it was necessary to adjust the methods so that they could mediate blind experience to the scientific community without reproducing ableism. The researchers faced multiple challenges, ranging from questionable validity to the problem of researching an experience that differs from that of the able-bodied researchers. The first step was finding a suitable theory to serve as an analytical tool that would demonstrate space and blind experience as multiple, dynamic, and mutually constructed; such a theory could offer a range of potentially productive methods and research questions as well as critically reflected results. Poststructural theory, mainly Deleuze-Guattarian philosophy, was chosen, and two methods were used: interviews and go-along interviews, which had to be adjusted to explore blind experience. Despite thorough preparation of these methods, new difficulties kept emerging, exposing the ableist character of scientific knowledge. From the beginning of data collection, there was an agreement to work in teams, with slightly different roles for each researcher, which was significant especially during go-along interviews. In some cases, the expectations of the researchers and participants differed, which led to unexpected and potentially dangerous situations. These were caused not only by the differences between scientific and lay communities but also by those between able-bodied and disabled people.
Researchers were sometimes assigned to the assistants' roles, and this new position – doing research together – required further negotiations, which also opened various ethical questions.
Keywords: ableist methodology, blind experience, go-along interviews, research ethics, scientific knowledge
Procedia PDF Downloads 165
1183 Leadership in the Era of AI: Growing Organizational Intelligence
Authors: Mark Salisbury
Abstract:
The arrival of artificially intelligent avatars and the automation they bring worries many of us, not only for our own livelihoods but for the jobs that may be lost to our kids. We worry about what our place will be as human beings in this new economy, much of which will be conducted online in the metaverse – in a network of 3D virtual worlds – working with intelligent machines. The Future of Leadership was written to address these fears and show what our place will be – the right place – in this new economy of AI avatars, automation, and 3D virtual worlds. To be successful in this new economy, our job will be to bring wisdom to the workplace and the marketplace, and we will use AI avatars and 3D virtual worlds to do it. However, this book is about more than AI and the avatars we will work with in the metaverse. It is about building organizational intelligence (OI): the capability of an organization to comprehend and create knowledge relevant to its purpose; in other words, the intellectual capacity of the entire organization. Increasing organizational intelligence requires a new kind of knowledge worker, a wisdom worker, which in turn requires a new kind of leadership. This book begins the story of how to become a leader of wisdom workers and be successful in the emerging wisdom economy. After this presentation, conference participants will be able to do the following: Recognize the characteristics of the new generation of wisdom workers and how they differ from their predecessors. Recognize that new leadership methods and techniques are needed to lead this new generation of wisdom workers. Apply personal and professional values – personal integrity, belief in something larger than yourself, and keeping the best interests of others in mind – to improve your work performance and lead others. Exhibit an attitude of confidence, courage, and reciprocity in sharing knowledge to increase your productivity and influence others.
Leverage artificial intelligence to accelerate your ability to learn, augment your decision-making, and influence others. Utilize new technologies to communicate with human colleagues and intelligent machines to develop better solutions more quickly.
Keywords: metaverse, generative artificial intelligence, automation, leadership, organizational intelligence, wisdom worker
Procedia PDF Downloads 44