Search results for: renewable energy technology
914 Learning Instructional Managements between the Problem-Based Learning and STEM Education Methods for Enhancing Students' Learning Achievements and Their Science Attitudes toward Physics at the 12th Grade Level
Authors: Achirawatt Tungsombatsanti, Toansakul Santiboon, Kamon Ponkham
Abstract:
The STEM education approach aims to prepare an interdisciplinary, applied integration of science, technology, engineering, and mathematics instruction that enhances students' engagement and science skills; it was compared here with the Problem-Based Learning (PBL) method at Borabu School, with a sample of 80 twelfth-grade students in two classes studying electromagnetism. The students were separated into two instructional groups: a 40-student experimental class taught with the STEM instructional design, and a 40-student control class taught with PBL, in which students identify what they already know, what they need to know, and how and where to access the new information that may lead to resolution of the problem. Learning environment perceptions were measured with the 35-item Physics Laboratory Environment Inventory (PLEI). Students' attitudes toward physics were assessed with the Test of Physics-Related Attitudes (TOPRA), a scaling instrument designed to measure attitude objectively. Pretest and posttest comparisons assessed students' learning achievements under each instructional model separately. The results revealed that the efficiencies of the PBL and STEM designs both exceeded the standard 80/80 criterion. Students' learning achievements in the control (PBL) and experimental (STEM) physics classes differed significantly between groups at the .05 level.
The average mean scores of students' responses to the instructional activities were higher for the STEM education method than for the PBL model. Regarding associations between students' perceptions of their physics classes and their attitudes toward physics, the predictive efficiency R² values indicate that 77% and 83% of the variance in students' attitudes on the PLEI and the TOPRA, respectively, were attributable to their perceptions of the PBL and STEM instructional design classes. An important contribution of these findings is the evidence that STEM instruction elicited stronger student understanding of scientific concepts, attitudes, and skills than PBL teaching. Overall, students' learning achievements differed statistically significantly between the pre- and post-assessments for both instructional models.
Keywords: learning instructional managements, problem-based learning, STEM education, method, enhancement, students learning achievements, science attitude, physics classes
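The predictive-efficiency R² figures quoted above (77% and 83% of attitude variance explained by environment perceptions) are variance-explained statistics from regression; as a hedged illustration with synthetic scores (not the study's PLEI/TOPRA data), R² can be computed as:

```python
# Hedged sketch: how a predictive-efficiency R^2 value is obtained from
# simple least-squares regression. The data below are synthetic
# illustrations, NOT the study's PLEI/TOPRA scores.

def r_squared(x, y):
    """Fit y = a + b*x by ordinary least squares and return R^2,
    the proportion of variance in y explained by x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Perfectly linear synthetic data give R^2 = 1; noisy data give less.
perceptions = [1.0, 2.0, 3.0, 4.0, 5.0]
attitudes = [2.0, 4.0, 6.0, 8.0, 10.0]
print(round(r_squared(perceptions, attitudes), 3))
```

An R² of 0.77 would mean 77% of the spread in attitude scores is accounted for by the fitted relationship with perception scores, with the remainder unexplained.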
Procedia PDF Downloads 230
913 Modification of Magneto-Transport Properties of Ferrimagnetic Mn₄N Thin Films by Ni Substitution and Their Magnetic Compensation
Authors: Taro Komori, Toshiki Gushi, Akihito Anzai, Taku Hirose, Kaoru Toko, Shinji Isogami, Takashi Suemasu
Abstract:
Ferrimagnetic antiperovskite Mn₄₋ₓNiₓN thin films exhibit both small saturation magnetization (MS) and rather large perpendicular magnetic anisotropy (PMA) when x is small. Both features are suitable for current-induced domain wall (DW) motion devices driven by spin transfer torque (STT). In this work, we successfully grew 30-nm-thick antiperovskite Mn₄₋ₓNiₓN epitaxial thin films on MgO(001) and SrTiO₃ (STO)(001) substrates by molecular beam epitaxy (MBE) in order to investigate their crystalline quality and their magnetic and magneto-transport properties. Crystalline quality was investigated by X-ray diffraction (XRD). Magnetic properties were measured with a vibrating sample magnetometer (VSM), and the anomalous Hall effect was measured with a physical properties measurement system; both measurements were performed at room temperature. The temperature dependence of magnetization was measured with a VSM equipped with a superconducting quantum interference device (SQUID). XRD patterns indicate epitaxial growth of Mn₄₋ₓNiₓN on both substrates; films on STO(001) show higher c-axis orientation thanks to better lattice matching. According to the VSM measurements, PMA was observed in Mn₄₋ₓNiₓN on MgO(001) when x ≤ 0.25 and on STO(001) when x ≤ 0.5, and MS decreased drastically with x; for example, the MS of Mn₃.₉Ni₀.₁N on STO(001) was 47.4 emu/cm³. From the anomalous Hall resistivity (ρAH) of Mn₄₋ₓNiₓN films on STO(001) with the magnetic field perpendicular to the plane, we found that the remanence ratio Mr/MS was about 1 when x ≤ 0.25, which suggests large magnetic domains in the samples and features suitable for DW motion devices. In contrast, such square hysteresis curves were not observed for Mn₄₋ₓNiₓN on MgO(001), which we attribute to the difference in lattice matching. Furthermore, it is notable that although the sign of ρAH was negative when x = 0 and 0.1, it reversed to positive when x = 0.25 and 0.5. A similar reversal occurred in the temperature dependence of magnetization.
The magnetization of Mn₄₋ₓNiₓN on STO(001) increases with decreasing temperature when x = 0 and 0.1, while it decreases when x = 0.25. We consider that these reversals are caused by magnetic compensation occurring in Mn₄₋ₓNiₓN between x = 0.1 and 0.25. We expect the Mn atoms of the Mn₄₋ₓNiₓN crystal to have larger magnetic moments than the Ni atoms. The temperature dependence stated above can be explained if we assume that Ni atoms preferentially occupy the corner sites and that their magnetic moments have a different temperature dependence from the Mn atoms at the face-centered sites. At the compensation point, Mn₄₋ₓNiₓN is expected to show very efficient STT and ultrafast DW motion at small current density. Moreover, if angular momentum compensation is found, the efficiency will be optimized further. In order to prove the magnetic compensation, X-ray magnetic circular dichroism measurements will be performed. Energy-dispersive X-ray spectrometry is a candidate method to analyze the accurate composition ratio of the samples.
Keywords: compensation, ferrimagnetism, Mn₄N, PMA
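The sign reversals described above are the classic signature of a two-sublattice ferrimagnet; as a hedged sketch in generic sublattice notation (not fitted to the Mn₄₋ₓNiₓN data), the compensation condition can be written as:

```latex
% Two-sublattice ferrimagnet: the net magnetization is the difference
% of two oppositely aligned sublattice magnetizations (generic A/B
% notation, an assumed simplification of the actual crystal).
M_{\mathrm{net}}(T) = \left| M_{A}(T) - M_{B}(T) \right|
% Compensation temperature T_comp: the sublattices cancel exactly,
M_{A}(T_{\mathrm{comp}}) = M_{B}(T_{\mathrm{comp}})
% so the net moment, and the sign of any quantity that tracks it
% (such as the anomalous Hall resistivity), reverses across T_comp.
```

Because the two sublattices can have different temperature dependences, changing the Ni content x shifts the balance between them, which is consistent with the reversals appearing between x = 0.1 and 0.25.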
Procedia PDF Downloads 135
912 Water Management of Polish Agriculture and Adaptation to Climate Change
Authors: Dorota M. Michalak
Abstract:
Because of the growing demand for food and the over-exploitation of the natural environment, the agricultural sector contributes to the deepening of climate change; on the other hand, shrinking freshwater resources, a negative effect of climate change, threaten the food security of every country. Adaptation measures to climate change should therefore take into account effective water management and seek solutions that ensure food production at an unchanged or higher level, while not burdening the environment and not contributing to a worsening of the negative consequences of climate change. The problems of Poland's water management result not only from relatively small natural water resources but also, to a large extent, from the low efficiency of their use. Appropriate agricultural practices and state solutions in this field can yield significant benefits in terms of economical water management in agriculture, providing a greater amount of water that could also be used for other purposes, including environmental protection. The aim of the article is to determine the level of use of water resources in Polish agriculture and the advancement of measures aimed at adapting the water management of Polish agriculture to climate change. The study provides knowledge about Polish legal regulations and water management tools, the shaping of the water policy of Polish agriculture against the background of EU countries and other sources of water intake, and the measures run by state budget institutions to support Polish agricultural holdings in the effective management of water resources. To achieve these goals, the author used research tools such as the analysis of existing sources and a survey conducted among five groups of entities, i.e.
agricultural advisory centers and departments; agricultural, rural, and environmental protection departments; regional water management boards; provincial agricultural chambers; and agencies for the restructuring and modernization of agriculture. The main conclusion of the analyses is the low use of water in Polish agriculture relative to other EU countries, to other sources of intake in Poland, and to irrigation. The analysis also reveals another problem, the lack of reporting and data collection, which is extremely important from the point of view of the effectiveness of adaptation measures to climate change. The survey results indicate a very low level of support from government institutions for the implementation of climate change adaptation measures and for the water management of Polish farms. Basic problems of the climate change adaptation policy with regard to water management in Polish agriculture include a lack of knowledge regarding climate change, the possibilities of adapting, and the available tools and ways to rationalize the use of water resources, as well as a lack of ordered procedures, an unclear assignment of responsibility to the proper territorial units, non-functioning channels of information flow, and low practical effects.
Keywords: water management, adaptation policy, agriculture, climate change
Procedia PDF Downloads 142
911 Demonstrating the Efficacy of a Low-Cost Carbon Dioxide-Based Cryoablation Device in Veterinary Medicine for Translation to Third World Medical Applications
Authors: Grace C. Kuroki, Yixin Hu, Bailey Surtees, Rebecca Krimins, Nicholas J. Durr, Dara L. Kraitchman
Abstract:
The purpose of this study was to perform a Phase I veterinary clinical trial of a low-cost, carbon dioxide-based, passive-thaw cryoablation device as proof of principle for application in pets and translation to third-world treatment of breast cancer. The study was approved by the institutional animal care and use committee. Client-owned dogs with subcutaneous masses, primarily lipomas or mammary cancers, were recruited. Inclusion was based on clinical history, lesion location, preanesthetic blood work, and fine needle aspirate or biopsy confirmation of the mass. Informed consent was obtained from the owners of dogs that met the inclusion criteria. Ultrasound assessment of mass extent was performed immediately prior to cryoablation. Dogs were placed under general anesthesia and sterilely prepared. A stab incision was created to insert a custom 4.19 mm OD x 55.9 mm length cryoablation probe (Kubanda Cryotherapy) into the mass. Originally designed for treating breast cancer in low-resource settings, this device has demonstrated potential for effectively necrosing subcutaneous masses. A dose-escalation study of increasing freeze-thaw cycles (5/4/5, 7/5/7, and 10/7/10 min) was performed to assess the size of the iceball and the necrotic extent of cryoablation. Each dog was allowed to recover for approximately 1-2 weeks before surgical removal of the mass. A single mass was treated in each of seven dogs (2 mammary masses, a sarcoma, 4 lipomas, and an adnexal mass), with most masses exceeding 2 cm in at least one dimension. Mass involution was most evident in the malignant mammary and adnexal masses. Lipomas showed minimal shrinkage prior to surgical removal, but an area of necrosis was evident along the cryoablation probe path. Gross assessment indicated a clear margin of cryoablation along the cryoprobe, independent of tumor type. Detailed histopathology is pending, but complete involution of large lipomas appeared unlikely with a 10/7/10 protocol.
The low-cost, carbon dioxide-based cryotherapy device permits a minimally invasive technique that may be useful for veterinary applications, and it is also informative about the unlikely resolution of benign adipose breast masses that may be encountered in third world countries.
Keywords: cryoablation, cryotherapy, interventional oncology, veterinary technology
Procedia PDF Downloads 131
910 Expanding the Atelier: Design Lead Academic Project Using Immersive User-Generated Mobile Images and Augmented Reality
Authors: David Sinfield, Thomas Cochrane, Marcos Steagall
Abstract:
While there is much hype around the potential and development of mobile virtual reality (VR), the two key critical success factors are ease of user experience and the development of a simple user-generated content ecosystem. Educational technology history is littered with the debris of over-hyped revolutionary technologies that failed to gain mainstream adoption or were quickly superseded; examples include 3D television, interactive CD-ROMs, Second Life, and Google Glass. We argue, however, that this is the result of curriculum design that substitutes new technologies into pre-existing pedagogical strategies focused on teacher-delivered content, rather than exploring new pedagogical strategies that enable student-determined learning, or heutagogy. Learning based on visual communication design, such as graphic design, illustration, photography, and design process, relies heavily on the traditional classroom environment, in which student interaction takes place both at peer level and through teacher-based feedback. This makes for a healthy creative learning environment, but it raises other issues in terms of student-to-teacher ratios and reduced contact time. Such issues arise when students are away from the classroom and cannot interact with their peers and teachers, and we then see a decline in the students' creative work. Using AR and VR as a means of stimulating students to think beyond the limitations of the studio-based classroom, this paper discusses the outcomes of a student project that considered the virtual classroom and the techniques involved. The Atelier learning environment is especially suited to the visual communication model, as it deals with the creative processing of ideas that need to be shared in a collaborative manner.
This has proven a successful model over the years in the traditional form of design education, but it has more recently seen a shift in thinking as we move toward a more digital model of learning and away from the classical classroom structure. This study focuses on the outcomes of a student design project that employed augmented reality and virtual reality technologies in order to expand the dimensions of the classroom beyond its physical limits. Augmented reality, when integrated into the learning experience, can improve students' learning motivation and engagement. This paper outlines some of the processes used and the findings of the semester-long project.
Keywords: augmented reality, blogging, design in community, enhanced learning and teaching, graphic design, new technologies, virtual reality, visual communications
Procedia PDF Downloads 240
909 Bauhaus Exhibition 1922: New Weapon of Anti-Colonial Resistance in India
Authors: Suneet Jagdev
Abstract:
The original Bauhaus developed at a time, in the early 20th century, when the industrialization of Germany had reached a climax, and the cities reflected the new living conditions of an industrialized society. The Bauhaus can be interpreted as an ambitious attempt to find appropriate answers to these challenges through architecture, urban development, and design. Core convictions of the day were the belief in the necessity of crossing boundaries between the various disciplines and the courage to experiment in search of better solutions. Even after 100 years, the situation in our cities is shaped by similar complexity, and the urban consequences of current developments are difficult to estimate and predict. The paper critically reflects on central aspects of the history of the Bauhaus and its role in bringing modernism to India, through comparative studies of the methodologies adopted by artists and designers in both countries. It discusses in detail how the Bauhaus Exhibition of 1922 offered Indian artists a new weapon of anti-colonial resistance. The original Bauhaus fought its aesthetic and political battles in the context of economic instability and the rise of German fascism. Indians had access to dominant global languages, in particular English, and the availability of print media and a vibrant indigenous intellectual culture gave Indian people a tool to accept technology while denying both its dominant role in culture and the inevitability of only one form of modernism. The indigenous was thus less an engagement with their own culture, as in the West, than a tool of anti-colonial struggle.
We show how Indian people used the Bauhaus as a critique of colonialism itself, through an undermining of its typical modes of representation, as a means of incorporating the Indian desire for spirituality into art, and as the cultural basis for a non-materialistic, anti-industrial form of what we might now term development. The paper reflects on how, through painting, the Bauhaus entered the artistic consciousness of the subcontinent not only for its stylistic and technical innovations but as a tool for a critical and even utopian modernism. That modernism could challenge both the hegemony of academic and orientalist art and serve as the bearer of a transnational avant-garde, as much political as artistic, and as such the basis of a non-Eurocentric but genuinely cosmopolitan alternative to the hierarchies of oppression and domination that had long bound India and that were at that moment rising once again to a tragic crescendo in Europe. We also discuss how the Bauhaus of today can offer an innovative orientation for discourse around architecture and design.
Keywords: anti-colonial struggle, art over architecture, Bauhaus exhibition of 1922, industrialization
Procedia PDF Downloads 262
908 On-Chip Ku-Band Bandpass Filter with Compact Size and Wide Stopband
Authors: Jyh Sheen, Yang-Hung Cheng
Abstract:
This paper presents the design of a microstrip bandpass filter with a compact size and wide stopband using a 0.15-μm GaAs pHEMT process. The wide stopband is achieved by suppressing the first and second harmonic resonance frequencies. A slow-wave coupling stepped-impedance resonator with a cross-coupled structure is adopted for the bandpass filter. A two-resonator filter was fabricated, achieving a 13.5 GHz center frequency and 11% bandwidth. The devices were simulated using the ADS design software. The device exhibits a compact size and a very low insertion loss of 2.6 dB. Microstrip planar bandpass filters have been widely adopted in communication applications thanks to their compact size and ease of fabrication, and various planar resonator structures have been suggested. To achieve a wide stopband that reduces interference outside the passband, various planar resonator designs have also been proposed to suppress the higher-order harmonics of the designed center frequency, and modifications of the traditional hairpin structure have been introduced to reduce the large design area of hairpin designs. Stepped-impedance, slow-wave open-loop, and cross-coupled resonator structures have been studied to miniaturize hairpin resonators. In this study, to suppress the spurious harmonic bands and further reduce the filter size, a modified hairpin-line bandpass filter with a cross-coupled structure is proposed, introducing both the stepped-impedance resonator design and the slow-wave open-loop resonator structure. In this way, a very compact circuit size as well as a very wide upper stopband can be realized on a Rogers 4003C substrate. On the other hand, filters constructed with integrated circuit technology are attractive for enabling the integration of a microwave system on a single chip (SoC).
To examine the performance of this design structure in an integrated circuit, the filter was fabricated with the 0.15-μm GaAs pHEMT integrated circuit process, which also provides much better circuit performance for high-frequency designs than a PCB implementation. The design example was implemented in GaAs with a center frequency of 13.5 GHz to examine the high-frequency performance in detail. The occupied area is only about 1.09 × 0.97 mm². The ADS software was used to design the modified filters that suppress the first and second harmonics.
Keywords: microstrip resonator, bandpass filter, harmonic suppression, GaAs
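The resonators above are derived from half-wavelength microstrip lines at 13.5 GHz; a hedged back-of-envelope sketch of the basic length scale follows, where the effective permittivity is an assumed value for a GaAs microstrip rather than a figure from the paper (the stepped-impedance, slow-wave geometry shortens the line further, which is how a roughly 1 mm² footprint becomes possible):

```python
import math

# Back-of-envelope length of a half-wavelength (lambda/2) microstrip
# resonator. The effective permittivity below is an ASSUMED value for
# a GaAs microstrip (substrate eps_r ~ 12.9, eps_eff somewhat lower);
# the paper's stepped-impedance, slow-wave geometry shortens the
# physical line well below this plain estimate.

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def half_wave_length_mm(f0_hz, eps_eff):
    """Physical length of a lambda/2 resonator at f0 on a line with
    effective permittivity eps_eff, in millimetres."""
    guided_wavelength = C0 / (f0_hz * math.sqrt(eps_eff))
    return 1e3 * guided_wavelength / 2.0

length = half_wave_length_mm(13.5e9, eps_eff=8.5)  # eps_eff assumed
print(f"~{length:.2f} mm")  # a plain lambda/2 line is a few mm long
```

Comparing this few-millimetre plain-line estimate with the reported 1.09 × 0.97 mm² chip area illustrates the degree of miniaturization the slow-wave, cross-coupled layout provides.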
Procedia PDF Downloads 326
907 Natural Monopolies and Their Regulation in Georgia
Authors: Marina Chavleishvili
Abstract:
Introduction: Today, the study of monopolies, including natural monopolies, is topical. In real life, pure monopolies are natural monopolies. Natural monopolies are widespread and are regulated by the state; in particular, their prices and rates are regulated. The paper considers the problems associated with the operation of natural monopolies in Georgia, in particular their microeconomic analysis, pricing mechanisms, and the legal mechanisms of their operation. The analysis was carried out on the example of the power industry. The rates of natural monopolies in Georgia are controlled by the Georgian National Energy and Water Supply Regulatory Commission. The paper analyzes the positive role and importance of the regulatory body and the issues of improving the legislative base that would support the efficient operation of the sector. Methodology: In order to highlight the market tendencies of natural monopolies, the domestic and international markets are studied. The analysis of monopolies is carried out based on the endogenous and exogenous factors that determine the condition of companies, as well as the strategies chosen by firms to increase market share. According to a productivity-based competitiveness assessment scheme, the segmentation opportunities, business environment, resources, and geographical location of monopolist companies are revealed. Main Findings: As a result of the analysis, certain assessments and conclusions were made. Natural monopolies are a rather complex and versatile economic element, and it is important to specify and duly control their framework conditions. It is important to determine the pricing policy of natural monopolies: rates should be transparent, should reflect the standard of living in the country, and should correspond to incomes. The analysis confirmed the significant role of the Antimonopoly Service in the efficient management of natural monopolies.
The law should adapt to reality and should be applied only to regulate the market. The present-day differentiated electricity tariffs, which vary depending on the consumed electrical power, need revision. The effects of electricity price discrimination are important, in particular segmentation across seasons. Consumers use more electricity in winter than in summer, which entails extra capacity and maintenance costs. If the price of electricity in winter were higher than in summer, winter electricity consumption would decrease: consumers would start to consume electricity more economically, which would allow a reduction of the extra capacities. Conclusion: Thus, the practical realization of the views given in the paper will contribute to the efficient operation of natural monopolies. Consequently, their activity will be oriented not toward the reduction but toward the increase of consumer and producer surplus. Overall, optimal management of the given fields will allow improving well-being throughout the country. In the article, conclusions are made and recommendations are developed for delivering effective policies and regulation of the natural monopolies in Georgia.
Keywords: monopolies, natural monopolies, regulation, antimonopoly service
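The seasonal-tariff argument above can be illustrated with a constant-elasticity demand response; the elasticity value and baseline consumption below are illustrative assumptions, not Georgian data:

```python
# Hedged illustration of the abstract's seasonal-tariff argument:
# if winter electricity is priced above summer, winter consumption
# falls according to the price elasticity of demand. The elasticity
# and baseline figures below are illustrative assumptions only.

def demand_after_price_change(baseline_kwh, price_change_pct, elasticity):
    """First-order constant-elasticity approximation: a p% price rise
    changes demand by roughly elasticity * p%."""
    return baseline_kwh * (1.0 + elasticity * price_change_pct / 100.0)

winter_baseline = 1000.0  # kWh, illustrative household winter demand
winter_demand = demand_after_price_change(winter_baseline, 20.0, -0.3)
print(winter_demand)  # 20% higher winter tariff -> ~6% less consumption
```

Even a modest demand response of this kind reduces the peak-season load that drives the extra capacity and maintenance costs described above.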
Procedia PDF Downloads 87
906 The Effect of Lead(II) Lone Electron Pair and Non-Covalent Interactions on the Supramolecular Assembly and Fluorescence Properties of Pb(II)-Pyrrole-2-Carboxylato Polymer
Authors: M. Kowalik, J. Masternak, K. Kazimierczuk, O. V. Khavryuchenko, B. Kupcewicz, B. Barszcz
Abstract:
Recently, the growing interest of chemists in metal-organic coordination polymers (MOCPs) derives primarily from their intriguing structures and potential applications in catalysis, gas storage, molecular sensing, ion exchange, nonlinear optics, luminescence, etc. Currently, we are devoting considerable effort to finding a proper method of synthesizing new coordination polymers containing S- or N-heteroaromatic carboxylates as linkers and to characterizing the obtained Pb(II) compounds according to their structural diversity, luminescence, and thermal properties. The choice of Pb(II) as the central ion of MOCPs was motivated by several reasons mentioned in the literature: i) a large ionic radius allowing a wide range of coordination numbers; ii) the stereoactivity of the 6s² lone electron pair, leading to a hemidirected or holodirected geometry; iii) a flexible coordination environment; and iv) the possibility of forming secondary bonds and unusual non-covalent interactions, such as classic hydrogen bonds and π···π stacking interactions, as well as nonconventional hydrogen bonds and rarely reported tetrel bonds, Pb(lone pair)···π interactions, C–H···Pb agostic-type interactions or hydrogen bonds, and chelate-ring stacking interactions. Moreover, the construction of coordination polymers requires the selection of proper ligands acting as linkers, because we are looking for materials exhibiting different network topologies and fluorescence properties that point to potential applications. The reaction of Pb(NO₃)₂ with 1H-pyrrole-2-carboxylic acid (2prCOOH) leads to the formation of a new tetranuclear Pb(II) polymer, [Pb₄(2prCOO)₈(H₂O)]ₙ, which has been characterized by CHN elemental analysis, FT-IR, TG, PL, and single-crystal X-ray diffraction methods. In view of the primary Pb–O bonds, Pb1 and Pb3 show hemidirected pentagonal pyramidal geometries, while Pb2 and Pb4 display hemidirected octahedral geometries.
The topology of the strongest Pb–O bonds was determined to be the (4·8²) fes topology. Taking the secondary Pb–O bonds into account, the coordination numbers of the Pb centres increase: Pb1 exhibits a hemidirected monocapped pentagonal pyramidal geometry, Pb2 and Pb4 exhibit holodirected tricapped trigonal prismatic geometries, and Pb3 exhibits a holodirected bicapped trigonal prismatic geometry. Moreover, the stereoactivity of the Pb(II) lone pair was confirmed by DFT calculations. The 2D structure is expanded into 3D by non-covalent O/C–H···π and Pb···π interactions, as confirmed by Hirshfeld surface analysis. The above-mentioned interactions improve the rigidity of the structure and facilitate charge and energy transfer between the metal centres, making the polymer a promising luminescent compound.
Keywords: coordination polymers, fluorescence properties, lead(II), lone electron pair stereoactivity, non-covalent interactions
Procedia PDF Downloads 145
905 Analytical Solutions of Josephson Junctions Dynamics in a Resonant Cavity for Extended Dicke Model
Authors: S. I. Mukhin, S. Seidov, A. Mukherjee
Abstract:
The Dicke model is a key tool for describing correlated states of quantum atomic systems that are excited by resonant photon absorption and subsequently emit spontaneous coherent radiation in the superradiant state. The Dicke Hamiltonian (DH) is successfully used to describe the dynamics of a Josephson junction (JJ) array in a resonant cavity under an applied current. In this work, we investigate a generalized model described by a DH with a frustrating interaction term: an infinitely coordinated interaction between all the spin-1/2 entities in the system. We consider an array of N superconducting islands, each divided into two sub-islands by a Josephson junction and operated in the charge qubit / Cooper pair box (CPB) regime; the array is placed inside a resonant cavity. One important aspect of the problem lies in the dynamical nature of the physical observables involved, such as the condensed electric field and the dipole moment; it is important to understand how these quantities behave with time in order to define the quantum phase of the system. The Dicke model without the frustrating term is solved to find the dynamical solutions of the physical observables in analytic form. Applying Heisenberg's equations of motion for the operators together with a newly developed rotating Holstein-Primakoff (HP) transformation of the DH, we arrive at four coupled nonlinear differential equations for the momentum and spin-component operators. The system can be solved analytically using two time scales. The analytical solutions are expressed in terms of Jacobi elliptic functions for the metastable 'bound luminosity' dynamic state, with periodic coherent beating of the dipoles that connects the two doubly degenerate dipolar-ordered phases discovered previously. In this work, we proceed to analyze the extended DH with the frustrating interaction term.
Inclusion of the frustrating term complicates the system of differential equations, which becomes difficult to solve analytically. We have therefore solved the semi-classical dynamic equations using perturbation theory for small values of the Josephson energy EJ. Because the Hamiltonian possesses parity symmetry, a phase transition can be found when this symmetry is broken. Introducing a spontaneous-symmetry-breaking term in the DH, we derive solutions that show the occurrence of a finite condensate, indicating a quantum phase transition. Our results match the existing results in this field.
Keywords: Dicke model, nonlinear dynamics, perturbation theory, superconductivity
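For reference, the Hamiltonian family discussed above can be sketched as the standard single-mode Dicke Hamiltonian plus an infinitely coordinated spin-spin term; the coupling symbols are generic, and the exact form of the authors' frustrating term is an assumption here:

```latex
% Standard single-mode Dicke Hamiltonian for N two-level systems,
% written with collective spin operators J_z, J_pm and a cavity
% mode a (generic symbols omega, omega_0, lambda):
H_{\mathrm{Dicke}} = \hbar\omega\, a^{\dagger}a
  + \hbar\omega_{0}\, J_{z}
  + \frac{\lambda}{\sqrt{N}}\,\bigl(a^{\dagger} + a\bigr)\bigl(J_{+} + J_{-}\bigr)
% A frustrating, infinitely coordinated interaction couples every
% pair of spins with equal weight; one generic form of such a term
% (assumed here, not necessarily the authors' exact expression) is
H = H_{\mathrm{Dicke}} + \frac{K}{N}\, J_{x}^{2}
```

The 1/N scaling of both collective terms keeps the energy extensive, which is what makes the two-time-scale and small-EJ perturbative treatments described above tractable.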
Procedia PDF Downloads 135
904 Research on the Optimization of Satellite Mission Scheduling
Authors: Pin-Ling Yin, Dung-Ying Lin
Abstract:
Satellites play an important role in our daily lives, from monitoring the Earth's environment and providing real-time disaster imagery to predicting extreme weather events. As technology advances and demands increase, the tasks undertaken by satellites have become increasingly complex, with more stringent resource management requirements. A common challenge in satellite mission scheduling is the limited availability of resources, including onboard memory, ground station accessibility, and satellite power. In this context, efficiently scheduling and managing increasingly complex satellite missions under constrained resources has become a critical issue. The core of Satellite Onboard Activity Planning (SOAP) lies in optimizing the scheduling of the received tasks, arranging them on a timeline to form an executable onboard mission plan. This study develops an optimization model that considers the various constraints involved in satellite mission scheduling, such as non-overlapping execution periods for certain types of tasks, the requirement that tasks fall within the contact range of specified types of ground stations during execution, onboard memory capacity limits, and collaborative constraints between different types of tasks. Specifically, this research constructs a mixed-integer programming model and solves it with a commercial optimization package. However, as the problem size grows, the problem becomes harder to solve; a heuristic algorithm was therefore developed to address the limitations of the commercial optimization package at larger scales. The goal is to plan satellite missions effectively, maximizing the total number of executable tasks while respecting task priorities and ensuring that tasks are completed as early as possible without violating feasibility constraints.
To verify the feasibility and effectiveness of the algorithm, test instances of various sizes were generated; the results were validated through feedback from on-site users and compared against solutions obtained from a commercial optimization package. Numerical results show that the algorithm performs well under various scenarios, consistently meeting user requirements. The proposed satellite mission scheduling algorithm can be flexibly extended to different types of satellite mission demands, achieving optimal resource allocation and enhancing the efficiency and effectiveness of satellite mission execution.
Keywords: mixed-integer programming, meta-heuristics, optimization, resource management, satellite mission scheduling
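The abstract does not disclose the details of the authors' heuristic, so as an illustration only, here is a minimal priority-greedy sketch in Python of the kind of scheduling logic described (maximize executed tasks by priority, earliest feasible start, no timeline overlaps, memory budget). The `Task` fields, the example data, and `greedy_schedule` itself are hypothetical, not the paper's algorithm:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int   # higher = more important
    duration: int   # time units needed on the timeline
    windows: list   # feasible (start, end) contact windows
    memory: int     # onboard memory consumed

def greedy_schedule(tasks, memory_capacity):
    """Place tasks on one timeline: highest priority first,
    earliest feasible start, no overlaps, memory budget respected."""
    busy = []          # occupied (start, end) intervals
    used_memory = 0
    plan = []
    for task in sorted(tasks, key=lambda t: -t.priority):
        if used_memory + task.memory > memory_capacity:
            continue   # skip tasks that would exceed onboard memory
        for w_start, w_end in sorted(task.windows):
            start = w_start
            # push the start past every conflicting busy interval
            for b_start, b_end in sorted(busy):
                if start < b_end and start + task.duration > b_start:
                    start = b_end
            if start + task.duration <= w_end:
                busy.append((start, start + task.duration))
                used_memory += task.memory
                plan.append((task.name, start, start + task.duration))
                break
    return plan

# Hypothetical example: three tasks competing for one timeline
tasks = [
    Task("imaging", 3, 2, [(0, 5)], 4),
    Task("downlink", 2, 2, [(0, 6)], 3),
    Task("calib", 1, 3, [(0, 4)], 2),
]
plan = greedy_schedule(tasks, memory_capacity=10)
# "calib" no longer fits its window once the higher-priority tasks are placed
```

A real SOAP heuristic would add ground-station visibility checks and task-collaboration constraints on top of this skeleton.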
Procedia PDF Downloads 31
903 ePAM: Advancing Sustainable Mobility through Digital Parking, AI-Driven Vehicle Recognition, and CO₂ Reporting
Authors: Robert Monsberger
Abstract:
The increasing scarcity of resources and the pressing challenge of climate change demand transformative technological, economic, and societal approaches. In alignment with the European Green Deal's goal of achieving net-zero greenhouse gas emissions by 2050, this paper presents the development and implementation of an electronic parking and mobility system (ePAM). The system offers a distinct, integrated solution aimed at promoting climate-positive mobility, reducing individual vehicle use, and advancing the digital transformation of off-street parking. The core objectives include the accurate recognition of electric vehicles and occupant counts using advanced camera-based systems, achieving very high accuracy. This capability enables the dynamic categorization and classification of vehicles to provide fair and automated tariff adjustments. The study also seeks to replace physical barriers with virtual ‘digital gates’ using augmented reality, significantly improving user acceptance, as shown in the studies conducted. The system is designed to operate as an end-to-end software solution, enabling fully digital and paperless parking management by leveraging license plate recognition (LPR) and metadata processing. By eliminating physical infrastructure such as gates and terminals, the system significantly reduces resource consumption, maintenance complexity, and operational costs while enhancing energy efficiency. The platform also integrates CO₂ reporting tools to support compliance with upcoming EU emission trading schemes and to incentivize eco-friendly transportation behaviors. By fostering the adoption of electric vehicles and ride-sharing models, the system contributes to the optimization of traffic flows and the minimization of search traffic in urban centers. The platform's open data interfaces enable seamless integration into multimodal transport systems, facilitating a transition from individual to public transportation modes.
This study emphasizes sustainability, data privacy, and compliance with the AI Act, aiming to achieve a market share of at least 4.5% in the DACH region by 2030. ePAM sets a benchmark for innovative mobility solutions, driving significant progress toward climate-neutral urban mobility.
Keywords: sustainable mobility, digital parking, AI-driven vehicle recognition, license plate recognition, virtual gates, multimodal transport integration
Procedia PDF Downloads 0
902 Gas-Phase Nondestructive and Environmentally Friendly Covalent Functionalization of Graphene Oxide Paper with Amines
Authors: Natalia Alzate-Carvajal, Diego A. Acevedo-Guzman, Victor Meza-Laguna, Mario H. Farias, Luis A. Perez-Rey, Edgar Abarca-Morales, Victor A. Garcia-Ramirez, Vladimir A. Basiuk, Elena V. Basiuk
Abstract:
Direct covalent functionalization of prefabricated free-standing graphene oxide paper (GOP) is considered the only approach suitable for systematic tuning of the thermal, mechanical and electronic characteristics of this important class of carbon nanomaterials. At the same time, traditional liquid-phase functionalization protocols can compromise the physical integrity of the paper-like material, up to its total disintegration. To avoid such undesirable effects, we explored an alternative, solvent-free strategy for facile and nondestructive functionalization of GOP with two representative aliphatic amines, 1-octadecylamine (ODA) and 1,12-diaminododecane (DAD), as well as with two aromatic amines, 1-aminopyrene (AP) and 1,5-diaminonaphthalene (DAN). The functionalization was performed under moderate heating at 150-180 °C in vacuum; under such conditions, it proceeds through both amidation and epoxy ring-opening reactions. Comparative characterization of pristine and amine-functionalized GOP mats was carried out using Fourier-transform infrared, Raman, and X-ray photoelectron spectroscopy (XPS), thermogravimetric (TGA) and differential thermal analysis, and scanning electron and atomic force microscopy (SEM and AFM, respectively). Besides that, we compared the stability in water, wettability, electrical conductivity and elastic (Young's) modulus of GOP mats before and after amine functionalization. The highest content of organic species was obtained in the case of GOP-ODA, followed by the GOP-DAD, GOP-AP and GOP-DAN samples. The covalent functionalization increased the mechanical and thermal stability of GOP, as well as its electrical conductivity. The magnitude of each effect depends on the particular chemical structure of the amine employed, which allows a given GOP property to be tuned.
Morphological characterization by SEM showed that, compared to pristine graphene oxide paper, amine-modified GOP mats become relatively ordered layered assemblies in which individual GO sheets are organized in a near-parallel pattern. Financial support from the National Autonomous University of Mexico (grants DGAPA-IN101118 and IN200516) and from the National Council of Science and Technology of Mexico (CONACYT, grant 250655) is greatly appreciated. The authors also thank David A. Domínguez (CNyN of UNAM) for XPS measurements and Dr. Edgar Alvarez-Zauco (Faculty of Science of UNAM) for the opportunity to use the TGA equipment.
Keywords: amines, covalent functionalization, gas-phase, graphene oxide paper
Procedia PDF Downloads 182
901 Polypyrrole as Bifunctional Materials for Advanced Li-S Batteries
Authors: Fang Li, Jiazhao Wang, Jianmin Ma
Abstract:
The practical application of Li-S batteries is hampered by poor cycling stability caused by electrolyte-dissolved lithium polysulfides. Dual functionalities, namely strong chemical adsorption of polysulfides and high conductivity, are highly desired in an ideal host material for a sulfur-based cathode. Polypyrrole (PPy), a conductive polymer, has been widely studied as a matrix for sulfur cathodes due to its high conductivity and strong chemical interaction with soluble polysulfides. Thus, a novel cathode structure consisting of a free-standing sulfur-polypyrrole cathode and a polypyrrole-coated separator was designed for flexible Li-S batteries. The PPy materials show strong interaction with dissolved polysulfides, which could suppress the shuttle effect and improve cycling stability. In addition, the synthesized PPy film with a rough surface acts as a current collector, which improves the adhesion of the sulfur materials and restrains volume expansion, enhancing structural stability during cycling. To further enhance cycling stability, a PPy-coated separator was also applied, which could confine polysulfides to the cathode side to alleviate the shuttle effect. Moreover, the PPy layer coated on the commercial separator is much lighter than other reported interlayers. A soft-packaged flexible Li-S battery was designed and fabricated to test the practical application of the designed cathode and separator; it could power a device consisting of 24 light-emitting diode (LED) lights. Moreover, the soft-packaged flexible battery still shows relatively stable cycling performance after repeated bending, indicating its potential for flexible batteries. A novel vapor-phase deposition method was also applied to prepare a uniform polypyrrole layer coated on a sulfur/graphene aerogel composite.
The polypyrrole layer simultaneously acts as host and adsorbent for efficient suppression of polysulfide dissolution through strong chemical interaction. Density functional theory (DFT) calculations reveal that polypyrrole can trap lithium polysulfides through stronger bonding energies. In addition, the deflation of the sulfur/graphene hydrogel during the vapor-phase deposition process enhances the contact of sulfur with the matrix, resulting in high sulfur utilization and good rate capability. As a result, the synthesized polypyrrole-coated sulfur/graphene aerogel composite delivers specific discharge capacities of 1167 mAh g⁻¹ and 409.1 mAh g⁻¹ at 0.2 C and 5 C, respectively. The capacity is maintained at 698 mAh g⁻¹ at 0.5 C after 500 cycles, showing an ultra-slow decay rate of 0.03% per cycle.
Keywords: polypyrrole, strong chemical interaction, long-term stability, Li-S batteries
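As a quick sanity check of the reported cycling figures (illustrative arithmetic only, under the assumption that the 0.03% per-cycle decay is read as linear capacity fade), the implied initial 0.5 C capacity can be back-calculated:

```python
final_capacity = 698.0    # mAh/g after 500 cycles at 0.5 C (from the abstract)
cycles = 500
decay_per_cycle = 0.0003  # 0.03% per cycle, interpreted as linear fade

# Implied initial 0.5 C capacity under the linear-fade reading:
# final = initial * (1 - decay_per_cycle * cycles)
initial_capacity = final_capacity / (1 - decay_per_cycle * cycles)
retention = final_capacity / initial_capacity

print(round(initial_capacity, 1))  # ≈ 821.2 mAh/g
print(round(retention * 100, 1))   # ≈ 85.0 % capacity retention
```

So the reported decay rate corresponds to roughly 85% capacity retention over the 500 cycles, consistent in scale with the 0.2 C capacity of 1167 mAh g⁻¹.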
Procedia PDF Downloads 141
900 An Investigation on MgAl₂O₄ Based Mould System in Investment Casting Titanium Alloy
Authors: Chen Yuan, Nick Green, Stuart Blackburn
Abstract:
The investment casting process offers great freedom of design combined with the economic advantage of near-net-shape manufacturing. It is widely used for the production of high-value precision cast parts, particularly in the aerospace sector. Various combinations of materials have been used to produce the ceramic moulds, but most investment foundries use a silica-based binder system in conjunction with fused silica, zircon, and alumino-silicate refractories as both filler and coarse stucco materials. However, in the context of advancing alloy technologies, silica-based systems are struggling to keep pace, especially for net-shape casting of titanium alloys. Studies have shown that the casting of titanium-based alloys presents considerable problems, including extensive interactions between the metal and the refractory; the majority of the metal-mould interaction is due to the reduction of silica, present as binder and filler phases, by titanium in the molten state. Cleaner, more refractory systems are being devised to accommodate these changes. Although yttria has excellent chemical inertness towards titanium alloys, it is not very practical in a production environment, combining high material cost with a short slurry life and poor sintering properties. A cost-effective solution to these issues is needed. With limited options for using pure oxides, in this work a silica-free magnesia spinel, MgAl₂O₄, was used as the primary coat filler and alumina as the binder material to produce the facecoat of the investment casting mould. A comparison system was also studied in which a fraction of the rare earth oxide Y₂O₃ was added to the filler to increase inertness. The stability of the MgAl₂O₄/Al₂O₃ and MgAl₂O₄/Y₂O₃/Al₂O₃ slurries was assessed by pH, viscosity, zeta-potential and plate weight measurements, and mould properties such as friability were also measured.
The interaction between the facecoat and the titanium alloy was studied by both a flash re-melting technique and a centrifugal investment casting method. The interaction products between metal and mould were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS). The depth of the oxygen-hardened layer was evaluated by microhardness measurement. Results reveal that introducing a fraction of Y₂O₃ into the magnesia spinel can significantly increase the slurry life and reduce the thickness of the hardened layer during centrifugal casting.
Keywords: titanium alloy, mould, MgAl₂O₄, Y₂O₃, interaction, investment casting
Procedia PDF Downloads 113
899 (Re)Processing of Nd-Fe-B Permanent Magnets Using Electrochemical and Physical Approaches
Authors: Kristina Zuzek, Xuan Xu, Awais Ikram, Richard Sheridan, Allan Walton, Saso Sturm
Abstract:
Recycling of end-of-life REE-based Nd-Fe-B magnets is an important strategy for reducing the environmental dangers associated with rare-earth mining and overcoming the well-documented supply risks related to the REEs. However, challenges in their reprocessing remain. We report on the possibility of direct electrochemical recycling and reprocessing of Nd-Fe(B)-based magnets. In this investigation, we were first able to electrochemically leach an end-of-life NdFeB magnet and then to electrodeposit Nd-Fe using a 1-ethyl-3-methylimidazolium dicyanamide ([EMIM][DCA]) ionic liquid-based electrolyte. We observed that Nd(III) could not be reduced independently; however, it can be co-deposited on a substrate with the addition of Fe(II). Using the advanced TEM technique of electron energy-loss spectroscopy (EELS), it was shown that Nd(III) is reduced to Nd(0) during the electrodeposition process. This gave new insight into determining the Nd oxidation state, as X-ray photoelectron spectroscopy (XPS) has certain limitations: the binding energies of metallic Nd (Nd0) and neodymium oxide (Nd₂O₃) are very close, i.e., 980.5-981.5 eV and 981.7-982.3 eV, respectively, making it almost impossible to differentiate between the two states. These new insights into the electrodeposition process represent an important step towards efficient recycling of rare earths in metallic form at mild temperatures, providing an alternative to high-temperature molten-salt electrolysis and a step closer to depositing Nd-Fe-based magnetic materials. Further, we propose a new concept for recycling sintered Nd-Fe-B magnets by directly recovering the 2:14:1 matrix phase. Via an electrochemical etching method, we are able to recover pure individual 2:14:1 grains that can be re-used for new types of magnet production.
In the frame of physical reprocessing, we have successfully synthesized new magnets from hydrogen (HDDR)-recycled stock using the contemporary technique of pulsed electric current sintering (PECS). The optimal PECS conditions yielded fully dense Nd-Fe-B magnets with a coercivity of Hc = 1060 kA/m, which was boosted to 1160 kA/m after post-PECS thermal treatment. Br and Hc were improved further: increased applied pressures of 100-150 MPa resulted in Br = 1.01 T. We showed that with fine tuning of the PECS and post-annealing it is possible to revitalize end-of-life Nd-Fe-B magnets. By applying advanced TEM, i.e. atomic-scale Z-contrast STEM combined with EDXS and EELS, the resulting magnetic properties were critically assessed against various types of structural and compositional discontinuities down to the atomic scale, which we believe control the microstructure evolution during the PECS processing route.
Keywords: electrochemistry, Nd-Fe-B, pulsed electric current sintering, recycling, reprocessing
Procedia PDF Downloads 158
898 Institutional and Economic Determinants of Foreign Direct Investment: Comparative Analysis of Three Clusters of Countries
Authors: Ismatilla Mardanov
Abstract:
There are three types of countries. The first is willing to attract foreign direct investment (FDI) in enormous amounts and will do whatever it takes to make this happen; FDI therefore pours into such countries. In the second cluster, even if a country suffers tremendously from a shortage of investment, its government is hesitant to attract investment because it is in the hands of local oligarchs/cartels; FDI inflows are therefore moderate to low in such countries. The third type comprises countries whose companies prefer investing in the most efficient locations globally and are hesitant to invest in the homeland. Sorting countries into these clusters, the present study examines the essential institutional and economic factors that make these countries different. Past literature has discussed various determinants of FDI in all kinds of countries, but it did not classify countries based on government motivation, institutional setup, and economic factors. A specific approach to each target country is vital for corporate foreign direct investment risk analysis and decisions. The research questions are: 1. What specific institutional and economic factors characterize the three clusters? 2. What specific institutional and economic factors are determinants of FDI? 3. Which of the determinants are endogenous and which are exogenous variables? 4. How can institutions and economic and political variables impact corporate investment decisions? Hypothesis 1: In the first type, country institutions and economic factors will be favorable for FDI. Hypothesis 2: In the second type, even if economic factors favor FDI, institutions will not. Hypothesis 3: In the third type, even if country institutions favor FDI, economic factors will not favor domestic investment; therefore, FDI outflows occur in large amounts. Methods: Data come from open sources of the World Bank, the Fraser Institute, the Heritage Foundation, and other reliable sources.
The dependent variable is FDI inflows. The independent variables are institutions (economic and political freedom indices) and economic factors (natural, material, and labor resources, government consumption, infrastructure, minimum wage, education, unemployment, tax rates, consumer price index, inflation, and others), whose endogeneity or exogeneity is tested via instrumental variable estimation. Political rights and civil liberties are used as instrumental variables. Results indicate that in the first type, both country institutions and economic factors, specifically labor and logistics/infrastructure/energy intensity, are favorable for potential investors. In the second category of countries, the risk of loss of assets is very high because governments are hijacked by local oligarchs/cartels/special interest groups. In the third category of countries, local economic factors are unfavorable for domestic investment even if the institutions are perfectly acceptable. Cluster analysis and instrumental variable estimation were used to reveal cause-effect patterns in each of the clusters.
Keywords: foreign direct investment, economy, institutions, instrumental variable estimation
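The instrumental variable logic described above can be illustrated with a minimal two-stage least squares (2SLS) sketch on synthetic data. Here an instrument (think: a political-rights score) shifts an endogenous regressor (an institutions index) that is confounded with the outcome (FDI inflows); all coefficients and data are invented for illustration and do not come from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

z = rng.normal(size=n)                        # instrument (e.g. political rights)
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor (institutions)
y = 2.0 * x + 1.0 * u + rng.normal(size=n)    # outcome (FDI); true slope = 2

def ols(X, t):
    """Least-squares coefficients of t on the columns of X."""
    return np.linalg.lstsq(X, t, rcond=None)[0]

def add_const(v):
    return np.column_stack([np.ones(len(v)), v])

# Stage 1: regress the endogenous x on the instrument z, keep fitted values
x_hat = add_const(z) @ ols(add_const(z), x)
# Stage 2: regress y on the fitted values to recover the causal slope
beta_iv = ols(add_const(x_hat), y)[1]

beta_ols = ols(add_const(x), y)[1]  # plain OLS, biased upward by the confounder
```

With this data-generating process, `beta_iv` lands close to the true slope of 2, while `beta_ols` is biased upward because the confounder moves both `x` and `y`.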
Procedia PDF Downloads 161
897 Criteria to Access Justice in Remote Criminal Trial Implementation
Authors: Inga Žukovaitė
Abstract:
This work presents postdoc research on remote criminal proceedings in court, aimed at streamlining proceedings while ensuring the effective participation of the parties and the court's obligation to administer substantive and procedural justice. The study tests the hypothesis that remote criminal proceedings do not in themselves violate the fundamental principles of criminal procedure; however, their implementation must ensure the parties' right to effective legal remedies and a fair trial, and only then address questions of procedural economy, speed, and the flexibility/functionality of applying technology. To ensure that changes in the regulation of criminal proceedings are in line with fair trial standards, this research answers the questions of what conditions (first of all legal, and only then organisational) are required for remote criminal proceedings to respect the parties and enable their effective participation in public proceedings, to create conditions for quality legal defence and its accessibility, and to give the party a correct impression that they are heard and that the court is impartial and fair. It also presents the results of empirical research in the courts of Lithuania conducted using the interview method. The research will serve as a basis for developing a theoretical model for remote criminal proceedings in the EU that balances the intention to have innovative, cost-effective, and flexible criminal proceedings against the positive obligation of the State to ensure participants' rights to just and fair criminal proceedings. Moreover, developments in criminal proceedings keep changing the image of the court itself; the paper therefore creates preconditions for future research on the impact of remote criminal proceedings on trust in courts.
The study aims to lay down the fundamentals for theoretical models of remote hearings in criminal proceedings and to make recommendations for safeguarding human rights, in particular the rights of the accused, in such proceedings. The following criteria are relevant for the remote form of criminal proceedings: the purpose of the judicial instance, the legal position of participants in proceedings, their vulnerability, and the nature of the required legal protection. The content of the study consists of: 1. Identification of the factual and legal prerequisites for a decision to organise the entire criminal proceedings by remote means or to carry out one or several procedural actions by remote means. 2. After analysing the legal regulation and practice concerning the application of the elements of remote criminal proceedings, distinguishing the main legal safeguards for protecting the rights of the accused to ensure: (a) the right of effective participation in a court hearing; (b) the right of confidential consultation with the defence counsel; (c) the right of participation in the examination of evidence, in particular material evidence, as well as the right to question witnesses; and (d) the right to a public trial.
Keywords: remote criminal proceedings, fair trial, right to defence, technology progress
Procedia PDF Downloads 73
896 Selective Conversion of Biodiesel Derived Glycerol to 1,2-Propanediol over Highly Efficient γ-Al2O3 Supported Bimetallic Cu-Ni Catalyst
Authors: Smita Mondal, Dinesh Kumar Pandey, Prakash Biswas
Abstract:
During the past two decades, considerable attention has been given to the value addition of biodiesel-derived glycerol (~10 wt.%) to make the biodiesel industry economically viable. Among the various glycerol value-addition methods, hydrogenolysis of glycerol to 1,2-propanediol is one of the most attractive and promising routes. In this study, a highly active and selective γ-Al₂O₃-supported bimetallic Cu-Ni catalyst was developed for selective hydrogenolysis of glycerol to 1,2-propanediol in the liquid phase. The catalytic performance was evaluated in a high-pressure autoclave reactor. Experimental results demonstrated that the bimetallic copper-nickel catalyst was more active and selective to 1,2-PDO than the monometallic catalysts due to its bifunctional behavior. To verify the effect of calcination temperature on the formation of the Cu-Ni mixed oxide phase, the calcination temperature of the 20 wt.% Cu:Ni(1:1)/Al₂O₃ catalyst was varied from 300°C to 550°C. The physicochemical properties of the catalysts were characterized by various techniques, such as specific surface area (BET), X-ray diffraction (XRD), temperature-programmed reduction (TPR), and temperature-programmed desorption (TPD). The BET surface area and pore volume of the catalysts were in the ranges of 71-78 m²g⁻¹ and 0.12-0.15 cm³g⁻¹, respectively. The peaks in the 2θ ranges of 43.3°-45.5° and 50.4°-52° corresponded to the copper-nickel mixed oxide phase [JCPDS: 78-1602]. The formation of the mixed oxide indicated the strong interaction of Cu and Ni with the alumina support. The crystallite size decreased with increasing calcination temperature up to 450°C; beyond this, the crystallite size increased due to agglomeration. The smallest crystallite size of 16.5 nm was obtained for the catalyst calcined at 400°C.
The total acidic sites of the catalysts were determined by NH₃-TPD, and the maximum total acidity of 0.609 mmol NH₃ gcat⁻¹ was obtained over the catalyst calcined at 400°C. TPR data indicated the highest degree of reduction (75%) for the catalyst calcined at 400°C among all the catalysts. Further, the 20 wt.% Cu:Ni(1:1)/γ-Al₂O₃ catalyst calcined at 400°C exhibited the highest catalytic activity (> 70%) and 1,2-PDO selectivity (> 85%) under mild reaction conditions, owing to its highest acidity, highest degree of reduction, and smallest crystallite size. A modified power-law kinetic model was then developed to understand the true kinetic behaviour of glycerol hydrogenolysis over this catalyst. The rate equations obtained from the model were solved by ode23 in MATLAB coupled with a genetic algorithm. The model-predicted data fitted the experimental data very well. The activation energy of 1,2-PDO formation was found to be 45 kJ mol⁻¹.
Keywords: glycerol, 1,2-PDO, calcination, kinetics
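The abstract's modified power-law model was solved with ode23 in MATLAB. As a hedged illustration of the same idea, here is a minimal Python sketch that integrates a one-step power-law rate law with an Arrhenius rate constant using the reported Ea = 45 kJ mol⁻¹; the pre-exponential factor, temperature, reaction order, and the lumped glycerol → 1,2-PDO scheme are assumptions for illustration, not the authors' fitted model:

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314    # J mol^-1 K^-1
Ea = 45e3    # J mol^-1, activation energy reported in the abstract
A = 1.0e4    # h^-1, hypothetical pre-exponential factor

def k(T):
    """Arrhenius rate constant k = A * exp(-Ea / (R T))."""
    return A * np.exp(-Ea / (R * T))

def rates(t, c, T, order=1.0):
    """Lumped power-law scheme: glycerol -> 1,2-PDO."""
    c_gly, c_pdo = c
    r = k(T) * max(c_gly, 0.0) ** order
    return [-r, r]

T = 473.15  # 200 °C, assumed reaction temperature
sol = solve_ivp(rates, (0.0, 8.0), [1.0, 0.0], args=(T,))
c_gly, c_pdo = sol.y[:, -1]
conversion = 1.0 - c_gly  # initial glycerol concentration normalized to 1
```

A real power-law fit would treat `A`, `Ea`, and the orders as parameters estimated against experimental concentration profiles, which is where the genetic algorithm enters in the authors' workflow.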
Procedia PDF Downloads 148
895 Forest Degradation and Implications for Rural Livelihood in Kaimur Reserve Forest of Bihar, India
Authors: Shashi Bhushan, Sucharita Sen
Abstract:
In India, forest and people are inextricably linked, since millions of people live adjacent to or within protected areas and harvest forest products. Indian forests have their own legacy, sustained by their climatic nature along with several social, economic and cultural activities. People surrounding forest areas depend on this resource not only for their livelihoods but also for religious ceremonies, social customs and herbal medicines, while assets such as agricultural land, groundwater level, and soil fertility are in turn shaped by the forest. The assumption that fuelwood and fodder extraction, which is part of local livelihoods, leads to deforestation has so far been the dominant mainstream view in deforestation discourses. Given the occupational division across social groups in the Kaimur reserve forest, the differentiated nature of dependence on forest resources is important to understand. This paper assesses the nature of dependence and the impact of forest degradation on rural households across various social groups; additionally, it examines how the scarcity of forest-based resources caused by forest degradation alters patterns of dependence across social groups. Change in forest area was calculated through land use/land cover analysis using remote sensing techniques, and the different forest-based economic activities carried out by households were documented through a primary survey in the Kaimur reserve forest of the state of Bihar, India. The general finding is that the Scheduled Tribe and Scheduled Caste communities, the most socially and economically deprived sections of rural society, are significantly involved in the collection of fuelwood, fodder, and fruits, both for self-consumption and for sale in the market, while other groups use fuelwood, fruit, and fodder for self-use only.
Dependence on local forest resources for fuelwood was primary for all social groups due to easy accessibility and the lack of alternative energy sources. Over the last four decades, forest degradation has had a direct impact on the rural community, mediated through the socio-economic structure and resulting in a shift from forest-based occupations to cultivation and manual labour in agricultural and non-agricultural activities. There is therefore a need to review policies on ‘community forest management’, since this study clearly shows that engagement with and dependence on forest resources is socially differentiated; tying the degree of dependence to forest management thus becomes extremely important from the perspective of ‘sustainable’ forest resource management. The statization of forest resources also has to keep in view the intrinsic way in which the forest-dependent population interacts with the forest.
Keywords: forest degradation, livelihood, social groups, tribal community
Procedia PDF Downloads 175
894 Analysis of Fuel Adulteration Consequences in Bangladesh
Authors: Mahadehe Hassan
Abstract:
In most countries, the manufacturing, trading and distribution of gasoline and diesel fuels belong to the most important sectors of the national economy. For Bangladesh, a robust, well-functioning, secure and smartly managed national fuel distribution chain is an essential precondition for achieving the Government's top priorities in developing and modernizing transportation infrastructure, protecting the national environment and population health and, very importantly, securing due tax revenue for the State Budget. Bangladesh is a developing country with a complex fuel supply network, high fuel tax incidence and, until now, limited possibilities for applying modern, automated technologies to Government control of the national fuel market. Such an environment allows dishonest physical and legal persons and organized criminals to build and profit from illegal fuel distribution schemes and fuel illicit trade. As a result, market transparency, the country's attractiveness for foreign investments, law-abiding economic operators, national consumers, the State Budget, the Government's ability to finance development projects, and the country at large suffer significantly. Research shows that over 50% of retail petrol stations in major agglomerations of Bangladesh sell adulterated fuels and/or cheat customers on the real volume of fuel pumped into their vehicles. Other detected forms of fuel illicit trade include misdeclaration of fuel quantity and quality during internal transit and the selling of non-declared and smuggled fuels. The aim of the study is to recommend the implementation of a National Fuel Distribution Integrity Program (FDIP) in Bangladesh to address and resolve fuel adulteration and illicit trade problems. The program should be customized to the specific needs of the country and implemented in partnership with providers of advanced technologies.
FDIP should enable and further enhance the capacity of the respective Bangladesh Government authorities to identify and eliminate all forms of fuel illicit trade swiftly and resolutely. FDIP high-technology, IT and automation systems and secure infrastructures should be aimed at the following areas: (1) fuel adulteration, misdeclaration and non-declaration; (2) fuel quality; and (3) fuel volume manipulation at the retail level. Furthermore, the overall concept of FDIP delivery and its interaction with the reporting and management systems used by the Government should be aligned with, and support the objectives of, the Vision 2041 and Smart Bangladesh Government programs.
Keywords: fuel adulteration, octane, kerosene, diesel, petrol, pollution, carbon emissions
Procedia PDF Downloads 78
893 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research addresses which transit network structure is most suitable for serving specific demand requirements in an ongoing process of urban dispersion. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips, an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transfers are essential to complete most trips. To determine which of these is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two; a direct-trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with respect to total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. The dispersion degree is defined in a simple way by assuming that only a central area attracts all trips: if this area is small, the mobility pattern is highly concentrated; if it is very large, the city is highly decentralized. In this first step, we determine the area of applicability of each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, introducing more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers.
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we can identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology tells us which network design approach is best for a city by comparing the theoretical results with the real dispersion degree.
Keywords: analytical network design model, network structure, public transport, urban dispersion
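The concentration measure used in the second step, the Gini coefficient, is straightforward to compute. The sketch below is illustrative only (the function and sample trip-attraction data are not from the paper): it shows how a concentrated trip pattern scores higher than a dispersed one.

```python
def gini(values):
    """Gini coefficient of non-negative values (0 = perfectly even, ~1 = fully concentrated)."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard rank-weighted formula over the sorted values.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

# Hypothetical trip attractions per zone: a concentrated vs. a dispersed city.
concentrated = [100, 5, 5, 5, 5]   # one central zone attracts almost all trips
dispersed = [24, 24, 24, 24, 24]   # trips spread evenly over the zones
print(gini(concentrated), gini(dispersed))  # the first is clearly larger
```

With a threshold on such a measure (plus a centralization index), a real city can be placed in the applicability region of one of the three structures.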
Procedia PDF Downloads 231
892 Assessment of Serum Osteopontin, Osteoprotegerin and Bone-Specific ALP as Markers of Bone Turnover in Patients with Disorders of Thyroid Function in Nigeria, Sub-Saharan Africa
Authors: Oluwabori Emmanuel Olukoyejo, Ogra Victor Ogra, Bosede Amodu, Tewogbade Adeoye Adedeji
Abstract:
Background: Disorders of thyroid function are the second most common endocrine disorders worldwide, with a direct relationship to metabolic bone diseases. These metabolic bone complications are often subtle but manifest as bone pain and an increased risk of fractures. The gold standard for diagnosis, Dual-Energy X-ray Absorptiometry (DEXA), is of limited use in this environment due to unavailability, cumbersomeness and cost. However, bone biomarkers have shown promise in assessing alterations in bone remodeling, which has not been studied in this environment. Aim: This study evaluates serum levels of bone-specific alkaline phosphatase (bone-specific ALP), osteopontin and osteoprotegerin as biomarkers of bone turnover in patients with disorders of thyroid function. Methods: This is a cross-sectional study carried out over a period of one and a half years. Forty patients with thyroid dysfunction, aged 20 to 50 years, and thirty-eight age- and sex-matched healthy euthyroid controls were included. Patients were further stratified into hyperthyroid and hypothyroid groups. Bone-specific ALP, osteopontin and osteoprotegerin, alongside serum total calcium, ionized calcium and inorganic phosphate, were assayed for all patients and controls. A self-administered questionnaire was used to obtain sociodemographic data and medical history. Then, 5 ml of blood was collected in a plain bottle, and serum was harvested after clotting and centrifugation. Serum samples were assayed for bone-specific ALP, osteopontin and osteoprotegerin using the ELISA technique. Total calcium and ionized calcium were assayed using an ion-selective electrode, while inorganic phosphate was assayed by automated photometry. Results: The hyperthyroid and hypothyroid patient groups had significantly higher median serum bone-specific ALP (30.40 and 26.50 ng/ml) and significantly lower median OPG (0.80 and 0.80 ng/ml) than the controls (10.81 and 1.30 ng/ml, respectively), p < 0.05.
Serum osteopontin, however, was significantly higher in the hyperthyroid group and significantly lower in the hypothyroid group compared with the controls (11.00 and 2.10 vs 3.70 ng/ml, respectively), p < 0.05. Both hyperthyroid and hypothyroid groups had significantly higher mean serum total calcium, ionized calcium and inorganic phosphate than the controls (2.49 ± 0.28, 1.27 ± 0.14 and 1.33 ± 0.33 mmol/l, and 2.41 ± 0.04, 1.20 ± 0.04 and 1.15 ± 0.16 mmol/l, vs 2.27 ± 0.11, 1.17 ± 0.06 and 1.08 ± 0.16 mmol/l, respectively), p < 0.05. Conclusion: Patients with disorders of thyroid function show metabolic imbalances in all the studied bone markers, suggesting higher bone turnover. The routine bone markers would be an invaluable tool for monitoring bone health in patients with thyroid dysfunction, while the less readily available markers could be introduced as supplementary tools. Moreover, bone-specific ALP, osteopontin and osteoprotegerin were found to be the strongest independent predictors of metabolic bone marker derangements in patients with thyroid dysfunction.
Keywords: metabolic bone diseases, biomarker, bone turnover, hyperthyroid, hypothyroid, euthyroid
Procedia PDF Downloads 38
891 Integrated Management System Applied in Dismantling and Waste Management of the Primary Cooling System from the VVR-S Nuclear Reactor Magurele, Bucharest
Authors: Radu Deju, Carmen Mustata
Abstract:
The VVR-S nuclear research reactor owned by the Horia Hulubei National Institute of Physics and Nuclear Engineering (IFIN-HH) was designed for research and radioisotope production and was permanently shut down in 2002, after 40 years of operation. All of the spent nuclear fuel of S-36 and EK-10 type was returned to the Russian Federation (first in 2009 and last in 2012), and the radioactive waste resulting from its reprocessing will remain permanently in the Russian Federation. The decommissioning strategy chosen is immediate dismantling. At this moment, radionuclides with half-lives shorter than 1 year make only a minor contribution to the contamination of materials and equipment used in the reactor department. Decommissioning of the reactor started in 2010 and is planned to be finalized in 2020, making it the first nuclear research reactor in South-East Europe to start a decommissioning project. The management system applied in the decommissioning of the VVR-S research reactor integrates all common elements of management: nuclear safety, occupational health and safety, environment, quality (compliance with the requirements for decommissioning activities), physical protection and economic elements. This paper presents the application of the integrated management system to the decommissioning of systems, structures, equipment and components (SSEC) from the pumps room, including the management of the resulting radioactive waste. The primary cooling system of this type of reactor includes circulation pumps, heat exchangers, a degasser, ion-exchange filters, piping connections, a drainage system and radioactive leaks. All the decommissioning activities for the primary circuit were performed in stage 2 (year 2014), and they were developed and recorded according to the applicable documents, within the requirements of the Regulatory Body licenses.
The presentation emphasizes how the provisions of the integrated management system are applied in the dismantling of the primary cooling system: the elaboration, approval and application of the necessary documentation, and record keeping before, during and after the dismantling activities. Radiation protection and economics are the key factors in selecting the proper technology; dedicated and advanced technologies were chosen to perform specific tasks. Safety aspects have been taken into consideration, and resource constraints were also an important issue in defining the decommissioning strategy. Important aspects such as radiological monitoring of the personnel and areas, decontamination, waste management and final characterization of the released site are demonstrated and documented.
Keywords: decommissioning, integrated management system, nuclear reactor, waste management
Procedia PDF Downloads 291
890 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. It helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks and offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1, a2-b2) to communicate over the localhost interface without needing a load balancer between microservices A and B.
Aggregation and embedding are complex techniques, since different microservices might have incompatible runtime dependencies that prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can become larger. Fortunately, container technology makes it possible to run several processes on the same machine in an isolated manner, solving both the incompatibility of runtime dependencies and the security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. A wide variety of deployment configurations can therefore be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations obtained by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
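The a1/a2, b1/b2 example can be sketched as a toy pairing function. Everything below (names and structure) is illustrative only, not the paper's formal method:

```python
from itertools import count

def embed_pairs(instances_a, instances_b):
    """Place each communicating pair (a_i, b_i) on its own machine.

    A minimal sketch of the 'embedding' idea: co-located instances talk
    over localhost, so no load balancer is needed between services A and B.
    """
    machines = count(1)
    return {f"m{next(machines)}": [a, b]
            for a, b in zip(instances_a, instances_b)}

plan = embed_pairs(["a1", "a2"], ["b1", "b2"])
print(plan)  # {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}
```

A real optimizer would additionally weigh resource usage, costs and failure tolerance before choosing such a co-location, as the paper's prototype does.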
Procedia PDF Downloads 204
889 The Relationships among Self-Efficacy, Critical Thinking and Communication Skills Ability in Oncology Nurses for Cancer Immunotherapy in Taiwan
Authors: Yun-Hsiang Lee
Abstract:
Cancer is the main cause of death worldwide. With advances in medical technology, immunotherapy, a newly developed advanced treatment, has become a crucial cancer treatment option. For better quality cancer care, communication skills and critical thinking play a central role in clinical oncology settings. However, few studies have explored the impact of communication skills on immunotherapy-related issues and their related factors. This study aimed to (i) explore the current status of communication skill ability for immunotherapy-related issues, self-efficacy for immunotherapy-related care, and critical thinking ability; and (ii) identify factors related to communication skill ability. This is a cross-sectional study. Oncology nurses were recruited through the Taiwan Oncology Nursing Society; they came from hospitals distributed across the four major geographic regions (North, Center, South, East) of Taiwan. A total of 123 oncology nurses participated in this study. A set of questionnaires was used to collect data, assessing communication skill ability for immunotherapy issues, self-efficacy for immunotherapy-related care, critical thinking ability, and background information. Independent t-tests and one-way ANOVA were used to examine differences in communication skill ability by completion of oncology courses (yes vs. no) and years of experience (< 1 year, 1-3 years, and > 3 years), respectively. Spearman correlation was conducted to examine the relationships between communication skill ability and the other variables. Among the 123 oncology nurses, the majority were female (98.4%), and most were employed at a hospital in the North (46.8%) of Taiwan. Most possessed a university degree (78.9%) and had at least 3 years of prior work experience (71.7%).
Forty-three of the oncology nurses indicated in the survey that they had not received oncology nursing training. Overall, the oncology nurses reported moderate to high levels of communication skill ability for immunotherapy issues (mean=4.24, SD=0.7, range 1-5), moderate levels of self-efficacy for immunotherapy-related care (mean=5.20, SD=1.98, range 0-10), and high levels of critical thinking ability (mean=4.76, SD=0.60, range 1-6). Oncology nurses who had received oncology training courses had significantly better communication skill ability than those who had not. Nurses with more work experience (1-3 years or > 3 years) had significantly higher levels of communication skill ability for immunotherapy-related issues than those with less experience (< 1 year). Nurses who reported better communication skill ability also had significantly better self-efficacy (r=.42, p<.01) and better critical thinking ability (r=.47, p<.01). Taken altogether, courses designed to improve communication skill ability for immunotherapy-related issues can make a significant impact in clinical settings. Communication skill ability is the major factor associated with self-efficacy and critical thinking in oncology nurses, especially those with less work experience (< 1 year).
Keywords: communication skills, critical thinking, immunotherapy, oncology nurses, self-efficacy
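The correlations reported above (r=.42, r=.47) are Spearman coefficients, i.e., Pearson correlation computed on ranks. A minimal pure-Python sketch, with entirely synthetic scores for illustration (the real data are not reproduced here):

```python
def ranks(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2.0 + 1.0  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Synthetic scores: communication skill vs. self-efficacy (illustrative only).
comm = [3.1, 4.2, 4.8, 2.9, 4.0, 3.6]
eff = [4.0, 6.5, 8.0, 3.5, 6.0, 5.0]
print(round(spearman(comm, eff), 2))  # 1.0 here, since the synthetic scores are perfectly monotone
```

Because it works on ranks, the measure captures monotone rather than strictly linear association, which suits ordinal questionnaire scores.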
Procedia PDF Downloads 108
888 The Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster: A Qualitative Study
Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon
Abstract:
In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and hospital Emergency Department (ED) staff is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base on communication between pre-hospital EMS and ED professionals through Information and Communication Technology (ICT). This study followed a systematic approach: six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analyzed according to the key themes which emerged from the literature search. Twenty-two studies were included: eleven employed quantitative methods, seven used qualitative methods, and four used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination. This theme reported that disaster plans are in place in hospitals, and in some cases there are interagency agreements with pre-hospital services and relevant stakeholders. However, the findings showed that the disaster plans highlighted in these studies lacked information regarding coordinated communication within and between the pre-hospital and hospital settings. (2) Communication systems used in the disaster.
This theme highlighted that although various communication systems are used between and within hospitals and pre-hospital services, technical issues have hampered communication between teams during disasters. (3) Integrated information management systems. This theme suggested the need for an integrated health information system that can help pre-hospital and hospital staff to record patient data and ensure the data are shared. (4) Disaster training and drills. While some studies analyzed disaster drills and training, the majority focused on hospital departments other than EMTs. These studies suggest the need for simulated disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and hospital ED staff in disaster response. The review shows that although different types of ICT are used, various issues remain which affect coordinated communication among the relevant professionals.
Keywords: emergency medical teams, communication, information and communication technologies, disaster
Procedia PDF Downloads 127
887 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme
Authors: Eleanor Nel
Abstract:
Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further used to obtain research funding. As a result, academic institutions face significant pressure from governing bodies to provide evidence of the return on research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes for evaluating research output and the associated allocation of research funding. This focus on research outputs often outpaces the development of robust, widely accepted tools that additionally measure research impact at institutions. A publication writing programme for enhancing research output was launched at a South African university in 2011, and significant amounts of time, money, and energy have since been invested in it. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and societal effects have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publishing does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme over the period 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period.
To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from a particular faculty as a convenience sample. The purpose of the research was to collect information for developing a comprehensive impact evaluation framework that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results of publication writing programmes more effectively to institutional strategic objectives so as to improve research performance and quality, as well as on what should be included in a comprehensive evaluation framework.
Keywords: evaluation, framework, impact, research output
Procedia PDF Downloads 76
886 Healthy Architecture Applied to Inclusive Design for People with Cognitive Disabilities
Authors: Santiago Quesada-García, María Lozano-Gómez, Pablo Valero-Flores
Abstract:
The recent digital revolution, together with modern technologies, is changing the environment and the way people interact with inhabited space. However, the elderly form a very broad and varied group within society that has serious difficulties in understanding these modern technologies. Within this cluster, outpatients with cognitive disabilities, such as those suffering from Alzheimer's disease (AD), stand out. This population group is in constant growth, and they have specific requirements for their inhabited space. According to architecture, which is one of the health humanities, environments are designed to promote well-being and improve the quality of life for all. Buildings, as well as the tools and technologies integrated into them, must be accessible, inclusive, and health-promoting. In this new digital paradigm, artificial intelligence (AI) appears as an innovative resource to help this population group improve their autonomy and quality of life. Some experiences and solutions, such as those that interact with users through chatbots and voicebots, show the potential of AI in practical application. In the design of healthy spaces, the integration of AI in architecture will allow the living environment to become a kind of 'exo-brain' that can compensate for certain cognitive deficiencies in this population. The objective of this paper is to address, from the discipline of neuroarchitecture, how modern technologies can be integrated into everyday environments and become an accessible resource for people with cognitive disabilities. To this end, the methodology has a mixed structure. On the one hand, from an empirical point of view, the research reviews the existing literature on the applications of AI to built space, following the foundations of the critical review.
As unconventional architectural research, an experimental analysis is proposed that draws on people with AD as a data source, to study how the environment in which they live influences their everyday activities. The results presented in this communication are part of the progress achieved in the competitive R&D&I project ALZARQ (PID2020-115790RB-I00). These outcomes address the specific needs of people with cognitive disabilities, especially those with AD, but, given the comfort and wellness the solutions entail, they can also be extrapolated to society as a whole. As a provisional conclusion, it can be stated that, in the immediate future, AI will be an essential element in the design and construction of healthy new environments. The discipline of architecture has the compositional resources, through this emerging technology, to build an 'exo-brain' capable of becoming a personal assistant for inhabitants, one that interacts with them proactively and contributes to their general well-being. The main objective of this work is to show how this is possible.
Keywords: Alzheimer's disease, artificial intelligence, healthy architecture, neuroarchitecture, architectural design
Procedia PDF Downloads 62
885 Mikrophonie I (1964) by Karlheinz Stockhausen - Between Idea and Auditory Image
Authors: Justyna Humięcka-Jakubowska
Abstract:
1. Background in music analysis. Traditionally, when we think about a composer's sketches, the chances are that we are thinking in terms of the working out of detail rather than the evolution of an overall concept. Since music is a 'time art', questions of form cannot be entirely detached from considerations of time. One could say that composers tend to regard time either as a space filled gradually and partly intuitively, or as something to be occupied by a specific strategy. In my opinion, one thing that sheds light on Stockhausen's compositional thinking is his frequent use of 'form schemas', often a single-page representation of the entire structure of a piece. 2. Background in music technology. Sonic Visualiser (SV) is an open-source application for viewing, analysing, and annotating music audio files. It contains a number of visualisation tools designed with useful default parameters for musical analysis. Additionally, SV's Vamp plugin format supports analyses such as structural segmentation. 3. Aims. The aim of my paper is to show how SV may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. I want to show that 'traditional' music-analytic methods do not allow us to indicate the interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure. 4. Main contribution. Stockhausen dealt with the most diverse musical problems by the most varied methods. One characteristic that he never ceased to place at the center of his thought and work was the quest for a new balance founded upon an acute connection between speculation and intuition.
In the case of Mikrophonie I (1964) for tam-tam and 6 players, Stockhausen makes a distinction between the 'connection scheme', which indicates the ground rules underlying all versions, and the form scheme, which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a 'form scheme', which is what one can hear on the CD recording. In the current study, the insight into the compositional strategy chosen by Stockhausen is compared with the auditory image, that is, with the perceived musical surface. Stockhausen's musical work is analyzed in terms of both melodic/voice and timbre evolution. 5. Implications. The current study shows how musical structures determine the musical surface. My general assumption is that while listening to music we can extract basic kinds of musical information from musical surfaces. It is shown that interactive strategies of musical structure analysis can offer a very fruitful way of looking directly into certain structural features of music.
Keywords: automated analysis, composer's strategy, Mikrophonie I, musical surface, Stockhausen
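Sonic Visualiser's Vamp segmentation plugins cannot be reproduced here, but the basic idea behind structural segmentation — detecting section boundaries from abrupt changes in a frame-level feature — can be sketched in pure Python on a synthetic two-section signal. All names, frame sizes and thresholds below are illustrative, not taken from SV:

```python
import math

def frame_energy(signal, frame_len):
    """Mean squared amplitude of each non-overlapping frame."""
    return [sum(s * s for s in signal[i:i + frame_len]) / frame_len
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

def boundaries(energies, threshold):
    """Frame indices where the energy jumps by more than `threshold` (a crude novelty curve)."""
    return [i for i in range(1, len(energies))
            if abs(energies[i] - energies[i - 1]) > threshold]

# Synthetic two-section 'recording': one second of a quiet sine, then a loud one.
sr = 1000
quiet = [0.1 * math.sin(2 * math.pi * 5 * t / sr) for t in range(sr)]
loud = [0.8 * math.sin(2 * math.pi * 5 * t / sr) for t in range(sr)]
e = frame_energy(quiet + loud, frame_len=100)
print(boundaries(e, threshold=0.05))  # → [10]: the section change at frame 10
```

Real segmenters replace the energy feature with richer descriptors (timbre, chroma) and a learned or self-similarity-based novelty function, but the boundary-from-feature-change principle is the same.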
Procedia PDF Downloads 298