Search results for: speed power
1725 Sustainable Affordable Housing Development in Indonesia
Authors: Gina Cynthia Raphita Hasibuan
Abstract:
The housing sector in Indonesia is in a critical condition: the majority of low-income citizens live in substandard dwellings, and the housing backlog increases every year. The housing problem becomes more urgent when 'sustainability' is considered, and sustainable affordable housing has yet to be implemented successfully. Global urbanization develops fastest in developing countries like Indonesia, where informal settlements are rapidly escalating, making sustainable affordable housing strategies very critical in this context. The problem in developing countries like Indonesia lies in the institutional capacity of newly established local governments, which have greater power to determine development policy but still lack institutional capability and coordination with the central government, while collaborative governance has not yet been established. The concept of upgrading informal settlements has changed over time and remains inconsistent. Despite much research on themes such as the sustainable housing concept within the Indonesian context, there has been a dearth of research examining the role of collaborative governance, as the current approach remains fragmented among stakeholders and lacks participation by the community as the end user; this research therefore attempts to fill that gap. Using a multi-method case study conducted in Jakarta, this research aims to critically assess the role of collaborative governance in addressing sustainable affordable housing in Indonesia and to understand informal settlements and interventions in Indonesia rather than imposing a framework from Western perspectives.
Keywords: affordable housing, collaborative governance, sustainability, urban planning
Procedia PDF Downloads 410
1724 Complex Network Analysis of Seismicity and Applications to Short-Term Earthquake Forecasting
Authors: Kahlil Fredrick Cui, Marissa Pastor
Abstract:
Earthquakes are complex phenomena, exhibiting complex correlations in space, time, and magnitude. Recently, the concept of complex networks has been used to shed light on the statistical and dynamical characteristics of regional seismicity. In this work, we study the relationships and interactions of seismic regions in Chile, Japan, and the Philippines through weighted and directed complex network analysis. Geographical areas are digitized into cells of fixed dimensions, which in turn become the nodes of the network when an earthquake has occurred therein. Nodes are linked if a correlation exists between them, as determined and measured by a correlation metric. The networks are found to be scale-free, exhibiting power-law behavior in the distributions of their different centrality measures: the in- and out-degree and the in- and out-strength. Evidence is also found of preferential interaction between seismically active regions through their degree-degree correlations, suggesting that seismicity is dictated by the activity of a few active regions. The importance of a seismic region to the overall seismicity is measured using a generalized centrality metric, taken to be an indicator of its activity or passivity. The spatial distribution of earthquake activity indicates the areas where strong earthquakes have occurred in the past, while the passivity distribution points toward the likely locations where an earthquake would occur whenever another one happens elsewhere. Finally, we propose a method that would project the location of the next possible earthquake using the generalized centralities coupled with correlations calculated between the latest earthquakes and a geographical point in the future.
Keywords: complex networks, correlations, earthquake, hazard assessment
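As an illustration of the network construction described above (not the authors' code), the following sketch builds a directed, weighted network of seismic grid cells with Python's networkx; the cell size, correlation function and linking threshold are placeholder assumptions.

```python
# Sketch: build a directed, weighted network of seismic grid cells and
# inspect the degree/strength measures discussed above. The correlation
# metric, cell size and threshold are illustrative placeholders.
import itertools
import networkx as nx

def cell_of(event, cell_deg=0.5):
    """Map an earthquake (lat, lon) to a fixed-size grid cell id."""
    return (int(event["lat"] // cell_deg), int(event["lon"] // cell_deg))

def build_network(events, correlation, threshold=0.5):
    """events: list of dicts with 'lat'/'lon'; correlation: f(cell_a, cell_b) -> float."""
    g = nx.DiGraph()
    cells = {cell_of(ev) for ev in events}          # nodes = cells with at least one event
    g.add_nodes_from(cells)
    for a, b in itertools.permutations(cells, 2):   # directed pairs of cells
        c = correlation(a, b)
        if c > threshold:                           # link only correlated cells
            g.add_edge(a, b, weight=c)
    return g

def centralities(g):
    # In-/out-degree and in-/out-strength (weighted degree), whose
    # distributions were reported to be power-law-like.
    return {
        "in_degree": dict(g.in_degree()),
        "out_degree": dict(g.out_degree()),
        "in_strength": dict(g.in_degree(weight="weight")),
        "out_strength": dict(g.out_degree(weight="weight")),
    }
```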
Procedia PDF Downloads 212
1723 UEMG-FHR Coupling Analysis in Pregnancies Complicated by Pre-Eclampsia and Small for Gestational Age
Authors: Kun Chen, Yan Wang, Yangyu Zhao, Shufang Li, Lian Chen, Xiaoyue Guo, Jue Zhang, Jing Fang
Abstract:
The coupling strength between uterine electromyography (UEMG) and fetal heart rate (FHR) signals during the peripartum period reflects fetal biophysical activities. Therefore, UEMG-FHR coupling characterization is instructive in assessing placental function. This study introduced a physiological marker named elevated frequency of UEMG-FHR coupling (E-UFC) and explored its predictive value for pregnancies complicated by pre-eclampsia and small for gestational age (SGA). Placental insufficiency patients (n=12) and healthy volunteers (n=24) were recruited and participated in the study. UEMG and FHR were recorded non-invasively by a trans-abdominal device in women with singleton pregnancies (32-37 weeks) from 10:00 pm to 8:00 am. The product of the wavelet coherence and the wavelet cross-spectral power between UEMG and FHR was used to weight these two effects in order to quantify the degree of UEMG-FHR coupling. E-UFC was extracted from the resultant spectrogram by calculating the mean value of the high-coherence (r > 0.5) frequency band. Results showed that high coherence between UEMG and FHR was observed in the frequency band 1/512-1/16 Hz. In addition, E-UFC in placental insufficiency patients was weaker compared to healthy controls (p < 0.001) at the group level. These findings suggested that the proposed approach could be used to quantitatively characterize fetal biophysical activities, which is beneficial for the early detection of placental insufficiency and could reduce the occurrence of adverse pregnancy outcomes.
Keywords: uterine electromyography, fetal heart rate, coupling analysis, wavelet analysis
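The following is a simplified sketch of a coherence-weighted cross-power coupling index, using ordinary Fourier estimates from scipy rather than the wavelet coherence and wavelet cross-spectrum used in the study; the sampling rate, segment length and band limits are assumptions.

```python
# Simplified coupling index between UEMG and FHR: spectral coherence weighted
# by cross-spectral power, averaged over the high-coherence band. This is a
# Fourier-based analogue of the wavelet approach described above; fs, nperseg
# and band limits are placeholders.
import numpy as np
from scipy.signal import coherence, csd

def coupling_index(uemg, fhr, fs=4.0, band=(1/512, 1/16), coh_min=0.5):
    f, coh = coherence(uemg, fhr, fs=fs, nperseg=2048)
    _, pxy = csd(uemg, fhr, fs=fs, nperseg=2048)
    weighted = coh * np.abs(pxy)                      # weight coherence by cross-power
    in_band = (f >= band[0]) & (f <= band[1]) & (coh > coh_min)
    return weighted[in_band].mean() if in_band.any() else 0.0
```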
Procedia PDF Downloads 202
1722 Human Security through Human Rights in the Contemporary World
Authors: Shilpa Bagade Poharkar
Abstract:
The traditional notion of security was based on the use of force to preserve vital interests, grounded in either realism or power politics. The modern approach to security extends beyond these traditional notions and focuses on issues such as development and respect for human rights. In global politics, the issue of human security plays a vital role in most policy matters. In the modern era, the protection of human rights is recognized as one of the main functions of any legitimate modern state. This research paper explores the relationship between human rights and security. The United Nations is facing major challenges such as rampant poverty, refugee outflows, human trafficking, displacement, conflicts, terrorism, intra- and inter-ethnic conflicts, proliferation of small arms, genocide, piracy, climate change, health issues and so on. The methodology observed in this paper is doctrinal, including analytical and descriptive comparative methods. The hypothesis of the paper concerns the relationship between human rights and the United Nations' goal of attaining peace and security. Although previous research has been done in this field, this paper tries to identify the challenges to human security through human rights in the contemporary world and provides measures to address them. The study focuses on the following research questions: What issues and challenges does the United Nations face while advancing human security through human rights? What measures should the international community take to ensure the protection of human rights while protecting state security, and to contribute to the attainment of the goals of the United Nations?
Keywords: human rights, human security, peace, security, United Nations
Procedia PDF Downloads 248
1721 Role of Collaborative Cultural Model to Step on Cleaner Energy: A Case of Kathmandu City Core
Authors: Bindu Shrestha, Sudarshan R. Tiwari, Sushil B. Bajracharya
Abstract:
Urban household cooking fuel choice is highly influenced by human behavior and energy culture parameters such as cognitive norms, material culture and practices. Although these parameters play a leading role in achieving cleaner households in Kathmandu, they are not incorporated in the city's energy policy. This paper aims to identify trade-offs that transform residents' cooking behavior towards cleaner technology, based on a questionnaire survey, observation, mapping, interviews, and quantitative analysis. The analysis recommends implementing a Collaborative Cultural Model (CCM) to change the impact on the neighborhood from the policy level. The results showed that each household produces 439.56 kg of carbon emissions each year and that 20 percent used unclean technology due to low income levels. Residents who used liquefied petroleum gas (LPG) as their cooking fuel suffered from an energy crisis every year, which created fuel hoarding and ultimately generates more energy demand and carbon exposure. In conclusion, carbon emissions can be reduced by improving the residents' energy consumption culture. The city is recommended to use the holistic action of changing habits as the soft power of collaboration, in a two-way participation approach among residents, the private sector, and government, to change energy culture and behavior at the policy level.
Keywords: energy consumption pattern, collaborative cultural model, energy culture, fuel stacking
Procedia PDF Downloads 134
1720 Propeller Performance Modeling through a Computational Fluid Dynamics Analysis Method
Authors: Maxime Alex Junior Kuitche, Ruxandra Mihaela Botez, Jean-Chirstophe Maunand
Abstract:
The evolution of aircraft is closely linked to the study and improvement of propulsion systems. Determining the propulsion performance is a real challenge in aircraft modeling and design. In addition to theoretical methodologies, experimental procedures are used to obtain a good estimation of the propulsion performance. For piston-propeller propulsion, the propeller requires several experimental tests, which can be extremely demanding in terms of time and money. This paper presents a new procedure to estimate the performance of a propeller through a numerical approach using computational fluid dynamics analysis. The propeller was initially scanned, and then its 3D model was represented using CATIA. A structured mesh and the Shear Stress Transport (SST) k-ω turbulence model were applied to describe accurately the flow pattern around the propeller. The partial differential equations were then solved using ANSYS FLUENT software. The method was applied to the UAS-S45's propeller, designed and manufactured by Hydra Technologies in Mexico. An extensive investigation was performed for several flight conditions in terms of altitudes and airspeeds, with the aim of determining the thrust coefficients, power coefficients and efficiency of the propeller. The computational fluid dynamics results were compared with experimental data acquired from wind tunnel tests performed in the LARCASE Price-Paidoussis wind tunnel. The results of this comparison demonstrated that our approach is highly accurate.
Keywords: CFD analysis, propeller performance, unmanned aerial system propeller, UAS-S45
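For reference, the standard non-dimensional quantities behind the reported thrust coefficients, power coefficients and efficiency can be computed as below; the numerical inputs are illustrative and are not UAS-S45 data.

```python
# Standard propeller non-dimensional coefficients used to report CFD and
# wind-tunnel propeller performance; the input numbers are placeholders.
def propeller_coefficients(thrust_N, power_W, rpm, diameter_m, airspeed_ms, rho=1.225):
    n = rpm / 60.0                                  # revolutions per second
    J = airspeed_ms / (n * diameter_m)              # advance ratio
    Ct = thrust_N / (rho * n**2 * diameter_m**4)    # thrust coefficient
    Cp = power_W / (rho * n**3 * diameter_m**5)     # power coefficient
    eta = J * Ct / Cp                               # propulsive efficiency
    return J, Ct, Cp, eta

print(propeller_coefficients(thrust_N=40.0, power_W=1500.0, rpm=5000,
                             diameter_m=0.71, airspeed_ms=30.0))
```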
Procedia PDF Downloads 353
1719 Reading and Writing of Biscriptal Children with and Without Reading Difficulties in Two Alphabetic Scripts
Authors: Baran Johansson
Abstract:
This PhD dissertation aimed to explore children's writing and reading in L1 (Persian) and L2 (Swedish). It adds new perspectives to reading and writing studies of bilingual biscriptal children with and without reading and writing difficulties (RWD). The study used standardised tests to examine linguistic and cognitive skills related to word reading and writing fluency in both languages. Furthermore, all participants produced two texts (one descriptive and one narrative) in each language. The writing processes and written products of these children were explored using logging methodologies (Eye and Pen) for both languages. Furthermore, this study investigated how two bilingual children with RWD presented themselves through writing across their languages. To my knowledge, studies utilizing standardised tests and logging tools to investigate bilingual children's word reading and writing fluency across two different alphabetic scripts are scarce. Few studies have analysed how bilingual children construct meaning in their writing, and none have focused on children who write in two different alphabetic scripts or those with RWD. Therefore, some aspects of the systemic functional linguistics (SFL) perspective were employed to examine how two participants with RWD created meaning in their written texts in each language. The results revealed that children with and without RWD had higher writing fluency on all measures (e.g. text length, writing speed) in their L2 compared to their L1. Word reading abilities in both languages were found to influence their writing fluency. The findings also showed that bilingual children without reading difficulties performed 1 standard deviation below the mean when reading words in Persian. However, their reading performance in Swedish aligned with the expected age norms, suggesting greater efficiency in reading Swedish than in Persian. Furthermore, the results showed that the level of orthographic depth, the consistency between graphemes and phonemes, and orthographic features can probably explain these differences across languages. The analysis of meaning-making indicated that the participants with RWD exhibited varying levels of difficulty, which influenced their knowledge and use of writing across languages. For example, the participant with poor word recognition (PWR) presented himself similarly across genres, irrespective of the language in which he wrote. He employed the listing technique similarly across his L1 and L2. However, the participant with mixed reading difficulties (MRD) had difficulties with both transcription and text production. He produced spelling errors and paused frequently in both languages. He also struggled with word retrieval and with producing coherent texts, consistent with studies of monolingual children with poor comprehension or with developmental language disorder. The results suggest that the mother tongue instruction provided to the participants has not been sufficient for them to become balanced biscriptal readers and writers in both languages. Therefore, increasing the number of hours dedicated to mother tongue instruction and motivating the children to participate in these classes could be potential strategies to address this issue.
Keywords: reading, writing, reading and writing difficulties, bilingual children, biscriptal
Procedia PDF Downloads 71
1718 Comparison of Gait Variability in Individuals with Trans-Tibial and Trans-Femoral Lower Limb Loss: A Pilot Study
Authors: Hilal Keklicek, Fatih Erbahceci, Elif Kirdi, Ali Yalcin, Semra Topuz, Ozlem Ulger, Gul Sener
Abstract:
Objectives and Goals: The stride-to-stride fluctuation in gait, known as gait variability, is a determinant of qualified locomotion. Gait variability is an important predictive factor of fall risk and is useful for monitoring the effects of therapeutic interventions and rehabilitation. The aim of the study was to compare gait variability in individuals with trans-tibial and trans-femoral lower limb loss. Methods: Ten individuals with traumatic unilateral trans-femoral limb loss (TF), 12 individuals with traumatic trans-tibial lower limb loss (TT) and 12 healthy individuals (HI) participated in the study. Gait characteristics, including mean step length, step length variability, ambulation index, and time on each foot, were evaluated with a treadmill. Participants walked at their preferred speed for six minutes. Data from the 4th to the 6th minute were selected for statistical analyses to eliminate the learning effect. Results: There were differences between the groups in intact limb step length variation, time on each foot, ambulation index and mean age (p < .05) according to the Kruskal-Wallis test. Pairwise analyses showed differences between TT and TF in residual limb variation (p = .041), time on the intact foot (p = .024), time on the prosthetic foot (p = .024), and ambulation index (p = .003), in favor of the TT group. There were differences between the TT and HI groups in intact limb variation (p = .002), time on the intact foot (p < .001), time on the prosthetic foot (p < .001), and ambulation index (p < .001), in favor of the HI group. There were differences between the TF and HI groups in intact limb variation (p = .001), time on the intact foot (p = .01) and ambulation index (p < .001), in favor of the HI group. There was a difference between the groups in mean age, as the HI group was younger (p < .05). The groups were similar in step length (p > .05) and, among the individuals with lower limb loss, in duration of prosthesis use (p > .05). Conclusions: This pilot study provided basic data about gait stability in individuals with traumatic lower limb loss. The results showed that, in order to evaluate gait differences between different amputation levels, long-range gait analysis methods may be useful for obtaining more valuable information. On the other hand, the similarity in step length may result from effective prosthesis use or effective gait rehabilitation; notably, all participants with lower limb loss had already been trained. The differences between TT and HI, and between TF and HI, may have resulted from age-related features; therefore, an age-matched HI population is recommended for future studies. Increasing the number of participants and comparing age-matched groups are also recommended in order to generalize these results.
Keywords: lower limb loss, amputee, gait variability, gait analyses
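A minimal sketch of the statistical workflow described above (Kruskal-Wallis omnibus test followed by pairwise comparisons), using scipy with placeholder data rather than the study's measurements; the pairwise test shown is one common choice and may differ from the authors' exact procedure.

```python
# Kruskal-Wallis test across the three groups, then pairwise follow-up tests.
# The arrays below are placeholder values, not the study's data.
from scipy.stats import kruskal, mannwhitneyu

tt = [2.1, 2.4, 2.0, 2.6, 2.3]   # e.g., intact-limb step-length variability, TT group
tf = [3.0, 3.4, 2.9, 3.6, 3.1]   # TF group
hi = [1.2, 1.1, 1.4, 1.0, 1.3]   # healthy individuals

h, p = kruskal(tt, tf, hi)                         # omnibus test across groups
print(f"Kruskal-Wallis H={h:.2f}, p={p:.3f}")

for name, (a, b) in {"TT vs TF": (tt, tf),
                     "TT vs HI": (tt, hi),
                     "TF vs HI": (tf, hi)}.items():
    u, p_pair = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name}: U={u:.1f}, p={p_pair:.3f}")    # pairwise comparison
```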
Procedia PDF Downloads 280
1717 Design of RF Generator and Its Testing in Heating of Nickel Ferrite Nanoparticles
Authors: D. Suman, M. Venkateshwara Rao
Abstract:
Cancer is a disease caused by the uncontrolled division of abnormal cells in a part of the body, and it affects millions of people, leading to death. Even though tremendous developments have taken place over the last few decades, an effective therapy for cancer is still not a reality. The existing techniques of cancer therapy are chemotherapy and radiotherapy, which have limitations in terms of side effects, patient discomfort, radiation hazards and the localization of treatment. This paper describes a novel method for cancer therapy using an RF-hyperthermia application of nanoparticles. We have synthesized ferromagnetic nanoparticles and characterized them using XRD and TEM. After biocompatibility studies, these nanoparticles will be injected into the body with a suitable tracer element having affinity to the specific tumor site. When RF energy is applied to the nanoparticles at the tumor site, it produces heat in excess of room temperature, and approximately 41-45°C is sufficient to kill the tumor cells. We have designed an RF source generator provided with a temperature feedback controller to control the radiation-induced temperature of the tumor site. The temperature control is achieved through a negative feedback mechanism using a thermocouple and a relay connected to the power source of the RF generator. This method has advantages such as localized therapy, less radiation, and no side effects. It also presents several challenges: designing the RF source with coils suitable for the tumor site, the biocompatibility of the nanomaterials, and the cooling system design for the RF coil. If these challenges can be overcome, this method will be of huge benefit to society.
Keywords: hyperthermia, cancer therapy, RF source generator, nanoparticles
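A hypothetical sketch of the negative-feedback control loop described above, in which a thermocouple reading gates a relay on the RF generator's power supply; read_thermocouple() and set_relay() are placeholder hardware interfaces, and the temperature window follows the 41-45°C figure quoted in the abstract.

```python
# Bang-bang temperature control sketch: the relay on the RF generator's power
# source is switched based on the thermocouple reading at the tumour site.
# The hardware interfaces below are placeholders, not a real device API.
import time

TARGET_LOW, TARGET_HIGH = 41.0, 45.0   # therapeutic hyperthermia window (degrees C)

def read_thermocouple():
    raise NotImplementedError("replace with the actual sensor readout")

def set_relay(on: bool):
    raise NotImplementedError("replace with the actual relay driver")

def control_loop(poll_s=0.5):
    while True:
        t = read_thermocouple()
        if t >= TARGET_HIGH:
            set_relay(False)           # cut RF power above the upper limit
        elif t <= TARGET_LOW:
            set_relay(True)            # re-enable RF power below the lower limit
        time.sleep(poll_s)
```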
Procedia PDF Downloads 460
1716 Critical Design Futures: A Foresight 3.0 Approach to Business Transformation and Innovation
Authors: Nadya Patel, Jawn Lim
Abstract:
Foresight 3.0 is a synergistic methodology that encompasses systems analysis, future studies, capacity building, and forward planning. These components are interconnected, fostering a collective anticipatory intelligence that promotes societal resilience (Ravetz, 2020). However, traditional applications of these strands can often fall short, leading to missed opportunities and narrow perspectives. Therefore, Foresight 3.0 champions a holistic approach to tackling complex issues, focusing on systemic transformations and power dynamics. Businesses are pivotal in preparing the workforce for an increasingly uncertain and complex world. This necessitates the adoption of innovative tools and methodologies, such as Foresight 3.0, that can better equip young employees to anticipate and navigate future challenges. Firstly, the incorporation of its methodology into workplace training can foster a holistic perspective among employees. This approach encourages employees to think beyond the present and consider wider social, economic, and environmental contexts, thereby enhancing their problem-solving skills and resilience. This paper discusses our research on integrating Foresight 3.0's transformative principles with a newly developed Critical Design Futures (CDF) framework to equip organisations with the ability to innovate for the world's most complex social problems. This approach is grounded in 'collective forward intelligence,' enabling mutual learning, co-innovation, and co-production among a diverse stakeholder community, where business transformation and innovation are achieved.
Keywords: business transformation, innovation, foresight, critical design
Procedia PDF Downloads 81
1715 Theoretical and Experimental Investigation of Binder-free Trimetallic Phosphate Nanosheets
Authors: Iftikhar Hussain, Muhammad Ahmad, Xi Chen, Li Yuxiang
Abstract:
Transition metal phosphides and phosphates are newly emerging electrode material candidates for energy storage devices. For the first time, we report uniformly distributed, interconnected, and well-aligned two-dimensional nanosheets made from trimetallic Zn-Co-Ga phosphate (ZCGP) electrode materials with a preserved crystal phase. It is found that the ZCGP electrode material exhibits about 2.85 and 1.66 times higher specific capacity than the mono- and bimetallic phosphate electrode materials, respectively, at the same current density. The trimetallic ZCGP electrode exhibits superior conductivity, a lower internal resistance (IR) drop, and higher Coulombic efficiency compared to the mono- and bimetallic phosphates. The charge storage mechanism is studied for the mono-, bi- and trimetallic electrode materials, which illustrates diffusion-dominated battery-type behavior. By means of density functional theory (DFT) calculations, ZCGP shows superior metallic conductivity due to the modified exchange splitting originating from the 3d orbitals of Co atoms in the presence of Zn and Ga. Moreover, a hybrid supercapacitor (ZCGP//rGO) device is engineered, which delivers a high energy density (ED) of 40 W h kg⁻¹ and a high power density (PD) of 7,745 W kg⁻¹, lighting light-emitting diodes (LEDs) of 5 different colors. These outstanding results confirm promising battery-type electrode materials for energy storage applications.
Keywords: trimetallic phosphate, nanosheets, DFT calculations, hybrid supercapacitor, binder-free, synergistic effect
Procedia PDF Downloads 210
1714 Comparison of the Effects of Continuous Flow Microwave Pre-Treatment with Different Intensities on the Anaerobic Digestion of Sewage Sludge for Sustainable Energy Recovery from Sewage Treatment Plant
Authors: D. Hephzibah, P. Kumaran, N. M. Saifuddin
Abstract:
Anaerobic digestion is a well-known technique for sustainable energy recovery from sewage sludge. However, sewage sludge digestion is restricted by certain factors. Pre-treatment methods have been established in various publications as a promising technique to improve the digestibility of sewage sludge and to enhance the biogas generated, which can be used for energy recovery. In this study, continuous flow microwave (MW) pre-treatment at different intensities was compared using 5 L semi-continuous digesters at a hydraulic retention time of 27 days. We focused on the effects of MW at different intensities on the sludge solubilization, sludge digestibility, and biogas production of the untreated and MW pre-treated sludge. The MW pre-treatment demonstrated an increase in the ratio of soluble chemical oxygen demand to total chemical oxygen demand (sCOD/tCOD) and in volatile fatty acid (VFA) concentration. Besides that, the total volatile solids (TVS) removal efficiency and tCOD removal efficiency also increased during the digestion of the MW pre-treated sewage sludge compared to the untreated sewage sludge. Furthermore, the biogas yield subsequently increased due to the pre-treatment effect. A higher MW power level and longer irradiation time generally enhanced biogas generation, which has potential for sustainable energy recovery from the sewage treatment plant. However, the net energy balance tabulation shows that the MW pre-treatment leads to negative net energy production.
Keywords: anaerobic digestion, biogas, microwave pre-treatment, sewage sludge
Procedia PDF Downloads 320
1713 Design and Analysis of Crankshaft Using Al-Al2O3 Composite Material
Authors: Palanisamy Samyraj, Sriram Yogesh, Kishore Kumar, Vaishak Cibi
Abstract:
The project concerns the design and analysis of a crankshaft using an Al-Al2O3 composite material. The project is mainly concentrated on two areas: one is to design and analyze the composite material, and the other is to work on the practical model. Growing competition and growing concern for the environment have forced automobile manufacturers to meet conflicting demands such as increased power and performance, lower fuel consumption, lower pollution emissions, and decreased noise and vibration. Metal matrix composites offer good properties for a number of automotive components. The work reports on studies of Al-Al2O3 as a possible alternative material for a crankshaft. These materials have been considered for use in various engine components due to their high strength-to-weight ratio. They are also significantly taken into account for their light weight, high strength, high specific modulus, low coefficient of thermal expansion, and good wear resistance properties. In addition, the high specific stiffness, superior high-temperature mechanical properties, and oxidation resistance of Al2O3 have led to the development of advanced Al-Al2O3 composite materials. Crankshafts are used in the automobile industry. The crankshaft is connected to the connecting rod for the movement of the piston and is subjected to high stresses, which cause wear of the crankshaft. Hence, using a composite material in the crankshaft gives good fuel efficiency, low manufacturing cost, and less weight.
Keywords: metal matrix composites, Al-Al2O3, high specific modulus, strength to weight ratio
Procedia PDF Downloads 275
1712 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low energy consumption systems have become more and more important. One of the key technologies to realize low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several features, such as high temporal resolution, achieving up to 1 Mframe/s, and a high dynamic range (120 dB). However, the point that contributes the most to low energy consumption is its sparsity; to be more specific, this sensor only captures pixels that have an intensity change. In other words, there is no signal in areas without any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other side of these advantages, the data are difficult to handle because the data format is completely different from an RGB image: acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. In order to solve the difficulties caused by the data format differences, most prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even when the data can be fed, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of an RGB pixel value, it is apparent that polarity information is not rich enough. Considering this context, we propose to use the timestamp information as the data representation that is fed to deep learning. Concretely, we first make frame data divided by a certain time period, and then assign an intensity value in response to the timestamp in each frame; for example, a high value is given to a recent signal. We expected that this data representation could capture the features, especially of moving objects, because the timestamp represents the movement direction and speed. Using this proposed method, we made our own dataset with a DVS fixed on a parked car to develop an application for a surveillance system that can detect persons around the car. We think the DVS is one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a non-dynamic situation. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours and feeds polarity information to a CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
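A minimal sketch of the proposed timestamp-based representation: events falling inside a frame period are accumulated into an image whose pixel value encodes how recent the latest event at that pixel is; array names and the frame period are assumptions for illustration.

```python
# Convert an event stream (x, y, polarity, timestamp) into a timestamp-weighted
# frame: recent events get values close to 1, older events values close to 0.
import numpy as np

def events_to_timestamp_frame(x, y, t, frame_start, frame_period, height, width):
    """x, y: pixel coordinates; t: timestamps (same time unit as frame_period)."""
    frame = np.zeros((height, width), dtype=np.float32)
    mask = (t >= frame_start) & (t < frame_start + frame_period)
    # Normalised recency in [0, 1): later events map to larger values.
    recency = (t[mask] - frame_start) / frame_period
    # Process events in timestamp order so the most recent event at each pixel
    # is the value that remains in the frame.
    order = np.argsort(t[mask])
    frame[y[mask][order], x[mask][order]] = recency[order]
    return frame
```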
Procedia PDF Downloads 97
1711 Through Additive Manufacturing. A New Perspective for the Mass Production of Made in Italy Products
Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola
Abstract:
Recent evolutions in innovation processes and in the intrinsic tendencies of the product development process lead to new considerations on the design flow. The instability and complexity that describe contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of additive manufacturing, but also of IoT and AI technologies, continuously puts us in front of new paradigms regarding design as a social activity. From the point of view of application, the totality of these technologies describes a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of some provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. The contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model; the best-known fields of application are described, and the focus then turns to specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could engulf many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. The trajectory that is described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact has a signature that defines it in all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The result that is envisaged indicates a new vision of digital technologies, no longer understood only as the custodians of vast quantities of information, but also as a valid integrated tool in close relationship with the design culture.
Keywords: decision making, design heuristics, product design, product design process, design paradigms
Procedia PDF Downloads 119
1710 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft at an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which considers the extra time that is needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of the aircraft that take off is optimized and has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far away the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing the departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
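A minimal sketch of an A-star-style quickest-path search in which the heuristic estimates remaining time and edges can only be entered inside free time windows; the graph structure and window handling are simplified assumptions and do not reproduce the full QPPTW algorithm or the Zurich data.

```python
# A*-style quickest-path search with time windows. graph[u] is a list of
# (v, travel_time, windows), where windows = [(open, close), ...] are the
# intervals in which the edge may be traversed; h(node) is a lower-bound
# estimate of the remaining time to the target.
import heapq
import itertools

def a_star_time(graph, source, target, start_time, h):
    counter = itertools.count()   # tie-breaker so the heap never compares nodes/paths
    open_set = [(start_time + h(source), next(counter), start_time, source, [source])]
    best = {source: start_time}
    while open_set:
        _, _, t, node, path = heapq.heappop(open_set)
        if node == target:
            return t, path
        for nxt, dt, windows in graph.get(node, []):
            # Wait, if needed, until a window opens that can fit the traversal.
            entry = next((max(t, w_open) for w_open, w_close in windows
                          if max(t, w_open) + dt <= w_close), None)
            if entry is None:
                continue
            arrival = entry + dt
            if arrival < best.get(nxt, float("inf")):
                best[nxt] = arrival
                heapq.heappush(open_set,
                               (arrival + h(nxt), next(counter), arrival, nxt, path + [nxt]))
    return None   # no feasible route within the given windows
```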
Procedia PDF Downloads 231
1709 Corporate Fund Mobilization for Listed Companies and Economic Development: Case of Mongolian Stock Exchange
Authors: Ernest Nweke, Enkhtuya Bavuudorj
Abstract:
The Mongolian Stock Exchange (MSE) serves as a vehicle for executing the privatization policy of the Mongolian Government as the country transitioned from a socialist to a free market economy. It was also the intention of the Government to develop the investment and securities market through its establishment and to further boost the ailing Mongolian economy. This paper focuses on the contributions of the Mongolian Stock Exchange (MSE) to the industrial and economic development of Mongolia via corporate fund mobilization for listed companies in Mongolia. A study of this nature is imperative, as economic development in Mongolia has been accelerated by corporate investments. The key purpose of the research was to critically analyze the operations of the MSE to ascertain the extent to which the objectives for which it was established have been accomplished, and to assess its contributions to the industrial and economic development of Mongolia. To achieve this, secondary data on the activities of the MSE and its market capitalization over the years were collected and analyzed vis-à-vis Mongolia's macro-economic data for the same time period, to determine whether the progressive increase in the market capitalization of the MSE has positively impacted Mongolia's economic growth. A regression analysis package was utilized in dissecting the data. It was found that the Mongolian Stock Exchange has contributed positively and significantly to Mongolia's economic development, though not yet to the desired level. Based on the findings of this research, recommendations were made to address the problems facing the MSE and to enhance its performance and, ultimately, its contributions to the industrial and economic development of the Mongolian nation.
Keywords: Corporate Fund Mobilization, Gross Domestic Product (GDP), market capitalization, purchasing power, stock exchange
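A sketch of the kind of regression reported above, with GDP regressed on MSE market capitalization using statsmodels; the figures are placeholders, not the actual Mongolian data.

```python
# Ordinary least squares regression of GDP on market capitalization.
# The numbers below are illustrative placeholders.
import numpy as np
import statsmodels.api as sm

market_cap = np.array([0.5, 0.7, 0.9, 1.3, 1.8, 2.4])   # e.g., market capitalization by year
gdp = np.array([9.8, 10.4, 11.1, 12.0, 13.2, 14.1])      # e.g., GDP for the same years

X = sm.add_constant(market_cap)      # intercept + slope model
model = sm.OLS(gdp, X).fit()
print(model.summary())               # slope significance indicates the MSE's contribution
```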
Procedia PDF Downloads 253
1708 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform
Authors: S. Hutasavi, D. Chen
Abstract:
The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide the accessibility and computational power for those countries to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using daytime and nighttime satellite imagery based on GEE facilities. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), the Built-up Index (BUI), and the Modified Built-up Index (MBUI). These indices were applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, the overall classification accuracy improved from 79% to 90%, and the error in total built-up area decreased from 29% to 0.7%, after incorporating night-time light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). The results suggest that MBUI with night-time light imagery is appropriate for built-up area extraction and can be utilized for further study of the socioeconomic impacts of regional development policy over the EEC region.
Keywords: built-up area extraction, google earth engine, adaptive thresholding method, rapid mapping
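A sketch of computing NDBI from Landsat 8 surface reflectance in the Earth Engine Python API, where NDBI = (SWIR1 - NIR) / (SWIR1 + NIR); the collection ID, band names, dates and region are assumptions that may need adapting to the dataset actually used.

```python
# NDBI from a Landsat 8 Collection 2 Level-2 median composite over an assumed
# bounding box for the EEC region; the simple threshold stands in for the
# adaptive thresholding used in the study.
import ee
ee.Initialize()

eec = ee.Geometry.Rectangle([100.8, 12.4, 102.1, 13.8])   # rough EEC bounding box (assumed)

image = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
         .filterBounds(eec)
         .filterDate('2020-01-01', '2020-12-31')
         .median())

# In Collection 2 Level-2 products, SR_B6 is SWIR1 and SR_B5 is NIR.
ndbi = image.normalizedDifference(['SR_B6', 'SR_B5']).rename('NDBI')
built_up = ndbi.gt(0.0)   # placeholder threshold separating built-up pixels
```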
Procedia PDF Downloads 126
1707 Introgressive Hybridisation between Two Widespread Sharks in the East Pacific Region
Authors: Diana A. Pazmino, Lynne vanHerwerden, Colin A. Simpfendorfer, Claudia Junge, Stephen C. Donnellan, Mauricio Hoyos-Padilla, Clinton A. J. Duffy, Charlie Huveneers, Bronwyn Gillanders, Paul A. Butcher, Gregory E. Maes
Abstract:
With just a handful of documented cases of hybridisation in cartilaginous fishes, shark hybridisation remains poorly investigated. Small amounts of admixture have previously been detected between Galapagos (Carcharhinus galapagensis) and dusky (Carcharhinus obscurus) sharks, generating a hypothesis of ongoing hybridisation. We sampled a large number of individuals from areas where both species co-occur (contact zones) across the Pacific Ocean and used both mitochondrial and nuclear-encoded SNPs to examine genetic admixture and introgression between the two species. Using empirical and analytical approaches and simulations, we first developed a set of 1,873 highly informative and reliable diagnostic SNPs for these two species to evaluate the degree of admixture between them. Overall, the results indicate a high discriminatory power of nuclear SNPs (FST = 0.47, p < 0.05) between the two species, unlike mitochondrial DNA (ΦST = 0.00, p > 0.05), which failed to differentiate between these species. We identified four hybrid individuals (~1%) and detected bi-directional introgression between C. galapagensis and C. obscurus in the Gulf of California, along the eastern Pacific coast of the Americas. We emphasize the importance of including a combination of mtDNA and diagnostic nuclear markers to properly assess species identification, detect patterns of hybridisation, and better inform the management and conservation of these sharks, especially given the morphological similarities within the genus Carcharhinus.
Keywords: elasmobranchs, single nucleotide polymorphisms, hybridisation, introgression, misidentification
Procedia PDF Downloads 194
1706 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both the group social welfare and each single stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence while obtaining a higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken in this stage imply delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in its initial definition. This can be obtained thanks to a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals, who take decisions affecting one another. Effective coordination among these decision-makers is critical. Finding a mutually agreed solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology which aims to speed up the process of raising the mission's concept maturity level. This is obtained thanks to a guided negotiation space exploration, which involves the autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process so as to reach the equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to search efficiently and rapidly for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs and guarantees the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
Procedia PDF Downloads 136
1705 Simulation and Assessment of Carbon Dioxide Separation by Piperazine Blended Solutions Using E-NRTL and Peng-Robinson Models: Study of Regeneration Heat Duty
Authors: Arash Esmaeili, Zhibang Liu, Yang Xiang, Jimmy Yun, Lei Shao
Abstract:
High-pressure carbon dioxide (CO₂) absorption from a specific off-gas in a conventional column has been evaluated, in view of environmental concerns, with the Aspen HYSYS simulator, using a wide range of single-absorbent and piperazine (PZ) blended solutions to estimate the outlet CO₂ concentration, CO₂ loading, reboiler power supply, and regeneration heat duty, in order to choose the most efficient solution in terms of CO₂ removal and required heat duty. The property package, which is compatible with all the solutions applied in the simulation in this study, estimates the properties based on the electrolyte non-random two-liquid (E-NRTL) model for electrolyte thermodynamics and the Peng-Robinson equation of state for vapor-phase and liquid-hydrocarbon-phase properties. The results of the simulation indicate that piperazine and the mixture of piperazine and monoethanolamine (MEA) demand the highest regeneration heat duty among the studied single and blended amine solutions, respectively. The blended amine solutions with the lowest PZ concentrations (5 wt% and 10 wt%) were considered and compared to reduce the cost of the process, among which the blended solution of 10 wt% PZ + 35 wt% MDEA (methyldiethanolamine) was found to be the most appropriate solution in terms of CO₂ content in the outlet gas, rich-CO₂ loading, and regeneration heat duty.
Keywords: absorption, amine solutions, aspen HYSYS, CO₂ loading, piperazine, regeneration heat duty
Procedia PDF Downloads 188
1704 Surface Modified Thermoplastic Polyurethane and Poly(Vinylidene Fluoride) Nanofiber Based Flexible Triboelectric Nanogenerator and Wearable Bio-Sensor
Authors: Sk Shamim Hasan Abir, Karen Lozano, Mohammed Jasim Uddin
Abstract:
Over the last few years, nanofiber-based triboelectric nanogenerators (TENGs) have caught great attention among researchers all over the world due to their inherent capability of converting mechanical energy into usable electrical energy. In this study, poly(vinylidene fluoride) (PVDF) and thermoplastic polyurethane (TPU) nanofibers prepared by the Forcespinning® (FS) technique were used to fabricate a TENG for a self-charging energy storage device and a biomechanical body motion sensor. The surface of the TPU nanofiber was modified by the uniform deposition of a thin gold film to enhance the frictional properties, yielding a 254 V open-circuit voltage (Voc) and an 86 µA short-circuit current (Isc), which were 2.12 and 1.87 times greater, respectively, than those of the bare PVDF-TPU TENG. Moreover, the as-fabricated PVDF-TPU/Au TENG was tested against variable capacitors and resistive loads, and the results showed that, with a 3.2 x 2.5 cm² active contact area, it can quickly charge up to 7.64 V within 30 seconds using a 1.0 µF capacitor and generate a significant 2.54 mW of power, enough to light 75 commercial LEDs (1.5 V each), by hand-tapping motion at a 4 Hz (240 beats per minute (bpm)) load frequency. Furthermore, the TENG was attached to different body parts to capture distinctive electrical signals for various body movements, elucidating the prospective usability of our nanofiber-based TENG in wearable body motion sensor applications.
Keywords: biomotion sensor, forcespinning, nanofibers, triboelectric nanogenerator
Procedia PDF Downloads 102
1703 A Simple, Precise and Cost Effective PTFE Container Design Capable to Work in Domestic Microwave Oven
Authors: Mehrdad Gholami, Shima Behkami, Sharifuddin B. Md. Zain, Firdaus A. B. Kamaruddin
Abstract:
Since the first application of a microwave oven for sample preparation in 1975, for the purpose of wet ashing of biological samples using a domestic microwave oven, many microwave-assisted dissolution vessels have been developed. Advanced vessels are equipped with a special safety valve that releases excess pressure when the vessels reach critical conditions due to the application of high microwave power. Nevertheless, this release of pressure may cause the loss of volatile elements. In this study, Teflon bottles were designed with relatively thicker walls compared to commercial ones, and a silicone-based polymer was used to prepare an O-ring which plays the role of a safety valve. In this design, eight vessels are located in an ABS holder to keep them stable and safe. The advantage of these vessels is that they need only 2 mL of HNO3 and 1 mL of H2O2 to digest different environmental samples, namely sludge, apple leaves, peach leaves, spinach leaves and tomato leaves. In order to investigate the performance of this design, an ICP-MS instrument was used for the multi-elemental analysis of 20 elements in the SRMs of the above environmental samples, both using this design and a commercial microwave digestion design. Very comparable recoveries were obtained from this simple design and the commercial one. Considering the price of ultrapure chemicals and the amount normally required, which is about 8-10 mL, these simple vessels, together with the procedures that will be discussed in detail, are very cost effective and very suitable for environmental studies.
Keywords: inductively coupled plasma mass spectroscopy (ICP-MS), PTFE vessels, Teflon bombs, microwave digestion, trace element
Procedia PDF Downloads 341
1702 Study on Heat Transfer Capacity Limits of Heat Pipe with Working Fluids Ammonia and Water
Authors: M. Heydari, A. Ghanami
Abstract:
A heat pipe is a simple heat transfer device which combines conduction and phase change phenomena to control heat transfer without any need for an external power source. At the hot surface of the heat pipe, the liquid phase absorbs heat and changes to the vapor phase. The vapor phase flows to the condenser region and, with the loss of heat, changes back to the liquid phase. Due to gravitational force, the liquid phase flows back to the evaporator section. In HVAC systems, the working fluid is chosen based on the operating temperature. The heat pipe has a significant capability to reduce humidity in HVAC systems. Any HVAC system which uses a heater, humidifier or dryer is a suitable candidate for the utilization of heat pipes. Generally, heat pipes have three main sections: condenser, adiabatic region, and evaporator. Performance investigation and optimization of heat pipe operation in order to increase efficiency is crucial. In the present article, a parametric study is performed to improve heat pipe performance; the heat transfer capacity of the heat pipe with respect to geometrical and confining parameters is investigated. For better observation of heat pipe operation in HVAC systems, a CFD simulation with an Eulerian-Eulerian multiphase approach is also performed. The results show that the heat pipe heat transfer capacity is higher for water as the working fluid at an operating temperature of 340 K. It is also shown that the vertical orientation of the heat pipe enhances its heat transfer capacity.
Keywords: heat pipe, HVAC system, grooved heat pipe, heat pipe limits
Procedia PDF Downloads 400
1701 Second Generation Biofuels: A Futuristic Green Deal for Lignocellulosic Waste
Authors: Nivedita Sharma
Abstract:
The global demand for fossil fuels is very high, but their use is not sustainable since their reserves are declining. Additionally, fossil fuels are responsible for the accumulation of greenhouse gases. The emission of greenhouse gases from the transport sector can be reduced by substituting fossil fuels with biofuels. Thus, renewable fuels capable of sequestering carbon dioxide are in high demand. Second-generation biofuels, which use lignocellulosic biomass as a substrate and ultimately yield ethanol, fall largely in this category. Bioethanol is a favorable and near carbon-neutral renewable biofuel, leading to a reduction in tailpipe pollutant emissions and improving ambient air quality. Lignocellulose consists of three main components: cellulose, hemicellulose and lignin, which can be converted to ethanol with the help of microbial enzymes. Enzymatic hydrolysis of lignocellulosic biomass, as the first step, is considered one of the most efficient and least polluting methods for generating fermentable hexose and pentose sugars, which are subsequently fermented to power alcohol by yeasts in the second step of the process. In the present technology, a complete bioconversion process is presented: microorganisms producing the key hydrolytic enzymes, cellulase and xylanase, have been isolated from different niches, screened for enzyme production, and identified using phenotyping and genotyping; enzyme production and purification, the application of the enzymes for the saccharification of different lignocellulosic biomasses, and the fermentation of the hydrolysate to ethanol with high yield are presented in detail.
Keywords: cellulase, xylanase, lignocellulose, bioethanol, microbial enzymes
Procedia PDF Downloads 98
1700 In-Plume H₂O, CO₂, H₂S and SO₂ in the Fumarolic Field of La Fossa Cone (Vulcano Island, Aeolian Archipelago)
Authors: Cinzia Federico, Gaetano Giudice, Salvatore Inguaggiato, Marco Liuzzo, Maria Pedone, Fabio Vita, Christoph Kern, Leonardo La Pica, Giovannella Pecoraino, Lorenzo Calderone, Vincenzo Francofonte
Abstract:
The periods of increased fumarolic activity at La Fossa volcano have been characterized, since the early 1980s, by changes in gas chemistry and in the output rate of the fumaroles. Except for the direct measurements of the steam output from fumaroles performed from 1983 to 1995, the mass output of the individual gas species has recently been measured, with various methods, only sporadically or for short periods. Since 2008, a scanning DOAS system has been operating in the Palizzi area for the remote measurement of the in-plume SO₂ flux. On these grounds, a cross-comparison of different methods for the in situ measurement of the output rate of different gas species is needed. In 2015, two field campaigns were carried out, aimed at: 1. the mapping of the concentrations of CO₂, H₂S and SO₂ in the fumarolic plume at 1 m from the surface, using specific open-path tunable diode lasers (GasFinder, Boreal Europe Ltd.) and an active DOAS for SO₂, respectively; these measurements, coupled with simultaneous ultrasonic wind speed and meteorological data, were elaborated to obtain the dispersion map and the output rate of the single species over the overall fumarolic field; 2. the mapping of the concentrations of CO₂, H₂S, SO₂ and H₂O in the fumarolic plume at 0.5 m from the soil, using an integrated system including IR spectrometers and specific electrochemical sensors; this provided the concentration ratios of the analysed gas species and their distribution in the fumarolic field; 3. the in-fumarole sampling of vapour and measurement of the steam output, to validate the remote measurements. The dispersion map of CO₂, obtained from the tunable laser measurements, shows a maximum CO₂ concentration at 1 m from the soil of 1000 ppmv along the rim and 1800 ppmv on the inner slopes. As observed, the largest contribution derives from a wide fumarole on the inner slope, despite its present outlet temperature of 230°C, almost 200°C lower than those measured at the rim fumaroles. Indeed, the fumaroles on the inner slopes are among those emitting the largest amount of magmatic vapour and, during the 1989-1991 crisis, reached a temperature of 690°C. The estimated CO₂ and H₂S fluxes are 400 t/d and 4.4 t/d, respectively. The coeval SO₂ flux, measured by the scanning DOAS system, is 9±1 t/d. The steam output, recomputed from the CO₂ flux measurements, is about 2000 t/d. The various direct and remote methods (described in points 1-3) have produced coherent results, which encourage the use of daily, automatic DOAS SO₂ data, coupled with periodic in-plume measurements of the different acidic gases, to obtain the total mass rates.
Keywords: DOAS, fumaroles, plume, tunable laser
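A sketch of how the mass output of the other plume species can be scaled from the remotely measured SO₂ flux using in-plume concentration (mass) ratios, in the spirit of the cross-comparison above; the ratios shown are placeholders, not the measured values.

```python
# Scale the flux of a species X from the DOAS SO2 flux and the in-plume X/SO2
# mass ratio. The ratios below are illustrative placeholders.
def species_flux_from_so2(so2_flux_t_per_day, mass_ratio_to_so2):
    """mass_ratio_to_so2: in-plume mass ratio X/SO2 for the species of interest."""
    return so2_flux_t_per_day * mass_ratio_to_so2

so2_flux = 9.0                                     # t/d, from the scanning DOAS
co2_flux = species_flux_from_so2(so2_flux, 45.0)   # placeholder CO2/SO2 mass ratio
h2s_flux = species_flux_from_so2(so2_flux, 0.5)    # placeholder H2S/SO2 mass ratio
print(co2_flux, h2s_flux)
```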
Procedia PDF Downloads 3991699 Islamic Perception of Modern Democratic System
Authors: Muhammad Khubaib
Abstract:
The Holy Quran purport is to establish a democratic system in which Allah has the right to special authority and He who has the supreme power or sovereignty. The supreme leader, Allah ceded the right to govern to his prophet and whoever would ever rule he would have to govern as a deputy of Prophet of Allah and he will not have the right to deviate from the basic rules of law and constitution. Centuries before the birth of prevailing democracy, Muslim scholars and researchers continuously keep using the term of “Jamhür” (majority) in their books. Islam gives the basic importance to the public opinion to establish a government and make the public confidence necessary for the government. The most effective way to gain the trust of the people in the present to build national institutions is through the vote. Vote testifies in favor of the candidate and majority tells us who is more honest and talented. Each voter stands at the position of trustworthy. To vote a cruel person would be tantamount to treason and even not to vote would be considered as a national offence. After transparent process, the selected member of government would be seemed a fine example of the saying of Muhammad (S.A.W) in which he said; the majority of my people will never be agreed at misleading. In short in this article, there would be discussed democracy in the Islamic perception, while elaborating the western democracy so that it can be cleared that in which way the Holy Quran supported the democracy and what gestures Muhammad (S.A.W) made to spread the democracy and on the basis of those gestures, and how come those gestures are being followed to choose the sacred caliphate. It's hoped that this research would be helpful to refine the democratic system and support to meet the challenges Muslim world are facing.Keywords: democracy, modern democratic system, respect of majority opinion, vote casting
Procedia PDF Downloads 1941698 Acoustic Modeling of a Data Center with a Hot Aisle Containment System
Authors: Arshad Alfoqaha, Seth Bard, Dustin Demetriou
Abstract:
A new multi-physics acoustic modeling approach using ANSYS Mechanical FEA and FLUENT CFD methods is developed for modeling servers mounted in racks, such as IBM Z and IBM Power Systems, in data centers. This new approach allows users to determine the thermal and acoustic conditions that people are exposed to within the data center. The sound pressure level (SPL) exposure for a human working inside a hot aisle containment system inside the data center is studied. The SPL is analyzed at the noise source, at the human body, on the rack walls, on the containment walls, and on the ceiling and flooring plenum walls. In the acoustic CFD simulation, it is assumed that a four-inch diameter sphere with monopole acoustic radiation, placed in the middle of each rack, provides a single-source representation of all noise sources within the rack. The Ffowcs Williams and Hawkings (FWH) acoustic model is employed. The target frequency is 1000 Hz, and the total simulation time for the transient analysis is 1.4 seconds, with a very small time step of 3e-5 seconds and 10 iterations to ensure convergence and accuracy. A User Defined Function (UDF) is developed to accurately simulate the acoustic noise source, and a Dynamic Mesh is applied to ensure acoustic wave propagation. Initial validation of the acoustic CFD simulation using a closed-form solution for the spherical propagation of an acoustic point source is performed. Keywords: data centers, FLUENT, acoustics, sound pressure level, SPL, hot aisle containment, IBM
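The closed-form check mentioned in the last sentence can be sketched as follows: for a free-field monopole, the rms pressure decays as 1/r, so the SPL drops by about 6 dB per doubling of distance, and CFD probes at the same radii should reproduce this trend. The source amplitude and probe distances below are assumed values, not parameters of the reported simulation.

```python
import math

# Free-field monopole reference solution for validating an acoustic CFD run:
# p_rms falls off as 1/r, i.e. SPL decreases by ~6 dB per doubling of distance.
# Source strength and radii are assumed, illustrative values.

P_REF = 20e-6  # Pa, standard reference pressure in air

def spl(p_rms):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(p_rms / P_REF)

def monopole_p_rms(p_rms_at_r0, r0, r):
    """Spherical spreading: rms pressure scales as 1/r."""
    return p_rms_at_r0 * (r0 / r)

p0, r0 = 2.0, 1.0          # assumed: 2 Pa rms at 1 m from the source
for r in (1.0, 2.0, 4.0, 8.0):
    print(f"r = {r:4.1f} m  SPL = {spl(monopole_p_rms(p0, r0, r)):6.1f} dB")
```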
Procedia PDF Downloads 1761697 Design Optimization of a Micro Compressor for Micro Gas Turbine Using Computational Fluid Dynamics
Authors: Kamran Siddique, Hiroyuki Asada, Yoshifumi Ogami
Abstract:
The use of the Micro Gas Turbine (MGT) as the engine in Unmanned Aerial Vehicles (UAVs) and as a power source in robotics is widespread these days. Research has been conducted over the past decade or so to improve the performance of the different components of the MGT. This type of engine has interrelated components with non-linear characteristics, so the overall engine performance depends on the performance of each individual element. Computational Fluid Dynamics (CFD) is one of the simulation tools used to analyze and even optimize MGT system performance. In this study, the compressor of the MGT is designed and its performance is optimized using CFD. The performance of the micro compressor is improved in order to increase the overall performance of the MGT. A high pressure ratio is to be achieved by studying the effect of different operating parameters, such as mass flow rate and revolutions per minute (RPM), and of aerodynamic and geometrical parameters on the pressure ratio of the compressor. Two types of compressor designs are considered in this study: a 3D centrifugal design and a 'planar' design. For a 10 mm impeller, the planar model is the simplest compressor model and the easiest to manufacture. On the other hand, the 3D centrifugal model, although more efficient, is very difficult to manufacture using current microfabrication resources. Therefore, the planar model is the best-suited model for a micro compressor. Accordingly, a planar micro compressor has been designed that achieves a good pressure ratio and is easy to manufacture using current microfabrication technologies. Future work is to fabricate the compressor to obtain experimental results and validate the theoretical model. Keywords: computational fluid dynamics, microfabrication, MEMS, unmanned aerial vehicles
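To illustrate how impeller diameter and RPM drive the pressure ratio discussed above, the sketch below uses the standard Euler-work estimate for a radial-bladed centrifugal impeller with no inlet swirl. The slip factor, isentropic efficiency and rotational speeds are assumed values for illustration only and are not taken from the study.

```python
import math

# Rough preliminary estimate of a centrifugal stage pressure ratio from the
# Euler work, showing how tip speed (set by diameter and RPM) drives the
# pressure ratio. Slip factor, efficiency and speeds are assumed values.

GAMMA, CP, T01 = 1.4, 1005.0, 288.0   # air properties, inlet stagnation T (K)

def pressure_ratio(d_impeller_m, rpm, slip=0.9, eta_c=0.70):
    """Total pressure ratio of a radial-bladed impeller with no inlet swirl."""
    u2 = math.pi * d_impeller_m * rpm / 60.0        # tip speed (m/s)
    dh0 = slip * u2 ** 2                            # Euler work input (J/kg)
    return (1.0 + eta_c * dh0 / (CP * T01)) ** (GAMMA / (GAMMA - 1.0))

for rpm in (300_000, 500_000, 800_000):             # assumed speeds for a 10 mm wheel
    print(f"{rpm:>7d} RPM -> PR = {pressure_ratio(0.010, rpm):.2f}")
```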
Procedia PDF Downloads 1441696 Critical Behaviour and Field Dependence of Magnetic Entropy Change in K-Doped Manganites Pr₀.₈Na₀.₂−ₓKₓMnO₃ (x = 0.10 and 0.15)
Authors: H. Ben Khlifa, W. Cheikhrouhou-Koubaa, A. Cheikhrouhou
Abstract:
The orthorhombic Pr₀.₈Na₀.₂−ₓKₓMnO₃ (x = 0.10 and 0.15) manganites are prepared by solid-state reaction at high temperature. The critical exponents (β, γ, δ) are investigated through various techniques, such as modified Arrott plots, the Kouvel-Fisher method, and critical isotherm analysis, based on magnetic measurements recorded around the Curie temperature. The critical exponents, derived from the magnetization data using the Kouvel-Fisher method, are found to be β = 0.32(4) and γ = 1.29(2) at TC ~ 123 K for x = 0.10, and β = 0.31(1) and γ = 1.25(2) at TC ~ 133 K for x = 0.15. The critical exponent values obtained for both samples are comparable to the values predicted by the 3D-Ising model and have also been verified by the scaling equation of state. Such results demonstrate the existence of ferromagnetic short-range order in our materials. The magnetic entropy changes of polycrystalline samples with a second-order phase transition are investigated. A large magnetic entropy change, deduced from isothermal magnetization curves, is observed in our samples, with a peak centered on their respective Curie temperatures (TC). The field dependence of the magnetic entropy change is analyzed and shows a power-law dependence, ΔSmax ≈ a(μ₀H)ⁿ, at the transition temperature. The values of n obey the Curie-Weiss law above the transition temperature. It is shown that, for the investigated materials, the magnetic entropy change follows a master curve behavior: the rescaled magnetic entropy change curves for different applied fields collapse onto a single curve for both samples. Keywords: manganites, critical exponents, magnetization, magnetocaloric, master curve
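The power-law field dependence ΔSmax ≈ a(μ₀H)ⁿ can be extracted by a straight-line fit in log-log coordinates, as sketched below. The ΔS values are synthetic, generated with an exponent n ≈ 0.58 consistent with the commonly used relation n = 1 + (β − 1)/(β + γ) and the reported β and γ, purely to illustrate the fitting procedure; they are not the measured data.

```python
import numpy as np

# Extract the exponent n in |ΔS_max| ≈ a·(μ0·H)^n by linear regression in
# log-log space. Field values, prefactor and noise level are assumed; the
# "data" are synthetic and only illustrate the procedure.

mu0_H = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # applied field (T), assumed
n_true, a = 0.58, 2.1                               # assumed parameters
noise = 1 + 0.01 * np.random.default_rng(0).standard_normal(mu0_H.size)
dS_max = a * mu0_H ** n_true * noise                # synthetic |ΔS_max| (J/kg K)

slope, intercept = np.polyfit(np.log(mu0_H), np.log(dS_max), 1)
print(f"fitted n = {slope:.2f}, a = {np.exp(intercept):.2f}")
```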
Procedia PDF Downloads 164