Search results for: space efficiency
7220 Investigating Informal Vending Practices and Social Encounters along Commercial Streets in Cairo, Egypt
Authors: Dalya M. Hassan
Abstract:
Marketplaces and commercial streets represent some of the most used and lively urban public spaces. Not only do they provide an outlet for commercial exchange, but they also facilitate social and recreational encounters. Such encounters can be influenced by both formal and informal vending activities. This paper explores and documents forms of informal vending practices and how they relate to social patterns that occur along the sidewalks of commercial streets in Cairo. A qualitative single case study approach of the ‘Midan El Gami’ marketplace in Heliopolis, Cairo is adopted. The methodology applied includes direct and walk-by observations of two main commercial streets in the marketplace. Four zoomed-in activity maps are also produced for three sidewalk segments that displayed varying vending and social features. Main findings include a documentation and classification of types of informal vending practices as well as a documentation of vendors’ distribution patterns in the urban space. Informal vending activities mainly included informal street vendors and shop spillovers, either as product or seating spillovers. Results indicated that staying and lingering activities were more prevalent on sidewalks that had certain physical features, such as diversity of shops, shaded areas, open frontages, and product or seating spillovers. Moreover, differences in social activity patterns were noted between sidewalks with street vendors and sidewalks with spillovers. While the former displayed more buying, selling, and people-watching activities, the latter displayed more social relations and bonds amongst traders’ communities and café patrons. Ultimately, this paper provides documentation suggesting that informal vending can have a positive influence on creating a lively commercial street and on the resulting patterns of use of the sidewalk space. The results can provide a basis for further investigation and analysis of this topic.
This could aid in better accommodating informal vending activities within the design of future commercial streets.
Keywords: commercial streets, informal vending practices, sidewalks, social encounters
Procedia PDF Downloads 163
7219 A Study on Evaluation for Performance Verification of Ni-63 Radioisotope Betavoltaic Battery
Authors: Youngmok Yun, Bosung Kim, Sungho Lee, Kyeongsu Jeon, Hyunwook Hwangbo, Byounggun Choi
Abstract:
A betavoltaic battery converts nuclear energy released as beta particles (β-) directly into electrical energy. Betavoltaic cells are analogous to photovoltaic cells. The beta particle’s kinetic energy enters a p-n junction and creates electron-hole pairs. Subsequently, the built-in potential of the p-n junction accelerates the electrons and holes to their respective collectors. The major challenges are the electrical conversion efficiency and its exact evaluation. In this study, the performance of a betavoltaic battery was evaluated. The betavoltaic cell was evaluated under the same conditions as irradiation from the radioactive isotope by using an FE-SEM (field emission scanning electron microscope). The average energy of the radiation emitted from the Ni-63 radioisotope is 17.42 keV, and the FE-SEM is capable of emitting an electron beam of 1-30 keV. Therefore, it is possible to evaluate a betavoltaic cell without radioactive isotopes. The betavoltaic battery consists of a radioisotope that is physically connected to the surface of a Si-based p-n diode, so the performance of the betavoltaic battery can be estimated from the efficiency of the p-n diode unit cell. The current generated by the scanning electron microscope at a fixed accelerating voltage (17 kV) was measured by using a Faraday cup. Electrical characterization of the p-n junction diode was performed by using a Nano Probe Work Station and an I-V measurement system. The output of the betavoltaic cells developed by this research team was 0.162 μW/cm² and the efficiency was 1.14%.
Keywords: betavoltaic, nuclear, battery, Ni-63, radio-isotope
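As a quick sanity check on the reported figures, the relation efficiency = P_out / P_in lets one back-calculate the beta power density entering the junction; a minimal sketch using only the two numbers quoted in the abstract:

```python
# Hedged sketch: back-calculate the input beta power density implied by the
# reported output (0.162 uW/cm^2) and conversion efficiency (1.14%).
# Only the relation efficiency = P_out / P_in is used.

P_OUT_UW_PER_CM2 = 0.162   # measured electrical output, uW/cm^2
EFFICIENCY = 0.0114        # reported conversion efficiency (1.14%)

p_in = P_OUT_UW_PER_CM2 / EFFICIENCY  # implied beta power density
print(f"Implied input power density: {p_in:.1f} uW/cm^2")  # ~14.2 uW/cm^2
```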
Procedia PDF Downloads 258
7218 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs
Authors: Taysir Soliman
Abstract:
One of the currently popular clustering techniques is Spectral Clustering (SC) because of its advantages over conventional approaches such as hierarchical clustering and k-means. However, one of the disadvantages of SC is that it is time-consuming, because it requires computing eigenvectors. A number of attempts have been proposed to overcome this disadvantage, such as the Power Iteration Clustering (PIC) technique, a variant of SC. Some of PIC's advantages are: 1) its scalability and efficiency, 2) finding one pseudo-eigenvector instead of computing the eigenvectors, and 3) obtaining a linear combination of the eigenvectors in linear time. However, its main disadvantage is an inter-class collision problem, because a single pseudo-eigenvector is not always enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome PIC's inter-class collision problem while retaining its efficiency. In this paper, we develop Parallel DPIC (PDPIC) to improve the time and memory complexity; it runs on the Apache Spark framework using sparse matrices. To test the performance of PDPIC, we compared it to the SC, ESCG and ESCALG algorithms on four small and nine large graph benchmark datasets, where PDPIC achieved higher accuracy and lower runtime than the other compared algorithms.
Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache spark, large graph
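The pseudo-eigenvector idea behind PIC can be sketched in a few lines: iterate v ← Wv on the row-normalized affinity matrix W = D⁻¹A and stop early, so the intermediate vector already separates clusters. The toy graph below (a 4-clique weakly tied to a triangle) is an illustrative assumption, not data from the paper:

```python
import numpy as np

# Hedged sketch of PIC's core step: truncated power iteration on the
# random-walk matrix W = D^-1 A. Early stopping leaves a "pseudo-eigenvector"
# whose values are nearly constant within clusters but differ across them.

def pic_pseudo_eigenvector(A, iters=5):
    d = A.sum(axis=1)
    W = A / d[:, None]            # row-stochastic random-walk matrix
    v = d / d.sum()               # degree-based starting vector
    for _ in range(iters):
        v = W @ v
        v = v / np.abs(v).sum()   # L1 normalization each step
    return v

# Toy graph: a 4-clique {0,1,2,3} and a triangle {4,5,6}, one bridge edge.
A = np.zeros((7, 7))
for grp in ([0, 1, 2, 3], [4, 5, 6]):
    for i in grp:
        for j in grp:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0           # single weak inter-cluster edge

v = pic_pseudo_eigenvector(A)
top4 = set(np.argsort(v)[-4:].tolist())   # nodes with the largest values
print(top4)  # recovers the 4-clique
```

A 1-D clustering of v (e.g. k-means) would finish the partition; DPIC then deflates and repeats to obtain further pseudo-eigenvectors.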
Procedia PDF Downloads 189
7217 Investigating the Effects of Cylinder Disablement on Diesel Engine Fuel Economy and Exhaust Temperature Management
Authors: Hasan Ustun Basaran
Abstract:
Diesel engines are widely used in the transportation sector due to their high thermal efficiency. However, they also release high rates of NOₓ and PM (particulate matter) emissions into the environment, which have hazardous effects on human health. Therefore, environmental protection agencies have issued strict emission regulations for automotive diesel engines, and these regulations have recently been strengthened further. Engine producers are investigating novel on-engine methods such as advanced combustion techniques, utilization of renewable fuels, exhaust gas recirculation and advanced fuel injection, or they use exhaust after-treatment (EAT) systems, in order to reduce emission rates of diesel engines. Although the aforementioned on-engine methods are effective in curbing emission rates, they introduce inefficiencies or cannot decrease emission rates satisfactorily at all operating conditions. Therefore, engine manufacturers apply both on-engine techniques and EAT systems to meet the stringent emission norms. EAT systems are highly effective in diminishing emission rates; however, they perform inefficiently at low loads due to low exhaust gas temperatures (below 250°C). The objective of this study is therefore to demonstrate that engine-out temperatures can be elevated above 250°C in low-load cases via cylinder disablement. The engine studied and modeled via the Lotus Engine Simulation (LES) software is a six-cylinder turbocharged and intercooled diesel engine. Exhaust temperatures and mass flow rates are predicted at 1200 rpm engine speed and several low-load conditions using the LES program. It is seen that cylinder deactivation results in a considerable exhaust temperature rise (up to 100°C) at low loads, which ensures effective EAT management. The method also improves fuel efficiency through reduced total pumping loss; the decreased total air induction due to the inactive cylinders is thought to be responsible for this improvement. The technique reduces exhaust gas flow rate, as air flow is cut off in the disabled cylinders. Still, heat transfer rates to the after-treatment catalyst bed do not decrease much, since exhaust temperatures are increased sufficiently. Simulation results are promising; however, further experimental studies are needed to identify the true potential of the method for fuel consumption and EAT improvement.
Keywords: cylinder disablement, diesel engines, exhaust after-treatment, exhaust temperature, fuel efficiency
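The temperature-rise mechanism can be illustrated with a first-order energy balance: at a fixed low load, roughly the same exhaust heat is carried by less air, so ΔT = Q/(ṁ·cp) grows. All numbers below are illustrative placeholders, not values from the LES simulations:

```python
# Hedged energy-balance sketch of why cylinder deactivation raises exhaust
# temperature. Every figure here is an assumption for illustration only.

CP_EXHAUST = 1.1   # kJ/(kg*K), approximate exhaust specific heat (assumed)
Q_EXHAUST = 20.0   # kW of heat carried by the exhaust at this load (assumed)
T_IN = 60.0        # deg C reference inlet temperature (assumed)

def exhaust_temp(m_dot_kg_s):
    return T_IN + Q_EXHAUST / (m_dot_kg_s * CP_EXHAUST)

t_all_cyl = exhaust_temp(0.12)  # all six cylinders breathing (assumed flow)
t_deact = exhaust_temp(0.07)    # three cylinders disabled (assumed flow)
print(round(t_all_cyl), round(t_deact), round(t_deact - t_all_cyl))
```

With these placeholder numbers the deactivated case clears the ~250°C EAT light-off threshold and the rise is of the same order as the "up to 100°C" reported above.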
Procedia PDF Downloads 176
7216 Estimation of the Exergy-Aggregated Value Generated by a Manufacturing Process Using the Theory of the Exergetic Cost
Authors: German Osma, Gabriel Ordonez
Abstract:
The production of metal-rubber spares for vehicles is a sequential process that consists of the transformation of raw material through cutting activities and chemical and thermal treatments, which demand electricity and fossil fuels. Energy efficiency analysis for such cases mostly focuses on each machine or production step; it is not common to assess the quality the production process achieves from an aggregated-value viewpoint, which can serve as a quality measure for determining the impact on the environment. In this paper, the theory of exergetic cost is used to determine the aggregated exergy of three metal-rubber spares, based on an exergy analysis and a thermoeconomic analysis. The manufacturing of these spares is based on batch production, and the use of this theory is therefore proposed for discontinuous flows, starting from single models of workstations; subsequently, the complete exergy model of each product is built using flowcharts. These models represent the exergy flows between components within the machines according to electrical, mechanical and/or thermal expressions; they determine the exergy demanded to produce the effective transformation of raw materials (the aggregated exergy value) and the exergy losses caused by equipment and irreversibilities. The energy resources of the manufacturing process are electricity and natural gas. The workstations considered are lathes, punching presses, cutters, a zinc machine, chemical treatment tanks, hydraulic vulcanizing presses and a rubber mixer. The thermoeconomic analysis was done by workstation and by spare; the former describes the operation of the components of each machine and where the exergy losses are, while the latter estimates the exergy-aggregated value for the finished product and the wasted feedstock. Results indicate that the exergy efficiency of a mechanical workstation is between 10% and 60%, while this value in the thermal workstations is less than 5%; also, each effective exergy-aggregated value is about one-thirtieth of the total exergy required for the operation of the manufacturing process, which amounts approximately to 2 MJ. These losses are caused mainly by technical limitations of the machines, oversizing of the metal feedstock, which demands more mechanical transformation work, and the low thermal insulation of the chemical treatment tanks and hydraulic vulcanizing presses. From the information established in this case, it is possible to appreciate the usefulness of the theory of exergetic cost for analyzing aggregated value in manufacturing processes.
Keywords: exergy-aggregated value, exergy efficiency, thermoeconomics, exergy modeling
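The gap between mechanical and thermal workstation efficiencies follows from basic exergy bookkeeping: electricity is pure exergy, while heat delivered at temperature T only carries the Carnot fraction (1 − T0/T). A minimal sketch with illustrative figures (not the paper's data):

```python
# Hedged sketch of heat-exergy accounting. Heat near 100 C carries only ~20%
# exergy, so thermal workstations score low even before other losses.
# The energy figures below are invented for illustration.

T0 = 298.15  # K, dead-state (ambient) temperature

def heat_exergy(q_kj, t_kelvin):
    """Exergy content of heat q_kj delivered at temperature t_kelvin."""
    return q_kj * (1.0 - T0 / t_kelvin)

q = 1000.0                             # kJ of heat input (assumed)
ex_at_373 = heat_exergy(q, 373.15)     # heat near 100 C
print(round(ex_at_373))                # ~201 kJ of the 1000 kJ is exergy
eff = 8.0 / ex_at_373                  # assume ~8 kJ ends up in the product
print(f"{eff:.1%}")                    # well under the 5% reported for thermal stations
```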
Procedia PDF Downloads 170
7215 Factors in a Sustainability Assessment of New Types of Closed Cavity Facades
Authors: Zoran Veršić, Josip Galić, Marin Binički, Lucija Stepinac
Abstract:
With the current increase in CO₂ emissions and global warming, the sustainability of both existing and new solutions must be assessed on a wide scale. As the implementation of closed cavity facades (CCF) is on the rise, a variety of factors must be included in the analysis of new types of CCF. This paper aims to cover the relevant factors included in the sustainability assessment of new types of CCF. Several mathematical models are being used to describe the physical behavior of CCF. Depending on the type of CCF, they cover the main factors which affect the durability of the façade: the thermal behavior of the various elements in the façade, stress and deflection of the glass panels, the pressure inside the cavity, the air exchange rate, and moisture buildup in the cavity. CCF itself represents a complex system in which all the mentioned factors must be considered together. Still, the façade is only the envelope of a more complex system, the building. The choice of façade dictates the heat loss and heat gain, the thermal comfort of the inner space, natural lighting, and ventilation. The annual consumption of energy for heating, cooling and lighting, together with maintenance costs, will present the operational advantages or disadvantages of the chosen façade system in both economic and environmental terms. Still, the operational viewpoint alone is not all-inclusive. As building codes constantly demand higher energy efficiency as well as a transfer to renewable energy sources, the ratio of embodied to lifetime operational energy footprint of buildings is changing. With the drop in operational-energy CO₂ emissions, embodied-energy emissions represent a larger and larger share of the lifecycle emissions of a building. Taking all this into account, the sustainability assessment of a façade, as well as of other major building elements, should include all the mentioned factors over the lifecycle of the element. The challenge of such an approach is the timescale. Depending on the climatic conditions at the building site, the expected lifetime of a CCF can exceed 25 years. In such a time span, some of the factors can be estimated more precisely than others; those depending on socio-economic conditions are likely to be harder to predict than natural ones like the climatic load. This work recognizes and summarizes the relevant factors needed for the assessment of new types of CCF, considering the entire lifetime of a façade element and both economic and environmental aspects.
Keywords: assessment, closed cavity façade, life cycle, sustainability
Procedia PDF Downloads 192
7214 Significance of High Specific Speed in Circulating Water Pump, Which Can Cause Cavitation, Noise and Vibration
Authors: Chandra Gupt Porwal
Abstract:
Excessive vibration means increased wear, increased repair effort, poor product selection and quality, and high energy consumption. It may be caused by cavitation or by suction/discharge recirculation, which can occur when the net positive suction head available (NPSHA) drops below the net positive suction head required (NPSHR). Excessive cavitation can cause axial surging, frequently damage mechanical seals, bearings and possibly other pump components, and shorten the life of the impeller. This paper explains Suction Energy (SE), Specific Speed (Ns), Suction Specific Speed (Nss), NPSHA and NPSHR and their significance, the possible causes of cavitation and internal recirculation, their diagnostics, and remedial measures to arrest and prevent cavitation. A case study is presented by the author highlighting that the root cause of unwanted noise and vibration was cavitation, caused by high specific speed and inadequate net positive suction head available, which resulted in damage to the material surfaces of the impeller and suction bells and degraded the machine's performance, capacity and efficiency. The author strongly recommends revisiting the technical specifications of CW pumps for future projects to provide NPSH margin ratios > 1.5, and limiting Nss to 8500-9000 for cavitation-free operation.
Keywords: best efficiency point (BEP), net positive suction head NPSHA, NPSHR, specific speed NS, suction specific speed NSS
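The two screening quantities recommended above can be sketched with the standard US-unit definitions (Nss = N·√Q / NPSHR^0.75, with N in rpm, Q in gpm, NPSHR in ft). The pump figures below are invented for illustration; the 1.5 margin ratio and the 8500-9000 Nss ceiling are the limits quoted in the abstract:

```python
# Hedged sketch of the cavitation-screening checks described in the abstract.
# Flow, speed and NPSH values are hypothetical.

def suction_specific_speed(n_rpm, q_gpm, npshr_ft):
    # US-unit definition: Nss = N * sqrt(Q) / NPSHR^0.75
    return n_rpm * q_gpm ** 0.5 / npshr_ft ** 0.75

def npsh_margin_ratio(npsha_ft, npshr_ft):
    return npsha_ft / npshr_ft

nss = suction_specific_speed(600, 40000, 30)   # hypothetical CW pump
ratio = npsh_margin_ratio(38, 30)
print(round(nss))       # ~9361: above the 9000 ceiling -> cavitation risk
print(round(ratio, 2))  # ~1.27: below the recommended 1.5 margin
```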
Procedia PDF Downloads 254
7213 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling
Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari
Abstract:
A new definition of a latent variable associated with a dataset makes it possible to propose variants of the PLS2 regression and the multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS respectively, because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading weights vector equals 1. Formally, t = Zw with ‖w‖ = 1. Denoting by Z' the transpose of Z, we define herein a latent variable by t = ZZ'q, with the constraint that the auxiliary variable q has a norm equal to 1. This new definition of a latent variable entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w = Z'q is constrained to be a linear combination of the rows of Z. More importantly, t can be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear to the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u = YY'q, to which we associate the variable t = XX'YY'q. Rd-PLS consists in seeking q (and therefore u and t) so that the covariance between t and u is maximum. The solution to this problem is straightforward and consists in setting q to the eigenvector of YY'XX'YY' associated with the largest eigenvalue. For the determination of higher-order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u = YY'q, we consider its ‘projection’ on the space generated by the variables of each block Xk (k = 1, ..., K), namely tk = XkXk'YY'q. Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with the tk (k = 1, ..., K). The solution to this problem is given by q, the eigenvector of YY'XX'YY' associated with the largest eigenvalue, where X is the dataset obtained by horizontally merging the datasets Xk (k = 1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and Xk with respect to the variable t = XX'YY'q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. The methods are illustrated on the basis of case studies, and the performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.
Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis
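The first Rd-PLS component defined above is easy to compute numerically: since YY'XX'YY' is symmetric positive semi-definite, q is its leading eigenvector and t, u follow directly. A small sketch with random centered data standing in for a real application:

```python
import numpy as np

# Hedged numerical sketch of the first Rd-PLS component as defined in the
# abstract: q = leading eigenvector of Y Y' X X' Y Y', u = Y Y' q,
# t = X X' Y Y' q. Random data replace a real application.

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
Y = rng.standard_normal((20, 3))
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)

M = Y @ Y.T @ X @ X.T @ Y @ Y.T      # symmetric PSD, so eigh applies
eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
q = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue

u = Y @ Y.T @ q
t = X @ X.T @ Y @ Y.T @ q             # t = X X' u

# As stated above, t is a linear combination of the variables of X:
# t = X w with loading vector w = X' u.
w = X.T @ u
print(np.allclose(t, X @ w))          # True
```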
Procedia PDF Downloads 147
7212 Thermo-Hydro-Mechanical-Chemical Coupling in Enhanced Geothermal Systems: Challenges and Opportunities
Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo
Abstract:
Geothermal reservoirs (GTRs) have garnered global recognition as a sustainable energy source, and Thermo-Hydro-Mechanical-Chemical (THMC) coupling proves to be a practical and effective method for optimizing production in GTRs. The study outcomes demonstrate that THMC coupling serves as a versatile and valuable tool, offering in-depth insights into GTRs and enhancing their operational efficiency. This is achieved through analysis of temperature and pressure changes and their impacts on mechanical properties, structural integrity, fracture aperture, permeability, and heat extraction efficiency. Moreover, THMC coupling facilitates the assessment of the potential benefits and risks associated with different geothermal technologies, considering the complex thermal, hydraulic, mechanical, and chemical interactions within the reservoirs. However, utilizing THMC coupling in GTRs presents a multitude of challenges. These include accurately modeling and predicting behavior given the interconnected nature of the processes, limited data availability leading to uncertainties, the risk that induced seismic events pose to nearby communities, scaling and mineral deposition reducing operational efficiency, and the long-term sustainability of the reservoirs. In addition, material degradation, environmental impacts, technical challenges in monitoring and control, accurate assessment of resource potential, and regulatory and social acceptance further complicate geothermal projects. Addressing these multifaceted challenges is crucial for the successful and sustainable utilization of geothermal energy resources. This paper aims to illuminate the challenges and opportunities associated with THMC coupling in enhanced geothermal systems. Practical solutions and strategies for mitigating these challenges are discussed, emphasizing the need for interdisciplinary approaches, improved data collection and modeling techniques, and advanced monitoring and control systems. Overcoming these challenges is imperative for unlocking the full potential of geothermal energy, making a substantial contribution to the global energy transition and sustainable development.
Keywords: geothermal reservoirs, THMC coupling, interdisciplinary approaches, challenges and opportunities, sustainable utilization
Procedia PDF Downloads 69
7211 Corporate In-Kind Donations and Economic Efficiency: The Case of Surplus Food Recovery and Donation
Authors: Sedef Sert, Paola Garrone, Marco Melacini, Alessandro Perego
Abstract:
This paper is aimed at enhancing our current understanding of the motivations behind corporate in-kind donations and at finding out whether economic efficiency may be a major driver. Our empirical setting consists of surplus food recovery and donation by companies from the food supply chain. This choice of empirical setting is motivated by growing attention to the paradox of food insecurity and food waste: a total of 842 million people worldwide were estimated to be suffering from regularly not getting enough food, while approximately 1.3 billion tons of food per year is wasted globally. Recently, many authors have started considering surplus food donation to nonprofit organizations as a way to cope with the social issue of food insecurity and the environmental issue of food waste. The corporate philanthropy literature has examined the motivations behind corporate donations for social purposes, such as altruism, enhancement of employee morale, the organization’s image, supplier/customer relationships, and local community support. However, the relationship with economic efficiency has not been studied, and in many cases pure economic efficiency as a decision-making factor is neglected. Although some studies in the literature hint at the economic value created by surplus food donation, such as saved landfill fees or tax deductions, so far no study has focused deeply on this phenomenon. In this paper, we develop a conceptual framework which explores the economic barriers and drivers relative to the alternative surplus food management options, i.e., discounts, secondary markets, feeding animals, composting, energy recovery, and disposal. A case study methodology is used to conduct the research. Protocols for semi-structured interviews were prepared based on an extensive literature review and adapted after expert opinions. The interviews were conducted mostly with the supply chain and logistics managers of 20 companies in the food sector operating in Italy, in particular in the Lombardy region. The results show that, in the current situation, food manufacturing companies can experience cost savings by recovering and donating surplus food compared with the other methods, especially the disposal option. On the other hand, the retail and food service sectors are not economically incentivized to recover and donate surplus food to disadvantaged populations. The paper shows that not only strategic and moral motivations, but also economic motivations, play an important role in the managerial decision-making process in surplus food management. We also believe that our research, while rooted in the surplus food management topic, delivers some interesting implications for more general research on corporate in-kind donations. It also shows that there is considerable room for policy making that favors the recovery and donation of surplus products.
Keywords: corporate philanthropy, donation, recovery, surplus food
Procedia PDF Downloads 312
7210 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems
Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu
Abstract:
In the research of automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to simultaneously query both text databases and the knowledge graph, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph and GNN-enhanced RAG method perform excellently in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only demonstrates significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions and ideas for future research and practical applications.
Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP
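The dual retrieval step described above — querying text and graph simultaneously — can be illustrated with a deliberately tiny score-fusion toy. The real system uses dense retrieval and a GNN over the constructed graph; this miniature, with invented documents and triples, only shows how graph expansion can surface a document that keyword matching alone would miss:

```python
# Hedged toy sketch of hybrid text + knowledge-graph retrieval. All documents,
# entities, and the fusion weight are hypothetical illustrations.

DOCS = {
    "d1": "power iteration computes eigenvectors",
    "d2": "graph neural networks encode entities and relations",
    "d3": "retrieval augmented generation grounds answers in documents",
}
KG = {  # entity -> related entities (relation labels dropped for brevity)
    "rag": ["retrieval", "generation", "documents"],
    "gnn": ["graph", "entities", "relations"],
}

def score(query_terms, doc_text, graph_weight=0.5):
    doc_terms = set(doc_text.split())
    text_score = len(set(query_terms) & doc_terms)       # keyword overlap
    expanded = {e for t in query_terms for e in KG.get(t, [])}
    graph_score = len(expanded & doc_terms)              # one-hop expansion
    return text_score + graph_weight * graph_score

query = ["gnn", "relations"]
ranked = sorted(DOCS, key=lambda d: score(query, DOCS[d]), reverse=True)
print(ranked[0])  # d2: matched both directly and via graph expansion
```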
Procedia PDF Downloads 39
7209 Roof Integrated Photo Voltaic with Air Collection on Glasgow School of Art Campus Building: A Feasibility Study
Authors: Rosalie Menon, Angela Reid
Abstract:
Building-integrated photovoltaic systems with air collectors (hybrid PV-T) have proved successful; however, there are few examples of their application in the UK. The opportunity to pull heat from behind the PV array to contribute to a building’s heating system is an efficient use of waste energy, and its potential to improve the performance of the PV array is well documented. As part of Glasgow School of Art’s estate expansion, the purchase and redevelopment of an existing 1950s college building was used as a testing vehicle for the hybrid PV-T system as an integrated element of the upper floor and roof. The primary objective of the feasibility study was to determine whether hybrid PV-T was technically and financially suitable for the refurbished building. The key consideration was whether the heat recovered from the PV panels (to increase their electrical efficiency) could be usefully deployed as a heat source within the building. Dynamic thermal modelling (IES) and the RETScreen software were used to carry out the feasibility study, not only to simulate overshadowing and optimise the PV-T locations but also to predict the atrium temperature profile, the air load for the four proposed new roof-mounted air handling units, and the dynamic electrical efficiency of the PV element. The feasibility study demonstrates that an energy reduction and carbon saving can be achieved with each hybrid PV-T option; however, the systems are subject to lengthy payback periods, which highlights the need for enhanced government subsidy schemes to reward innovation with this technology in the UK.
Keywords: building integrated, photovoltaic thermal, pre-heat air, ventilation
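The "pull heat to improve PV performance" argument rests on the standard linear cell-temperature model, η = η_ref·(1 − β·(Tc − 25°C)). A minimal sketch with typical crystalline-silicon values (the efficiencies, coefficient, and temperatures are generic assumptions, not figures from the Glasgow study):

```python
# Hedged sketch of the PV temperature-coefficient model behind hybrid PV-T.
# All parameter values are typical c-Si assumptions, not project data.

ETA_REF = 0.18  # module efficiency at 25 C (assumed)
BETA = 0.004    # per-K power temperature coefficient (typical c-Si)

def pv_efficiency(cell_temp_c):
    return ETA_REF * (1.0 - BETA * (cell_temp_c - 25.0))

eta_hot = pv_efficiency(60.0)     # uncooled roof-integrated module
eta_cooled = pv_efficiency(45.0)  # with heat drawn off by the air collector
print(f"{eta_hot:.4f} -> {eta_cooled:.4f}")  # 0.1548 -> 0.1656
```

With these assumptions, drawing 15 K of heat off the array recovers roughly 7% more electrical output, in addition to the pre-heated air delivered to the building.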
Procedia PDF Downloads 171
7208 Data-Driven Performance Evaluation of Surgical Doctors Based on Fuzzy Analytic Hierarchy Processes
Authors: Yuguang Gao, Qiang Yang, Yanpeng Zhang, Mingtao Deng
Abstract:
To enhance the safety, quality and efficiency of healthcare services provided by surgical doctors, we propose a comprehensive approach to the performance evaluation of individual doctors by incorporating insights from performance data as well as views of different stakeholders in the hospital. Exploratory factor analysis was first performed on collective multidimensional performance data of surgical doctors, where key factors were extracted that encompass assessment of professional experience and service performance. A two-level indicator system was then constructed, for which we developed a weighted interval-valued spherical fuzzy analytic hierarchy process to analyze the relative importance of the indicators while handling subjectivity and disparity in the decision-making of the multiple parties involved. Our analytical results reveal that, for the key factors identified as instrumental for evaluating surgical doctors’ performance, the overall importance of clinical workload and complexity of service is valued more than capacity of service and professional experience, while the efficiency of resource consumption ranks comparatively lowest in importance. We also provide a retrospective case study to illustrate the effectiveness and robustness of our quantitative evaluation model by assigning meaningful performance ratings to individual doctors based on the weights developed through our approach.
Keywords: analytic hierarchy processes, factor analysis, fuzzy logic, performance evaluation
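At the core of any AHP variant is the derivation of indicator weights from a pairwise-comparison matrix via its principal eigenvector. The sketch below shows only that crisp-AHP step, not the paper's weighted interval-valued spherical fuzzy extension; the 3×3 matrix (e.g. workload vs. service capacity vs. resource efficiency) is a made-up, perfectly consistent example:

```python
import numpy as np

# Hedged sketch of the crisp AHP weighting step: weights are the normalized
# principal eigenvector of a pairwise-comparison matrix. The matrix below is
# hypothetical and perfectly consistent (A[i][j] = w_i / w_j).

A = np.array([
    [1.0, 2.0, 4.0],    # criterion 1 vs. 1, 2, 3
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()     # normalize to sum to 1
print(np.round(weights, 4))               # ~[0.5714 0.2857 0.1429]
```

For an inconsistent matrix one would also compute the consistency ratio from the principal eigenvalue before accepting the weights.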
Procedia PDF Downloads 58
7207 A Goal-Oriented Approach for Supporting Input/Output Factor Determination in the Regulation of Brazilian Electricity Transmission
Authors: Bruno de Almeida Vilela, Heinz Ahn, Ana Lúcia Miranda Lopes, Marcelo Azevedo Costa
Abstract:
Benchmarking public utilities such as transmission system operators (TSOs) is one of the main strategies employed by regulators in order to set monopolistic companies’ revenues. Since 2007, the Brazilian regulator has been utilizing Data Envelopment Analysis (DEA) to benchmark TSOs. Despite the application of DEA to improve the transmission sector’s efficiency, some problems can be pointed out, such as the high price of electricity in Brazil, the limitation of the benchmarking to operational expenses (OPEX) only, the absence of variables that represent the outcomes of the transmission service, and the presence of extremely low and high efficiencies. As an alternative to the benchmarking concept currently used by the Brazilian regulator, we propose a goal-oriented approach. Our proposal supports input/output selection by taking traditional organizational goals and measures as a basis for the selection of factors for benchmarking purposes. As its main advantage, it resolves the classical DEA problems of input/output selection, undesirable factors and dual-role factors. We also provide a demonstration of our goal-oriented concept regarding service quality. As a result, most TSOs’ efficiencies in Brazil might improve when quality is treated as important in their efficiency estimation.
Keywords: decision making, goal-oriented benchmarking, input/output factor determination, TSO regulation
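The DEA efficiency notion at stake can be illustrated in its simplest degenerate form: with a single input and a single output, the CCR score reduces to each unit's output/input ratio divided by the best ratio in the sample. Real TSO benchmarking solves a linear program per unit over multiple inputs and outputs; the figures below are invented:

```python
# Hedged toy sketch of DEA efficiency in the single-input, single-output case.
# TSO names, OPEX inputs, and service outputs are hypothetical.

tsos = {  # name: (OPEX input, service output)
    "TSO-A": (100.0, 80.0),
    "TSO-B": (120.0, 120.0),
    "TSO-C": (90.0, 45.0),
}

ratios = {name: out / inp for name, (inp, out) in tsos.items()}
best = max(ratios.values())
scores = {name: r / best for name, r in ratios.items()}  # 1.0 = efficient frontier
for name, s in sorted(scores.items()):
    print(name, round(s, 2))
```

Adding an outcome variable such as service quality, as the goal-oriented approach proposes, changes which units define the frontier and hence every score.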
Procedia PDF Downloads 196
7206 Impregnation Reduction Method for the Preparation of Platinum-Nickel/Carbon Black Alloy Nanoparticles as FAOR Electrocatalyst
Authors: Maryam Kiani
Abstract:
In order to enhance the efficiency and stability of an electrocatalyst for the formic acid electro-oxidation reaction (FAOR), we developed a method to create Pt/Ni nanoparticles supported on carbon black. These nanoparticles were prepared using a simple impregnation reduction technique. Characterization showed that the nanoparticles had a spherical shape and a consistent average particle size of about 4 nm. This approach aimed to obtain a supported Pt-based electrocatalyst with improved performance and stability in FAOR applications. By utilizing the impregnation reduction method and incorporating Ni along with Pt, we sought to enhance the catalytic properties of the material. Incorporating Ni atoms into the Pt structure modifies the electronic properties of Pt, delaying the chemisorption of harmful CO intermediate species and promoting the dehydrogenation pathway of the formic acid oxidation reaction. Electrochemical analysis shows that the Pt3Ni-C catalyst exhibits enhanced FAOR performance compared to traditional Pt catalysts; that is, the addition of Ni atoms improves the efficiency and effectiveness of the catalyst in facilitating the FAOR process. Overall, the utilization of these alloy nanoparticles as electrocatalysts represents a significant advancement in fuel cell technology.
Keywords: electrocatalyst, impregnation reduction method, formic acid electro-oxidation reaction, fuel cells
Procedia PDF Downloads 127
7205 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata
Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen
Abstract:
This article introduces a computationally efficient method for stacking sequence blending of composite structures. The computational efficiency makes the presented method especially interesting for composite structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another. Blending constraints can be enforced to achieve structural continuity. In literature, many methods can be found to implement structural continuity by means of stacking sequence blending in one way or another. The complexity of the problem makes the blending of a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitive. In this work the local stacking sequence optimization is preconditioned using a method found in the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to blending constraints, yielding near-optimal easy-to-blend designs. The preconditioned design is then fed to the scheme using cellular automata that have been developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance. The computational efficiency of the proposed method makes it especially suited for composite structures with a large number of design regions.
Keywords: composite, blending, optimization, lamination parameters
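The lamination parameters used to precondition the blending are standard quantities: for a laminate of N equal-thickness plies, the in-plane parameters are trigonometric averages of the ply angles. A minimal sketch, with an illustrative stacking sequence (not one from the horseshoe benchmark):

```python
import math

def inplane_lamination_parameters(angles_deg):
    """In-plane lamination parameters V1..V4 of a laminate.

    For N plies of equal thickness with orientations theta_k:
        V1 = (1/N) * sum cos(2*theta_k),  V2 = (1/N) * sum sin(2*theta_k),
        V3 = (1/N) * sum cos(4*theta_k),  V4 = (1/N) * sum sin(4*theta_k).
    """
    n = len(angles_deg)
    rad = [math.radians(a) for a in angles_deg]
    V1 = sum(math.cos(2 * t) for t in rad) / n
    V2 = sum(math.sin(2 * t) for t in rad) / n
    V3 = sum(math.cos(4 * t) for t in rad) / n
    V4 = sum(math.sin(4 * t) for t in rad) / n
    return V1, V2, V3, V4

# A quasi-isotropic stack [0/45/-45/90] has V1 = V2 = V3 = V4 = 0.
print(inplane_lamination_parameters([0, 45, -45, 90]))
```

Distances in this parameter space give a mechanics-aware measure of how far two panels' stacks are from one another, which is what makes such parameters useful as blending targets.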
Procedia PDF Downloads 227
7204 Construction 4.0: The Future of the Construction Industry in South Africa
Authors: Temidayo. O. Osunsanmi, Clinton Aigbavboa, Ayodeji Oke
Abstract:
The construction industry is a renowned latecomer to the efficiency offered by the adoption of information technology. By contrast, the banking, manufacturing, and retailing industries have keyed into the future by using digitization and information technology as a new approach for ensuring competitive gain and efficiency. The construction industry has yet to fully realize similar benefits because the adoption of ICT is still in its infancy, with a major concentration on the use of software. Thus, this study evaluates the awareness and readiness of construction professionals towards embracing full digitalization of the construction industry through construction 4.0. The term ‘construction 4.0’ was coined from the industry 4.0 concept, which is regarded as the fourth industrial revolution and originated in Germany. A questionnaire was utilized for sourcing data and distributed to practicing construction professionals through a convenience sampling method. Using SPSS v24, the hypotheses posed were tested with the Mann-Whitney test. The results revealed no differences between consulting and contracting organizations in their readiness to adopt construction 4.0 concepts in the construction industry. Using factor analysis, the study found that adopting construction 4.0 will improve the performance of the construction industry regarding cost and time savings and also create sustainable buildings. In conclusion, the study determined that construction professionals have low awareness of construction 4.0 concepts. The study recommends increasing awareness of construction 4.0 concepts through seminars, workshops and training, while construction professionals should take hold of the benefits of adopting construction 4.0 concepts.
The study contributes to the roadmap for the implementation of construction industry 4.0 concepts in the South African construction industry.
Keywords: building information technology, Construction 4.0, Industry 4.0, smart site
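The group comparison described above can be reproduced outside SPSS; a minimal sketch of a Mann-Whitney test using scipy, with made-up Likert-scale readiness scores (the study's actual survey responses are not shown in the abstract):

```python
from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert responses on readiness for construction 4.0,
# one value per respondent (illustrative data, not the study's survey).
consulting = [4, 3, 5, 4, 3, 4, 2, 5, 4, 3]
contracting = [3, 4, 4, 3, 5, 4, 3, 4, 2, 4]

# Two-sided test of whether the two groups differ in their readiness.
stat, p = mannwhitneyu(consulting, contracting, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
if p > 0.05:
    print("No significant difference between the groups (fail to reject H0).")
```

With near-identical samples like these, the test fails to reject the null hypothesis, mirroring the "no differences between consulting and contracting organizations" finding.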
Procedia PDF Downloads 410
7203 Production Optimization under Geological Uncertainty Using Distance-Based Clustering
Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe
Abstract:
It is important to figure out reservoir properties for better production management. Due to limited information, there are geological uncertainties in very heterogeneous or channel reservoirs. One solution is to generate multiple equi-probable realizations using geostatistical methods. However, some models have wrong properties, which need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme, based on distance-based clustering, for reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data. We calculate the Hausdorff distance to classify the models based on their similarity; the Hausdorff distance is useful for shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to describe the models in two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, which has production rates similar to the true values. From this process, we can select good reservoir models near the best model with high confidence. We generate 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models assemble depending on their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. We use one of the useful global search algorithms, particle swarm optimization (PSO), for our production optimization. PSO is good at finding the global optimum of an objective function, but it takes too much time due to its use of many particles and iterations.
In addition, if we use multiple reservoir models, the simulation time for PSO will soar. By using the proposed method, we can select good and reliable models that already match production data. Considering the geological uncertainty of the reservoir, we can get well-optimized production controls for maximum net present value. The proposed method shows a novel solution for selecting good cases among the various possibilities. The model selection scheme can be applied not only to production optimization but also to history matching or other ensemble-based methods for efficient simulations.
Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization
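The Hausdorff distance at the heart of the clustering step can be sketched directly in NumPy. The point sets below stand in for grid-cell coordinates flagged as channel (sand) facies in two models and are illustrative only:

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two point clouds A and B.

    h(A, B) = max_{a in A} min_{b in B} ||a - b||; the symmetric distance
    is max(h(A, B), h(B, A)).  For reservoir models, each cloud could be
    the set of grid-cell coordinates belonging to the channel facies.
    """
    # Pairwise Euclidean distances, shape (len(A), len(B)).
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Two toy "channel" shapes on a grid (coordinates are illustrative only).
A = np.array([[0, 0], [1, 0], [2, 0]], dtype=float)
B = np.array([[0, 1], [1, 1], [2, 1], [3, 1]], dtype=float)
print(hausdorff_distance(A, B))   # sqrt(2): driven by B's extra point
```

The resulting pairwise distance matrix over all 100 realizations is what MDS projects to two dimensions before K-means picks the cluster representatives.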
Procedia PDF Downloads 143
7202 Hazardous Effects of Metal Ions on the Thermal Stability of Hydroxylammonium Nitrate
Authors: Shweta Hoyani, Charlie Oommen
Abstract:
HAN-based liquid propellants are perceived as a potential substitute for hydrazine in space propulsion. Storage stability for long service life in orbit is one of the key concerns for HAN-based monopropellants because of their reactivity with metallic and non-metallic impurities, which could be entrained from the surfaces of fuel tanks and tubes. The end result of this reactivity directly affects the handling, performance and storability of the liquid propellant. Gaseous products resulting from the decomposition of the propellant can lead to deleterious pressure build-up in storage vessels. The partial loss of an energetic component can change the ignition and combustion behavior and alter the performance of the thruster. The effect of the most plausible metals (iron, copper, chromium, nickel, manganese, molybdenum, zinc, titanium and cadmium) on the thermal decomposition mechanism of HAN has been investigated in this context. Studies involving different concentrations of metal ions and HAN at different preheat temperatures have been carried out. The effect of metal ions on the decomposition behavior of HAN has been studied earlier in the context of HAN as a gun propellant; the current investigation, however, pertains to the decomposition mechanism of HAN as a monopropellant for space propulsion. Decomposition onset temperature, rate of weight loss and heat of reaction were studied using DTA-TGA, and total pressure rise and rate of pressure rise during decomposition were evaluated using an in-house built constant volume batch reactor. Besides, the reaction mechanism and product profile were studied using a TGA-FTIR setup. Iron and copper displayed the maximum reactivity. Initial results indicate that iron and copper show a sensitizing effect at concentrations as low as 50 ppm with 60% HAN solution at 80°C.
On the other hand, 50 ppm zinc does not display any effect on the thermal decomposition of even 90% HAN solution at 80°C.
Keywords: hydroxylammonium nitrate, monopropellant, reaction mechanism, thermal stability
Procedia PDF Downloads 422
7201 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals
Authors: Christine F. Boos, Fernando M. Azevedo
Abstract:
Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of an epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptogenic zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long term EEG recordings at least 24 hours long and acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for pattern classification.
One of the differences between all of these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced to the network. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance using the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the descriptors presented efficiency values between 62 and 93%. After the simulations, we could observe that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing
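Of the five input stimuli compared, the FFT spectrum is the simplest to illustrate. A minimal sketch using NumPy with a synthetic epoch; the 3 Hz tone and 200 Hz sampling rate are assumptions for illustration, not the study's data:

```python
import numpy as np

def fft_spectrum_features(epoch, fs):
    """Magnitude spectrum of one EEG epoch, as fed to a neural network.

    epoch: 1-D array of signal samples; fs: sampling rate in Hz.
    Returns frequencies and normalised magnitudes up to the Nyquist rate.
    """
    spectrum = np.abs(np.fft.rfft(epoch))
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    return freqs, spectrum / spectrum.max()

# Synthetic 1-second epoch at 200 Hz: a 3 Hz tone plus noise
# (illustrative only; real epileptiform discharges last <= 200 ms).
fs = 200
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(fs)
freqs, mags = fft_spectrum_features(epoch, fs)
print("dominant frequency: %.1f Hz" % freqs[np.argmax(mags)])
```

The normalised magnitude vector (or a banded summary of it) would then be the input layer of the classifier, in place of the raw samples.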
Procedia PDF Downloads 528
7200 A Palmprint Identification System Based on Multi-Layer Perceptron
Authors: David P. Tantua, Abdulkader Helwan
Abstract:
Biometrics has recently been used for human identification systems based on biological traits such as fingerprints and iris scans. Identification systems based on biometrics show great efficiency and accuracy in such human identification applications. However, these types of systems are so far based on image processing techniques only, which may decrease the efficiency of such applications. Thus, this paper aims to develop a human palmprint identification system using a multi-layer perceptron neural network, which has the capability to learn using a backpropagation learning algorithm. The developed system uses images obtained from a public database available on the internet (CASIA). The processing pipeline is as follows: image filtering using a median filter, image adjustment, image skeletonizing, edge detection using the Canny operator to extract features, and clearing unwanted components of the image. The second phase is to feed those processed images into a neural network classifier, which will adaptively learn and create a class for each different image. 100 different images are used for training the system. Since this is an identification system, it is tested with the same images: the same 100 images are used for testing, and any image outside the training set should be unrecognized. The experimental results show that the developed system achieves 100% accuracy and can be implemented in real life applications.
Keywords: biometrics, biological traits, multi-layer perceptron neural network, image skeletonizing, edge detection using Canny operator
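The first step of the described pipeline, median filtering, can be sketched in pure NumPy; production code would typically use a library routine such as scipy.ndimage.median_filter instead:

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter, the noise-removal step of the pipeline.

    A pure-NumPy sketch; edges are handled by edge-padding so the
    output has the same shape as the input.
    """
    padded = np.pad(img, 1, mode="edge")
    # Stack the 9 shifted views and take the median across them.
    windows = [padded[r:r + img.shape[0], c:c + img.shape[1]]
               for r in range(3) for c in range(3)]
    return np.median(np.stack(windows), axis=0)

# A flat patch with one salt-noise pixel: the median filter removes it.
img = np.zeros((5, 5))
img[2, 2] = 255.0
print(median_filter_3x3(img)[2, 2])   # the outlier is suppressed to 0.0
```

Salt-and-pepper outliers like this are exactly what the median filter handles well before skeletonizing and Canny edge detection, since a mean filter would smear the spike into its neighbours instead.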
Procedia PDF Downloads 371
7199 The Interoperability between CNC Machine Tools and Robot Handling Systems Based on an Object-Oriented Framework
Authors: Pouyan Jahanbin, Mahmoud Houshmand, Omid Fatahi Valilai
Abstract:
A flexible manufacturing system (FMS) is a manufacturing system capable of handling the variations in product features that result from ever-changing customer demands. The flexibility of such manufacturing systems helps to utilize resources in a more effective manner. However, the control of such systems is complicated and challenging. An FMS needs CNC machines, robots and other resources for establishing flexibility and enhancing the efficiency of the whole system. It also needs to integrate these resources to reach the required efficiency and flexibility. In order to reach this goal, an integrator framework is proposed in which the machining data of CNC machine tools is received through a STEP-NC file. The interoperability of the system is achieved by the information system. This paper proposes an information system whose data model is designed based on an object-oriented approach and implemented through a knowledge-based system. The framework is connected to a database filled with the robot’s control commands. The framework programs the robots by rules embedded in its knowledge-based system. It also controls the interactions of CNC machine tools for loading and unloading actions by the robot. As a result, the proposed framework improves the integration of manufacturing resources in flexible manufacturing systems.
Keywords: CNC machine tools, industrial robots, knowledge-based systems, manufacturing resources integration, flexible manufacturing system (FMS), object-oriented data model
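The paper's knowledge-based system is not detailed in the abstract, but rule evaluation of the kind it describes (machine state in, robot command out) can be sketched as a first-match rule table; the state keys and command names below are hypothetical, not the framework's actual vocabulary:

```python
# Hypothetical rule table: each rule pairs a condition on the cell state
# with the robot command to issue when the condition holds.
RULES = [
    (lambda s: s["machine"] == "idle" and s["part_waiting"], "LOAD_PART"),
    (lambda s: s["machine"] == "done", "UNLOAD_PART"),
    (lambda s: True, "WAIT"),                # fallback rule
]

def next_robot_command(state):
    """First-match rule evaluation, as a knowledge-based system might do."""
    for condition, command in RULES:
        if condition(state):
            return command

print(next_robot_command({"machine": "idle", "part_waiting": True}))   # LOAD_PART
print(next_robot_command({"machine": "running", "part_waiting": False}))  # WAIT
```

In the proposed framework, such rules would be populated from the object-oriented data model and the STEP-NC machining data rather than hard-coded.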
Procedia PDF Downloads 455
7198 Undersea Communications Infrastructure: Risks, Opportunities, and Geopolitical Considerations
Authors: Lori W. Gordon, Karen A. Jones
Abstract:
Today’s high-speed data connectivity depends on a vast global network of infrastructure across space, air, land, and sea, with undersea cable infrastructure (UCI) serving as the primary means for intercontinental and ‘long-haul’ communications. The UCI landscape is changing and includes an increasing variety of state actors, such as the growing economies of Brazil, Russia, India, China, and South Africa. Non-state commercial actors, such as hyper-scale content providers including Google, Facebook, Microsoft, and Amazon, are also seeking to control their data and networks through significant investments in submarine cables. Active investments by both state and non-state actors will invariably influence the growth, geopolitics, and security of this sector. Beyond these hyper-scale content providers, there are new commercial satellite communication providers. These new players include traditional geosynchronous (GEO) satellites that offer broad coverage, high throughput GEO satellites offering high capacity with spot beam technology, and low earth orbit (LEO) ‘mega constellations’ offering global broadband services. Potential new entrants include High Altitude Platforms (HAPS) offering low latency connectivity and LEO constellations offering high-speed optical mesh networks, i.e., ‘fiber in the sky.’ This paper focuses on understanding the role of submarine cables within the larger context of the global data commons, spanning space, terrestrial, air, and sea networks, including an analysis of national security policy and geopolitical implications. As network operators and commercial and government stakeholders plan for emerging technologies and architectures, hedging risks for future connectivity will ensure that our data backbone remains secure for years to come.
Keywords: communications, global, infrastructure, technology
Procedia PDF Downloads 87
7197 Re-Conceptualizing the Indigenous Learning Space for Children in Bangladesh Placing Built Environment as Third Teacher
Authors: Md. Mahamud Hassan, Shantanu Biswas Linkon, Nur Mohammad Khan
Abstract:
Over the last three decades, the primary education system in Bangladesh has experienced significant improvement, but it has failed to cope with different social and cultural aspects, which presents many challenges for children, families, and the public school system. Neglecting our own contextual learning environment, much attention has been paid to a physical, outcome-focused model, which amounts to mere infrastructural development and is less attuned to an environment that suits the child's psychology and improves their social, emotional, physical, and moral competency. In South Asia, the symbol of education was never the little red house of colonial architecture but “a Guru sitting under a tree”, whereas a responsive and inclusive design approach could help to create more innovative learning environments. Such an approach incorporates how the built, natural, and cultural environment shapes the learner; in turn, learners shape the learning. This research will be conducted to i) identify the major issues and drawbacks of government policy for primary education development programs; ii) explore and evaluate the morphology of the conventional model of school; and iii) propose an alternative model in a collaborative design process with the stakeholders for maximizing the relationship between physical learning environments and learners by treating “the built environment” as “the third teacher.” Based on observation, this research will try to find out to what extent built and natural environments can be utilized as teaching tools for a more optimal learning environment. It should also be evident that there is a significant gap between state policy, predetermined educational specifications, and the implementation process in response to stakeholders’ involvement.
The outcome of this research will contribute to a people-place sensitive design approach through a more thoughtful and responsive architectural process.
Keywords: built environment, conventional planning, indigenous learning space, responsive design
Procedia PDF Downloads 107
7196 Research on the Effect of Accelerated Aging Illumination Mode on Bifacial Solar Modules
Authors: T. H. Huang, C. L. Fern, Y. K. Tseng
Abstract:
The design and reliability of solar photovoltaic modules are crucial to the development of solar energy, and efforts are still being made to extend the life of photovoltaic modules to improve their efficiency. Because natural aging is time-consuming and does not provide manufacturers and investors with timely information, accelerated aging is currently the best way to estimate the life of photovoltaic modules. Bifacial solar cells not only absorb light from the front side but also absorb light reflected from the ground on the back side, surpassing the performance of single-sided solar cells. Due to the asymmetry of the light on the two sides, in addition to the difference in photovoltaic conversion efficiency, there will also be differences in heat distribution, which affect the electrical properties and material structure of the bifacial solar cell itself. In this study, there are two types of experimental samples, packaged and unpackaged, which are irradiated with UVC light sources and halogen lamps for accelerated aging, as well as a control group without aging. After two weeks of accelerated aging, the bifacial solar cells were visually inspected, and infrared thermal images were taken; the samples were then subjected to IV measurement, and samples were taken for SEM, Raman, and XRD analyses in order to identify the defects that lead to failure, detect chemical changes, and analyze the reasons for the degradation of their characteristics. The analysis shows that aging causes carbonization of the polymer material on the surface of bifacial solar cells and affects the crystal structure.
Keywords: bifacial solar cell, accelerated aging, temperature, characterization, electrical measurement
Procedia PDF Downloads 112
7195 Theoretical-Methodological Model to Study Vulnerability of Death in the Past from a Bioarchaeological Approach
Authors: Geraldine G. Granados Vazquez
Abstract:
Every human being is exposed to the risk of dying, and some are more susceptible than others depending on the cause. The cause can thus be understood as the hazard of death that a group or individual faces, and this irreversible damage constitutes the condition of vulnerability. Risk is a dynamic concept, which means that it depends on environmental, social, economic and political conditions; thus, vulnerability may only be evaluated in terms of relative parameters. This research focuses on building a model that evaluates the risk or propensity of death in past urban societies in connection with the everyday life of individuals, considering that death can be a consequence of two coexisting issues: hazard and the deterioration of resistance to destruction. One of the most important discussions in bioarchaeology concerns health and life conditions in ancient groups, and researchers are looking for more flexible models to evaluate these topics. This research therefore proposes a theoretical-methodological model that assesses the vulnerability of death in past urban groups. The model is intended to evaluate the risk of death, considering both the sociohistorical context and the intrinsic biological features of the groups studied. The model proposes four areas to assess vulnerability. The first three areas use statistical methods or quantitative analysis, while the fourth, embodiment, is based on qualitative analysis. The four areas and their techniques are as follows. a) Demographic dynamics. From the distribution of age at the time of death, mortality is analyzed using life tables. From here, four aspects may be inferred: population structure, fertility, mortality-survival, and productivity-migration. b) Frailty. Selective mortality and heterogeneity in frailty can be assessed through the relationship between individual characteristics and age at death.
Two indicators used in contemporary populations to evaluate stress are height and linear enamel hypoplasias. Height estimates may account for the individual’s nutrition and health history in specific groups, while enamel hypoplasias are an account of the individual’s first years of life. c) Inequality. Space reflects the various sectors of society, also in ancient cities. In general terms, the spatial analysis uses measures of association to show the relationship between frailty variables and space. d) Embodiment. Everyone's story leaves some evidence on the body, even in the bones. That leads us to think about the individual's dynamic relations in terms of time and space; consequently, the micro analysis of persons will assess vulnerability from everyday life, where symbolic meaning also plays a major role. In sum, using some Mesoamerican examples as study cases, this research demonstrates that not only the intrinsic characteristics related to the age and sex of individuals are conducive to vulnerability, but also the social and historical context that determines their state of frailty before death. An attenuating factor for past groups is that some basic aspects, such as the role they played in everyday life, escape our comprehension and are still under discussion.
Keywords: bioarchaeology, frailty, Mesoamerica, vulnerability
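The life-table analysis named under (a) Demographic dynamics can be sketched from an age-at-death distribution; a minimal, abridged version (the death counts below are illustrative, not real Mesoamerican data):

```python
def life_table(deaths_by_age):
    """Abridged life table from counts of deaths per age interval.

    deaths_by_age: list of (age_interval_label, n_deaths) in ascending
    age order, e.g. from a bioarchaeological age-at-death distribution.
    Returns rows of (interval, dx, lx, qx), where dx is the proportion
    dying in the interval, lx the proportion surviving to its start and
    qx = dx / lx the probability of death within the interval.
    """
    total = sum(n for _, n in deaths_by_age)
    rows, lx = [], 1.0
    for label, n in deaths_by_age:
        dx = n / total
        qx = dx / lx
        rows.append((label, dx, lx, qx))
        lx -= dx
    return rows

# Illustrative counts only (not data from the study).
for row in life_table([("0-4", 30), ("5-14", 10), ("15-29", 20),
                       ("30-44", 25), ("45+", 15)]):
    print("%6s  dx=%.2f  lx=%.2f  qx=%.2f" % row)
```

Under the usual stationary-population assumption, the qx column is the age-specific probability of death from which population structure and mortality-survival patterns are then inferred.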
Procedia PDF Downloads 225
7194 A Study on the Urban Design Path of Historical Block in the Ancient City of Suzhou, China
Abstract:
In recent years, with the gradual change of Chinese urban development mode from 'incremental development' to 'stock-based renewal', the urban design method of ‘grand scene’ in the past could only cope with the planning and construction of incremental spaces such as new towns and new districts, while the problems involved in the renewal of the stock lands such as historic blocks of ancient cities are more complex. 'Simplified' large-scale demolition and construction may lead to the damage of the ancient city's texture and the overall cultural atmosphere; thus it is necessary to re-explore the urban design path of historical blocks in the conservation context of the ancient city. Through the study of the cultural context of the ancient city of Suzhou in China and the interpretation of its current characteristics, this paper explores the methods and paths for the renewal of historical and cultural blocks in the ancient city. It takes No. 12 and No. 13 historical blocks in the ancient city of Suzhou as examples, coordinating the spatial layout and the landscape and shaping the regional characteristics to improve the quality of the ancient city's life. This paper analyses the idea of conservation and regeneration from the aspects of culture, life, business form, and transport. Guided by the planning concept of ‘block repair and cultural infiltration’, it puts forward the urban design path of ‘conservation priority, activation and utilization, organic renewal and strengthening guidance’, with a view to continuing the cultural context and stimulating the vitality of ancient city, so as to realize the integration of history, modernity, space and culture. 
As a rare study of urban design within the scope of the Suzhou ancient city, the paper expects to explore concepts and methods of urban design for historic blocks on the basis of the conservation of history, space, and culture, and provides a reference for other similar types of urban construction.
Keywords: historical block, Suzhou ancient city, stock-based renewal, urban design
Procedia PDF Downloads 144
7193 A Case Study of Kinesthetic Intelligence Development Intervention on One Asperger Child
Authors: Chingwen Yeh, I. Chen Huang
Abstract:
This paper presents a case study of a kinesthetic intelligence development intervention with a child who has Asperger syndrome identified by a physician. First, the characteristics of Asperger syndrome were defined based on the related literature. Some people with Asperger syndrome are born with outstanding insight and are good at solving complex and difficult problems. In contrast to children with high-functioning autism, Asperger children do not lose their ability to express themselves verbally. However, in cognitive function, they focus mainly on the things they are interested in instead of paying attention to the whole surrounding situation; thus it is difficult for them not only to focus on things they are not interested in, but also to interact with people. Secondly, an 8-week kinesthetic intelligence development course was designed around a series of physical actions including the following sections: limb coordination, rhythm changes in various parts of the body, strength and space awareness, and breathing practice. In-class observations were recorded both in words and on video as the qualitative research data. Finally, in-depth interviews with the case child’s teachers, parents and other in-class observers were documented on a weekly basis in order to examine the effectiveness of the kinesthetic intelligence development course before and after the intervention and to verify the usefulness of the lesson plan. This research found that the case child improved significantly in terms of attention span and body movement creativity. At the beginning of the intervention, the case child made little eye contact with others, and the instructor needed to face the case child to confirm eye contact. The instructor also used various adjective words as guiding language for the movement sequence practices, which captured the case child’s attention and learning motivation and helped the case child understand what to do to enhance kinesthetic intelligence.
The authors hope the findings of this study can serve as a reference for further research on related topics.
Keywords: Asperger symptom, body rhythm, kinesthetic intelligence, space awareness
Procedia PDF Downloads 239
7192 Investigation of User Position Accuracy for Stand-Alone and Hybrid Modes of the Indian Navigation with Indian Constellation Satellite System
Authors: Naveen Kumar Perumalla, Devadas Kuna, Mohammed Akhter Ali
Abstract:
Satellite navigation systems such as the United States Global Positioning System (GPS) play a significant role in determining the user position. Similar to GPS, the Indian Regional Navigation Satellite System (IRNSS) is a satellite navigation system indigenously developed by the Indian Space Research Organization (ISRO), India, to meet the country’s navigation applications. This system is also known as Navigation with Indian Constellation (NavIC). The NavIC system’s main objective is to offer Positioning, Navigation and Timing (PNT) services to users in its two service areas, covering the Indian landmass and the Indian Ocean. Six NavIC satellites are already deployed in space, and their receivers are in the performance evaluation stage. Four NavIC dual frequency receivers are installed in the ‘Advanced GNSS Research Laboratory’ (AGRL) in the Department of Electronics and Communication Engineering, University College of Engineering, Osmania University, India. The NavIC receivers can be operated in two positioning modes: stand-alone IRNSS and hybrid (IRNSS+GPS). In this paper, parameters such as Dilution of Precision (DoP), three-dimensional (3D) Root Mean Square (RMS) position error and horizontal position error are analyzed with respect to the visibility of satellites, using real-time IRNSS data obtained by operating the receiver in both positioning modes. Data for two typical days (6th July 2017 and 7th July 2017) at the Hyderabad station (Latitude-17°24'28.07’N, Longitude-78°31'4.26’E) are analyzed. It is found that, with respect to the considered parameters, the hybrid mode of the NavIC receiver gives better results than the stand-alone positioning mode. This work finds application in the development of NavIC receivers for civilian navigation applications.
Keywords: DoP, GPS, IRNSS, GNSS, position error, satellite visibility
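The DoP parameters analyzed in the paper derive from satellite geometry alone; a minimal sketch computing GDOP/PDOP/HDOP/VDOP from azimuth/elevation pairs (the four satellites below are hypothetical, not actual NavIC geometry):

```python
import numpy as np

def dilution_of_precision(az_el_deg):
    """GDOP, PDOP, HDOP and VDOP from satellite azimuths and elevations.

    az_el_deg: list of (azimuth, elevation) pairs in degrees for the
    visible satellites.  Builds the geometry matrix H (unit line-of-sight
    vectors plus a clock column) and reads the DoPs off the diagonal of
    (H^T H)^-1 -- more, better-spread satellites give lower DoP values.
    """
    rows = []
    for az, el in az_el_deg:
        az, el = np.radians(az), np.radians(el)
        rows.append([np.cos(el) * np.sin(az),   # east component
                     np.cos(el) * np.cos(az),   # north component
                     np.sin(el),                # up component
                     1.0])                      # receiver clock bias
    H = np.array(rows)
    Q = np.linalg.inv(H.T @ H)
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])
    vdop = np.sqrt(Q[2, 2])
    return gdop, pdop, hdop, vdop

# Four hypothetical satellites, reasonably spread in azimuth.
sats = [(0, 60), (90, 30), (180, 45), (270, 30)]
gdop, pdop, hdop, vdop = dilution_of_precision(sats)
print(f"GDOP={gdop:.2f} PDOP={pdop:.2f} HDOP={hdop:.2f} VDOP={vdop:.2f}")
```

In the hybrid IRNSS+GPS mode, the extra visible satellites simply add rows to H, which is why the combined solution tends to show lower DoP than stand-alone IRNSS.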
Procedia PDF Downloads 213
7191 Formulation Development and Evaluation of Chlorpheniramine Maleate Containing Nanoparticles Loaded Thermo Sensitive in situ Gel for Treatment of Allergic Rhinitis
Authors: Vipin Saini, Manish Kumar, Shailendra Bhatt, A. Pandurangan
Abstract:
The aim of the present study was to fabricate a thermo-sensitive gel containing Chlorpheniramine maleate (CPM) loaded nanoparticles for intranasal administration for effective treatment of allergic rhinitis. Chitosan-based nanoparticles were prepared by a precipitation method, followed by the addition of the developed NPs to a Poloxamer 407 and Carbopol 934P based mucoadhesive thermo-reversible gel. The developed formulations were evaluated for particle size, PDI, % entrapment efficiency and % cumulative drug permeation. The NP3 formulation was found to be optimal on the basis of minimum particle size (143.9 nm), maximum entrapment efficiency (80.10±0.414 %) and highest drug permeation (90.92±0.531 %). The optimized formulation NP3 was then formulated into a thermo-reversible in situ gel. This intensifies the contact between the nasal mucosa and the drug and facilitates drug absorption, which results in increased bioavailability. The G4 formulation was selected as the optimized gel on the basis of gelation ability and mucoadhesive strength. Histology was carried out to examine any damage caused by the optimized G4 formulation. The results revealed no visual signs of tissue damage, indicating safe nasal delivery of the nanoparticulate in situ gel formulation G4. Thus, intranasal CPM NP-loaded in situ gel was found to be a promising formulation for the treatment of allergic rhinitis.
Keywords: chitosan, nanoparticles, in situ gel, chlorpheniramine maleate, poloxamer 407
Procedia PDF Downloads 178