Search results for: space efficiency
7491 Facial Recognition of University Entrance Exam Candidates using FaceMatch Software in Iran
Authors: Mahshid Arabi
Abstract:
In recent years, remarkable advancements in the fields of artificial intelligence and machine learning have led to the development of facial recognition technologies. These technologies are now employed in a wide range of applications, including security, surveillance, healthcare, and education. In the field of education, the identification of university entrance exam candidates has been one of the fundamental challenges. Traditional methods such as using ID cards and handwritten signatures are not only inefficient and prone to fraud but also susceptible to errors. In this context, utilizing advanced technologies like facial recognition can be an effective and efficient solution to increase the accuracy and reliability of identity verification in entrance exams. This article examines the use of FaceMatch software for recognizing the faces of university entrance exam candidates in Iran. The main objective of this research is to evaluate the efficiency and accuracy of FaceMatch software in identifying university entrance exam candidates to prevent fraud and ensure the authenticity of individuals' identities. Additionally, this research investigates the advantages and challenges of using this technology in Iran's educational systems. This research was conducted using an experimental method and random sampling. In this study, 1000 university entrance exam candidates in Iran were selected as samples. The facial images of these candidates were processed and analyzed using FaceMatch software. The software's accuracy and efficiency were evaluated using various metrics, including accuracy rate, error rate, and processing time. The research results indicated that FaceMatch software could accurately identify candidates with a precision of 98.5%. The software's error rate was less than 1.5%, demonstrating its high efficiency in facial recognition. 
Additionally, the average processing time for each candidate's image was less than 2 seconds, further indicating the software's efficiency. Statistical evaluation of the results, including analysis of variance (ANOVA) and t-tests, showed that the observed differences were significant and that the software's accuracy in identity verification is high. The findings of this research suggest that FaceMatch software can be used effectively as a tool for identifying university entrance exam candidates in Iran. This technology not only enhances security and prevents fraud but also simplifies and streamlines the exam administration process. However, challenges such as preserving candidates' privacy and the costs of implementation must also be considered. The use of facial recognition technology with FaceMatch software in Iran's educational systems can be an effective solution for preventing fraud and ensuring the authenticity of university entrance exam candidates' identities. Given the promising results of this research, it is recommended that this technology be implemented and utilized more widely in the country's educational systems.
Keywords: facial recognition, FaceMatch software, Iran, university entrance exam
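The reported accuracy and error-rate figures are simple ratios over the sample. As a minimal illustrative sketch, the identification counts below are hypothetical, chosen only so that they reproduce the reported 98.5% accuracy and 1.5% error rate for 1000 candidates:

```python
# Hypothetical identification counts for the 1000 sampled candidates,
# chosen to reproduce the abstract's reported 98.5% / 1.5% split.
correct = 985   # candidates correctly identified (assumed)
errors = 15     # candidates misidentified (assumed)
total = correct + errors

accuracy = correct / total    # fraction correctly identified
error_rate = errors / total   # fraction misidentified

print(f"accuracy:   {accuracy:.1%}")    # 98.5%
print(f"error rate: {error_rate:.1%}")  # 1.5%
```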
Procedia PDF Downloads 477
7490 Insight into Localized Fertilizer Placement in Major Cereal Crops
Authors: Solomon Yokamo, Dianjun Lu, Xiaoqin Chen, Huoyan Wang
Abstract:
The current ‘high input-high output’ nutrient management model, based on homogeneous spreading over the entire soil surface, remains a key challenge in China’s farming systems, leading to low fertilizer use efficiency and environmental pollution. Localized placement of fertilizer (LPF) in crop root zones has been proposed as a viable approach to boost crop production while reducing environmental pollution. To assess the potential benefits of LPF on three major crops—wheat, rice, and maize—a comprehensive meta-analysis was conducted, encompassing 85 field studies published from 2002 to 2023. We further validated the practicability and feasibility of one-time root zone N management based on LPF for the three field crops. The meta-analysis revealed that LPF significantly increased the yields of the selected crops (13.62%) and nitrogen recovery efficiency (REN) (33.09%) while reducing cumulative nitrous oxide (N₂O) emission (17.37%) and ammonia (NH₃) volatilization (60.14%) compared to conventional surface application (CSA). Higher grain yield and REN were achieved with an optimal fertilization depth (FD) of 5-15 cm, moderate N rates, combined NPK application, one-time deep fertilization, and coarse-textured and slightly acidic soils. Field validation experiments showed that localized one-time root zone N management without topdressing increased maize (6.2%), rice (34.6%), and wheat (2.9%) yields while saving N fertilizer (3%), and also increased the net economic benefits (23.71%) compared to CSA. A soil incubation study further demonstrated the potential of LPF to enhance the retention and availability of mineral N in the root zone over an extended period. Thus, LPF could be an important fertilizer management strategy and should be extended to other less-developed and developing regions to achieve the triple benefit of food security, environmental quality, and economic gains.
Keywords: grain yield, LPF, NH₃ volatilization, N₂O emission, N recovery efficiency
Procedia PDF Downloads 19
7489 Direct Fed Microbes: A Better Approach to Maximize Utilization of Roughages in Tropical Ruminants
Authors: Muhammad Adeel Arshad, Shaukat Ali Bhatti, Faiz-ul Hassan
Abstract:
Manipulating the microbial ecosystem in the rumen is considered an important strategy to optimize production efficiency in ruminants. In the past, antibiotics and synthetic chemical compounds were used for the manipulation of rumen fermentation. However, since the non-therapeutic use of antibiotics has been banned, efforts have focused on finding safe alternative products. In the tropics, crop residues and grazed forage are the major dietary sources for ruminants. The poor digestibility and utilization of these feedstuffs by animals is a limiting factor in exploiting the full potential of ruminants in this area. Hence, there is a need to enhance the utilization of these available feed resources. One potential strategy in this regard is the use of direct-fed microbes. Bacteria and fungi are mostly used as direct-fed microbes to improve animal health and productivity. Commonly used bacterial species include lactic acid-producing and lactic acid-utilizing bacteria (Lactobacillus, Streptococcus, Enterococcus, Bifidobacterium, and Bacillus), while commonly used fungal genera are Saccharomyces (a yeast) and Aspergillus. Direct-fed microbes modulate the microbial balance in the gastrointestinal tract through the competitive exclusion of pathogenic species and by favoring beneficial microbes. Improvements in weight gain and feed efficiency have been observed as a result of feeding direct-fed bacteria. The use of fungi as direct-fed microbes may prevent excessive production of lactate and scavenge oxygen, which is harmful to rumen anaerobes, leading to better feed digestibility. However, the mechanistic mode of action of bacterial or fungal direct-fed microbes has not yet been established. Various reports have confirmed increases in dry matter intake, milk yield, and milk contents in response to the administration of direct-fed microbes. However, the response to direct-fed microbes has been variable, which is mainly attributed to the dosages and strains of the microbes used.
Nonetheless, it is concluded that the inclusion of direct-fed microbes may modulate the rumen ecosystem to manage lactic acid production and utilization in both clinical and sub-acute rumen acidosis.
Keywords: microbes, roughages, rumen, feed efficiency, production, fermentation
Procedia PDF Downloads 138
7488 Assessing the Mass Concentration of Microplastics and Nanoplastics in Wastewater Treatment Plants by Pyrolysis Gas Chromatography−Mass Spectrometry
Authors: Yanghui Xu, Qin Ou, Xintu Wang, Feng Hou, Peng Li, Jan Peter van der Hoek, Gang Liu
Abstract:
The level and removal of microplastics (MPs) in wastewater treatment plants (WWTPs) have been well evaluated in terms of particle number, while the mass concentrations of MPs, and especially of nanoplastics (NPs), remain unclear. In this study, microfiltration, ultrafiltration, and hydrogen peroxide digestion were used to extract MPs and NPs in different size ranges (0.01−1, 1−50, and 50−1000 μm) across the whole treatment schemes in two WWTPs. By identifying specific pyrolysis products, pyrolysis gas chromatography−mass spectrometry was used to quantify the mass concentrations of six selected polymer types: polymethyl methacrylate (PMMA), polypropylene (PP), polystyrene (PS), polyethylene (PE), polyethylene terephthalate (PET), and polyamide (PA). The mass concentrations of total MPs and NPs decreased from 26.23 and 11.28 μg/L in the influent to 1.75 and 0.71 μg/L in the effluent, with removal rates of 93.3 and 93.7% in plants A and B, respectively. Among them, PP, PET, and PE were the dominant polymer types in wastewater, while PMMA, PS, and PA accounted for only a small part. The mass concentrations of NPs (0.01−1 μm) were much lower than those of MPs (>1 μm), accounting for 12.0−17.9 and 5.6−19.5% of the total MPs and NPs, respectively. Notably, the removal efficiency differed with polymer type and size range. The low-density MPs (e.g., PP and PE) had lower removal efficiency than high-density PET in both plants. Since particles with smaller sizes can pass the tertiary sand filter or membrane filter more easily, the removal efficiency of NPs was lower than that of the larger MPs. Based on the annual wastewater effluent discharge, it is estimated that about 0.321 and 0.052 tons of MPs and NPs, respectively, were released into the river each year.
Overall, this study investigated the mass concentrations of MPs and NPs over a wide size range (0.01−1000 μm) in wastewater, providing valuable information regarding the pollution level and distribution characteristics of MPs, and especially NPs, in WWTPs. However, there are limitations and uncertainties in the current study, especially regarding sample collection and MP/NP detection. The plastic items used (e.g., sampling buckets, ultrafiltration membranes, centrifugal tubes, and pipette tips) may introduce potential contamination. Additionally, the proposed method caused losses of MPs, and especially NPs, which can lead to underestimation of MPs/NPs. Further studies are recommended to address these challenges in quantifying MPs/NPs in wastewater.
Keywords: microplastics, nanoplastics, mass concentration, WWTPs, Py-GC/MS
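The quoted removal rates follow directly from the reported influent and effluent mass concentrations; a small sketch using the figures given above:

```python
# Influent/effluent mass concentrations (ug/L) reported in the abstract.
mp_in, mp_out = 26.23, 1.75   # total microplastics (>1 um)
np_in, np_out = 11.28, 0.71   # total nanoplastics (0.01-1 um)

def removal(c_in, c_out):
    """Removal efficiency as a percentage of the influent mass."""
    return 100.0 * (c_in - c_out) / c_in

print(f"MP removal: {removal(mp_in, mp_out):.1f}%")  # ~93.3%
print(f"NP removal: {removal(np_in, np_out):.1f}%")  # ~93.7%
```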
Procedia PDF Downloads 281
7487 Unpacking the Summarising Event in Trauma Emergencies: The Case of Pre-briefings
Authors: Jo Angouri, Polina Mesinioti, Chris Turner
Abstract:
In order for a group of ad-hoc professionals to perform as a team, a shared understanding of the problem at hand and an agreed action plan are necessary components. This is particularly significant in complex, time-sensitive professional settings such as trauma emergencies. In this context, team briefings prior to the patient's arrival (pre-briefings) constitute a critical event for the performance of the team; they provide the necessary space for co-constructing a shared understanding of the situation through summarising the information available to the team. Yet the act of summarising is widely assumed in medical practice but not systematically researched. In the vast teamwork literature, terms such as ‘shared mental model’, ‘mental space’ and ‘cognate labelling’ are used extensively, and loosely, to denote the outcome of the summarising process, but how exactly this is done interactionally remains under-researched. This paper reports on the forms and functions of pre-briefings in a major trauma centre in the UK. Taking an interactional approach, we draw on 30 simulated and real-life trauma emergencies (15 from each dataset) and zoom in on the use of pre-briefings, which we consider focal points in the management of trauma emergencies. We show how ad-hoc teams negotiate sharedness of future orientation through summarising, synthesising information, and establishing a common understanding of the situation. We illustrate the role, characteristics, and structure of pre-briefing sequences that have been evaluated as ‘efficient’ in our data and the impact (in)effective pre-briefings have on teamwork. Our work shows that the key roles in the event own the act of summarising, and we problematise the implications for leadership in trauma emergencies.
We close the paper with a model for pre-briefing and provide recommendations for clinical practice, arguing that effective pre-briefing practice is teachable.
Keywords: summarising, medical emergencies, interaction analysis, shared/mental models
Procedia PDF Downloads 94
7486 Investigation on the Capacitive Deionization of Functionalized Carbon Nanotubes (F-CNTs) and Silver-Decorated F-CNTs for Water Softening
Authors: Khrizelle Angelique Sablan, Rizalinda De Leon, Jaeyoung Lee, Joey Ocon
Abstract:
The impending water shortage drives us to find alternative sources of water. One of the possible solutions is the desalination of seawater. There are numerous processes by which this can be done, one of which is capacitive deionization, a relatively new technique for water desalination that utilizes the electric double layer for ion adsorption. Carbon-based materials are commonly used as electrodes for capacitive deionization. In this study, carbon nanotubes (CNTs) were treated in a mixture of nitric and sulfuric acid. Silver was also added to impart antimicrobial action. The acid-treated carbon nanotubes (f-CNTs) and silver-decorated f-CNTs (Ag@f-CNTs) were used as electrode materials for seawater deionization and compared with pristine and acid-treated CNTs. The synthesized materials were characterized using TEM, EDS, XRD, XPS, and BET. The electrochemical performance was evaluated using cyclic voltammetry, and the deionization performance was tested on a single cell with water containing 64 mg/L NaCl. The results showed that the synthesized Ag@f-CNT-10 H performed better than the pristine and acid-treated CNTs, with a maximum ion removal efficiency of 50.22% and a corresponding adsorption capacity of 3.21 mg/g. It also showed antimicrobial activity against E. coli. However, the material lacks stability, as the efficiency decreases with repeated use of the electrode.
Keywords: capacitive deionization, carbon nanotubes, desalination, acid functionalization, silver
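The quoted adsorption capacity is conventionally computed as q = (C0 − Ce)·V/m. The initial concentration and removal efficiency below are from the abstract, but the treated volume V and electrode mass m are hypothetical illustration values (not reported here), chosen so the sketch lands near the quoted 3.21 mg/g:

```python
# Electrosorption (adsorption) capacity for a CDI cell: q = (C0 - Ce) * V / m.
c0 = 64.0                    # initial NaCl concentration, mg/L (from abstract)
removal_eff = 0.5022         # reported ion removal efficiency (from abstract)
ce = c0 * (1 - removal_eff)  # final/equilibrium concentration, mg/L

V = 0.1   # treated solution volume, L (assumed for illustration)
m = 1.0   # electrode mass, g (assumed for illustration)

q = (c0 - ce) * V / m        # adsorption capacity, mg/g
print(f"q = {q:.2f} mg/g")   # ~3.21 mg/g with these assumed V and m
```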
Procedia PDF Downloads 231
7485 A Microwave Heating Model for Endothermic Reaction in the Cement Industry
Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
Microwave technology has been gaining importance in contributing to decarbonization processes in high-energy-demand industries. Despite the several numerical models presented in the literature, a proper Verification and Validation exercise is still lacking. Such an exercise is important and required to evaluate the accuracy and adequacy of the physical process model. Another issue concerns impedance matching, an important mechanism used in microwave experiments to increase electromagnetic efficiency. This mechanism is not available in current computational tools and thus requires an external numerical procedure. A numerical model was implemented to study the continuous processing of limestone with microwave heating. This process requires the material to be heated to a temperature that prompts a highly endothermic reaction. Both a 2D and a 3D model were built in COMSOL Multiphysics to solve the two-way coupling between the Maxwell and energy equations, along with the coupling between both heat transfer phenomena and the endothermic reaction of limestone. The 2D model was used to study and evaluate the required numerical procedure and also serves as a benchmark test, allowing other authors to implement impedance matching procedures. To achieve this goal, a controller built in MATLAB was used to continuously match the cavity impedance and predict the energy required by the system, thus successfully avoiding energy inefficiencies. The 3D model reproduces realistic results and therefore supports the main conclusions of this work. Limestone was modeled as a continuous flow under the transport of concentrated species, whose material and kinetic properties were taken from the literature. Verification and Validation of the coupled model was undertaken separately from the chemical kinetic model. The chemical kinetic model was found to correctly describe the chosen kinetic equation by comparing numerical results with experimental data.
A solution verification was made for the electromagnetic interface, where second-order and fourth-order accurate schemes were found for linear and quadratic elements, respectively, with numerical uncertainty lower than 0.03%. Regarding the coupled model, it was demonstrated that the numerical error diverges for the heat transfer interface with the mapped mesh. Results showed numerical stability for the triangular mesh, and the numerical uncertainty was less than 0.1%. This study evaluated the influence of limestone velocity, heat transfer, and load on thermal decomposition and overall process efficiency. The velocity and heat transfer coefficient were studied with the 2D model, while different loads of material were studied with the 3D model. Both models proved to be highly unstable when solving non-linear temperature distributions. High-velocity flows exhibited a propensity for thermal runaways, and the thermal efficiency showed a tendency to stabilize at higher velocities and higher filling ratios. Microwave efficiency showed an optimal velocity for each heat transfer coefficient, pointing out that electromagnetic efficiency is a consequence of energy distribution uniformity. The 3D results indicated inefficient development of the electric field for low filling ratios. Thermal efficiencies higher than 90% were found for the higher loads, and microwave efficiencies of up to 75% were accomplished. The 80% filling ratio was demonstrated to be the optimal load, with an associated global efficiency of 70%.
Keywords: multiphysics modeling, microwave heating, verification and validation, endothermic reactions modeling, impedance matching, limestone continuous processing
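Solution verification of the kind described above typically estimates the observed order of accuracy from solutions on systematically refined meshes. A generic sketch of that calculation follows; the sample values are hypothetical and not taken from the study:

```python
import math

def observed_order(f_fine, f_medium, f_coarse, r=2.0):
    """Observed order of accuracy from three solutions on meshes
    refined by a constant ratio r (f_fine is the finest-grid value)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Hypothetical sample: successive differences shrink by r**p = 4,
# as expected for a second-order scheme with r = 2.
f1, f2, f3 = 1.000, 1.004, 1.020
p = observed_order(f1, f2, f3)
print(f"observed order p = {p:.2f}")  # ~2.00
```

The same observed order can then feed a Richardson-style extrapolation to bound the numerical uncertainty, which is how figures like the 0.03% and 0.1% quoted above are usually obtained.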
Procedia PDF Downloads 140
7484 Efficacy of Pisum sativum and Arbuscular Mycorrhizal Symbiosis for Phytoextraction of Heavy Metalloids from Soil
Authors: Ritu Chaturvedi, Manoj Paul
Abstract:
A pot experiment was conducted to investigate the effect of an arbuscular mycorrhizal fungus (AMF) on the metal(loid) uptake and accumulation efficiency of Pisum sativum, along with its physiological and biochemical response. Plants were grown in soil spiked with 50 and 100 mg kg-1 Pb, 25 and 50 mg kg-1 Cd, 50 and 100 mg kg-1 As, and a combination of all three metal(loid)s. A parallel set was maintained and inoculated with the arbuscular mycorrhizal fungus for comparison. After 60 days, plants were harvested and analysed for metal(loid) content. A steady increase in metal(loid) accumulation was observed with increasing metal(loid) dose and also with AMF inoculation. Plant height, biomass, chlorophyll, carotenoid, and carbohydrate content were reduced upon metal(loid) exposure. Increases in enzymatic (CAT, SOD, and APX) and non-enzymatic (proline) defence proteins were observed on metal(loid) exposure. AMF inoculation led to increases in plant height, biomass, chlorophyll, carotenoids, carbohydrate, and the enzymatic defence proteins under study (p≤0.001), whereas the proline content was reduced. Considering the accumulation efficiency and adaptive response of the plants and the alleviation of stress by AMF, this symbiosis can be applied for on-site remediation of Pb- and Cd-contaminated soil.
Keywords: heavy metal, mycorrhiza, pea, phytoremediation
Procedia PDF Downloads 234
7483 Analysis of Cultural Influences on Quality Management by Comparison of Japanese and German Enterprises
Authors: Hermann Luecken, Young Won Park, Judith M. Puetter
Abstract:
Quality is commonly defined as the conformance of product characteristics to customer requirements. Both the customer requirements and the assessment of the product characteristics with regard to the fulfillment of those requirements are subject to cultural influences. The processes that lead to product manufacturing are, of course, also subject to cultural influences. In the first case, it is the cultural background of the customer that influences quality; in the second, it is the cultural background of the employees and the company that influences the process itself. In times of globalization, products are manufactured at different locations around the world, but typically the quality management system of the country in which the parent company is based is used. This leads to significantly different results in terms of productivity, product quality, and process efficiency at the different locations, although the same quality management system is in use. The aim of an efficient and effective quality management system is therefore not to do the same thing at all locations, but to achieve the same result at all locations. In the past, standardization was used to achieve the same results. Recent investigations show that this is not the best way to achieve the same characteristics of product quality and production performance. In the present work, it is shown that the consideration of cultural aspects in the design of processes, production systems, and quality management systems results in significantly higher efficiency and in quality improvement. Both Japanese and German companies were investigated with comparative interviews. The background to this selection is that the cultural difference regarding industrial processes between Germany and Japan is in most cases large, while at the same time the customer expectations regarding product quality are very similar.
Interviews were conducted with experts from German and Japanese companies; in particular, companies were selected that operate production facilities both in Germany and in Japan. The comparison shows that the cultural influence on the respective production performance is significant. Companies that adapt the design of their quality management and production systems to the country where the production site is located have significantly higher productivity and significantly higher product quality than companies that work with a centralized system.
Keywords: comparison of German and Japanese production systems, cultural influence on quality management, expert interviews, process efficiency
Procedia PDF Downloads 160
7482 Research on Configuration of Large-Scale Linear Array Feeder Truss Parabolic Cylindrical Antenna of Satellite
Authors: Chen Chuanzhi, Guo Yunyun
Abstract:
The large linear-array-fed parabolic cylindrical antenna of a satellite has the ability of large-area line focusing, forming multi-directional beam clusters simultaneously in a given azimuth plane and elevation plane, responding quickly to different orientations and directions over a wide frequency range, dual aiming in frequency and direction, and combining space power. The large-diameter parabolic cylindrical antenna has therefore become one of the new development directions for spaceborne antennas. Limited by the size of the rocket fairing, a large-diameter spaceborne antenna is required to have low mass and a deployment function. Once in orbit, the antenna can be deployed by expansion and then stabilized. However, few existing structural types can be used to construct large cylindrical shell structures, which greatly limits the development and application of such antennas. Aiming at high structural efficiency, the geometrical characteristics of parabolic cylinders and the topological mapping law from the mechanism to the expandable truss are studied, and the basic configuration of a deployable truss with a cylindrical shell is constructed. A modular truss parabolic cylindrical antenna is then designed in this paper. The antenna has the characteristics of a stable structure, high precision of reflecting surface formation, a controllable motion process, a high stowage ratio, and low mass. On the basis of the overall configuration theory and optimization method, the structural stiffness of the modular truss parabolic cylindrical antenna is improved. The bearing density and impact resistance of the support structure are also improved, based on the optimal internal tension distribution method for reflector forming.
Finally, a truss-type cylindrical deployable support structure with a high deployment ratio, high stiffness, controllable deployment, and low mass is successfully developed, laying the foundation for the application of large-diameter parabolic cylindrical antennas on satellites.
Keywords: linear array feed antenna, truss type, parabolic cylindrical antenna, spaceborne antenna
Procedia PDF Downloads 158
7481 A Wideband CMOS Power Amplifier with 23.3 dB S21, 10.6 dBm Psat and 12.3% PAE for 60 GHz WPAN and 77 GHz Automobile Radar Systems
Authors: Yo-Sheng Lin, Chien-Chin Wang, Yun-Wen Lin, Chien-Yo Lee
Abstract:
A wideband power amplifier (PA) for 60 GHz and 77 GHz direct-conversion transceivers, implemented in standard 90 nm CMOS technology, is reported. The PA comprises a cascode input stage with a wideband T-type input-matching network and inductive interconnection and load, followed by a common-source (CS) gain stage and a CS output stage. To increase the saturated output power (PSAT) and power-added efficiency (PAE), the output stage adopts a two-way power dividing and combining architecture. Instead of an area-consuming Wilkinson power divider and combiner, miniature low-loss transmission-line inductors are used at the input and output terminals of each of the output stages for wideband input and output impedance matching to 100 ohms. This in turn results in further PSAT and PAE enhancement. The PA consumes 92.2 mW and achieves a maximum power gain (S21) of 23.3 dB at 56 GHz, and S21 of 21.7 dB and 14 dB at 60 GHz and 77 GHz, respectively. In addition, the PA achieves an excellent PSAT of 10.6 dBm and a maximum PAE of 12.3% at 60 GHz. At 77 GHz, the PA achieves an excellent PSAT of 10.4 dBm and a maximum PAE of 6%. These results demonstrate that the proposed wideband PA architecture is very promising for 60 GHz wireless personal area network (WPAN) and 77 GHz automobile radar systems.
Keywords: 60 GHz, 77 GHz, PA, WPAN, automotive radar
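The quoted figures can be cross-checked with the standard dBm-to-milliwatt conversion. Note that the ratio computed below is the drain efficiency (output power over DC power), an upper bound related to, but not equal to, the reported 12.3% PAE, which additionally subtracts the input power:

```python
# dBm-to-mW conversion and a rough consistency check using the figures
# quoted above (Psat = 10.6 dBm at 60 GHz, Pdc = 92.2 mW).
def dbm_to_mw(p_dbm):
    """Convert a power level in dBm to milliwatts."""
    return 10 ** (p_dbm / 10)

p_sat_mw = dbm_to_mw(10.6)    # ~11.5 mW
drain_eff = p_sat_mw / 92.2   # ~12.5%, consistent with the 12.3% PAE
print(f"Psat = {p_sat_mw:.1f} mW, drain efficiency = {drain_eff:.1%}")
```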
Procedia PDF Downloads 575
7480 Application of Space Technology at Cadastral Level and Land Resources Management with Special Reference to Bhoomi Sena Project of Uttar Pradesh, India
Authors: A. K. Srivastava, Sandeep K. Singh, A. K. Kulshetra
Abstract:
Agriculture is the backbone of the developing countries of the Asian subcontinent, such as India. Uttar Pradesh is the most populous and fifth largest state of India. The total population of the state is 19.95 crore, which is 16.49% of the country's population and more than that of many other countries of the world, yet Uttar Pradesh occupies only 7.36% of the total area of India. It is a well-established fact that agriculture has virtually been the lifeline of the state's economy for a long time, and its predominance is likely to continue for a fairly long time in the future. The total geographical area of the state is 242.01 lakh hectares, out of which 120.44 lakh hectares face various land degradation problems. These need to be put under conservation and reclamation measures at a much faster pace in order to enhance agricultural productivity in the state. Keeping in view the above scenario, the Department of Agriculture, Government of Uttar Pradesh, has formulated a multi-purpose project, namely Bhoomi Sena, for the entire state. The main objective of the project is to remedy land degradation using low-cost technology available at the village level. The total outlay of the project is Rs. 39643.75 lakhs for an area of about 226,000 ha included in the 12th Five Year Plan (2012-13 to 2016-17). It is expected that the total man-days would be 310.60 lakh. An attempt has been made to use space technologies, such as remote sensing and geographic information systems, at the cadastral level for the overall management of the agricultural engineering works required for the treatment of land degradation. After integration of the thematic maps, a proposed action plan map has been prepared for future work.
Keywords: GPS, GIS, remote sensing, topographic survey, cadastral mapping
Procedia PDF Downloads 309
7479 Application of Nanoparticles on Surface of Commercial Carbon-Based Adsorbent for Removal of Contaminants from Water
Authors: Ahmad Kayvani Fard, Gordon Mckay, Muataz Hussien
Abstract:
Adsorption/sorption is believed to be one of the optimal processes for the removal of heavy metals from water due to its low operational and capital costs as well as its high removal efficiency. Different materials have been reported in the literature as adsorbents for heavy metal removal from wastewater, such as natural sorbents, synthetic organic polymers, and inorganic mineral materials. The selection of adsorbents and the development of new functional materials that can achieve good removal of heavy metals from water is an important practice and depends on many factors, such as the availability of the material, its cost, and its safety. In this study, we report the synthesis of activated carbon (AC) and carbon nanotubes (CNTs) doped with different loadings of metal oxide nanoparticles, such as Fe2O3, Fe3O4, Al2O3, TiO2, and SiO2, as well as Ag nanoparticles, and their application in the removal of heavy metals, hydrocarbons, and organics from wastewater. Commercial AC and CNTs with different loadings of the mentioned nanoparticles were prepared, and the effects of pH, adsorbent dosage, sorption kinetics, and concentration were studied; the optimum conditions for the removal of heavy metals from water are reported. The prepared composite sorbents were characterized using field emission scanning electron microscopy (FE-SEM), high-resolution transmission electron microscopy (HR-TEM), thermogravimetric analysis (TGA), X-ray diffraction (XRD), the Brunauer-Emmett-Teller (BET) nitrogen adsorption technique, and zeta potential measurements. The composite materials showed higher removal efficiency and superior adsorption capacity compared to commercially available carbon-based adsorbents. The specific surface area of the AC increased by 50%, reaching up to 2000 m2/g, while the specific surface area of the CNTs increased by more than 8 times, reaching a value of 890 m2/g.
The increased surface area is, along with the surface charge of the material, one of the key parameters determining the removal efficiency. Moreover, the surface charge density of the impregnated CNT and AC was enhanced significantly, which benefits the adsorption process. The nanoparticles also enhance the catalytic activity of the material and reduce its agglomeration and aggregation, which provides more active sites for adsorbing contaminants from water. The results for treating wastewater include 100% removal of BTEX, arsenic, strontium, barium, phenolic compounds, and oil from water. The results obtained are promising for the use of AC and CNTs loaded with metal oxide nanoparticles in the treatment and pretreatment of wastewater and of produced water before the desalination process. Adsorption can be very efficient, with low energy consumption and economic feasibility.
Keywords: carbon nanotube, activated carbon, adsorption, heavy metal, water treatment
Procedia PDF Downloads 234
7478 Criticism and Theorizing of Architecture and Urbanism in the Creativity Cinematographic Film
Authors: Wafeek Mohamed Ibrahim Mohamed
Abstract:
In the era of globalization, the camera of the cinematographic film plays a very important role in monitoring and documenting the built environment, both architectural and urban. The film and its screen, on which the picture appears at its best and has now reached its third dimension, carry the audience back in time. The camera has recorded the shape of the city with its paths, lanes (alleys), buildings, and architectural styles. We have seen architectural styles in cinematic scenes that remain a record of history, even though some of them have since disappeared, as happened to the 'Boulak Bridge' in Cairo, built by Eiffel: although it has been demolished, it survives as a memory in films such as 'Usta Hassan' and 'A Crime in the Quiet Neighborhood'. The purpose of this research is to reach a critical view of criticism and theorizing of architecture and urbanism in the cinematographic film, and of their relationship to, and reflection on, the audience's (public opinion's) understanding of our architectural and urban built environment, with its problems and hardships. It is an attempt to study the architecture and urbanism of the built environment in the cinematographic film and to link them to a realistic view of the conceptual significance that governs them. The aesthetic thought of our traditional environment, in a psychological and anthropological framework, derives from the cinematic concept of the architecture and urbanism of the place and the dynamics of its spaces. The architectural space is considered the foundation stone of the cinematic story and the main background of its events, integrating the audience into a romantic trip to the city through its symbolized image of spaces, lanes (alleys), etc.
This will be done through two main branches: first, reviewing the architecture and urbanism portrayed in cinematographic films over the past decades of Egyptian cinema (from the film 'Bab El Hadid' to 'Saidi at the American University'). The research concludes with the importance of studying cinematic films that deal with our societies and their architectural and urban concerns, whether traditional or contemporary, and their crises (such as the housing crisis in the film 'Krakoun in the Street', etc.), in order to study the built environment and its dynamic architectural spaces through a modernist view. It also highlights the use of cinema as an important medium for spreading ideas and for documenting and monitoring current changes in the built environment through its various dramas, comedies, etc. The cinema is a mirror of society and its built environment across the epochs. It confirms the unique bond cinema forms with the audience (public opinion) through the sense of space and the forming of the mental image of the city and the built environment.
Keywords: architectural and urbanism, cinematographic architectural, film, space in the film, media
Procedia PDF Downloads 237
7477 The Effect of Teaching Science Strategies Curriculum and Evaluating on Developing the Efficiency of Academic Self in Science and the Teaching Motivation for the Student Teachers of the Primary Years
Authors: Amani M. Al-Hussan
Abstract:
The current study aimed to explore the effects of a science teaching strategies course (CURR422) on developing academic self-efficacy and motivation towards teaching science in female primary classroom student teachers in the College of Education at Princess Nora Bint AbdulRahman University. The study sample consisted of 48 female student teachers. To achieve the study aims, the researcher designed two instruments, an Academic Self-Efficacy Scale and a Motivation towards Teaching Science Scale, while establishing the validity and reliability of these instruments. Several statistical procedures were conducted, i.e., independent-sample t-test, eta squared, and Cohen's d effect size. The results reveal statistically significant differences between the means of the pre- and post-test for the sample, in favor of the post-test. For the academic self-efficacy scale, eta squared was 0.99 and the effect size was 27.26, while for the motivation towards teaching science scale, eta squared was 0.99 and the effect size was 51.72. These results indicate a strong effect of the independent variable on the dependent variables.
Keywords: academic self-efficacy, achievement, motivation, primary classroom teacher, science teaching strategies course, evaluation
Procedia PDF Downloads 499
7476 Production Factor Coefficients Transition through the Lens of State Space Model
Authors: Kanokwan Chancharoenchai
Abstract:
Economic growth can be considered an important element of a country's development process. For developing countries like Thailand, to ensure continuous growth of the economy, the government usually implements various policies to stimulate economic growth. These may take the form of fiscal, monetary, trade, and other policies. Because of these different aspects, understanding the factors relating to economic growth could allow the government to introduce a proper plan for future economic stimulus schemes. Consequently, this issue has caught the interest not only of policymakers but also of academics. This study, therefore, investigates explanatory variables for economic growth in Thailand from 2005 to 2017, a total of 52 quarters. The findings contribute to the field of economic growth and provide helpful information to policymakers. The investigation is estimated through a production function with a non-linear Cobb-Douglas equation. The rate of growth is indicated by the change of GDP in natural logarithmic form. The relevant factors included in the estimation cover the three traditional means of production and implicit effects, such as human capital, international activity, and technological transfer from developed countries. In addition, this investigation takes internal and external instabilities into account, proxied by an unobserved inflation estimate and the real effective exchange rate (REER) of the Thai baht, respectively. The unobserved inflation series is obtained from an AR(1)-ARCH(1) model, while the unobserved REER of the Thai baht is obtained from a naive OLS-GARCH(1,1) model. According to the empirical results, the AR(|2|) equation, which includes seven significant variables, namely capital stock, labor, imports of capital goods, trade openness, REER uncertainty of the Thai baht, one-period-lagged GDP, and a dummy for the 2009 world financial crisis, presents the most suitable model.
The autoregressive model assumes constant coefficients, which could introduce bias. This is not the case for the recursive coefficient model from the state space framework, which allows the coefficients to transition over time. The powerful state space model thereby provides the productivity, or effect, of each significant factor in more detail. The state coefficients are estimated based on the AR(|2|) specification, with the exception of the one-period-lagged GDP and the 2009 world financial crisis dummy. The findings shed light on the fact that these factors seem to have been stable over time since the occurrence of the world financial crisis together with the political situation in Thailand, two events that could lower confidence in the Thai economy. Moreover, the state coefficients highlight the sluggish rate of machinery replacement and the rather low technology of capital goods imported from abroad. The Thai government should apply proactive policies, for instance via taxation and specific credit policy, to improve technological advancement. Another interesting piece of evidence is the issue of trade openness, which shows a negative transition effect over the sample period. This could be explained by the loss of price competitiveness to imported goods, especially under the widespread implementation of free trade agreements. The Thai government should handle regulations carefully and focus the investment incentive policy on strengthening small and medium enterprises.
Keywords: autoregressive model, economic growth, state space model, Thailand
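The recursive-coefficient idea can be sketched as a regression whose coefficients follow a random walk, filtered with a Kalman filter; this is one standard state space formulation of time-varying coefficients. The simulated data, two-regressor design, and noise variances below are illustrative assumptions, not the study's Thai GDP model:

```python
import numpy as np

def kalman_tvp(y, X, q=1e-3, r=1.0):
    """Filter y_t = X_t @ beta_t + e_t with beta_t = beta_{t-1} + w_t.

    q: state (coefficient drift) variance, r: observation noise variance.
    Returns the filtered coefficient path, one row per observation.
    """
    T, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k) * 1e2              # diffuse initial uncertainty
    path = np.empty((T, k))
    for t in range(T):
        x = X[t]
        P = P + q * np.eye(k)        # predict: coefficients may drift
        S = x @ P @ x + r            # innovation variance
        K = P @ x / S                # Kalman gain
        beta = beta + K * (y[t] - x @ beta)
        P = P - np.outer(K, x @ P)   # covariance update
        path[t] = beta
    return path

# Synthetic example: constant intercept 1.0, slope drifting from 0.5 to 2.0.
rng = np.random.default_rng(1)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T)])
true_slope = np.linspace(0.5, 2.0, T)
y = 1.0 + true_slope * X[:, 1] + 0.1 * rng.normal(size=T)
path = kalman_tvp(y, X, q=1e-3, r=0.01)
print(path[-1].round(2))  # filtered intercept and slope at the last quarter
```

The filtered path makes the coefficient transition visible over the sample, which is the kind of detail the abstract attributes to the state space estimates.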
Procedia PDF Downloads 151
7475 Effect of Using PCMs and Transparency Ratios on Energy Efficiency and Thermal Performance of Buildings in Hot Climatic Regions: A Simulation-Based Evaluation
Authors: Eda K. Murathan, Gulten Manioglu
Abstract:
In the building design process, reducing heating and cooling energy consumption according to the climatic conditions of the building's region is an important issue to be considered in order to provide thermal comfort conditions in the indoor environment. Applying a phase-change material (PCM) to the surface of a building envelope is a new approach for controlling heat transfer through the building envelope during the year. The transparency ratio of the window is also a determinant of the amount of solar radiation gained in the space, and thus of thermal comfort and energy expenditure. In this study, a simulation-based evaluation was carried out using EnergyPlus to determine the effect of coupling a PCM with the transparency ratio when integrated into the building envelope. A three-storey building with a 30 m x 30 m floor area and a 10 m x 10 m courtyard was taken as an example of the courtyard building model frequently seen in the traditional architecture of hot climatic regions. Eight zones (each 10 m x 10 m) with two exterior facades oriented in different directions were obtained on each floor. The percentage of transparent components on the PCM-applied surface was increased in steps (30%, 40%, 50%). For each differently oriented zone, annual heating and cooling energy consumption and thermal comfort based on the Fanger method were calculated. All calculations were made for the zones of the intermediate floor of the building. The study was carried out for Diyarbakır province, representing the hot-dry climate region, and Antalya, representing the hot-humid climate region. The increase in the transparency ratio led to a decrease in heating energy consumption but an increase in cooling energy consumption for both provinces. When the PCM was applied, heating and cooling energy consumption decreased for all developed options in both Antalya (by 6.06%-19.78% and 1%-3.74%, respectively) and Diyarbakır (by 2.79%-3.43% and 2.32%-4.64%, respectively).
When the considered building is evaluated under passive conditions for the 21st of July, which represents the hottest day of the year, it is seen that the user feels comfortable between 11 pm and 10 am with the effect of night ventilation in both provinces.
Keywords: building envelope, heating and cooling energy consumption, phase change material, transparency ratio
Procedia PDF Downloads 176
7474 Efficient Photocatalytic Degradation of Tetracycline Hydrochloride Using Modified Carbon Nitride CCN/Bi₂WO₆ Heterojunction
Authors: Syed Najeeb-Uz-Zaman Haider, Yang Juan
Abstract:
Antibiotic overuse raises environmental concerns, boosting the demand for efficient removal of antibiotics from pharmaceutical wastewater. Photocatalysis, particularly using semiconductor photocatalysts, offers a promising solution and garners significant scientific interest. In this study, a Z-scheme 0.15BWO/CCN heterojunction was developed, analyzed, and employed for the photocatalytic degradation of tetracycline hydrochloride (TC) under visible light. The study revealed that the dosage of 0.15BWO/CCN and the presence of coexisting ions significantly influenced the degradation efficiency, which reached up to 87% within 20 minutes under optimal conditions (pH 9-11, strongly basic) while maintaining 84% under standard conditions (unaltered pH). Photoinduced electrons gathered on the conduction band of BWO while holes accumulated on the valence band of CCN, creating more favorable conditions for producing superoxide and hydroxyl radicals. Additionally, through comprehensive experimental analysis, the degradation pathway and mechanism were thoroughly explored. The superior photocatalytic performance of 0.15BWO/CCN was attributed to its Z-scheme heterojunction structure, which significantly reduced the recombination of photoinduced electrons and holes. The radicals produced were identified using ESR, and their involvement in tetracycline degradation was further analyzed through active species trapping experiments.
Keywords: CCN, Bi₂WO₆, TC, photocatalytic degradation, heterojunction
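Photocatalytic degradation of this kind is commonly summarised by an apparent pseudo-first-order rate constant, ln(C0/C) = k·t. The following back-of-the-envelope conversion of the reported 87%-in-20-minutes figure is an illustration, not a value reported by the authors:

```python
import math

def apparent_rate_constant(removal_fraction, minutes):
    """Pseudo-first-order constant: ln(C0/C) = k*t, so k = -ln(1 - f) / t."""
    return -math.log(1 - removal_fraction) / minutes

# 87% removal in 20 min (the abstract's optimal-condition result).
k = apparent_rate_constant(0.87, 20)
print(round(k, 3))  # -> 0.102 (per minute)
```

Such k values let degradation runs at different pH or dosage be compared on a single scale.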
Procedia PDF Downloads 44
7473 Solar Panel Design Aspects and Challenges for a Lunar Mission
Authors: Mannika Garg, N. Srinivas Murthy, Sunish Nair
Abstract:
TeamIndus was the only Indian team participating in the Google Lunar X Prize (GLXP), an incentive prize space competition organized by the XPrize Foundation and sponsored by Google. The main objective of the mission is to soft-land a rover on the lunar surface, travel a minimum displacement of 500 meters, and transmit HD and near-real-time (NRT) videos and images to Earth. TeamIndus is designing a lunar lander that carries the rover and delivers it onto the surface of the Moon with a soft landing. For the lander to survive throughout the mission, energy is required to operate all attitude control sensors, actuators, heaters, and other necessary components. Photovoltaic solar array systems are the most common primary source of power generation for any spacecraft. The scope of this paper is to provide a system-level approach to designing the lander's solar array systems to generate the power required to accomplish the mission. For this mission, the design effort is directed towards higher efficiency, high reliability, and high specific power. Towards this end, highly efficient multi-junction cells have been considered. The design is also influenced by other constraints, such as the mission profile, the chosen spacecraft attitude, the overall lander configuration, cost effectiveness, and sizing requirements. This paper also addresses various solar array design challenges, such as operating temperature, shadowing, the radiation environment, mission life, and the strategy for supporting the required peak and average power levels. The challenge of generating sufficient power at the time of surface touchdown, due to the low sun elevation (El) and azimuth (Az) angles, which depend on the lunar landing site, is also showcased. To address this challenge, an energy balance analysis has been carried out to study the impact of the above-mentioned factors and to meet the requirements, and it is discussed in this paper.
Keywords: energy balance analysis, multi junction solar cells, photovoltaic, reliability, spacecraft attitude
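The touchdown power challenge can be sketched with the basic cosine-loss relation P = A·η·S·cos(θ) for a flat panel: at low sun elevation the incidence angle θ from the panel normal is large and output collapses. The panel area, cell efficiency, and horizontal mounting below are assumed numbers for illustration, not TeamIndus design figures:

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2 at roughly 1 AU from the Sun

def array_power(area_m2, efficiency, incidence_deg):
    """Power from a flat panel at the given incidence angle from its normal."""
    theta = math.radians(incidence_deg)
    return max(0.0, area_m2 * efficiency * SOLAR_CONSTANT * math.cos(theta))

# For a horizontally mounted panel, incidence = 90 deg - sun elevation.
for elevation in (90, 45, 10):
    incidence = 90 - elevation
    print(elevation, round(array_power(1.0, 0.30, incidence), 1))
```

The steep drop at 10 degrees elevation illustrates why landing-site geometry drives the energy balance at touchdown.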
Procedia PDF Downloads 230
7472 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm
Authors: Ghada Badr, Arwa Alturki
Abstract:
The biological function of an RNA molecule depends on its structure. The objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding of them and a discovery of other relationships between them. Besides, identifying non-coding RNAs, which are not translated into proteins, is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most of them perform partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention is given in the literature to the use of efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and N is the maximum number of components in the two structures. The proposed algorithm compares two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments conducted on different real and simulated datasets illustrate the efficiency of the CompPSA algorithm compared to other approaches. The CompPSA algorithm shows an accurate similarity measure between components. The algorithm gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and memory performance.
Keywords: alignment, RNA secondary structure, pairwise, component-based, data mining
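A component-based comparison of this kind can be sketched as a standard dynamic-programming alignment over component feature tuples, which runs in O(N²) for structures of up to N components. The feature ordering (position, full length, stem length), the weights, and the gap penalty below follow the abstract's description but are otherwise illustrative assumptions, not the authors' exact scoring:

```python
def component_distance(c1, c2, weights=(1.0, 1.0, 1.0)):
    """Weighted L1 distance between two component feature tuples."""
    return sum(w * abs(a - b) for w, a, b in zip(weights, c1, c2))

def align(s1, s2, gap=1.0, weights=(1.0, 1.0, 1.0)):
    """Global alignment cost between two component sequences (lower = more similar)."""
    n, m = len(s1), len(s2)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap                    # all components of s1 unmatched
    for j in range(1, m + 1):
        D[0][j] = j * gap                    # all components of s2 unmatched
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i - 1][j - 1] + component_distance(s1[i - 1], s2[j - 1], weights),
                D[i - 1][j] + gap,           # skip a component of s1
                D[i][j - 1] + gap,           # skip a component of s2
            )
    return D[n][m]

# Components as (position, full_length, stem_length) tuples.
s = [(0, 5, 3), (7, 4, 2)]
print(align(s, s), align(s, []))
```

Reweighting the feature tuple is what gives the user the flexibility the abstract mentions: setting a weight to zero ignores that feature in the comparison.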
Procedia PDF Downloads 458
7471 Various Models of Quality Management Systems
Authors: Mehrnoosh Askarizadeh
Abstract:
People, processes, and IT are the most important assets of any organization. Optimal utilization of these resources has been a question of business research for many decades. The business world has responded by inventing various methodologies that can be used to address problems of quality improvement, process efficiency, continuous improvement, reduction of waste, automation, strategy alignment, etc. Some of these methodologies can be collectively called Business Process Quality Management (BPQM) methodologies. In essence, the first references to process management can be traced back to Frederick Taylor and scientific management. Time and motion studies were addressed to improving the efficiency of manufacturing processes. The ideas of scientific management were in use for quite a long period, until more advanced quality management techniques were developed in Japan and the USA. One of the first prominent methods was Total Quality Management (TQM), which evolved during the 1980s. Around the same time, Six Sigma (SS) originated at Motorola as a separate method. SS spread and evolved, and later joined with the ideas of Lean manufacturing to form Lean Six Sigma. In the 1990s, due to emerging IT technologies, the beginning of globalization, and strengthening competition, companies recognized the need for better process and quality management. Business Process Management (BPM) emerged as a novel methodology that took all this into account and helped to align IT technologies with business processes and quality management. In this article, we study various aspects of the above-mentioned methods and identify their relations.
Keywords: e-process, quality, TQM, BPM, lean, six sigma, CPI, information technology, management
Procedia PDF Downloads 440
7470 A Controlled-Release Nanofertilizer Improves Tomato Growth and Minimizes Nitrogen Consumption
Authors: Mohamed I. D. Helal, Mohamed M. El-Mogy, Hassan A. Khater, Muhammad A. Fathy, Fatma E. Ibrahim, Yuncong C. Li, Zhaohui Tong, Karima F. Abdelgawad
Abstract:
Minimizing the consumption of agrochemicals, particularly nitrogen, is the ultimate goal for achieving sustainable agricultural production with low cost and high economic and environmental returns. The use of biopolymers instead of petroleum-based synthetic polymers for controlled-release fertilizers (CRFs) can significantly improve the sustainability of crop production, since biopolymers are biodegradable and not harmful to soil quality. Lignin is one of the most abundant naturally occurring biopolymers. In this study, controlled-release fertilizers were developed using a biobased nanocomposite of lignin and bentonite clay mineral as a coating material for urea, to increase nitrogen use efficiency. Five types of controlled-release urea (CRU) were prepared using two ratios of modified bentonite as well as different techniques. The efficiency of the five controlled-release nano-urea (CRU) fertilizers in improving the growth of tomato plants was studied under field conditions. The CRU was applied to the tomato plants at three N levels representing 100%, 50%, and 25% of the recommended dose of conventional urea. The results showed that all CRU treatments at the three N levels significantly enhanced plant growth parameters, including plant height, number of leaves, fresh weight, and dry weight, compared to the control. Most CRU fertilizers also increased total yield and fruit characteristics (weight, length, and diameter) compared to the control, and marketable yield was improved as well. Fruit firmness and acidity in the CRU treatments at the 25% and 50% N levels were much higher than in both the 100% CRU treatment and the control. The vitamin C values of all CRU treatments were lower than the control. The nitrogen uptake efficiencies (NUpE) of the CRU treatments were 47-88%, significantly higher than that of the control (33%).
In conclusion, all CRU treatments at an N level of 25% of the recommended dose showed better plant growth, yield, and fruit quality of tomatoes than the conventional fertilizer.
Keywords: nitrogen use efficiency, quality, urea, nanoparticles, eco-friendly
Procedia PDF Downloads 76
7469 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space, and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL interoperability. We demonstrate how CUDA, as a low-level GPU programming paradigm, allows optimizing the performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata, which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time, and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids, and over longer periods without long waiting times, and thereby enables the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms, such as computer vision or machine learning, to evolve and optimize specific Lenia configurations.
We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, with CUDA/OpenGL interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation against the existing ones in terms of speed, memory usage, configurability, and scalability. In our comparison, we focus on the most important Lenia implementations, selected for their prominence, accessibility, and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust, and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the ALife community for further development.
Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis
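For readers who want a feel for the algorithm being benchmarked, one Lenia update (FFT convolution with a ring-shaped kernel, followed by a Gaussian growth map) can be sketched in a few lines of NumPy on the CPU. The kernel and growth parameters below are typical illustrative values, not the paper's exact configuration:

```python
import numpy as np

def bell(x, mu, sigma):
    """Gaussian bump used both for the kernel profile and the growth map."""
    return np.exp(-(((x - mu) / sigma) ** 2) / 2)

def make_kernel(size, radius):
    """Ring-shaped kernel normalised to sum to 1."""
    y, x = np.ogrid[-size // 2: size // 2, -size // 2: size // 2]
    d = np.sqrt(x * x + y * y) / radius       # distance, in kernel radii
    k = (d < 1) * bell(d, 0.5, 0.15)
    return k / k.sum()

def step(world, kernel_fft, dt=0.1, mu=0.15, sigma=0.015):
    """One Lenia update: convolve (toroidal, via FFT), grow, clip to [0, 1]."""
    u = np.real(np.fft.ifft2(np.fft.fft2(world) * kernel_fft))
    growth = 2 * bell(u, mu, sigma) - 1       # growth in [-1, 1]
    return np.clip(world + dt * growth, 0, 1)

size, radius = 64, 13
kernel_fft = np.fft.fft2(np.fft.fftshift(make_kernel(size, radius)))
rng = np.random.default_rng(0)
world = rng.random((size, size))
for _ in range(10):
    world = step(world, kernel_fft)
print(world.shape)
```

The FFT turns the O(size² · kernel²) sliding-window convolution into O(size² · log size) work per step, which is the same optimization the paper explores on the GPU.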
Procedia PDF Downloads 41
7468 Energy Certification Labels and Comfort Assessment for Dwellings Located in a Mild Climate
Authors: Silvia A. Magalhaes, Vasco P. De Freitas, Jose L. Alexandre
Abstract:
Most of the European literature concerning the energy efficiency and thermal comfort of dwellings assumes permanent heating and focuses on energy-saving measures, and European national regulations are designed for those permanent comfort conditions. On the other hand, very few studies focus on the effect of improvement measures on the reduction of discomfort under free-floating conditions or intermittent heating, in countries vulnerable to fuel poverty. In Portugal, only 21% of household energy consumption (and 10% of the cost) is spent on space heating, while in the average European bill this value rises to 67%. The mild climate, but mainly fuel poverty and cultural background, justify these low heating practices. This study proposes the definition of a 'passive discomfort' index, considering free-floating temperatures or intermittent heating profiles (more realistic conditions), putting the focus on comfort rather than on energy consumption (which is low in these countries). The aim is to compare both energy (within the legal framework of the national regulation) and comfort (considering realistic conditions of use) in order to identify a possible correlation. An experimental campaign of indoor thermal conditions was carried out in a 19th-century building with several apartments located in Porto. One dwelling was chosen as a case study for a sensitivity analysis. The results are discussed by comparing both theoretical energy consumption (energy rates from the national regulation) and discomfort (the newly defined index) for different insulation thicknesses, orientations, and intermittent heating profiles. The results show that the different passive options (wall insulation and glazing options) have a small impact on winter discomfort, which is always high for low heating profiles. Moreover, it was shown that the insulation thickness of the walls has no influence: the minimum insulation thickness considered is enough to achieve the same impact on discomfort reduction.
Furthermore, for these low heating profiles, other conditions, such as orientation, are critical. Finally, there is no unequivocal relation between the energy label and the discomfort index. These and other results are surprising when compared with the most usual approaches, which assume permanent heating.
Keywords: dwellings in historical buildings, low-heating countries, mild climates, thermal comfort
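A degree-hour formulation is one simple way such a 'passive discomfort' index can be written down: discomfort accumulates whenever the free-floating indoor temperature falls below a comfort threshold. The threshold and the hourly temperatures below are illustrative assumptions, not the index actually defined in the study:

```python
def discomfort_degree_hours(temps, comfort=18.0):
    """Sum of (comfort - T) over all hours where T < comfort, in degC*h."""
    return sum(comfort - t for t in temps if t < comfort)

# Six hourly readings from a hypothetical unheated dwelling in winter.
hourly = [15.0, 16.5, 17.0, 18.5, 20.0, 17.5]
print(discomfort_degree_hours(hourly))  # -> 6.0 (3.0 + 1.5 + 1.0 + 0.5)
```

Summed over a heating season, an index of this shape lets different insulation or glazing options be ranked by the comfort they actually deliver under free-floating conditions, independently of the energy label.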
Procedia PDF Downloads 149
7467 A Graph-Based Retrieval Model for Passage Search
Authors: Junjie Zhong, Kai Hong, Lei Wang
Abstract:
Passage retrieval (PR) plays an important role in many natural language processing (NLP) tasks. Traditional efficient retrieval models relying on exact term matching, such as TF-IDF or BM25, have nowadays been surpassed by pre-trained language models that match by semantics. Though they gain effectiveness, deep language models often incur large memory and time costs. To tackle the trade-off between efficiency and effectiveness in PR, this paper proposes the Graph Passage Retriever (GraphPR), a graph-based model inspired by the development of graph learning techniques. Different from existing works, GraphPR is end-to-end and integrates both term-matching information and semantics. GraphPR constructs a passage-level graph from BM25 retrieval results and trains a GCN-like model on the graph with graph-based objectives. Passages are regarded as nodes in the constructed graph and are embedded as dense vectors. PR can then be implemented using the embeddings and a fast vector-similarity search. Experiments on a variety of real-world retrieval datasets show that the proposed model outperforms related models in several evaluation metrics (e.g., mean reciprocal rank, accuracy, F1-score) while maintaining relatively low query latency and memory usage.
Keywords: efficiency, effectiveness, graph learning, language model, passage retrieval, term-matching model
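The BM25 stage from which GraphPR builds its candidate graph can be illustrated with a self-contained scorer; the k1/b values and the toy corpus below are assumptions for demonstration, not the paper's setup:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each document against a whitespace-tokenized query."""
    tokenized = [d.lower().split() for d in docs]
    N = len(docs)
    avgdl = sum(len(d) for d in tokenized) / N
    df = Counter()                          # document frequency per term
    for d in tokenized:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[term] * (k1 + 1) / norm
        scores.append(s)
    return scores

docs = ["graph based passage retrieval",
        "deep language models for search",
        "passage ranking with graphs"]
print(bm25_scores("passage graph", docs))
```

In a GraphPR-like pipeline, the top-scoring passages per query would become nodes, with edges drawn among co-retrieved passages before the GCN-like model is trained on the resulting graph.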
Procedia PDF Downloads 150
7466 An Automated Business Process Management for Smart Medical Records
Authors: K. Malak, A. Nourah, S.Liyakathunisa
Abstract:
Nowadays, healthcare services face many challenges, since they are becoming more complex and more needed. Every detail of a patient's interactions with health care providers is maintained in electronic health records (EHR) and healthcare information systems (HIS). However, most of the existing systems are focused on documenting what happens in the manual health care process rather than on providing the highest quality patient care. Healthcare business processes and stakeholders can no longer rely on manual processes; to provide better patient care and efficient utilization of resources, healthcare processes must be automated wherever possible. In this research, a detailed survey and analysis of the existing health care systems in Saudi Arabia is performed, and an automated smart medical healthcare business process model is proposed. Business process management methods and rules are followed in discovery, information collection, analysis, redesign, implementation, and performance improvement analysis in terms of time and cost. From the simulation results, it is evident that our proposed smart medical records system can improve the quality of service by reducing time and cost and increasing efficiency.
Keywords: business process management, electronic health records, efficiency, cost, time
Procedia PDF Downloads 341
7465 The Church of San Paolo in Ferrara, Restoration and Accessibility
Authors: Benedetta Caglioti
Abstract:
The ecclesiastical complex of San Paolo in Ferrara represents a monument of great historical, religious, and architectural importance. Its long and complex story over time is already apparent from a mere reading of its planimetric and altimetric configuration, apparently unitary but in reality marked by modifications and repeated additions, even of high quality. In terms of protection, restoration, and enhancement, this demands a commitment of due respect for how the ancient building was built and enriched over its centuries of life. Hence a rigorous methodological approach, in the awareness that every monument, in order to live and to receive the indispensable maintenance, must always be enjoyed and visited, and must therefore benefit, in the right measure and compatibly with its nature, from improvements and from functional, distributive, and technological adaptations, including those related to the safety of people and things. The methodological approach substantiates the different elements of the project (such as distribution functionality, safety, structural solidity, environmental comfort, the character of the site, building and urban planning regulations, financial and material resources, and the organization of the construction site itself) through the long-established guiding principles of restoration: 'minimum intervention', the 'recognisability' or 'distinguishability' of old and new, physico-chemical and figurative 'compatibility', 'durability', and the at least potential 'reversibility' of what is done, leading to the definition of appropriate 'critical choices'.
Alongside the strictly functional issues, the project also addresses conservation and restoration questions of a static, structural and material-technology nature, with special attention to the precious architectural surfaces. In order to ensure the best architectural quality through conscious enhancement, the project involves a redistribution of the interior and service spaces, an accurate lighting system inside and outside the church, and a reorganization of the adjacent urban space. The reorganization of the interior is designed with particular attention to accessibility for people with disabilities. To help the community regain possession of the church's space already during the construction phase, the project proposal envisages a permeability and flexibility in the management of the works such that the recovered monument gradually becomes more and more familiar to the citizens. Once the interventions are completed, it is expected that the Church of San Paolo, second in importance only to the Cathedral, from which it is a few steps away, will be inserted into an already existing circuit of use of the city that over the years has brought together culture, the environment and tourism, creating greater awareness of what Ferrara can offer in cultural terms.
Keywords: conservation, accessibility, regeneration, urban space
Procedia PDF Downloads 108
7464 Assessment of Financial Performance: An Empirical Study of Crude Oil and Natural Gas Companies in India
Authors: Palash Bandyopadhyay
Abstract:
Background and significance of the study: Crude oil and natural gas are of crucial importance due to their increasing demand in India, driven by changing lifestyles over time. Since India under-utilizes its oil production capacity, imports have increased progressively day by day. This drains India's foreign exchange reserves and negatively affects the Indian economy. The financial performance of crude oil and natural gas companies in India has deteriorated year after year because of under-utilization of production capacity, growing demand, changing lifestyles, and rising import bills and outflows of foreign currency. Against this background, the current study seeks to measure the financial performance of crude oil and natural gas companies in India in the post-liberalization period, assessing it in terms of liquidity management, solvency, efficiency, financial stability, and profitability. Methodology: This research work is based on yearly ratio data collected from the Centre for Monitoring Indian Economy (CMIE) Prowess database for the period between 1993-94 and 2012-13, giving 20 observations, using liquidity, solvency, efficiency, profitability, and financial stability indicators for all the major crude oil and natural gas companies in India. In the course of the analysis, descriptive statistics, correlation statistics, and linear regression tests have been utilized. Major findings: Descriptive statistics indicate that the liquidity position is satisfactory for three of the companies under study (Oil and Natural Gas Corporation Videsh Limited, Oil India Limited, and Selan Exploration and Transportation Limited), but the solvency position is satisfactory only for one company (Oil and Natural Gas Corporation Videsh Limited).
However, the efficiency analysis points out that Oil and Natural Gas Corporation Videsh Limited manages inventory, receivables, and payables effectively, but its overall liquidity management is not sound. The profitability position is very satisfactory for all the companies except Tata Petrodyne Limited, but profitability management is not satisfactory for any of the companies under study. Financial stability analysis shows that all the companies depend heavily on debt capital, which carries financial risk. The correlation and regression test results illustrate that profitability is associated, positively in some cases and negatively in others, with the liquidity, solvency, efficiency, and financial stability indicators. Concluding statement: The liquidity and profitability management of crude oil and natural gas companies in India should be improved by controlling unnecessary imports, despite the heavy demand for crude oil and natural gas in India, and by proper utilization of domestic oil reserves. At the same time, the Indian government has to be concerned about rupee depreciation and interest rates.
Keywords: financial performance, crude oil and natural gas companies, India, linear regression
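The ratio-based correlation and regression approach described in the abstract can be sketched in a few lines. The yearly figures below are invented for illustration and are not data from the CMIE Prowess database.

```python
# Hypothetical sketch of the study's method: compute a liquidity ratio and a
# profitability ratio from (invented) yearly figures, then relate them with
# Pearson correlation and ordinary least squares regression.
from statistics import mean

current_assets      = [120, 135, 150, 140, 160, 175, 165, 180, 190, 200]
current_liabilities = [100, 110, 115, 125, 130, 140, 150, 145, 155, 160]
net_profit          = [12, 15, 18, 14, 20, 22, 19, 25, 27, 30]
total_assets        = [400, 420, 450, 470, 500, 520, 540, 560, 590, 620]

# Liquidity indicator: current ratio; profitability indicator: return on assets
current_ratio = [a / l for a, l in zip(current_assets, current_liabilities)]
roa = [p / t for p, t in zip(net_profit, total_assets)]

def linreg(x, y):
    """Ordinary least squares y = a + b*x, plus the Pearson correlation r."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

a, b, r = linreg(current_ratio, roa)
print(f"ROA ≈ {a:.4f} + {b:.4f} * current_ratio   (r = {r:.3f})")
```

The study applies the same idea across several indicator families (solvency, efficiency, financial stability) and twenty yearly observations per company.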
Procedia PDF Downloads 322
7463 An Analysis of New Service Interchange Designs
Authors: Joseph E. Hummer
Abstract:
An efficient freeway system will be essential to the development of Africa, and interchanges are a key to that efficiency. Around the world, many interchanges between freeways and surface streets, called service interchanges, are of the diamond configuration, and interchanges using roundabouts or loop ramps are also popular. However, many diamond interchanges have serious operational problems, interchanges with roundabouts fail at high demand levels, and loop ramps consume large amounts of expensive land. Newer service interchange designs provide other options. The most popular new interchange design in the US at the moment is the double crossover diamond (DCD), also known as the diverging diamond. The DCD has enormous potential but also several significant limitations. The objectives of this paper are to review new service interchange options and to highlight the main features of those alternatives. The paper tests four conventional and seven unconventional designs using seven measures related to efficiency, cost, and safety. The results show that no design is superior on all measures investigated. The DCD is better than most designs tested on most of the measures examined; however, it was superior to all other designs only for bridge width, and it performed relatively poorly for capacity and for serving pedestrians. Based on the results, African freeway designers are encouraged to investigate the full range of alternatives that could work at the spot of interest. Diamonds and DCDs have their niches, but some of the other designs investigated could be optimal at some spots.
Keywords: interchange, diamond, diverging diamond, capacity, safety, cost
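The paper's central finding, that no design is superior on all measures, is a dominance question, and that check can be sketched directly. The designs, measures, and scores below are invented purely to show the mechanics; they are not the paper's data.

```python
# Hypothetical sketch of a dominance check across measures (higher = better).
# Design names, measures, and scores are invented for illustration only.
scores = {
    "diamond":    {"capacity": 2, "bridge_width": 3, "pedestrians": 4},
    "DCD":        {"capacity": 3, "bridge_width": 5, "pedestrians": 2},
    "roundabout": {"capacity": 1, "bridge_width": 4, "pedestrians": 5},
}

def dominates(a, b):
    """True if design a is at least as good as b on every measure
    and strictly better on at least one."""
    return all(a[m] >= b[m] for m in a) and any(a[m] > b[m] for m in a)

# A design is superior overall only if it dominates every other design
superior = [
    d for d in scores
    if all(dominates(scores[d], scores[o]) for o in scores if o != d)
]
print(superior)  # → [] : no design dominates all others on these scores
```

With real data one would also weight the measures; an empty dominance set is exactly the "no superior design" situation the abstract reports.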
Procedia PDF Downloads 252
7462 TessPy – Spatial Tessellation Made Easy
Authors: Jonas Hamann, Siavash Saki, Tobias Hagen
Abstract:
Discretization of urban areas is a crucial aspect of many spatial analyses. The process of discretizing space into subspaces without overlaps or gaps is called tessellation. It helps in understanding space and provides a framework for analyzing geospatial data. Tessellation methods can be divided into two groups: regular tessellations and irregular tessellations. While regular tessellation methods, like square grids or hexagonal grids, are suitable for addressing purely geometric problems, they cannot take the unique characteristics of different subareas into account. Irregular tessellation methods, by contrast, allow the borders between subareas to be defined more realistically based on urban features like a road network or Points of Interest (POI). Even though Python is one of the most used programming languages for spatial analysis, there is currently no library that combines different tessellation methods to enable users and researchers to compare different techniques. To close this gap, we propose TessPy, an open-source Python package, which combines all the above-mentioned tessellation methods and makes them easily accessible to everyone. The core functions of TessPy implement five different tessellation methods: squares, hexagons, adaptive squares, Voronoi polygons, and city blocks. With the regular methods, users can set the resolution of the tessellation, which defines the fineness of the discretization and the desired number of tiles. The irregular tessellation methods allow users to define which spatial data to consider (e.g., amenity, building, office) and how fine the tessellation should be. The spatial data used is open-source and provided by OpenStreetMap. This data can be easily extracted and used for further analyses. Besides the methodology of the different techniques, the state of the art, including examples and future work, will be discussed.
All dependencies can be installed using conda or pip; however, the former is recommended.
Keywords: geospatial data science, geospatial data analysis, tessellations, urban studies
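The defining property of a tessellation, covering space with no gaps and no overlaps, is easy to see for the simplest of the five methods, the square grid. The sketch below is a self-contained, dependency-free illustration of that idea; it does not reproduce TessPy's actual API, and the coordinates are an invented bounding box.

```python
# Minimal sketch of a regular square-grid tessellation: split a bounding box
# into an n-by-n grid of tiles, (x0, y0, x1, y1), with no gaps or overlaps.
# This illustrates the concept only; TessPy's real interface differs.
def square_grid(min_x, min_y, max_x, max_y, n):
    """Return n*n axis-aligned tiles exactly covering the bounding box."""
    dx = (max_x - min_x) / n
    dy = (max_y - min_y) / n
    return [
        (min_x + i * dx, min_y + j * dy,
         min_x + (i + 1) * dx, min_y + (j + 1) * dy)
        for j in range(n)
        for i in range(n)
    ]

# Tessellate an invented lon/lat bounding box into 4x4 = 16 tiles
tiles = square_grid(8.5, 50.0, 8.8, 50.2, 4)
print(len(tiles))  # → 16
```

In TessPy, the resolution parameter plays the role of `n` here, and the irregular methods replace the uniform grid with boundaries derived from OpenStreetMap features.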
Procedia PDF Downloads 128