Search results for: carbon emission efficiency
1611 The Agri-Environmental Instruments in Agricultural Policy to Reduce Nitrogen Pollution
Authors: Flavio Gazzani
Abstract:
Nitrogen is an important agricultural input that is critical for production. However, the introduction of large amounts of nitrogen into the environment has a number of undesirable impacts, such as loss of biodiversity, eutrophication of waters and soils, drinking water pollution, acidification, greenhouse gas emissions, and human health risks. It is a challenge to sustain or increase food production while reducing losses of reactive nitrogen to the environment, but there are many potential benefits associated with improving nitrogen use efficiency. Reducing nutrient losses from agriculture is crucial to the successful implementation of agricultural policy. Traditional regulatory instruments applied to reduce the environmental impacts of nitrogen fertilizers, despite some successes, failed to address many environmental challenges and imposed high costs on society to achieve environmental quality objectives. As a result, economic instruments came to be recognized for their flexibility and cost-effectiveness. The objective of the research project is to analyze the potential for increased use of market-based instruments in nitrogen control policy. The report reviews existing knowledge, bringing different studies together to assess the global nitrogen situation and the most relevant environmental management policies aimed at reducing pollution in a sustainable way without negatively affecting agricultural production and food prices. This analysis provides some guidance on how different market-based instruments might be orchestrated in an overall policy framework for the development and assessment of sustainable nitrogen management from the economic, environmental, and food security points of view.
Keywords: nitrogen emissions, chemical fertilizers, eutrophication, non-point source pollution, dairy farm
Procedia PDF Downloads 329
1610 Expansion of Cord Blood Cells Using a Mix of Neurotrophic Factors
Authors: Francisco Dos Santos, Diogo Fonseca-Pereira, Sílvia Arroz-Madeira, Henrique Veiga-Fernandes
Abstract:
Haematopoiesis is a developmental process that generates all blood cell lineages in health and disease. It relies on quiescent haematopoietic stem cells (HSCs) that are able to differentiate, self-renew, and expand upon physiological demand. HSCs are of great interest in regenerative medicine, including for haematological malignancies, immunodeficiencies, and metabolic disorders. However, the limited yield from existing HSC sources drives the global need for reliable techniques to expand harvested HSCs at high quality and in sufficient quantities. With the extensive use of cord blood progenitors for clinical applications, there is demand for a safe and efficient expansion protocol able to overcome the limitations of cord blood as a source of HSCs. StemCell2MAX™ developed a technology that enhances the survival, proliferation, and transplantation efficiency of HSCs, leading the way to a more widespread use of HSCs for research and clinical purposes. StemCell2MAX™ MIX is a solution that improves HSC expansion up to 20x compared to the state of the art, while preserving stemness. In a recent study by a leading cord blood bank, StemCell2MAX™ MIX was shown to support a selective 100-fold expansion of CD34+ haematopoietic stem and progenitor cells (compared to a 10-fold expansion of total nucleated cells), while maintaining their multipotent differentiation potential as assessed by CFU assays. The technology developed by StemCell2MAX™ opens new horizons for the use of expanded haematopoietic progenitors for both research purposes (including quality and functional assays in cord blood banks) and clinical applications.
Keywords: cord blood, expansion, hematopoietic stem cell, transplantation
Procedia PDF Downloads 267
1609 Digital Twin Smart Hospital: A Guide for Implementation and Improvements
Authors: Enido Fabiano de Ramos, Ieda Kanashiro Makiya, Francisco I. Giocondo Cesar
Abstract:
This study investigates the application of Digital Twins (DT) in Smart Hospital Environments (SHE) through a bibliometric study and literature review, including comparison with the principles of Industry 4.0. It aims to analyze the current state of the implementation of digital twins in clinical and non-clinical operations in healthcare settings, identifying trends and challenges and comparing these practices with Industry 4.0 concepts and technologies, in order to present a basic framework with stages and maturity levels. The bibliometric methodology allows the existing scientific production on the theme to be mapped, while the literature review synthesizes and critically analyzes the relevant studies, highlighting pertinent methodologies and results. Additionally, the comparison with Industry 4.0 provides insights into how the principles of automation, interconnectivity, and digitalization can be applied in healthcare environments and operations, aiming at improvements in operational efficiency and quality of care. The results of this study contribute to a deeper understanding of the potential of Digital Twins in Smart Hospitals and of the future potential of effectively integrating Industry 4.0 concepts in this specific environment, presented through the practical framework. The urgent need for the changes addressed in this article is undeniable, as is their contribution to human sustainability as envisioned in SDG3 (Health and well-being): ensuring that all citizens have a healthy life and well-being, at all ages and in all situations. We know that the validity of these relationships will be constantly discussed, and technology can always change the rules of the game.
Keywords: digital twin, smart hospital, healthcare operations, industry 4.0, SDG3, technology
Procedia PDF Downloads 53
1608 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often hampered by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-Net was trained to segment SEM images of graphene oxide. The segmentation generated by the U-Net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmenting graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it was maintained despite significant variations in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a significant time saver, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
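The per-crystal measurement step described above (extracting area and perimeter from a segmented image) can be sketched with plain connected-component labeling. This is an illustrative stand-in only: it assumes a binary mask as input rather than the authors' U-Net output, and the function name is hypothetical.

```python
from collections import deque

def crystal_measurements(mask):
    """Label 4-connected components in a binary mask and return a
    list of (area, perimeter) tuples, one per detected crystal.
    Perimeter counts cell edges adjacent to background or the border."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    results = []
    for si in range(h):
        for sj in range(w):
            if mask[si][sj] and not seen[si][sj]:
                area = perim = 0
                q = deque([(si, sj)])
                seen[si][sj] = True
                while q:  # breadth-first flood fill of one component
                    i, j = q.popleft()
                    area += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and mask[ni][nj]:
                            if not seen[ni][nj]:
                                seen[ni][nj] = True
                                q.append((ni, nj))
                        else:
                            perim += 1  # edge to background or image border
                results.append((area, perim))
    return results
```

From such tuples, the frequency distributions of area and perimeter mentioned in the abstract can be tabulated directly.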
Procedia PDF Downloads 160
1607 The Use of Industrial Ecology Principles in the Production of Solar Cells and Solar Modules
Authors: Julius Denafas, Irina Kliopova, Gintaras Denafas
Abstract:
Three opportunities for implementing industrial ecology principles in the real industrial production of c-Si solar cells and modules are presented in this study: material flow dematerialisation, product modification, and industrial symbiosis. Firstly, it is shown how collaboration between R&D institutes and industry helps to achieve a significant reduction in material consumption by a) omitting the phosphosilicate glass cleaning process and b) shortening the SiNx coating production step. This work was performed in the frame of the Eco-Solar project, in which Soli Tek R&D collaborates with partners from the ISC Konstanz institute. Secondly, it was shown how modifying the solar module design can reduce the product's CO2 footprint and enhance waste prevention. This was achieved by implementing a frameless glass/glass solar module design instead of glass/backsheet with an aluminium frame. Such a design change is possible without purchasing new equipment and without loss of the main product properties such as efficiency, rigidity, and longevity. Thirdly, industrial symbiosis in solar cell production is possible when manufacturing waste (silicon wafer and solar cell breakage) is collected, sorted, and supplied as raw material to other companies involved in the production chain of c-Si solar cells. The results showed that solar cells produced from recycled silicon can have electrical parameters comparable to those of cells produced from standard, commercial silicon wafers. The above-mentioned work was performed at the solar cell producer Soli Tek R&D in the frame of the H2020 projects CABRISS and Eco-Solar.
Keywords: solar cells and solar modules, manufacturing, waste prevention, recycling
Procedia PDF Downloads 213
1606 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media
Authors: Swati Tomar, Sunil Kumar Gupta
Abstract:
Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates direct oxidation of ammoniacal nitrogen under anaerobic conditions with nitrite as the electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an anammox hybrid reactor (AHR) combining the dual advantages of suspended and attached growth media for the biodegradation of ammoniacal nitrogen in wastewater. The experimental unit consisted of four 5 L AHRs inoculated with a mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in the ratio 1:1 at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/l. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with the nitrogen loading rate (NLR) increasing from 0.4 to 4.8 kg N/m3·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. Filter media in the AHR contributed an additional 27.2% ammonium removal, alongside a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate many-fold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for successful operation of high-rate bioreactors. The effluent nitrate concentration, one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d-1.
Model validation revealed that the Grau second-order model was more precise and predicted effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N2 gas production in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R2=0.986) and predicted N2 gas with the least error of precision (0.12±8.49%). SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with average diameters varying from 1.2-1.5 mm. Owing to its enhanced nitrogen removal efficiency coupled with meagre effluent nitrate production and its ability to retain high biomass, the AHR proved to be a highly competitive reactor configuration for dealing with nitrogen-laden wastewater.
Keywords: anammox, filter media, kinetics, nitrogen removal
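The two kinetic models compared above can be illustrated with a short numerical sketch. The functional forms below are common textbook conventions (a completely mixed first-order reactor and one frequently used integrated form of the Grau second-order model), not necessarily the exact formulations fitted in the study, and the Grau constants `a` and `b` are placeholder values, not the study's fitted parameters.

```python
def first_order_effluent(s0, k, hrt):
    """Effluent substrate for a completely mixed reactor with
    first-order kinetics: Se = S0 / (1 + k * HRT)."""
    return s0 / (1.0 + k * hrt)

def grau_second_order_effluent(s0, a, b, hrt):
    """One common integrated form of the Grau second-order model:
    Se / S0 = 1 - HRT / (a + b * HRT)."""
    return s0 * (1.0 - hrt / (a + b * hrt))

# Illustrative run at the study's optimal HRT of 1 d and influent
# total nitrogen of 1200 mg/l, with the reported k = 13.0 d^-1.
se_first = first_order_effluent(1200.0, 13.0, 1.0)
se_grau = grau_second_order_effluent(1200.0, 0.05, 1.02, 1.0)  # a, b assumed
```

Plotting predicted versus observed effluent nitrogen over the twelve HRTs would reproduce the kind of validation comparison the abstract describes.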
Procedia PDF Downloads 382
1605 Weakly Solving Kalah Game Using Artificial Intelligence and Game Theory
Authors: Hiba El Assibi
Abstract:
This study aims to weakly solve Kalah, a two-player board game, by developing a start-to-finish winning strategy using an optimized Minimax algorithm with alpha-beta pruning. In weakly solving Kalah, our focus is on creating an optimal strategy from the game's beginning rather than analyzing every possible position. The project explores additional enhancements such as symmetry checking and code optimizations to speed up the decision-making process. This approach is expected to give insights into efficient strategy formulation in board games and could potentially help create games with a fair distribution of outcomes. Furthermore, this research provides a unique perspective on human versus artificial intelligence decision-making in strategic games. By comparing the AI-generated optimal moves with human choices, we can explore how seemingly advantageous moves can, in the long run, be harmful, thereby offering a deeper understanding of strategic thinking and foresight in games. Moreover, this paper discusses the evaluation of our strategy against existing methods, providing insights into performance and computational efficiency. We also discuss the scalability of our approach, considering different board sizes (numbers of pits and stones) and rules (different variations), and study how these affect performance and complexity. The findings have potential implications for the development of AI applications in strategic game planning, enhance our understanding of human cognitive processes in game settings, and offer insights into creating balanced and engaging game experiences.
Keywords: minimax, alpha beta pruning, transposition tables, weakly solving, game theory
Procedia PDF Downloads 55
1604 Study on Hydrogen Isotope Permeability of High Entropy Alloy Coating
Authors: Long Wang, Yongjin Feng, Xiaofang Luo
Abstract:
Tritium permeation through structural materials is a significant issue for fusion demonstration (DEMO) reactor blankets in terms of fuel cycle efficiency and radiological safety. The reduced activation ferritic/martensitic (RAFM) steel CLF-1 is a prime candidate for the structural material of China's CFETR blanket but exhibits high hydrogen isotope permeability at reactor operating temperatures. To confine tritium as much as possible in the reactor, surface modification of the steels, including the fabrication of tritium permeation barriers (TPB), attracts much attention. As a new alloy system, a high entropy alloy (HEA) contains at least five principal elements, each of which ranges from 5 at% to 35 at%. This high mixing effect endows HEAs with extraordinary overall performance, making them attractive for protective surface alloying. At present, studies on the hydrogen isotope permeability of HEA coatings are still insufficient, and the corresponding mechanism is not clear. In our study, we prepared three kinds of HEA coatings: AlCrTaTiZr, (AlCrTaTiZr)N, and (AlCrTaTiZr)O. After comprehensive characterization by SEM, XPS, AFM, XRD, and TEM, the structure and composition of the HEA coatings were obtained. Deuterium permeation tests were conducted to evaluate the hydrogen isotope permeability of the AlCrTaTiZr, (AlCrTaTiZr)N, and (AlCrTaTiZr)O HEA coatings. Results showed that the (AlCrTaTiZr)N and (AlCrTaTiZr)O HEA coatings had better resistance to hydrogen isotope permeation. By analyzing and characterizing the hydrogen isotope permeation results of the corroded samples, an internal link between hydrogen isotope permeation behavior and the structure of the HEA coatings was established. The results provide a valuable reference for the engineering design of structural and TPB materials for future fusion devices.
Keywords: high entropy alloy, hydrogen isotope permeability, tritium permeation barrier, fusion demonstration reactor
Procedia PDF Downloads 172
1603 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms
Authors: Bliss Singhal
Abstract:
Machine learning (ML) is a branch of artificial intelligence (AI) in which computers analyze data and find patterns. This study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage at which cancer has spread to other parts of the body and is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying tumors as benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and emphasizes the importance of accounting for human error and other inefficiencies. ML is a good candidate to improve the correct identification of metastatic cancer, potentially saving thousands of lives, and can also improve the speed and efficiency of the process, requiring fewer resources and less time. So far, deep learning methodologies have been used in research to detect cancer. This study takes a novel approach by determining the potential of combining preprocessing algorithms with classification algorithms for detecting metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and a genetic algorithm, to reduce the dimensionality of the dataset, and then used three classification algorithms, logistic regression, a decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy of 71.14% was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression
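The final stage of the best-performing pipeline, k-nearest neighbors classification, can be sketched in a few lines of pure Python. This is a generic illustration with toy data, not the study's implementation: the PCA and genetic-algorithm preprocessing steps are omitted, and the data points and `k` value are invented for demonstration.

```python
from collections import Counter
import math

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance)."""
    dists = sorted(
        (math.dist(query, point), label)
        for point, label in zip(train_x, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy two-cluster data standing in for reduced-dimension features
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = [0, 0, 0, 1, 1, 1]
```

In the actual pipeline, the rows of `X` would be the low-dimensional feature vectors produced by PCA and the genetic algorithm rather than raw pixels.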
Procedia PDF Downloads 82
1602 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline
Authors: Kenan Morani, Esra Kaya Ayana
Abstract:
This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with optional slice removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which was then used to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, yielding a higher Dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating its efficiency in predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score than other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make the final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices and is rigorously annotated for COVID-19 detection, was used for classification. The proposed pipeline outperformed many other alternatives on the dataset.
Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation
Procedia PDF Downloads 131
1601 Facilitating Waste Management to Achieve Sustainable Residential Built Environments
Authors: Ingy Ibrahim El-Darwish, Neveen Youssef Azmy
Abstract:
A healthy environment can be secured by endorsing sustainable fundamentals. Designing sustainable buildings around the recycling of waste can reduce health problems, provide good environments, and contribute to aesthetically pleasing surroundings. Such environments can help provide energy-saving alternatives that consolidate the principles of sustainability. Poor community awareness and the absence of laws and legislation for waste management in Egypt, specifically in residential areas, have led to an inability to provide an integrated system for waste management in urban and rural areas. Many problems and environmental challenges face Egyptian urban environments. Among these problems is the lack of a cohesive vision for waste collection and recycling for energy saving. A second problem is the lack of public awareness of both the short-term and long-term vision of waste management. Bad practices have adversely affected the efficiency of environmental management systems due to the lack of urban legislation codifying collection and recycling in residential communities in Egyptian urban environments. Hence, this research engages residents on waste management matters to facilitate the legislative process on waste collection and classification within and outside residential units, as a preparatory phase for recycling in Egyptian urban environments. To achieve this goal, one Egyptian community was addressed, analyzed, and studied. Waste collection, classification, separation, and access to recycling places in the urban city are proposed in preparation for legislation ruling and regulating the process, so that sustainable principles can be achieved.
Keywords: recycling, residential buildings, sustainability, waste
Procedia PDF Downloads 327
1600 The Three-dimensional Response of Mussel Plaque Anchoring to Wet Substrates under Directional Tensions
Authors: Yingwei Hou, Tao Liu, Yong Pang
Abstract:
This paper explores the three-dimensional deformation of mussel plaques anchored to wet polydimethylsiloxane (PDMS) substrates under tensile stress applied at different angles. Mussel plaques, natural adhesive structures, have attracted significant attention for their remarkable adhesion properties. Understanding their behavior under mechanical stress, particularly in a three-dimensional context, holds immense relevance for biomimetic material design and bio-inspired adhesive development. This study employed a novel approach to investigate the 3D deformation of PDMS substrates anchored by mussel plaques subjected to controlled tension. Using our customized stereo digital image correlation technique and mechanical analyses, we found that the distributions of displacement and resultant force on the substrate became concentrated under the plaque. Adhesion and suction mechanisms were analyzed for the mussel plaque-substrate system under tension until detachment. The experimental findings were compared with a model developed using finite element analysis, and the results provide new insights into mussels' attachment mechanism. This research not only contributes to the fundamental understanding of biological adhesion but also holds promising implications for the design of innovative adhesive materials with applications in fields such as medical adhesives, underwater technologies, and industrial bonding. The comprehensive exploration of mussel plaque behavior in three dimensions is important for advancements in biomimicry and materials science, fostering the development of adhesives that emulate nature's efficiency.
Keywords: adhesion mechanism, Mytilus edulis, mussel plaque, stereo digital image correlation
Procedia PDF Downloads 57
1599 Accounting for Rice Productivity Heterogeneity in Ghana: The Two-Step Stochastic Metafrontier Approach
Authors: Franklin Nantui Mabe, Samuel A. Donkoh, Seidu Al-Hassan
Abstract:
Rice yields among agro-ecological zones are heterogeneous. Farmers, researchers, and policy makers are making frantic efforts to bridge rice yield gaps between agro-ecological zones through the promotion of improved agricultural technologies (IATs). Farmers are also modifying these IATs and blending them with indigenous farming practices (IFPs) to form farmer innovation systems (FISs). Different metafrontier models have been used to estimate productivity performance and its drivers. This study used the two-step stochastic metafrontier model to estimate the productivity performance of rice farmers and its determining factors in the GSZ, FSTZ, and CSZ. The study used both primary and secondary data. Farmers in the CSZ are the most technically efficient. Technical inefficiencies of farmers are negatively influenced by age, sex, household size, years of education, extension visits, contract farming, access to improved seeds, access to irrigation, high rainfall amount, less lodging of rice, and well-coordinated and synergized adoption of technologies. Although farmers in the CSZ are doing well in terms of rice yield, they still have the highest potential for increasing rice yield, since they had the lowest TGR. It is recommended that government, through the Ministry of Food and Agriculture, development partners, and individual private companies promote the adoption of IATs as well as educate farmers on how to coordinate and synergize the adoption of the whole package. The contract farming concept and agricultural extension intensification should be vigorously pursued to the letter.
Keywords: efficiency, farmer innovation systems, improved agricultural technologies, two-step stochastic metafrontier approach
Procedia PDF Downloads 267
1598 Factors Affecting Air Surface Temperature Variations in the Philippines
Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya
Abstract:
Changes in air surface temperature play an important role in the Philippines' economy, industry, health, and food production. While the increase in global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and determine its major influencing factors in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperature of 17 synoptic stations covering the 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequency of temperature variability, revealing 12-month, 30-80-month, and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The influence of the ENSO teleconnection on temperature, wind pattern, cloud cover, and outgoing longwave radiation during different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of a warm southeasterly (cold northeasterly) air mass over the country. Furthermore, an apparent increasing trend in cloud cover is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature.
However, relative humidity was also found to be increasing, especially over the central part of the country, which results in a strong positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to examine the viability of using three high-resolution datasets in future climate analysis and in model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature changes and anomalies. In addition, they are more representative of regional temperature than a substitute for station-observed air temperature.
Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number
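The non-parametric Mann-Kendall test applied above can be sketched in a few lines. This is a minimal illustration of the standard test statistic and normal approximation, without the tie correction or seasonal variant that a full analysis of monthly station data would normally include.

```python
import math

def mann_kendall(x):
    """Return (S, Z) for the Mann-Kendall trend test.
    S sums sign(x[j] - x[i]) over all pairs j > i; Z uses the
    normal approximation with no tie correction."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

A trend is significant at the 5% level when |Z| > 1.96, which is the criterion behind the reported p < 0.05 warming trend.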
Procedia PDF Downloads 323
1597 Aspects and Studies of Fractal Geometry in Automatic Breast Cancer Detection
Authors: Mrinal Kanti Bhowmik, Kakali Das Jr., Barin Kumar De, Debotosh Bhattacharjee
Abstract:
Breast cancer is the most common cancer and a leading cause of death for women in the 35 to 55 age group. Early detection of breast cancer can decrease its mortality rate. Mammography is considered the 'gold standard' for breast cancer detection and a very popular modality, presently used for breast cancer screening and detection. The screening of digital mammograms often leads to overdiagnosis and, consequently, to unnecessary traumatic and painful biopsies. For that reason, recent studies involving the use of thermal imaging as a screening technique have generated growing interest, especially in cases where mammography is limited, as in young patients who have dense breast tissue. A tumor is a significant sign of breast cancer in both mammography and thermography. Tumors are complex in structure and exhibit different statistical and textural features compared to the breast background tissue. Fractal geometry is used to describe this type of complex structure where traditional Euclidean geometry fails. Over the last few years, fractal geometry has been applied in many medical image (1D, 2D, or 3D) analysis applications. It also plays a significant role in breast cancer detection using digital mammogram images. Fractals are also used in thermography for early detection of masses using thermal texture. This paper presents an overview of recent aspects and initiatives of fractals in breast cancer detection in both mammography and thermography. The scope of fractal geometry in automatic breast cancer detection using digital mammogram and thermogram images is analysed, forming a foundation for further study of the application of fractal geometry in medical imaging to improve the efficiency of automatic detection.
Keywords: fractal, tumor, thermography, mammography
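A common way to quantify the structural complexity described above is the box-counting fractal dimension: cover a binary image with boxes of decreasing size and fit the slope of log N(s) against log(1/s). The sketch below is a generic illustration of the technique on a square binary grid, not the specific method of any study surveyed here.

```python
import math

def box_count(grid, size):
    """Number of size x size boxes containing at least one filled cell."""
    n = len(grid)
    count = 0
    for bi in range(0, n, size):
        for bj in range(0, n, size):
            if any(grid[i][j]
                   for i in range(bi, min(bi + size, n))
                   for j in range(bj, min(bj + size, n))):
                count += 1
    return count

def box_counting_dimension(grid, sizes):
    """Least-squares slope of log N(s) versus log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(grid, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

Tumor boundaries, being more irregular than healthy tissue contours, tend to yield higher estimated dimensions, which is what makes this a useful discriminating feature in mammograms and thermograms.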
Procedia PDF Downloads 388
1596 Pareto System of Optimal Placement and Sizing of Distributed Generation in Radial Distribution Networks Using Particle Swarm Optimization
Authors: Sani M. Lawal, Idris Musa, Aliyu D. Usman
Abstract:
The Pareto approach to optimal solutions, which evolved for multi-objective optimization problems and stands for a set of non-dominated solutions in the search space, is adopted in this paper. The paper presents optimal placement and sizing of Distributed Generation (DG) in radial distribution networks so as to minimize power loss and voltage deviation while maximizing the voltage profile of the networks. The problem is formulated with particle swarm optimization (PSO) as a constrained nonlinear optimization problem, with both the locations and the sizes of DG treated as continuous variables. The objective functions adopted are the total active power loss and the voltage deviation. The multi-objective nature of the problem made it necessary to form a combined objective function whose solution consists of both the DG location and size. The proposed PSO algorithm is used to determine the optimal placement and size of DG in a distribution network. The output indicates that the PSO technique has an edge over other search methods due to its effectiveness and computational efficiency. The proposed method is tested on the standard IEEE 34-bus system and validated on the 33-bus test distribution network. Results indicate that the sizing and location of DG are system dependent and should be optimally selected before installing distributed generators in the system; an improvement in the voltage profile and a reduction in power loss have also been achieved.
Keywords: distributed generation, pareto, particle swarm optimization, power loss, voltage deviation
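The PSO update that drives this kind of search can be sketched as follows. The quadratic objective here is only a stand-in for the real loss/voltage-deviation function, and every coefficient is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box constraints.

    bounds: (lo, hi) arrays, one entry per decision variable
    (here, e.g., a continuous bus position and a DG size).
    """
    lo, hi = map(np.asarray, bounds)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()                 # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # keep inside bounds
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy surrogate for the weighted loss + voltage-deviation objective;
# its true optimum sits at (10.0, 2.5).
obj = lambda z: (z[0] - 10.0) ** 2 + 4.0 * (z[1] - 2.5) ** 2
best, best_f = pso(obj, (np.array([1.0, 0.0]), np.array([33.0, 5.0])))
print(np.round(best, 2))  # near [10.  2.5]
```

In the paper's setting the objective would evaluate a load-flow solution of the 33- or 34-bus feeder; the swarm mechanics are unchanged.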
Procedia PDF Downloads 364
1595 Planning for Sustainability in the Built Environment
Authors: Adedayo Jeremiah Adeyekun, Samuel Oluwagbemiga Ishola
Abstract:
This paper aims to identify the significance of sustainability in the built environment and its economic and environmental importance to building and construction projects. Sustainability in the built environment has been a key objective of research over the past several decades. It requires reconciliation between the economic, environmental and social impacts of design and planning decisions made during the life cycle of a project, from inception to termination. Planning for sustainability in the built environment requires us to go beyond our individual disciplines and consider the variety of economic, social and environmental impacts of our decisions in the long term. A decision to build a green residential development in an isolated location may pass some tests of sustainability through its reduction in stormwater runoff, energy efficiency, and ecological sustainability in the building, but it may fail to be sustainable from a transportation perspective. Sustainability is important to the planning, design, construction, and preservation of the built environment because it helps these activities reflect multiple values and considerations. In fact, the arts and sciences of the built environment have traditionally integrated values and fostered creative expression, capabilities that can and should lead the sustainability movement as society seeks ways to live in dynamic balance with its own diverse needs and the natural world. This research aims to capture the state of the art in the development of innovative sustainable design and planning strategies for building and construction projects.
Therefore, there is a need for a holistic selection and implementation approach for identifying potential sustainable strategies applicable to a particular project and evaluating the overall life-cycle impact of each alternative by accounting for the different applicable impacts and making the final selection among the viable alternatives.
Keywords: sustainability, built environment, planning, design, construction
Procedia PDF Downloads 176
1594 Experimental Investigation of Mechanical Friction Influence in Semi-Hydraulic Clutch Actuation System Over Mileage
Authors: Abdul Azarrudin M. A., Pothiraj K., Kandasamy Satish
Abstract:
In the current automobile scenario, there is a demand for more sophistication and a more comfortable drive feel in the passenger segment. Clutch pedal effort is one such customer touch point in manual-transmission vehicles, where the driver operates the clutch pedal continuously throughout driving maneuvers. Optimum pedal effort must therefore be ensured both in the green (new) condition and over mileage, for fatigue-free driving. Friction is one of the predominant factors and tends to challenge this goal by degrading function over time. One such semi-hydraulic system shows a load efficiency of only about 70-75% over its lifetime due to the increase in friction, which raises pedal effort and fatigues the driver. This work studies friction at different interfaces and its influence at the fulcrum points over mileage, with the objective of understanding the trend over mileage and determining alternative ways of resolving it. One part of the methodology is friction reduction, investigated experimentally with various friction-reducing interfaces such as metal-to-metal contact, as detailed further in the paper. Specific attention has been given to the fulcrum load and its contact interfaces. The main experimental results for three different contact interfaces are presented, with the ultimate intention of achieving less fatigue and a longer, more consistent pedal effort, thus smoothing operation for the end user. Experimental validation has been done through a rig-level test setup to characterize performance in the static condition, and vehicle-level tests have been performed in parallel to record any additional influences.
Keywords: automobile, clutch, friction, fork
Procedia PDF Downloads 124
1593 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence
Authors: K. N. Kiran, S. Anish
Abstract:
It is well known that secondary flow losses account for about one third of the total loss in any axial turbine. Modern gas turbine blades have smaller heights and longer chord lengths, which can increase secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of secondary flow and devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally using the commercial software ANSYS CFX. The effects of the end-wall fence on the flow field are calculated from RANS simulations using the SST transition turbulence model. The Durham cascade, which is representative of a high-pressure axial flow turbine, is used for the simulations. The aim of fencing the blade passage is to obtain the maximum benefit from flow deviation and to destroy the passage vortex, in terms of loss reduction. It is observed that, in the present analysis, a fence in the blade passage helps reduce the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. The fence in the blade passage reduces the underturning by 7° in comparison with the base case. A fence on the end-wall is effective in preventing the movement of the pressure-side leg of the horseshoe vortex and helps break up the passage vortex. Computations are carried out for different fence heights whose curvature differs from the blade camber. The optimum fence geometry and location reduce the loss coefficient by 15.6% in comparison with the base case.
Keywords: boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow
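The abstract does not state which loss-coefficient definition it uses; one common cascade form normalizes the stagnation-pressure drop across the blade row by the exit dynamic head. A sketch with made-up pressures, purely to show the arithmetic of the quoted percentage reduction:

```python
def loss_coefficient(p01, p02, p2):
    """One common cascade total-pressure loss coefficient:
    Y = (p01 - p02) / (p01 - p2),
    where p01/p02 are inlet/exit stagnation pressures and p2 is the
    exit static pressure. The paper's exact definition may differ.
    """
    return (p01 - p02) / (p01 - p2)

# Hypothetical pressures in kPa: base case vs. fenced passage.
y_base = loss_coefficient(101.3, 99.00, 78.0)
y_fence = loss_coefficient(101.3, 99.36, 78.0)
reduction = (y_base - y_fence) / y_base * 100
print(round(reduction, 1))  # the relative reduction, in percent
```

With the authors' optimum fence, this relative reduction was 15.6%.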
Procedia PDF Downloads 349
1592 Water Governance Perspectives on the Urmia Lake Restoration Process: Challenges and Achievements
Authors: Jalil Salimi, Mandana Asadi, Naser Fathi
Abstract:
Urmia Lake (UL) has undergone a significant decline in water levels, resulting in severe environmental, socioeconomic, and health-related challenges. This paper examines the restoration process of UL from a water governance perspective. By applying a water governance model, the study evaluates the process based on six selected principles: stakeholder engagement, transparency and accountability, effectiveness, equitable water use, adaptation capacity, and water usage efficiency. The dominance of structural and physicalist approaches to water governance has led to a weak understanding of social and environmental issues, contributing to social crises. Urgent efforts are required to address the water crisis and reform water governance in the country, making water-related issues a top national priority. The UL restoration process has achieved significant milestones, including stakeholder consensus, scientific and participatory planning, environmental vision, intergenerational justice considerations, improved institutional environment for NGOs, investments in water infrastructure, transparency promotion, environmental effectiveness, and local issue resolutions. However, challenges remain, such as power distribution imbalances, bureaucratic administration, weak conflict resolution mechanisms, financial constraints, accountability issues, limited attention to social concerns, overreliance on structural solutions, legislative shortcomings, program inflexibility, and uncertainty management weaknesses. Addressing these weaknesses and challenges is crucial for the successful restoration and sustainable governance of UL.
Keywords: evaluation, restoration process, Urmia Lake, water governance, water resource management
Procedia PDF Downloads 67
1591 Numerical Investigation of the Transverse Instability in Radiation Pressure Acceleration
Authors: F. Q. Shao, W. Q. Wang, Y. Yin, T. P. Yu, D. B. Zou, J. M. Ouyang
Abstract:
The Radiation Pressure Acceleration (RPA) mechanism is very promising for laser-driven ion acceleration because of its high laser-to-ion energy conversion efficiency. Although some experiments have shown the characteristics of RPA, the ion energy obtained is quite limited: only several MeV/u, much lower than theoretical predictions. One possible limiting factor is the transverse instability excited during the RPA process. The transverse instability is basically considered a Rayleigh-Taylor (RT) instability, a kind of interfacial instability that occurs when a light fluid pushes against a heavy fluid. Multi-dimensional particle-in-cell (PIC) simulations show that the onset of the transverse instability destroys the acceleration process and broadens the energy spectrum of fast ions during RPA-dominant ion acceleration. Evidence of the RT instability driven by radiation pressure has been observed in a laser-foil interaction experiment in a typical RPA regime, where the dominant scale of the RT instability is close to the laser wavelength. Here, the development of the transverse instability in radiation-pressure-dominated laser-foil interaction is examined numerically by two-dimensional particle-in-cell simulations. When the laser interacts with a foil with a modulated surface, the transverse instability is quickly excited and develops. The linear growth and saturation of the instability are observed, and the growth rate is diagnosed numerically. In order to optimize the interaction parameters, a method based on information entropy is put forward to describe the chaotic degree of the transverse instability. With moderate modulation, the transverse instability shows a low chaotic degree and a quasi-monoenergetic proton beam is produced.
Keywords: information entropy, radiation pressure acceleration, Rayleigh-Taylor instability, transverse instability
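One plausible reading of an "information entropy" diagnostic for the instability, though not necessarily the authors' exact definition, is the Shannon entropy of the normalized mode spectrum of the surface modulation: a single coherent ripple concentrates power in one mode (low entropy), while chaotic broadband modulation spreads it over many (high entropy). A sketch:

```python
import numpy as np

def spectral_entropy(profile):
    """Shannon entropy of the normalized power spectrum of a 1-D profile.

    Low entropy: power concentrated in few modes (ordered modulation).
    High entropy: power spread over many modes (chaotic modulation).
    """
    power = np.abs(np.fft.rfft(profile - profile.mean()))[1:] ** 2
    p = power / power.sum()
    p = p[p > 0]                      # drop empty modes; 0*log(0) -> 0
    return float(-(p * np.log(p)).sum())

x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
rng = np.random.default_rng(1)
coherent = np.sin(8 * x)              # single-mode surface modulation
chaotic = rng.normal(size=512)        # broadband (chaotic) modulation
print(spectral_entropy(coherent) < spectral_entropy(chaotic))  # True
```

Under this measure, the "low chaotic degree" regime reported for moderate modulation would correspond to a small entropy value.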
Procedia PDF Downloads 3451590 Relationshiop Between Occupants' Behaviour And Indoor Air Quality In Malaysian Public Hospital Outpatient Department
Authors: Farha Ibrahim, Ely Zarina Samsudin, Ahmad Razali Ishak, Jeyanthini Sathasivam
Abstract:
Introduction: Indoor air quality (IAQ) has recently gained substantial traction as the airborne transmission of infectious respiratory disease has become an increasing public health concern. IAQ in the public hospital outpatient department (OPD) warrants special consideration, as the OPD is the most visited department and patients and staff alike are directly impacted by poor IAQ. However, there is limited evidence on IAQ in these settings. Moreover, occupants' behavior, such as occupant movement and the operation of doors, windows and appliances, has been shown to significantly affect IAQ, yet the influence of these determinants on IAQ in such settings has not been established. Objectives: This study aims to examine IAQ in Malaysian public hospital OPDs and assess its relationships with occupants' behavior. Methodology: A multicenter cross-sectional study was conducted in which Johor public hospital OPDs (n=6) were stratified by building age and randomly sampled. IAQ measurements included indoor air temperature, relative humidity (RH), air velocity (AV), carbon dioxide (CO₂), total bacterial count (TBC) and total fungal count (TFC). Occupants' behaviors in the OPDs were assessed using observation forms, and the results were analyzed. Descriptive statistics were performed to characterize all study variables, whereas non-parametric Spearman rank correlation analysis was used to assess the correlation between IAQ and occupants' behavior. Results: After adjusting for potential confounders, the study suggests that occupants' movement, such as being seated quietly, is significantly correlated with AV in the new building (r 0.642, p-value 0.010), CO₂ in the new (r 0.772, p-value <0.001) and old building (r -0.559, p-value 0.020), TBC in the new (r 0.747, p-value 0.001) and old building (r -0.559, p-value 0.020), and TFC in the new (r 0.777, p-value <0.001) and old building (r -0.485, p-value 0.049).
In addition, standing relaxed movement is correlated with indoor air temperature (r 0.823, p-value <0.001) in the new building, and with CO₂ (r 0.559, p-value 0.020), TBC (r 0.559, p-value 0.020), and TFC (r -0.485, p-value 0.049) in the old building, while walking is correlated with AV in the new building (r -0.642, p-value 0.001), CO₂ in the new (r -0.772, p-value <0.001) and old building (r 0.559, p-value 0.020), TBC in the new (r -0.747, p-value 0.001) and old building (r 0.559, p-value 0.020), and TFC in the old building (r -0.485, p-value 0.049). Indoor air temperature is significantly correlated with the number of doors kept open (r 0.522, p-value 0.046), the frequency of door adjustments (r 0.753, p-value 0.001), the number of windows kept open (r 0.522, p-value 0.046), the number of air-conditioners (AC) switched on (r 0.698, p-value 0.004) and the frequency of AC adjustment (r 0.753, p-value 0.001) in the new hospital OPD building. AV is significantly correlated with the number of doors kept open (r 0.642, p-value 0.01), the frequency of door adjustments (r 0.553, p-value 0.032), the number of windows kept open (r 0.642, p-value 0.01), and the frequency of AC adjustment, the number of fans switched on, and the frequency of fan adjustment (all with r 0.553, p-value 0.032) in the new building.
In the old hospital OPD building, the number of doors kept open is significantly correlated with CO₂ and TBC (both r -0.559, p-value 0.020) and TFC (r -0.495, p-value 0.049); the frequency of door adjustment with CO₂ and TBC (both r -0.559, p-value 0.020) and TFC (r -0.495, p-value 0.049); the number of windows kept open with CO₂ and TBC (both r 0.559, p-value 0.020) and TFC (r 0.495, p-value 0.049); the frequency of window adjustment with CO₂ and TBC (both r -0.559, p-value 0.020) and TFC (r -0.495, p-value 0.049); the number of AC switched on with CO₂ and TBC (both r -0.559, p-value 0.020) and TFC (r -0.495, p-value 0.049); the frequency of AC adjustment with CO₂ (r 0.559, p-value 0.020), TBC (r 0.559, p-value 0.020) and TFC (r -0.495, p-value 0.049); the number of fans switched on with CO₂ and TBC (both r 0.559, p-value 0.020) and TFC (r 0.495, p-value 0.049); and the frequency of fan adjustment with CO₂ and TBC (both r -0.559, p-value 0.020) and TFC (r -0.495, p-value 0.049). Conclusion: This study provides evidence on IAQ parameters in Malaysian public hospital OPDs and on significant factors that may be effective targets of prospective interventions, thus enabling stakeholders to develop appropriate policies and programs to mitigate IAQ issues in Malaysian public hospital OPDs.
Keywords: outpatient department, iaq, occupants practice, public hospital
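The Spearman rank correlation behind these results is straightforward to reproduce. A minimal tie-free sketch on invented readings (the values below are hypothetical, not the study's data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Nonparametric, so it suits IAQ data (CO2 levels, occupant counts)
    that need not be normally distributed. Average ranks for ties are
    omitted in this minimal sketch; use it on tie-free data.
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical readings: CO2 rising monotonically with seated occupants.
occupants = np.array([2, 5, 9, 3, 7, 12, 6, 1])
co2_ppm = np.array([480, 640, 910, 530, 760, 1015, 700, 455])
print(round(spearman_rho(occupants, co2_ppm), 3))  # → 1.0 (perfectly monotone)
```

Any monotone association, linear or not, drives rho toward ±1, which is why the study can report correlations for counts and concentrations without distributional assumptions.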
Procedia PDF Downloads 93
1589 Cooperative Cross Layer Topology for Concurrent Transmission Scheduling Scheme in Broadband Wireless Networks
Authors: Gunasekaran Raja, Ramkumar Jayaraman
Abstract:
In this paper, we consider a CCL-N (Cooperative Cross Layer Network) topology based on the cross-layer (both centralized and distributed) environment to form network communities. Various performance metrics related to IEEE 802.16 networks are discussed in designing the CCL-N topology. In the CCL-N topology, nodes are classified as master nodes (Master Base Station [MBS]) and serving nodes (Relay Station [RS]), and node communities are organized on the basis of standard networking terminology. Based on the CCL-N topology, simulation analyses for both transparent and non-transparent relays are tabulated and throughput efficiency is calculated. The weighted load balancing problem is a challenging one in IEEE 802.16 networks. A CoTS (Concurrent Transmission Scheduling) scheme is formulated in terms of three aspects of the transmission mechanism: identical communities, different communities and identical node communities. The CoTS scheme helps in identifying the weighted load balancing problem. The analytical results show that the modularity value is inversely related to the error value, and the modularity value plays a key role in solving the CoTS problem based on hop count. The transmission mechanism for identical node communities has no impact, since the modularity value is the same for all network groups. In this paper, these three aspects of communities, based on the modularity value, are discussed as they help in solving the weighted load balancing and CoTS problems.
Keywords: cross layer network topology, concurrent scheduling, modularity value, network communities and weighted load balancing
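The abstract does not define its modularity value; assuming it is the standard Newman modularity Q of a community partition, it can be computed directly. A plain-Python sketch under that assumption:

```python
import itertools

def modularity(edges, community):
    """Newman modularity Q of a node partition of an undirected graph.

    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
    Higher Q means links concentrate inside communities, the property
    the abstract ties to a lower scheduling error.
    """
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    nodes = list(degree)
    q = 0.0
    for i, j in itertools.product(nodes, repeat=2):
        if community[i] != community[j]:
            continue
        a_ij = sum(1 for e in edges if e in ((i, j), (j, i)))
        q += a_ij - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)

# Two 3-node cliques joined by one bridge edge: a clear community split.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, part), 3))  # → 0.357
```

Putting all nodes in one community yields Q = 0, the "no impact" case the abstract notes for identical node communities.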
Procedia PDF Downloads 265
1588 Multiple Primary Pulmonary Meningiomas: A Case Report
Authors: Wellemans Isabelle, Remmelink Myriam, Foucart Annick, Rusu Stefan, Compère Christophe
Abstract:
Primary pulmonary meningioma (PPM) is a very rare tumor, and its occurrence has been reported only sporadically. Multiple PPMs are even more exceptional, and herein we report, to the best of our knowledge, the fourth case, focusing on the clinicopathological features of the tumor. Moreover, the possible relationship between the use of progesterone-only contraceptives and the development of these neoplasms is discussed. Case Report: We report the case of a 51-year-old female presenting with three solid pulmonary nodules, localized in the right upper lobe, the middle lobe, and the left lower lobe, described as incidental findings on computed tomography (CT) during a pre-bariatric surgery check-up. The patient reported no drinking or smoking history. The physical exam was unremarkable except for obesity. The lesions ranged in size between 6 and 24 mm and presented as solid nodules with lobulated contours. The largest lesion, situated in the middle lobe, had mild fluorodeoxyglucose (FDG) uptake on F-18 FDG positron emission tomography (PET)/CT, highly suggestive of a primary lung neoplasm. For pathological assessment, video-assisted thoracoscopic middle lobectomy and wedge resection of the right upper nodule were performed. Histological examination revealed a relatively well-circumscribed solid proliferation of bland meningothelial cells growing in whorls and lobular nests, presenting intranuclear pseudo-inclusions and psammoma bodies. No signs of anaplasia were observed. The meningothelial cells diffusely expressed Vimentin and focally expressed progesterone receptors, and were negative for epithelial markers (cytokeratin (CK) AE1/AE3, CK7, CK20, Epithelial Membrane Antigen (EMA)), neuroendocrine markers (Synaptophysin, Chromogranin, CD56) and estrogen receptors. The proliferation labelling index Ki-67 was low (<5%). Metastatic meningioma was ruled out by brain and spine magnetic resonance imaging (MRI) scans.
The third lesion, localized in the left lower lobe, was followed up and resected three years later because of its slow but significant growth (14 mm to 16 mm), alongside two new infracentimetric lesions. Those three lesions showed a morphological and immunohistochemical profile similar to the previously resected lesions. The patient was disease-free one year after the last surgery. Discussion: Although PPMs are mostly benign, slow-growing tumors with an excellent prognosis, they do not present specific radiological characteristics and are difficult to differentiate from other lung tumors; histopathologic examination is essential. Aggressive behavior is associated with atypical or anaplastic features (WHO grades II–III). The etiology is still uncertain, and different mechanisms have been proposed. A causal connection between sex hormones and meningothelial proliferation has long been suspected, and the few studies examining progesterone-only contraception and meningioma risk have all suggested an association. In line with this, our patient was treated with Levonorgestrel, a progesterone agonist, delivered by an intra-uterine device (IUD). Conclusions: PPM, defined by the typical histological and immunohistochemical features of meningioma in the lungs and the absence of central nervous system lesions, is an extremely rare neoplasm, mainly solitary and with indolent growth. Because of its unspecific radiologic findings, it should always be considered in the differential diagnosis of lung neoplasms. Regarding multiple PPMs, only three cases are reported in the literature, and to the best of our knowledge this is the first described in a woman treated with a progesterone-only IUD.
Keywords: pulmonary meningioma, multiple meningioma, meningioma, pulmonary nodules
Procedia PDF Downloads 114
1587 Selectivity Mechanism of Cobalt Precipitation by an Imidazole Linker From an Old Battery Solution
Authors: Anna-Caroline Lavergne-Bril, Jean-François Colin, David Peralta, Pascale Maldivi
Abstract:
Cobalt is a critical material, widely used in Li-ion batteries. Due to the planned electrification of European vehicles, cobalt needs are expanding, while resources are limited. To meet the coming needs for cobalt, it is necessary to develop new, efficient ways to recycle it. One of the biggest sources is old electric-vehicle batteries (the batteries sold in 2019 alone represent 500,000 tons of waste to come). A closed-loop process for cobalt recycling has been developed, and this presentation addresses the selectivity mechanism for cobalt over manganese and nickel in solution. The precipitation of cobalt as a ZIF material (Zeolitic Imidazolate Framework) from a starting solution composed of equimolar nickel, manganese and cobalt is studied. A 2-MeIm (2-methylimidazole) linker is introduced into the multimetallic Ni, Mn, Co solution, and the resulting ZIF-67 is 100% pure Co among its metallic centers. The selectivity of Co over Ni is studied experimentally, and DFT modeling calculations are conducted to understand the geometry of the ligand-metal-solvent complexes in solution. The selectivity of Co over Mn is likewise studied experimentally, and DFT modeling calculations are conducted to understand the link between the pKa of the ligand and the precipitation of Mn impurities within the final material. These calculations open the way to other ligands being used in the same process with more efficiency. Experimental materials are synthesized from bimetallic (Ni²⁺/Co²⁺, Mn²⁺/Co²⁺, Mn²⁺/Ni²⁺) solutions. Their crystallographic structure is analysed by X-ray diffraction (XRD, Bruker AXS D8 diffractometer, Cu anticathode). Morphology is studied by scanning electron microscopy using a LEO 1530 FE-SEM microscope. Chemical analysis is performed using ICP-OES (Agilent Technologies 700 series ICP-OES). The modeling calculations are density functional theory (DFT) calculations at the B3LYP level, conducted with Orca 4.2.
Keywords: MOFs, ZIFs, recycling, closed-loop, cobalt, li-ion batteries
Procedia PDF Downloads 137
1586 Labor Productivity and Organization Performance in Specialty Trade Construction: The Moderating Effect of Safety
Authors: Shalini Priyadarshini
Abstract:
The notion of performance measurement has held great appeal for industry and research communities alike. This is also true of the construction sector, and some propose that performance measurement and productivity analysis are two separate management functions, where productivity is a subset of performance, the latter requiring comprehensive analysis of comparable factors. Labor productivity is considered one of the best indicators of production efficiency. The construction industry continues to account for a disproportionate share of injuries and illnesses despite adopting several technological and organizational interventions that promote worker safety. Specialty trades contractors typically complete a large fraction of the work on any construction project, but an insufficient body of work exists that addresses subcontractor safety and productivity issues. A literature review has revealed the possibility of a relationship between productivity, safety and other factors, and their links to project, organizational, task and industry performance. This research posits that there is an association between productivity and performance at the project as well as the organizational level in the construction industry. Moreover, prior exploration of the importance of safety within the performance-productivity framework has been anecdotal at best. Using a structured questionnaire survey and organization- and project-level data, this study, which combines cross-sectional and longitudinal research designs, addresses the identified research gap and models the relationship between productivity, safety, and performance with a focus on specialty trades in the construction sector. Statistical analysis is used to establish the correlations between the variables of interest. This research identifies the need for developing and maintaining productivity and safety logs for smaller businesses.
Future studies can design and develop research to establish causal relationships between these variables.
Keywords: construction, safety, productivity, performance, specialty trades
Procedia PDF Downloads 278
1585 Design and Development of Tandem Dynamometer for Testing and Validation of Motor Performance Parameters
Authors: Vedansh More, Lalatendu Bal, Ronak Panchal, Atharva Kulkarni
Abstract:
The project aims at developing a cost-effective test bench capable of testing and validating the complete powertrain package of an electric vehicle. An Emrax 228 high-voltage synchronous motor was selected as the prime mover for the study. A tandem-type dynamometer was developed, comprising two loading methods: inertial, using standard inertia rollers, and absorptive, using a separately excited DC generator with resistive coils. Absorptive loading of the prime mover was achieved by implementing a converter circuit through which the duty cycle of the field supply voltage was controlled. This control was efficacious in changing the magnetic flux and hence the generated voltage, which was ultimately dropped across resistive coils assembled in a load bank in an all-parallel configuration. The prime mover and loading elements were connected via a chain drive with a 2:1 reduction ratio, which allows flexibility in the placement of components and a relaxed rating for the DC generator. The development will not only aid in the determination of essential characteristics like torque-RPM, power-RPM, torque factor, RPM factor, device heat loads and battery pack state-of-charge efficiency, but also provide a significant financial advantage over existing versions of dynamometers with its cost-effective solution.
Keywords: absorptive load, chain drive, chordal action, DC generator, dynamometer, electric vehicle, inertia rollers, load bank, powertrain, pulse width modulation, reduction ratio, road load, testbench
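The duty-controlled absorptive load can be sketched with a crude linear-flux model: mean field voltage scales with PWM duty, and if flux (hence EMF) is taken as proportional to it below saturation, the power dumped into the resistive bank grows with duty squared. Every constant below is a placeholder, not a measured rig value:

```python
def absorptive_load_power(duty, v_supply=48.0, k_emf=0.9, rpm=3000.0,
                          r_load=2.0):
    """Steady-state sketch of the duty-controlled DC-generator load.

    duty: PWM duty cycle of the field converter, 0..1
    Assumes flux proportional to mean field voltage (no saturation)
    and EMF proportional to flux times speed; all constants invented.
    """
    v_field = duty * v_supply              # mean field voltage from duty
    emf = k_emf * v_field * (rpm / 3000.0) # linear EMF model
    return emf ** 2 / r_load               # power into the parallel bank

for duty in (0.25, 0.5, 1.0):
    print(round(absorptive_load_power(duty), 1))  # quadratic in duty
```

Doubling the duty quadruples the absorbed power in this idealization; a real field winding saturates, so the usable control range is narrower than the model suggests.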
Procedia PDF Downloads 232
1584 Organic Permeation Properties of Hydrophobic Silica Membranes with Different Functional Groups
Authors: Sadao Araki, Daisuke Gondo, Satoshi Imasaka, Hideki Yamamoto
Abstract:
The separation of organic compounds from aqueous solutions is a key technology for recycling valuable organic compounds and for the treatment of wastewater. The wastewater from chemical plants often contains organic compounds such as ethyl acetate (EA), methyl ethyl ketone (MEK) and isopropyl alcohol (IPA). In this study, we prepared hydrophobic silica membranes by a sol-gel method. We used phenyltrimethoxysilane (PhTMS), ethyltrimethoxysilane (ETMS), propyltrimethoxysilane (PrTMS), n-butyltrimethoxysilane (BTMS) and n-hexyltrimethoxysilane (HTMS) as silica sources to introduce the respective functional groups on the membrane surface. Cetyltrimethylammonium bromide (CTAB) was used as a molecular template to create pores suitable for the permeation of organic compounds. The membranes with the five different functional groups were characterized by SEM, FT-IR, and permporometry. The thicknesses and pore diameters of the silica layers of all membranes were about 1.0 μm and about 1 nm, respectively; in other words, the functional groups had an insignificant effect on membrane thickness and on pore formation by CTAB. We examined the effect of the functional groups on the flux and separation factor for ethyl acetate (EA), methyl ethyl ketone, acetone and 1-butanol (1-BtOH)/water mixtures. All membranes showed a high flux for ethyl acetate compared with the other compounds. In particular, the hydrophobic silica membrane prepared using BTMS showed a flux of 0.75 kg m⁻² h⁻¹ for EA. For all membranes, the fluxes of the organic compounds decreased in the order EA > MEK > acetone > 1-BtOH. On the other hand, the carbon chain length of the functional groups among ETMS, PrTMS, BTMS and HTMS did not have a major effect on the organic flux. Although we examined the relationship between the organic fluxes and the molecular diameters or fugacities of the organic compounds, these factors showed a low correlation with the organic fluxes.
These factors are considered to affect diffusivity. Generally, permeation through membranes is governed by diffusivity and solubility; it therefore appears that the organic fluxes through these hydrophobic membranes are strongly influenced by solubility. We tried to estimate the organic fluxes using the Hansen solubility parameter (HSP). HSP, which is based on the cohesion energy per molar volume and is composed of dispersion forces (δd), intermolecular dipole interactions (δp), and hydrogen-bonding interactions (δh), has recently attracted attention as a means of evaluating dissolution and aggregation behavior. The mutual solubility of two substances can be evaluated using the Ra [(MPa)^1/2] value, the distance between the HSPs of the two substances. A smaller Ra value means higher mutual solubility, while substances with a large Ra value show low solubility. We established a correlation equation, based on Ra, for the organic flux at low concentrations of organic compounds and at 295-325 K.
Keywords: hydrophobic, membrane, Hansen solubility parameter, functional group
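The Ra distance underlying such a correlation is computed directly from the two HSP triples. A sketch; the HSP values below are approximate literature figures used purely for illustration, not the paper's data:

```python
import math

def hansen_ra(hsp1, hsp2):
    """HSP distance Ra [(MPa)^1/2] between two (δd, δp, δh) triples.

    Ra^2 = 4*(δd1-δd2)^2 + (δp1-δp2)^2 + (δh1-δh2)^2
    Smaller Ra means higher mutual solubility.
    """
    (d1, p1, h1), (d2, p2, h2) = hsp1, hsp2
    return math.sqrt(4 * (d1 - d2) ** 2 + (p1 - p2) ** 2 + (h1 - h2) ** 2)

# Approximate literature HSP triples in (MPa)^1/2.
ethyl_acetate = (15.8, 5.3, 7.2)
water = (15.5, 16.0, 42.3)
butanol_1 = (16.0, 5.7, 15.8)
print(round(hansen_ra(ethyl_acetate, water), 1))      # large: poor affinity
print(round(hansen_ra(ethyl_acetate, butanol_1), 1))  # smaller: closer affinity
```

The factor of 4 on the dispersion term is part of Hansen's empirical definition; it makes the solubility region roughly spherical in (δd, δp, δh) space.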
Procedia PDF Downloads 378
1583 Legacy of Smart Cities on Urban Future: Discussing the Future of Smart City by Sharing Its Experiences
Authors: Arsalan Makinian
Abstract:
Our future cities will constantly evolve the technologies necessary for tomorrow's needs, technologies that enable a better kind of prosperity and security. This paper traces the smart-city concept from its beginnings to its prevalence in the urbanism literature and in the reports of technology companies. The article aims to guide urban foresight studies and to build a pathway for the future of the smart-city concept by gathering theoretical and empirical experience with smart cities from both top-down and bottom-up approaches. It draws together the results of different studies, pilot projects and development strategies of several smart cities so that shareable knowledge about the qualitative aspects of a smart city can take shape and develop. The definition of the smart city now goes beyond removing physical boundaries, changing the concept of mobility and providing electronic services for citizens; it encompasses fields such as energy efficiency, economic competitiveness and environmental protection, and it takes advantage of technology and data science to improve the quality of life. In the smart city, citizens are considered both the final purpose and contributors. Emerging issues, largely the implications of advanced technologies as the most important trends of the future, and their effects on society need to be foresighted. Educating and fostering knowledge of smartness is one of the targets of the smart-city concept; accordingly, some smart cities have established research and development units to share their projects and smart-city initiatives. Gaining experience and sharing the results of this work is therefore necessary for technology management and for moving toward a smart urban future.
Keywords: age of urban tech, bottom-up approach, role of citizens, smart city
Procedia PDF Downloads 138
1582 Sunflower Oil as a Nutritional Strategy to Reduce the Impacts of Heat Stress on Meat Quality and Dirtiness Score of Pigs
Authors: Angela Cristina Da F. De Oliveira, Salma E. Asmar, Norbert P. Battlori, Yaz Vera, Uriel R. Valencia, Tâmara D. Borges, Antoni D. Bueno, Leandro B. Costa
Abstract:
The present study aimed to evaluate the replacement of 5% starch with 5% sunflower oil (SO) on the meat quality and welfare of growing and finishing pigs (Iberian x Duroc) exposed to a heat-stress environment. The experiment lasted 90 days and was carried out in a randomized block design with a 2 x 2 factorial arrangement of two diets (starch control or SO) and two feed-intake managements (ad libitum and restricted). Seventy-two crossbred males (51 ± 6.29 kg body weight, BW) were housed in climate-controlled rooms in collective pens and exposed to a heat-stress environment (32 °C; 35% to 50% humidity). The treatments studied were: 1) control diet (5% starch, 0% SO) with ad libitum intake (n = 18); 2) SO diet (replacement of 5% starch with 5% SO) with ad libitum intake (n = 18); 3) control diet with restricted feed intake (n = 18); or 4) SO diet with restricted feed intake (n = 18). Feed was provided in two phases: 50-100 kg BW for growing and 100-140 kg BW for finishing. Within the welfare evaluations, the dirtiness score was assessed every morning during the ninety days of the experiment. The presence of manure was measured individually on one side of the pig's body and scored as follows: 0 (less than 20% of the body surface); 1 (more than 20% but less than 50% of the body surface); 2 (over 50% of the body surface). At the end of the experimental period, when the animals reached 130-140 kg BW, they were slaughtered using carbon dioxide (CO2) stunning. Carcass weight and leanness and fat content, measured at the last rib, were recorded within 20 min post-mortem (PM). At 24 h PM, pH, electrical conductivity and color (L*, a*, b*) were recorded in the Longissimus thoracis and Semimembranosus muscles. The data showed no interaction between diet (control x SO) and feed-intake management (ad libitum x restricted) for the meat quality parameters.
Animals under ad libitum management showed an increase (p < 0.05) in BW, carcass weight (CW), backfat thickness (BT) and intramuscular fat content (IM) compared with animals under restricted management. In contrast, animals under restricted management showed a higher (p < 0.05) carcass yield, lean percentage and loin thickness. Regarding the welfare evaluations, the interaction between diet and feed-intake management did not influence the degree of dirtiness. However, the animals that received the SO diet, independently of the management, were cleaner than the animals in the control group (p < 0.05), which, for pigs, represents an important strategy to reduce body temperature. Based on our results, diet and feed-intake management had a significant influence on meat quality and animal welfare and can be considered efficient nutritional strategies to reduce heat stress and improve meat quality.
Keywords: dirtiness, environment, meat, pig
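The three-level dirtiness scale described above can be encoded directly; a minimal sketch follows, with the caveat that the handling of exactly 20% and 50% coverage is our assumption, since the abstract defines the classes only as "less than 20%", "more than 20% but less than 50%" and "over 50%":

```python
def dirtiness_score(body_surface_pct: float) -> int:
    """Dirtiness score from manure coverage on one side of the pig's body.

    0: less than 20% of the body surface covered
    1: more than 20% but less than 50%
    2: over 50%
    (Assignment at exactly 20% and 50% is an assumption, not stated in the abstract.)
    """
    if body_surface_pct < 20:
        return 0
    if body_surface_pct < 50:
        return 1
    return 2

# Example: mean score over one morning's observations of a (hypothetical) pen
scores = [dirtiness_score(p) for p in (5, 25, 60, 15)]
print(sum(scores) / len(scores))  # -> 0.75
```

Averaging daily scores per treatment group over the 90 days is one straightforward way such ordinal data could be summarized before testing for diet and management effects.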
Procedia PDF Downloads 263