Search results for: metric grouping
90 Preparation of Metal-Containing Epoxy Polymers and Investigation of Their Properties as Fluorescent Probes
Authors: Ertuğ Yıldırım, Dile Kara, Salih Zeki Yıldız
Abstract:
Metal-containing polymers (MCPs) are macromolecules that usually contain metal-ligand coordination units; they form a multidisciplinary research field based mainly at the interface between coordination chemistry and polymer science. Progress in this area has also been reinforced by the growth of several closely related disciplines, including macromolecular engineering, crystal engineering, organic synthesis, supramolecular chemistry, and colloid and materials science. Schiff base ligands are very effective in constructing supramolecular architectures such as coordination polymers and double- and triple-helical complexes. In addition, Schiff base derivatives incorporating a fluorescent moiety are appealing tools for the optical sensing of metal ions. MCPs are well-known systems in which combinations of local parameters can be probed by fluorometric techniques. Generally, a polymer without incorporated fluorescent groups is unspecific, and analyzing its fluorescence properties is of little use. It is therefore necessary to prepare a new type of epoxy polymer bearing fluorescent groups for metal sensing and other photochemical applications. In the present study, metal-containing polymers were prepared from polyfunctional monomeric Schiff base metal chelate complexes in the presence of difunctional monomers such as the diglycidyl ether of bisphenol A (DGEBA). The synthesized complexes and polymers were characterized by FTIR, UV-Vis, and mass spectroscopy. The preparation of the epoxy polymers was carried out at 185 °C. The prepared composites, having sharp and narrow excitation/emission properties, are expected to be applicable in various systems such as heat-resistant polymers and photovoltaic devices.
The prepared composite is also ideal for various applications: it is easily prepared, safe, and maintains good fluorescence properties.
Keywords: Schiff base ligands, crystal engineering, fluorescence properties, metal-containing polymers (MCPs)
Procedia PDF Downloads 347
89 Maintenance Performance Measurement Derived Optimization: A Case Study
Authors: James M. Wakiru, Liliane Pintelon, Peter Muchiri, Stanley Mburu
Abstract:
Maintenance performance measurement (MPM) represents an integrated aspect that considers both operational and maintenance-related aspects while evaluating the effectiveness and efficiency of maintenance, to ensure assets are working as they should. Three salient issues need to be addressed for an asset-intensive organization to employ an MPM-based framework to optimize maintenance. Firstly, the organization should establish the important performance metric(s), in this case the maintenance objective(s), on which it will focus. The second issue entails aligning the maintenance objective(s) with maintenance optimization. This is achieved by deriving maintenance performance indicators that subsequently form an objective function for the optimization program. Lastly, the objective function is employed in an optimization program to derive maintenance decision support. In this study, we develop a framework that first identifies the crucial maintenance performance measures and then employs them to derive maintenance decision support. The proposed framework is demonstrated in a case study of a geothermal drilling rig, where the objective function is evaluated using a simulation-based model whose parameters are derived from empirical maintenance data. Availability, reliability, and maintenance inventory are identified as essential objectives requiring further attention. A simulation model is developed mimicking drilling rig operations and maintenance, in which the sub-systems are modelled as undergoing imperfect corrective (CM) and preventive (PM) maintenance, with the total cost as the primary performance measurement. Moreover, three maintenance spare inventory policies are considered: classical (retaining stocks for a contractual period), vendor-managed inventory with consignment stock, and a periodic-monitoring order-up-to (s, S) policy.
Optimization results indicate that the adoption of the (s, S) inventory policy, an increased PM interval, and reduced reliance on CM actions offer improved availability and a reduction in total costs.
Keywords: maintenance, vendor-managed, decision support, performance, optimization
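The (s, S) replenishment logic evaluated above can be sketched in a few lines; the demand stream and parameter values below are illustrative assumptions, not figures from the case study.

```python
# Illustrative sketch of a periodic-review (s, S) spare-parts inventory policy:
# when the reviewed stock level falls to s or below, order up to S.
# The demand stream and parameters are made-up examples, not case-study data.

def simulate_sS(demands, s, S, initial_stock):
    """Return (stockouts, orders_placed) over a sequence of per-period demands."""
    stock = initial_stock
    stockouts = 0
    orders = 0
    for demand in demands:
        if demand > stock:
            stockouts += 1          # maintenance action delayed by a missing spare
            demand = stock          # consume only what is available
        stock -= demand
        if stock <= s:              # periodic review: reorder up to S
            orders += 1
            stock = S
    return stockouts, orders

stockouts, orders = simulate_sS([2, 1, 4, 0, 3, 5, 1], s=2, S=8, initial_stock=5)
```

Sweeping s and S in such a loop, with the CM/PM events of the rig simulation driving the demand stream, is one simple way to couple the inventory policy to the simulation-based objective function.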
Procedia PDF Downloads 125
88 Study on Optimization of Air Infiltration at Entrance of a Commercial Complex in Zhejiang Province
Authors: Yujie Zhao, Jiantao Weng
Abstract:
In the past decade, with the rapid development of China's economy, the purchasing power and material demands of residents have risen, resulting in the vast emergence of public buildings such as large shopping malls. However, architects usually focus on the internal functions and circulation of these buildings, ignoring the impact of the environment on the subjective comfort of building users. In Zhejiang province alone, the infiltration of cold air in winter frequently occurs at the entrances of sizeable commercial complex buildings in operation, which affects the environmental comfort of the building lobby and internal public spaces. At present, to reduce these adverse effects, active equipment is usually added, such as air curtains to block air exchange or additional heating air conditioners. From the perspective of energy consumption, the infiltration of cold air at the entrance increases the heat consumption of indoor heating equipment, which indirectly causes considerable economic losses over the whole winter heating season. Therefore, it is of considerable significance to explore entrance forms suitable for improving the environmental comfort of commercial buildings and saving energy. In this paper, a commercial complex in Hangzhou with an apparent cold-air infiltration problem is selected as the research object to establish a model. The environmental parameters of the building entrance, including temperature, wind speed, and infiltration air volume, are obtained by Computational Fluid Dynamics (CFD) simulation, from which the heat consumption caused by natural air infiltration in winter and its potential economic loss are estimated as the objective metric. This study finally obtains the optimization direction for the building entrance form of the commercial complex by comparing the simulation results of other local commercial complex projects with different entrance forms.
The conclusions will guide the entrance design of the same type of commercial complex in this area.
Keywords: air infiltration, commercial complex, heat consumption, CFD simulation
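The heat consumption attributed to infiltration can be estimated from the simulated infiltration flow with the standard sensible-heat relation Q = ρ·cp·V̇·ΔT. A minimal sketch, with illustrative numbers rather than values from the study:

```python
def infiltration_heat_loss_kw(flow_m3_per_s, t_indoor_c, t_outdoor_c,
                              rho=1.2, cp=1005.0):
    """Sensible heat loss (kW) from cold-air infiltration.
    rho: air density in kg/m^3, cp: specific heat of air in J/(kg*K)."""
    return rho * cp * flow_m3_per_s * (t_indoor_c - t_outdoor_c) / 1000.0

# Example: 2 m^3/s of 0 degC outdoor air entering an 18 degC lobby
q = infiltration_heat_loss_kw(2.0, 18.0, 0.0)
```

Multiplying such a load by the heating hours of the season and a local energy tariff gives the kind of economic-loss estimate the abstract describes.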
Procedia PDF Downloads 133
87 The Impact of Sports Employees' Perceptions of Organizational Climate and Organizational Trust on Work Motivation
Authors: Bilal Okudan, Omur F. Karakullukcu, Yusuf Can
Abstract:
Work motivation is one of the fundamental elements that determine employees' attitudes towards work and their performance. In this sense, work motivation depends not only on individual and occupational factors but also on employees' perceptions of organizational climate and organizational trust. Organizations aware of this have done more research on work motivation in recent years to ensure that employees perform as highly as possible. Within this framework, the purpose of this study is to examine the effect of sports employees' perceptions of organizational climate and organizational trust on work motivation. The study also analyzes whether there is any significant difference in the department of sports services employees' organizational climate and organizational trust perceptions and work motivation levels in terms of gender, age, duty status, years of service, and level of education. 278 sports managers, who work in the department of sports services' central and field organization at least at the level of chief, were chosen by random sampling and voluntarily participated in the study. The organizational climate scale developed by Bilir (2005), the organizational trust scale developed by Koksal (2012), and the work motivation scale developed by Mottaz J. Clifford (1985) were used as data collection tools. The questionnaire form also includes a personal information form consisting of 5 questions covering gender, age, duty status, years of service, and level of education. Pearson correlation analysis was used to define the correlations among organizational climate, organizational trust perceptions, and work motivation levels in sports managers, and regression analysis was used to identify the effect of organizational climate and organizational trust on work motivation.
A t-test for binary groupings and ANOVA for comparisons across more than two groups were used to determine whether there is any significant difference in organizational climate, organizational trust perceptions, and work motivation in terms of the participants' duty status, years of service, and level of education. According to the research results, there is a positive correlation between the department of sports services employees' organizational climate, organizational trust perceptions, and work motivation levels. According to the results of the regression analysis, the sports employees' perceptions of organizational climate and organizational trust are two main factors affecting work motivation. The results also show a significant difference in the organizational climate, organizational trust perceptions, and work motivation of the department of sports services employees in terms of duty status, years of service, and level of education; however, no significant difference was found in terms of age groups and gender.
Keywords: sports manager, organizational climate, organizational trust, work motivation
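For readers unfamiliar with the statistic, the Pearson correlation used above can be sketched as follows; the score vectors are made-up examples, not survey data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related scores give r = 1.0; inversely related give r = -1.0
r_pos = pearson_r([1, 2, 3, 4], [10, 20, 30, 40])
r_neg = pearson_r([1, 2, 3], [6, 4, 2])
```

In practice one would compute this between, e.g., climate-perception scores and motivation scores across the 278 respondents, then test r for significance.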
Procedia PDF Downloads 243
86 Comparison of Inexpensive Cell Disruption Techniques for an Oleaginous Yeast
Authors: Scott Nielsen, Luca Longanesi, Chris Chuck
Abstract:
Palm oil is obtained from the flesh and kernel of the fruit of oil palms and is the most productive and inexpensive oil crop. The global demand for palm oil is approximately 75 million metric tonnes, a 29% increase in global production since 2016. This expansion of oil palm cultivation has resulted in mass deforestation, vast biodiversity destruction, and increasing net greenhouse gas emissions. One possible alternative is to produce a saturated oil, similar to palm, from microbes such as oleaginous yeast. The yeasts can be cultured on sugars derived from second-generation sources and do not compete with tropical forests for land. One highly promising oleaginous yeast for this application is Metschnikowia pulcherrima. However, recent techno-economic modelling has shown that cell lysis and standard lipid extraction are major contributors to the cost of the oil. Typical cell disruption techniques for extracting either single-cell oils or proteins have been based on bead-beating, homogenization, and acid lysis. However, these can have a detrimental effect on lipid quality and are energy-intensive. In this study, a vortex separator, which produces high shear with minimal energy input, was investigated as a potential low-energy method of lysing cells. It was compared to four more traditional methods (thermal lysis, acid lysis, alkaline lysis, and osmotic lysis). For each method, yeast loadings of 1 g/L, 10 g/L, and 100 g/L were also examined. The quality of the cell disruption was measured by optical cell density, cell counting, and comparison of particle size distribution profiles over a 2-hour period. This study demonstrates that the vortex separator is highly effective at lysing the cells and could potentially be used as a simple apparatus for lipid recovery in an oleaginous yeast process.
The further development of this technology could potentially reduce the overall cost of microbial lipids in the future.
Keywords: palm oil substitute, Metschnikowia pulcherrima, cell disruption, cell lysis
Procedia PDF Downloads 206
85 Characterization of a Newfound Manganese Tungstate Mineral of Hübnerite in Turquoise Gemstone from Miduk Mine, Kerman, Iran
Authors: Zahra Soleimani Rad, Fariborz Masoudi, Shirin Tondkar
Abstract:
Turquoise is one of the most well-known gemstones in Iran. The mineralogy, crystallography, and gemology of Shahr-e-Babak turquoise from Kerman were investigated, and the results are presented in this research. The Miduk porphyry copper deposit is positioned in the Shahr-Babak area of Kerman province, Iran, 85 km NW of the Sar-Cheshmeh porphyry copper deposit. Preliminary mineral exploration was carried out from 1967 to 1970. So far, more than fifty diamond drill holes, reaching a maximum depth of 1013 meters, have provided evidence supporting the presence of significant and promising porphyry copper mineralization at the Miduk deposit. The deposit harbors 170 million metric tons of ore with a mean composition of 0.86% copper (Cu), 0.007% molybdenum (Mo), 82 parts per billion gold (Au), and 1.8 parts per million silver (Ag). The supergene enrichment layer, which constitutes the predominant source of copper ore, has an approximate thickness of 50 meters. Petrography shows that the texture is homogeneous. As a gemstone, the samples show a greasy luster and blue color and are similar to what is commonly known as turquoise. The constituent minerals were detected by XRD analysis, with the data analyzed using the X'Pert software. From the mineralogical point of view, the turquoise gemstones of Miduk, Kerman consist of turquoise, quartz, mica, and hübnerite. In this article we report, to the best of our knowledge, the first identification of hübnerite in Persian turquoise. Based on the obtained spectra, the main mineral of the Miduk samples, among the six members of the turquoise family, is the turquoise type, with identical peaks that can be used as a reference for identification of Miduk turquoise. This mineral is structurally composed of phosphate units, units of Al and Cu, water, and hydroxyl units, and does not include an Fe unit.
In terms of gemology, according to SEM and XRD analysis, the quality of a gemstone depends on the quantity of the turquoise phase and the amount of Cu in it.
Keywords: turquoise, hübnerite, XRD analysis, Miduk, Kerman, Iran
Procedia PDF Downloads 70
84 A Strategy for Oil Production Placement Zones Based on Maximum Closeness
Authors: Waldir Roque, Gustavo Oliveira, Moises Santos, Tatiana Simoes
Abstract:
Increasing the oil recovery factor of an oil reservoir has long been a concern of the oil industry. Usually, the production placement zones are defined after analysis of geological and petrophysical parameters, with rock porosity, permeability, and oil saturation being of fundamental importance. In this context, the determination of hydraulic flow units (HFUs) is an important step in the process of reservoir characterization, since it may provide specific regions in the reservoir with similar petrophysical and fluid flow properties and, in particular, techniques supporting the placement of production zones that favour the tracing of directional wells. An HFU is defined as a representative volume of the total reservoir rock in which petrophysical and fluid flow properties are internally consistent and predictably distinct from those of other reservoir rocks. Technically, an HFU is characterized as a rock region whose flow zone indicator (FZI) points lie on a straight line of unit slope. The goal of this paper is to provide a trustworthy indication of oil production placement zones for the best-fit HFUs. The FZI cloud of points can be obtained from the reservoir quality index (RQI), a function of effective porosity and permeability. Considering log and core data, the HFUs are identified, and using the discrete rock type (DRT) classification, a set of connected cell clusters can be found; by means of a graph centrality metric, the maximum closeness (MaxC) cell is obtained for each cluster. Considering the MaxC cells as production zones, an extensive analysis based on several oil recovery factor and cumulative oil production simulations was done for the SPE Model 2 and UNISIM-I-D synthetic fields, where the latter was built from public data available from the actual Namorado Field, Campos Basin, Brazil.
The results have shown that the MaxC approach is technically feasible and very reliable for selecting high-performance production placement zones.
Keywords: hydraulic flow unit, maximum closeness centrality, oil production simulation, production placement zone
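A minimal sketch of how a MaxC cell can be selected from a connected cell cluster, using breadth-first search over an assumed cell-adjacency graph; the cluster below is illustrative, not from the SPE or UNISIM fields.

```python
from collections import deque

def closeness(graph, node):
    """Closeness centrality: (n - 1) / sum of shortest-path distances from node,
    computed by BFS over an unweighted adjacency dict."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def max_closeness_cell(graph):
    """Cell of the cluster with maximum closeness: the MaxC cell."""
    return max(graph, key=lambda n: closeness(graph, n))

# A path-shaped cluster a-b-c-d-e: the middle cell 'c' is the MaxC cell
cluster = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'],
           'd': ['c', 'e'], 'e': ['d']}
maxc = max_closeness_cell(cluster)
```

In the paper's workflow, each DRT cluster of reservoir cells would be mapped to such an adjacency graph, and the MaxC cell of each cluster taken as a candidate production placement zone.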
Procedia PDF Downloads 331
83 Post-Harvest Losses and Food Security in Northeast Nigeria: What Are the Key Challenges and Concrete Solutions?
Authors: Adebola Adedugbe
Abstract:
The challenge of post-harvest losses poses serious threats to food security in Nigeria, and in the north-eastern part in particular, with the country losing about $9 billion annually due to post-harvest losses in the sector. Post-harvest loss (PHL) is the quantitative and qualitative loss of food in various post-harvest operations. In Nigeria, post-harvest losses have been a major challenge to food security and improved farmers' income. In 2022, the Nigerian government said over 30 percent of food produced by Nigerian farmers perishes post-harvest. For many in northeast Nigeria, agriculture is the predominant source of livelihood and income. The persistent communal conflicts, floods, and the decade-old Boko Haram attacks and insurgency in this region have disrupted farming activities drastically, with farmlands becoming insecure and inaccessible as communities are forced to abandon ancestral homes. The impact of climate change is also affecting agricultural and fishing activities, leading to shortages of food supplies, acute hunger, and loss of livelihoods. This has continued to impact negatively on the region's and the country's food production and availability, causing billions of US dollars in income to be lost annually in this sector. The root causes of post-harvest losses in crops, livestock, and fisheries include, among others, the lack of modern post-harvest equipment, chemicals, and technologies for combating losses. The 2019 Global Hunger Index showed Nigeria's case progressing from a 'serious' to an 'alarming' level. As part of measures to address the problem of post-harvest losses experienced by farmers, the federal government of Nigeria concessioned 17 silos with 6,000 metric tonnes of storage space to the private sector to give farmers access to storage facilities. This paper discusses the causes, effects, and solutions in handling post-harvest losses to optimize returns on food security in northeast Nigeria.
Keywords: farmers, food security, northeast Nigeria, postharvest loss
Procedia PDF Downloads 72
82 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data
Authors: Gayathri Nagarajan, L. D. Dhinesh Babu
Abstract:
Health care is one of the prominent industries generating voluminous data, creating the need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. In comparison with other applications, accuracy and fast processing are of higher importance for health care applications, as they are directly related to human life. Though many machine learning techniques and big data solutions are used for efficient processing and prediction on health care data, different techniques and frameworks have proved effective for different applications, largely depending on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with other traditional frameworks. Unlike other works that focus on a single technique, our work presents a comparison of six different machine learning techniques along with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform, specifically for health care big data, and ii) discuss the results from experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions.
The experimental results show that for the other machine learning techniques accuracy is largely dependent on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable accuracy without depending heavily on dataset characteristics.
Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform
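The framework above runs gradient boosted trees on Spark; as a language-agnostic illustration of the underlying principle only (not the Spark implementation), the sketch below implements gradient boosting with decision-stump base learners under squared loss on a toy 1-D dataset, where every round fits a stump to the residuals (the negative gradient) of the current ensemble. All data and parameters are made up.

```python
def fit_stump(xs, residuals):
    """Best single-split regression stump (threshold, left_value, right_value).
    Assumes xs is sorted ascending with distinct values."""
    best = None
    for i in range(1, len(xs)):
        thr = (xs[i - 1] + xs[i]) / 2.0
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lv) ** 2 for r in left)
               + sum((r - rv) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lv, rv)
    return best[1:]

def boost(xs, ys, rounds=10, lr=0.5):
    """Gradient boosting: each round fits a stump to the current residuals
    (the negative gradient of squared loss) and adds it with shrinkage lr."""
    pred = [sum(ys) / len(ys)] * len(ys)   # F0: constant model
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        thr, lv, rv = fit_stump(xs, residuals)
        pred = [p + lr * (lv if x <= thr else rv) for x, p in zip(xs, pred)]
    return pred

xs, ys = [1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 3.0, 3.0]
pred = boost(xs, ys, rounds=20, lr=0.5)
```

On Spark itself the equivalent component would be the MLlib gradient boosted trees classifier rather than hand-rolled stumps.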
Procedia PDF Downloads 241
81 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards protein-coding regions alone, so there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient, sequence alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity, and the average generalization performance was determined using a benchmark of multi-species organisms.
The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) also decreased, from 1.446 to 0.842 and then to 0.718, for the first, second, and third iterations, respectively. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC area of 0.97, indicating an improved predictive ability. The PNRI identified both protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed model efficiently identified the protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
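A minimal sketch of the core estimation loop described above: logistic regression trained by gradient ascent on the log-likelihood, followed by threshold-based classification. The toy data, learning rate, and fixed threshold are illustrative assumptions; the paper uses six sequence-derived features and dynamic thresholding.

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train_logistic(features, labels, lr=0.5, epochs=1000):
    """Fit a logistic-regression parameter vector by maximizing the
    log-likelihood with batch gradient ascent (bias term prepended)."""
    w = [0.0] * (len(features[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for x, y in zip(features, labels):
            xi = [1.0] + list(x)
            p = sigmoid(sum(wi * v for wi, v in zip(w, xi)))
            for j, v in enumerate(xi):
                grad[j] += (y - p) * v      # d(log-likelihood) / d w_j
        w = [wi + lr * g / len(labels) for wi, g in zip(w, grad)]
    return w

def classify(w, x, threshold=0.5):
    """Threshold the predicted probability (a fixed stand-in for the
    dynamic thresholding used in the paper)."""
    z = sum(wi * v for wi, v in zip(w, [1.0] + list(x)))
    return int(sigmoid(z) >= threshold)

# Toy separable data: one feature, positive class for larger values
data = [([0.0], 0), ([1.0], 0), ([3.0], 1), ([4.0], 1)]
w = train_logistic([x for x, _ in data], [y for _, y in data])
preds = [classify(w, x) for x, _ in data]
```

Sweeping the threshold over a validation set, rather than fixing it at 0.5, is what produces the ROC curve and the dynamically chosen operating point.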
Procedia PDF Downloads 68
80 YOLO-Based Object Detection for the Automatic Classification of Intestinal Organoids
Authors: Luana Conte, Giorgio De Nunzio, Giuseppe Raso, Donato Cascio
Abstract:
The intestinal epithelium serves as a pivotal model for studying stem cell biology and diseases such as colorectal cancer. Intestinal epithelial organoids, which replicate many in vivo features of the intestinal epithelium, are increasingly used as research models. However, manual classification of organoids is labor-intensive and prone to subjectivity, limiting scalability. In this study, we developed an automated object-detection algorithm to classify intestinal organoids in transmitted-light microscopy images. Our approach utilizes the YOLOv10 medium model (YOLOv10m), a state-of-the-art object-detection algorithm, to predict and classify objects within labeled bounding boxes. The model was fine-tuned on a publicly available dataset containing 840 manually annotated images with 23,066 total annotations, averaging 28.2 annotations per image (median: 21; range: 1-137). It was trained to identify four categories: cysts, early organoids, late organoids, and spheroids, using a 90:10 train-validation split over 150 epochs. Model performance was assessed using mean average precision (mAP), precision, and recall metrics. The mAP, a standard metric ranging from 0 to 1 (with 1 indicating perfect agreement with manual labeling), was calculated at a 50% overlap threshold (mAP@0.5). Optimal performance was achieved at epoch 80, with an mAP of 0.85, precision of 0.78, and recall of 0.80 on the validation dataset. Class-specific mAP values were highest for cysts (0.87), followed by late organoids (0.83), early organoids (0.76), and spheroids (0.68). Additionally, the model demonstrated the ability to measure organoid sizes and classify them with accuracy comparable to expert scientists, while operating significantly faster.
This automated pipeline represents a robust tool for large-scale, high-throughput analysis of intestinal organoids, paving the way for more efficient research in organoid biology and related fields.
Keywords: intestinal organoids, object detection, YOLOv10, transmitted-light microscopy
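The 50% overlap criterion behind mAP@0.5 reduces to an intersection-over-union (IoU) test per predicted box; a minimal sketch with illustrative boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred_box, gt_box, threshold=0.5):
    """A detection counts toward mAP@0.5 only if IoU >= 0.5 (and the class matches)."""
    return iou(pred_box, gt_box) >= threshold

# A predicted organoid box shifted 2 px right of its ground-truth box
match = is_true_positive((0, 0, 10, 10), (2, 0, 12, 10))
```

Average precision per class is then the area under the precision-recall curve built from these matched detections, and mAP is the mean over the four organoid classes.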
Procedia PDF Downloads 0
79 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices
Authors: Pratik Dhabal Deo, Manoj P.
Abstract:
With the increasing number of social media users, the amount of video content available has also significantly increased. The number of smartphone users is currently at its peak, and many increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of Video Quality Assessment (VQA), and metrics like VMAF and SSIM are said to be among the best performing, but the evaluation of these metrics is predominantly done on professionally shot video content using professional tools, lighting conditions, etc. No study has particularly pinpointed the performance of the metrics on content taken by users on very commonly available devices. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even if they contain content taken in poor lighting conditions using lower-end devices. These devices face many distortions due to various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective VQA metrics on content taken only from the most used devices and their performance on it, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken from the three most commonly used devices: an Android smartphone, an iOS smartphone, and a DSLR. To the videos taken on each of these devices, the six most common types of distortion that users face were applied, in addition to the already existing H.264 compression, based on four reference videos. Each of these six distortions has three levels of degradation. The five most popular VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric on the distortions were recorded.
It was found that blur is the artifact on which most of the metrics did not perform well. To understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was done using the HEVC codec, the successor to H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as the resolution increases, the performance of the metrics tends to become more accurate. The best performing metric among them is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264; when the compression applied is HEVC, SSIM and VMAF perform significantly better.
Keywords: distortion, metrics, performance, resolution, video quality assessment
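As an example of how a full-reference metric compares a distorted frame against its reference, here is a minimal PSNR sketch (PSNR serves only as a simple stand-in for the evaluated metrics; the frames below are made-up pixel lists, not dataset content):

```python
from math import log10

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size frames,
    given as flat lists of 8-bit pixel values."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float('inf')       # identical frames
    return 10.0 * log10(max_value ** 2 / mse)

ref = [100, 120, 140, 160]
noisy = [v + 5 for v in ref]      # uniform error of 5 per pixel -> MSE = 25
value = psnr(ref, noisy)
```

Metrics such as SSIM, VMAF, and VQM follow the same full-reference pattern (reference frame in, distorted frame in, score out) but model perceptual structure rather than raw pixel error.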
Procedia PDF Downloads 204
78 Variation among East Wollega Coffee (Coffea arabica L.) Landraces for Quality Attributes
Authors: Getachew Weldemichael, Sentayehu Alamerew, Leta Tulu, Gezahegn Berecha
Abstract:
Coffee quality improvement is becoming the focus of coffee research, as the world coffee consumption pattern has shifted towards high-quality coffee. However, there is limited information on the genetic variation of C. arabica for quality improvement in the potential specialty coffee growing areas of Ethiopia. Therefore, this experiment was conducted with the objectives of determining the magnitude of variation among 105 coffee accessions collected from east Wollega coffee growing areas and assessing correlations between the different coffee quality attributes. It was conducted in a randomized complete block design (RCBD) with three replications. Data on green bean physical characters (shape and make, bean color, and odor) and organoleptic cup quality traits (aromatic intensity, aromatic quality, acidity, astringency, bitterness, body, flavor, and overall standard of the liquor) were recorded. Analysis of variance, clustering, genetic divergence, principal component, and correlation analyses were performed using SAS software. The results revealed highly significant differences (P<0.01) among the accessions for all quality attributes except odor and bitterness. Among the tested accessions, EW104/09, EW101/09, EW58/09, EW77/09, EW35/09, EW71/09, EW68/09, EW96/09, EW83/09, and EW72/09 had the highest total coffee quality values (the sum of bean physical and cup quality attributes). These genotypes could serve as a source of genes for green bean physical characters and cup quality improvement in Arabica coffee. Furthermore, cluster analysis grouped the coffee accessions into five clusters with significant inter-cluster distances, implying that there is moderate diversity among the accessions and that crossing accessions from these divergent clusters would result in heterosis and recombinants in segregating generations.
The principal component analysis revealed that the first three principal components, with eigenvalues greater than unity, accounted for 83.1% of the total variability among the nine quality attributes considered, indicating that all quality attributes contribute to the grouping of the accessions into different clusters. Organoleptic cup quality attributes showed positive and significant correlations at both the genotypic and phenotypic levels, demonstrating the possibility of simultaneous improvement of these traits. Path coefficient analysis revealed that acidity, flavor, and body had high positive direct effects on overall cup quality, implying that these traits can be used as indirect selection criteria to improve overall coffee quality. It was therefore concluded that there is considerable variation among the accessions, which needs to be properly conserved for future improvement of coffee quality. However, the variability observed for the quality attributes should be further verified using biochemical and molecular analyses.
Keywords: accessions, Coffea arabica, cluster analysis, correlation, principal component
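The eigenvalue-greater-than-unity (Kaiser) rule used to retain components can be sketched as below. The data here are random placeholders, not the actual attribute scores of the 105 accessions, so the 83.1% figure will not be reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical scores: 105 accessions x 9 organoleptic/physical quality attributes.
X = rng.normal(size=(105, 9))
Xc = X - X.mean(axis=0)
Xs = Xc / Xc.std(axis=0)              # standardize each attribute

# Eigen-decomposition of the correlation matrix of the attributes.
corr = np.corrcoef(Xs, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues in descending order

keep = eigvals > 1.0                  # Kaiser criterion: eigenvalue > unity
explained = eigvals[keep].sum() / eigvals.sum()
print(f"components kept: {keep.sum()}, variance explained: {explained:.1%}")
```

With real attribute data, `explained` is the cumulative share of total variability reported for the retained components.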
Procedia PDF Downloads 16677 The Impact of Human Intervention on Net Primary Productivity for the South-Central Zone of Chile
Authors: Yannay Casas-Ledon, Cinthya A. Andrade, Camila E. Salazar, Mauricio Aguayo
Abstract:
The sustainable management of available natural resources is a crucial question for policy-makers, economists, and the research community. Among these resources, land is one of the most critical, being intensively appropriated by human activities that produce ecological stress and reduce ecosystem services. In this context, net primary production (NPP) has been considered a feasible proxy indicator for estimating the impact of human intervention on land-use intensity. Accordingly, the human appropriation of NPP (HANPP) was calculated for the south-central regions of Chile between 2007 and 2014. HANPP was defined as the difference between the potential NPP of the natural vegetation (NPP0, i.e., the vegetation that would exist without any human interference) and the NPP remaining in the field after harvest (NPPeco), expressed in gC/m² yr. The other NPP flows taken into account in the HANPP estimation were the harvested NPP (NPPh) and the losses of NPP through land conversion (NPPluc). ArcGIS 10.4 software was used to assess the spatial and temporal HANPP changes. HANPP as a percentage of NPP0 was estimated for each landcover type, with 2007 and 2014 as the reference years. The spatial results depicted a negative impact on land-use efficiency between 2007 and 2014, showing negative HANPP changes for the whole region. Harvest and biomass losses through land conversion are the leading causes of the loss of land-use efficiency. Furthermore, the study found higher HANPP in 2014 than in 2007, reaching 50% of NPP0 across all landcover classes. This performance was mainly related to the higher volume of biomass harvested for agriculture. In consequence, cropland showed the highest HANPP, followed by plantations. This behavior highlights the strong link between HANPP and the economic activities carried out in the region.
This finding constitutes the basis for a better understanding of the main driving forces influencing biomass productivity and a powerful metric for supporting the sustainable management of land use.
Keywords: human appropriation, land-use changes, land-use impact, net primary productivity
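The HANPP accounting defined above reduces to simple per-unit arithmetic over the NPP flows. A minimal sketch with invented flow values (the actual study derives these per landcover type from spatial data):

```python
import numpy as np

# Hypothetical per-cell NPP flows in gC/m^2/yr (illustrative values only).
npp0    = np.array([900.0, 850.0, 1000.0])  # potential NPP of natural vegetation
npp_h   = np.array([300.0,  50.0,  400.0])  # harvested biomass (NPPh)
npp_luc = np.array([100.0,  20.0,  150.0])  # losses through land conversion (NPPluc)
npp_eco = npp0 - npp_h - npp_luc            # NPP remaining in the field after harvest

# HANPP = NPP0 - NPPeco, i.e. the sum of harvest and land-conversion losses.
hanpp = npp0 - npp_eco
hanpp_pct = 100.0 * hanpp / npp0            # HANPP as % of NPP0, per landcover unit
```

Mapping `hanpp_pct` per landcover type for each reference year is what allows the 2007 vs. 2014 comparison reported in the study.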
Procedia PDF Downloads 13876 Dual Duality for Unifying Spacetime and Internal Symmetry
Authors: David C. Ni
Abstract:
Current efforts toward a Grand Unification Theory (GUT) can be classified into General Relativity, Quantum Mechanics, String Theory, and related formalisms. Geometric approaches extending General Relativity seek to establish global and local invariance embedded in metric formalisms, whereby additional dimensions are constructed to unify canonical formulations such as the Hamiltonian and Lagrangian ones. Approaches extending Quantum Mechanics adopt the symmetry principle to formulate algebra-group theories, which evolved from the Maxwell formulation to Yang-Mills non-abelian gauge formulations and thereafter yielded the Standard Model. This thread of effort has been constructing supersymmetry to map fermion to boson as well as gluon to graviton. String-theoretic efforts have evolved to the so-called gauge/gravity correspondence, particularly the equivalence between type IIB string theory compactified on AdS5 × S5 and N = 4 supersymmetric Yang-Mills theory. Other efforts adopt cross-breeding approaches among the above three formalisms as well as competing formalisms; nevertheless, the related symmetries, dualities, and correspondences are outlined as principles and techniques, even though these terminologies are defined diversely and often generically coined as duality. In this paper, we first classify these dualities from the perspective of physics. We then examine the hierarchical structure of the classes from a mathematical perspective, referring to the Coleman-Mandula theorem, Hidden Local Symmetry, groupoid categorization, and others.
Based on the Fundamental Theorem of Algebra, we argue that, rather than imposing effective constraints on different algebras and their extensions, which are mainly constructed by self-breeding or self-mapping methodologies to sustain invariance, a new duality should be added. We propose a momentum-angular momentum duality, at the level of electromagnetic duality, for rationalizing the duality algebras, and then characterize this duality numerically in an attempt to address some unsolved problems in physics and astrophysics.
Keywords: general relativity, quantum mechanics, string theory, duality, symmetry, correspondence, algebra, momentum-angular-momentum
Procedia PDF Downloads 39875 Leveraging Natural Language Processing for Legal Artificial Intelligence: A Longformer Approach for Taiwanese Legal Cases
Abstract:
Legal artificial intelligence (LegalAI) has seen increasing application within legal systems, propelled by advancements in natural language processing (NLP). Compared with general documents, legal case documents are typically long text sequences with intrinsic logical structures. Most existing language models have difficulty capturing the long-distance dependencies between these structures. Another unique challenge is that, while the Judiciary of Taiwan has released legal judgments from various levels of courts over the years, the lack of labeled datasets remains a significant obstacle. This deficiency makes it difficult to train models with strong generalization capabilities, as well as to accurately evaluate model performance. To date, models in Taiwan have yet to be specifically trained on judgment data. Given these challenges, this research proposes a Longformer-based pre-trained language model explicitly devised for retrieving similar judgments among Taiwanese legal documents. The model is trained on a self-constructed dataset, which this research independently labeled for judgment similarity, thereby addressing the void left by the lack of an existing labeled dataset for Taiwanese judgments. This research adopts strategies such as early stopping and gradient clipping to prevent overfitting and manage gradient explosion, respectively, thereby enhancing the model's performance. The model is evaluated using both this dataset and the Average Entropy of Offense-charged Clustering (AEOC) metric, which exploits the notion that similar case scenarios occur within the same type of legal case. Our experimental results illustrate the model's significant advances in handling similarity comparisons within extensive legal judgments.
By enabling more efficient retrieval and analysis of legal case documents, our model holds the potential to facilitate legal research, aid legal decision-making, and contribute to the further development of LegalAI in Taiwan.
Keywords: legal artificial intelligence, computation and language, language model, Taiwanese legal cases
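The abstract does not spell out the AEOC formula. One plausible reading, assuming the metric averages the Shannon entropy of the offense-charge labels within each retrieved cluster (lower average entropy meaning purer, more type-consistent clusters), can be sketched as:

```python
from collections import Counter
from math import log

def entropy(labels):
    """Shannon entropy (nats) of the offense-charge distribution in one cluster."""
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def aeoc(clusters):
    """Average entropy of offense charges across clusters; lower = purer retrieval."""
    return sum(entropy(c) for c in clusters) / len(clusters)

# Hypothetical clusters of retrieved judgments, labeled by the offense charged.
pure  = [["theft", "theft"], ["fraud", "fraud"]]
mixed = [["theft", "fraud"], ["fraud", "arson"]]
print(aeoc(pure), aeoc(mixed))   # purer clustering yields the lower value
```

This is a sketch of the general idea only; the exact clustering granularity and label taxonomy used in the study are not specified in the abstract.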
Procedia PDF Downloads 7374 Towards the Need of Resilient Design and Its Assessment in South China
Authors: Alan Lai, Wilson Yik
Abstract:
With rapid urbanization, there has been a dramatic increase in the urban population of Asia, and over half of Asia's population will live in urban regions in the near future. Facing increasing exposure to climate-related stresses and shocks, most Asian cities will very likely experience more frequent heat waves and flooding with rising sea levels; coastal cities in particular will grapple with intense typhoons and storm surges. These climate changes have severe impacts on urban infrastructure and population, for example on human health and wellbeing, with high risks of dengue fever, malaria, and diarrheal disease. With the increasing prominence of adaptation to climate change, corresponding policies have changed. Smaller cities have greater potential to integrate the concept of resilience into their infrastructure while keeping pace with their rapid population growth. It is therefore important to explore the potential of Asian cities to adapt to climate change and the opportunities for building climate resilience into urban planning and building design. Furthermore, previous studies have mainly explored the potential of resilience at the macro level of urban planning rather than at the micro level of the individual building. The resilience of the individual building as a research field has not yet been much explored. Nonetheless, recent studies define a resilient individual building as one that is able to respond to physical damage and recover from it quickly and cost-effectively while maintaining its primary functions. There is also a need to develop an assessment tool to evaluate resilience at the building scale, which remains largely uninvestigated even though resilience should be regarded as a basic function of a building.
Due to the lack of literature reporting a metric for assessing building resilience together with sustainability, the research is designed as a case study to provide insight into the issue. The aim of this research project is to encourage and assist in developing neighborhood climate-resilience design strategies for Hong Kong, so as to bridge the gap between different scales and between theory and practice.
Keywords: resilient cities, building resilience, resilient buildings and infrastructure, climate resilience, hot and humid southeast area, high-density cities
Procedia PDF Downloads 16573 Fields of Power, Visual Culture, and the Artistic Practice of Two 'Unseen' Women of Central Brazil
Authors: Carolina Brandão Piva
Abstract:
In our visual culture, images play a newly significant role at the basis of a complex dialogue between imagination, creativity, and social practice. Insofar as imagination has broken out of the 'special expressive space of art' to become part of the quotidian mental work of ordinary people, it is pertinent to recognize that visual representation can no longer be assumed to occupy a domain detached from everyday life or exclusively 'centered' within the limited frame of 'art history.' The approach of Visual Culture as a field of study is, in this sense, indispensable to comprehending that not only 'the image' but also 'the imagined' and 'the imaginary' are produced in the plurality of social interactions; crucially, this assertion directs us to something new in contemporary cultural processes, namely that both imagination and image production constitute a social practice. This paper starts from this approach and seeks to examine the artistic practice of two women from the State of Goiás, Brazil, who are ordinary citizens with their daily activities and narratives but are also dedicated to visuality production. With no formal training from art schools, prestigious or otherwise, Maria Aparecida de Souza Pires deploys the 'waste disposal' of daily life, from car tires to old work clothes, as a springboard for art; also adept at sourcing raw materials from her surroundings, she manipulates rough-hewn wood, tree trunks, plant life, and various other pieces she collects from nature, giving them new meaning and possibility. Hilda Freire works with sculptures in clay, using different scales and styles; her art focuses on representations of women and pays homage to unprivileged groups such as practitioners of African-Brazilian religions, blue-collar workers, poor live-in housekeepers, and so forth.
Although they have never been acknowledged by any mainstream art institution in Brazil, whose 'criterion of value' still favors formally trained artists, Maria Aparecida de Souza Pires and Hilda Freire have produced visualities that instigate 'new ways of seeing,' meriting cultural significance in many ways. Their artworks neither descend from a 'traditional' medium nor depend on the 'canonical viewing settings' of visual representation; rather, they consist in producing relationships with the world which result not in 'seeing more' but in seeing 'at least differently.' From this perspective, the paper finally demonstrates that grouping this kind of artistic production under the label of 'mere craft' has much more to do with who is privileged within the fields of power in the art system, who we see and who we do not see, and whose imagination of what is fed by which visual images in Brazilian contemporary society.
Keywords: visual culture, artistic practice, women's art in the Brazilian State of Goiás, Maria Aparecida de Souza Pires, Hilda Freire
Procedia PDF Downloads 15272 Preliminary Experience in Multiple Green Health Hospital Construction
Authors: Ming-Jyh Chen, Wen-Ming Huang, Yi-Chu Liu, Li-Hui Yang
Abstract:
Introduction: Social responsibility is the key to sustainable organizational development. Under the Green Health Hospital Declaration signed by our superintendent, we have launched comprehensive energy conservation management in medical services, the community, and the staff's daily life. To execute environment-friendly promotion with robust strategies, we are building a low-carbon medical system and community by promoting smart green public construction as well as intensifying energy conservation education and communication. Purpose/Methods: With the support of the board and the superintendent, we established an energy management team, commencing with an environment-friendly system, management, education, and the ISO 50001 energy management system; we have improved energy performance and energy efficiency and continue to do so. Results: In 2021, we achieved multiple goals. The energy management system efficiently controls diesel, natural gas, and electricity usage; about 5% of consumption was saved between 2018 and 2021. The hospital develops intelligent services and promotes various paperless electronic operations to provide people with a vibrant and environmentally friendly lifestyle. The goal is to save 68.6% on printing and photocopying by reducing paper use by 35.15 million sheets yearly. We strengthen the concept of environmental protection classification among colleagues. In the past two years, the amount of resource recycling has reached more than 650 tons, and the resource recycling rate has reached 70%. The annual growth of waste recycling is about 28 metric tons.
Conclusions: To build a green medical system with "high efficacy, high value, low carbon, low reliance," energy stewardship, economic prosperity, and social responsibility are our guiding principles when formulating energy conservation management strategies: converting limited resources to efficient usage, developing clean energy, and continuing toward sustainable energy.
Keywords: energy efficiency, environmental education, green hospital, sustainable development
Procedia PDF Downloads 8071 Enhanced Tensor Tomographic Reconstruction: Integrating Absorption, Refraction and Temporal Effects
Authors: Lukas Vierus, Thomas Schuster
Abstract:
A general framework is examined for dynamic tensor field tomography within an inhomogeneous medium characterized by refraction and absorption, treated as an inverse source problem concerning the associated transport equation. Guided by Fermat’s principle, the Riemannian metric within the specified domain is determined by the medium's refractive index. While considerable literature exists on the inverse problem of reconstructing a tensor field from its longitudinal ray transform within a static Euclidean environment, limited inversion formulas and algorithms are available for general Riemannian metrics and time-varying tensor fields. It is established that tensor field tomography, akin to an inverse source problem for a transport equation, persists in dynamic scenarios. Framing dynamic tensor tomography as an inverse source problem embodies a comprehensive perspective within this domain. Ensuring well-defined forward mappings necessitates establishing existence and uniqueness for the underlying transport equations. However, the bilinear forms of the associated weak formulations fail to meet the coercivity condition. Consequently, recourse to viscosity solutions is taken, demonstrating their unique existence within suitable Sobolev spaces (in the static case) and Sobolev-Bochner spaces (in the dynamic case), under a specific assumption restricting variations in the refractive index. Notably, the adjoint problem can also be reformulated as a transport equation, with analogous results regarding uniqueness. Analytical solutions are expressed as integrals over geodesics, facilitating more efficient evaluation of forward and adjoint operators compared to solving partial differential equations. 
Numerical experiments are conducted using a Nesterov-accelerated Landweber method, encompassing various fields, absorption coefficients, and refractive indices, thereby illustrating the enhanced reconstruction achieved through this holistic modeling approach.
Keywords: attenuated refractive dynamic ray transform of tensor fields, geodesics, transport equation, viscosity solutions
Procedia PDF Downloads 5170 A Topology-Based Dynamic Repair Strategy for Enhancing Urban Road Network Resilience under Flooding
Authors: Xuhui Lin, Qiuchen Lu, Yi An, Tao Yang
Abstract:
As global climate change intensifies, extreme weather events such as floods increasingly threaten urban infrastructure, making the vulnerability of urban road networks a pressing issue. Existing static repair strategies fail to adapt to the rapid changes in road network conditions during flood events, leading to inefficient resource allocation and suboptimal recovery. The main research gap lies in the lack of repair strategies that consider both the dynamic characteristics of networks and the progression of flood propagation. This paper proposes a topology-based dynamic repair strategy that adjusts repair priorities based on real-time changes in flood propagation and traffic demand. Specifically, a novel method is developed to assess and enhance the resilience of urban road networks during flood events. The method combines road network topological analysis, flood propagation modelling, and traffic flow simulation, introducing a local importance metric to dynamically evaluate the significance of road segments across different spatial and temporal scales. Using London's road network and rainfall data as a case study, the effectiveness of this dynamic strategy is compared to traditional and Transport for London (TFL) strategies. The most significant highlight of the research is that the dynamic strategy substantially reduced the number of stranded vehicles across different traffic demand periods, improving efficiency by up to 35.2%. The advantage of this method lies in its ability to adapt in real-time to changes in network conditions, enabling more precise resource allocation and more efficient repair processes. 
This dynamic strategy offers significant value to urban planners, traffic management departments, and emergency response teams, helping them better respond to extreme weather events like floods, enhance overall urban resilience, and reduce economic losses and social impacts.
Keywords: urban resilience, road networks, flood response, dynamic repair strategy, topological analysis
Procedia PDF Downloads 3769 Performance Evaluation of Routing Protocols in Vehicular Adhoc Networks
Authors: Salman Naseer, Usman Zafar, Iqra Zafar
Abstract:
This study explores the application of Vehicular Adhoc Networks (VANETs), a domain of Mobile Adhoc Networks (MANETs), in rural and urban scenarios. A VANET provides wireless communication between vehicles and also with roadside units. The Federal Communications Commission of the United States of America has allocated 75 MHz of spectrum in the 5.9 GHz band for dedicated short-range communications (DSRC), specifically designed to support road safety and entertainment/information applications. Several vehicular projects, viz. California PATH, the Car 2 Car Communication Consortium, ETSI, and the IEEE 1609 working group, have already been conducted to improve overall road safety and traffic management. After a critical literature review, the routing protocols were selected and their performance considered in urban and rural scenarios. Several routing protocols for VANETs were applied to carry out the current research. Their evaluation was conducted through simulation using performance metrics, i.e., throughput and packet drop. Excel and the Google graph API were used to plot the graphs from the simulation results in order to compare the selected routing protocols with each other. In addition, the output from each scenario was summed to clearly present the divergence in results. The findings of the current study show that DSR gives enhanced performance, with low packet drop and high throughput, as compared to AODV and DSDV in congested urban areas and in rural environments. On the other hand, in low-density areas, AODV gives better results than DSR. The worth of the current study may be judged by the fact that the information exchanged between vehicles is useful for comfort, safety, and entertainment.
Furthermore, communication system performance depends on how routing is done in the network and, moreover, on the routing protocols implemented in the network. The above results lead to policy implications and develop our understanding of the broader spectrum of VANETs.
Keywords: AODV, DSDV, DSR, adhoc network
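The two performance metrics used in the study, throughput and packet drop, reduce to simple counting over a simulation trace. A minimal sketch with assumed packet counts, packet size, and simulation duration (all illustrative, not values from the study):

```python
# Hypothetical summary of one simulated VANET scenario.
sent     = 1000          # packets transmitted by source vehicles
received = 874           # packets delivered at destinations
pkt_bits = 512 * 8       # assumed packet size: 512 bytes
sim_time = 60.0          # simulation duration in seconds

throughput = received * pkt_bits / sim_time        # bits per second
drop_ratio = (sent - received) / sent              # fraction of packets lost
print(f"throughput = {throughput:.1f} bps, drop = {drop_ratio:.1%}")
```

Computing these per protocol (AODV, DSDV, DSR) and per scenario, then summing across scenarios, gives the comparison tables described above.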
Procedia PDF Downloads 28768 TEA and Its Working Methodology in the Biomass Estimation of Poplar Species
Authors: Pratima Poudel, Austin Himes, Heidi Renninger, Eric McConnel
Abstract:
Populus spp. (poplar) are among the fastest-growing trees in North America, making them ideal for a range of applications, as they can achieve high yields on short rotations and regenerate by coppice. Furthermore, poplar undergoes biochemical conversion to fuels without complexity, making it one of the most promising purpose-grown woody perennial energy sources. Employing wood-based biomass for bioenergy offers numerous benefits, including reduced greenhouse gas (GHG) emissions compared to non-renewable traditional fuels, the preservation of robust forest ecosystems, and economic prospects for rural communities. To gain a better understanding of the potential use of poplar as a biomass feedstock for biofuel in the southeastern US, we conducted a techno-economic assessment (TEA). This assessment is an analytical approach that integrates the technical and economic factors of a production system to evaluate its economic viability. The TEA specifically focused on a short-rotation coppice system employing a single-pass cut-and-chip harvesting method for poplar. It encompassed all the costs associated with establishing dedicated poplar plantations, including land rent, site preparation, planting, fertilizers, and herbicides. Additionally, we performed a sensitivity analysis to evaluate how different costs affect the economic performance of the poplar cropping system. This analysis aimed to determine the minimum average delivered selling price for one metric ton of biomass necessary to achieve a desired rate of return over the cropping period. To inform the TEA, data on establishment, crop care activities, and crop yields were derived from a field study conducted at the Mississippi Agricultural and Forestry Experiment Station's Bearden Dairy Research Center in Oktibbeha County and the Pontotoc Ridge-Flatwood Branch Experiment Station in Pontotoc County.
Keywords: biomass, Populus species, sensitivity analysis, technoeconomic analysis
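The breakeven logic behind a minimum delivered selling price can be sketched as finding the price per metric ton at which the net present value (NPV) of the cropping period is zero at the target rate of return. All costs, yields, and the 8% rate below are illustrative assumptions, not figures from the study:

```python
def npv(price, costs_by_year, yield_by_year, rate):
    """Net present value of the plantation for a given biomass price ($/Mg)."""
    return sum((price * y - c) / (1 + rate) ** t
               for t, (c, y) in enumerate(zip(costs_by_year, yield_by_year), start=1))

def breakeven_price(costs_by_year, yield_by_year, rate, lo=0.0, hi=500.0):
    """Bisection for the minimum average delivered selling price giving NPV = 0."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, costs_by_year, yield_by_year, rate) < 0:
            lo = mid            # price too low: plantation loses money
        else:
            hi = mid            # price high enough: tighten from above
    return (lo + hi) / 2

costs  = [900, 150, 150, 400]   # $/ha: establishment, crop care, harvest years
yields = [0, 0, 0, 30]          # Mg/ha: single-pass cut-and-chip at year 4
price = breakeven_price(costs, yields, rate=0.08)
```

The sensitivity analysis then amounts to re-running `breakeven_price` while varying one cost input at a time.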
Procedia PDF Downloads 8367 Improve Divers Tracking and Classification in Sonar Images Using Robust Diver Wake Detection Algorithm
Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy
Abstract:
Harbor protection systems are of great importance, and the need for automatic protection systems has increased in recent years. Diver detection active sonar has great significance: it is used to detect underwater threats such as divers and autonomous underwater vehicles. To detect such threats automatically, the sonar image is processed by algorithms that detect, track, and classify underwater objects. In this work, a diver tracking and classification algorithm is improved by proposing a robust wake detection method. To detect objects, the sonar image is normalized and then segmented based on a fixed threshold. Next, the centroids of the segments are found and clustered based on a distance metric. A linear Kalman filter is then applied to track the objects. To reduce the effect of noise and the creation of false tracks, the Kalman tracker is fine-tuned based on our active sonar specifications. After the tracks are initialized and updated, they are subjected to a filtering stage to eliminate noisy and unstable tracks, as well as objects with speeds outside the diver speed range, such as buoys and fast boats. The resulting tracks are then subjected to a classification stage to decide the type of object being tracked; here, the task is to decide whether the tracked object is an open-circuit or a closed-circuit diver. At the classification stage, a small area around the object is extracted, a novel wake detection method is applied, and the morphological features of the object together with its wake are extracted. We used a support vector machine to find the best classifier. The training and test sonar images were collected by ARMELSAN Defense Technologies using the portable diver detection sonar ARAS-2023. After applying the algorithm to the test sonar data, we obtain fine and stable tracks of the divers.
The total classification accuracy achieved for the diver type is 97%.
Keywords: harbor protection, diver detection, active sonar, wake detection, diver classification
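The linear Kalman tracking step described above can be sketched with a constant-velocity model over the segment centroids. The matrices, noise levels, and measurements below are illustrative, not the tuned values used with the ARAS-2023 sonar:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],    # state transition; state is [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],     # only the centroid position is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)            # process noise (tuning knob against false tracks)
R = 0.5 * np.eye(2)             # measurement noise

x = np.zeros(4)                 # initial state
P = np.eye(4)                   # initial covariance

for z in [np.array([1.0, 1.0]), np.array([2.1, 1.9]), np.array([3.0, 3.1])]:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measured centroid z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
```

The estimated velocity components (`x[2]`, `x[3]`) are what the subsequent filtering stage checks against the diver speed range to reject buoys and fast boats.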
Procedia PDF Downloads 23866 A Geographical Information System Supported Method for Determining Urban Transformation Areas in the Scope of Disaster Risks in Kocaeli
Authors: Tayfun Salihoğlu
Abstract:
Following Law No. 6306 on the Transformation of Disaster Risk Areas, urban transformation in Turkey found its legal basis. In best practices all over the world, urban transformation has been shaped as part of comprehensive social programs through the discourses of renewing the economically, socially, and physically degraded parts of the city, producing spaces resistant to earthquakes and other possible disasters, and creating a livable environment. In Turkish practice, a contradictory process is observed. This study aims to develop a method for better understanding urban space in terms of disaster risks, in order to constitute a basis for decisions in the Kocaeli Urban Transformation Master Plan being prepared by the Kocaeli Metropolitan Municipality. The spatial unit used in the study is a 50x50 meter grid. To reflect the multidimensionality of urban transformation, three basic components with spatial data available in Kocaeli were identified: 'Problems in Built-up Areas', 'Disaster Risks Arising from Geological Conditions of the Ground and Problems of Buildings', and 'Inadequacy of Urban Services'. Each component was weighted and scored for each grid. To delimit urban transformation zones, Optimized Outlier Analysis (Local Moran's I) in ArcGIS 10.6.1 was conducted, taking the weighted total score of each grid as the input feature, to test the type of distribution (clustered or scattered) and its significance. This analysis found that the weighted total scores did not cluster significantly at all grids of the urban space. The grids at which the input feature clustered significantly were exported as a new database for use in further mappings.
The Total Score Map reflects the significant clusters in terms of the weighted total scores of 'Problems in Built-up Areas', 'Disaster Risks Arising from Geological Conditions of the Ground and Problems of Buildings', and 'Inadequacy of Urban Services'. The resulting grids with the highest scores are the most likely candidates for urban transformation in this citywide study. To categorize urban space in terms of urban transformation, Grouping Analysis in ArcGIS 10.6.1 was conducted on the data containing each component's scores in the significantly clustered grids. Based on the pseudo-F statistics and box plots, the six groups with the highest F statistics were extracted. The mapping of these groups shows that they can be interpreted meaningfully in relation to the urban space. The method presented in this study can be extended as more spatial data become available. By integrating other data obtained during the planning process, this method can contribute to the research and decision-making processes of urban transformation master plans on a more consistent basis.
Keywords: urban transformation, GIS, disaster risk assessment, Kocaeli
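The clustering test behind the delimitation step can be sketched as a plain (unoptimized) Local Moran's I on the weighted total scores of a toy grid with rook-contiguity neighbors; ArcGIS's Optimized Outlier Analysis adds automatic parameter tuning and false-discovery-rate correction on top of this statistic, and the scores below are invented:

```python
import numpy as np

scores = np.array([[9.0, 8.0, 1.0],
                   [8.0, 7.0, 1.0],
                   [1.0, 1.0, 1.0]])          # weighted total score per grid cell
z = (scores - scores.mean()) / scores.std()   # standardized scores

rows, cols = scores.shape
local_i = np.zeros_like(z)
for r in range(rows):
    for c in range(cols):
        # rook-contiguity neighbors, clipped at the grid boundary
        nbrs = [z[rr, cc] for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                if 0 <= rr < rows and 0 <= cc < cols]
        local_i[r, c] = z[r, c] * np.mean(nbrs)   # row-standardized weights

# Positive values flag grids sitting in clusters of similarly high or low scores.
```

Only cells whose statistic is both positive and significant (under a permutation test, omitted here) would be exported for further mapping.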
Procedia PDF Downloads 12065 Impact of Civil Engineering and Economic Growth in the Sustainability of the Environment: Case of Albania
Authors: Rigers Dodaj
Abstract:
Nowadays, the environment is a critical concern for civil engineers, human activity, construction projects, economic growth, and national development as a whole. As Albania's economy develops, people's living standards are rising, and so are the requirements for the living environment. Under these circumstances, environmental protection and sustainability are the critical issues. Rising industrialization, urbanization, and energy demand affect the environment through the emission of carbon dioxide (CO2), a significant parameter known to directly impact air pollution. Consequently, many governments and international organizations have adopted policies and regulations to address environmental degradation in the pursuit of economic development; in Albania, for instance, CO2 emissions measured in metric tons per capita have increased by 23% in the last 20 years. This paper analyzes the importance of civil engineering and economic growth for the sustainability of the environment, focusing on CO2 emissions. The analyzed data are annual time series for 2001 to 2020, based on official publications of the World Bank. A vector error correction model and a time-series forecasting model are used to estimate the parameters and the long-run equilibrium. The research in this paper adds a new perspective to the evaluation of a sustainable environment in the context of carbon emission reduction. It also provides reference and technical support for the government toward green and sustainable environmental policies. In the context of low-carbon development, effectively improving carbon emission efficiency is an inevitable requirement for achieving sustainable economic growth and environmental protection.
The study also reveals that civil engineering development projects greatly impact the environment in the long run, especially in areas of flooding, noise pollution, water pollution, erosion, ecological disorder, natural hazards, etc. The potential for reducing industrial carbon emissions in recent years indicates that reduction is becoming more difficult; it requires a different economic growth policy and more civil engineering development, improving the level of industrialization and promoting technological innovation in industrial low-carbonization. Keywords: CO₂ emission, civil engineering, economic growth, environmental sustainability
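The error-correction idea behind the VECM approach described above can be sketched in a simplified, two-step (Engle-Granger style) form. This is a hedged sketch, not the study's model: the gdp/co2 series below are synthetic stand-ins for the World Bank data, and the single-equation setup is a simplification of a full VECM.

```python
# Simplified, hypothetical sketch of the error-correction idea behind a VECM,
# in two Engle-Granger style steps. The gdp/co2 series are synthetic, not the
# World Bank data used in the study.
import random
from statistics import mean

random.seed(0)
n = 20  # annual observations, mirroring a 2001-2020 sample
gdp, level = [], 0.0
for _ in range(n):
    level += random.gauss(0.5, 0.02)        # steadily growing driver series
    gdp.append(level)
co2 = [0.8 * g + random.gauss(0.0, 0.05) for g in gdp]  # cointegrated response

def ols_slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = mean(x), mean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Step 1: long-run equilibrium co2 = a + b * gdp.
b = ols_slope(gdp, co2)
a = mean(co2) - b * mean(gdp)
ect = [c - (a + b * g) for c, g in zip(co2, gdp)]  # error-correction term

# Step 2: speed of adjustment -- regress the change in co2 on the lagged
# equilibrium error. A negative alpha means deviations from the long-run
# path are corrected over time.
d_co2 = [co2[t] - co2[t - 1] for t in range(1, n)]
alpha = ols_slope(ect[:-1], d_co2)

print(round(b, 2))   # recovers a value close to the true coefficient 0.8
print(alpha < 0)
```

The negative adjustment coefficient is what "long-run equilibrium" means operationally: short-run deviations of emissions from the level implied by growth are gradually corrected.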
Procedia PDF Downloads 85
64 A Method To Assess Collaboration Using Perception of Risk from the Architectural Engineering Construction Industry
Authors: Sujesh F. Sujan, Steve W. Jones, Arto Kiviniemi
Abstract:
The use of Building Information Modelling (BIM) in the Architectural-Engineering-Construction (AEC) industry is a form of systemic innovation. Unlike incremental innovation (such as the technological development of CAD from hand-based drawings to 2D electronically printed drawings), any form of systemic innovation in Project-Based Inter-Organisational Networks requires complete collaboration and yields numerous benefits if adopted and utilised properly. Proper use of BIM involves people collaborating with the use of interoperable BIM-compliant tools. The AEC industry globally has been known for its adversarial and fragmented nature, where firms take advantage of one another to increase their own profitability. Given the industry's nature, getting people to collaborate by unifying their goals is critical to successful BIM adoption. However, this form of innovation is often forced artificially into old ways of working that do not suit collaboration. This may be one of the reasons for its low global use even though the technology was developed more than 20 years ago. Therefore, there is a need to develop a metric/method that supports and allows industry players to gain confidence in their investment in BIM software and workflow methods. This paper starts by defining systemic risk as a risk that affects all the project participants at a given stage of a project and then defines categories of systemic risks. The categories are generalised so that the method can be applied to any industry: the categories remain the same, while the example of the risk depends on the industry in which the study is conducted. The proposed method uses individual perception of an example of systemic risk as a key parameter. The significance of this study lies in relating the variance of individual perceptions of systemic risk to how well the team is collaborating.
The method rests on the claim that a more unified range of individual perceptions indicates a higher probability that the team is collaborating well. Since contracts and procurement determine how a project team operates, the method could also overcome the methodological barrier of highly subjective findings that case studies impose, which has limited the possibility of generalising between global industries. Since human nature applies in all industries, the authors' intuition is that perception can be a valuable parameter for studying collaboration, which is especially essential in projects that utilise systemic innovation such as BIM. Keywords: building information modelling, perception of risk, systemic innovation, team collaboration
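The core claim above, that a narrower spread of individual risk perceptions points to better collaboration, can be illustrated with a minimal sketch. The 1-10 perception scale and all scores below are invented for illustration; they are not the paper's instrument or data.

```python
# Minimal sketch of the paper's key idea, assuming a hypothetical 1-10
# perception scale: a narrower spread of individual risk perceptions is
# taken as a sign of a better-collaborating team. All scores are invented.
from statistics import pstdev

def perception_spread(scores):
    """Population standard deviation of one team's perception scores."""
    return pstdev(scores)

# Two hypothetical project teams rating the same systemic risk.
aligned_team = [7, 7, 8, 7, 8, 7]      # unified perception of the risk
fragmented_team = [2, 9, 5, 10, 3, 6]  # widely varying perception

print(perception_spread(aligned_team) < perception_spread(fragmented_team))  # True
```

The comparison, not the absolute numbers, carries the meaning: the aligned team's low variance is read as evidence of shared understanding.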
Procedia PDF Downloads 186
63 Role of Internal and External Factors in Preventing Risky Sexual Behavior, Drug and Alcohol Abuse
Authors: Veronika Sharok
Abstract:
Research on the psychological determinants of risky behaviors is relevant because of the high prevalence of such behaviors, particularly among youth. Risky sexual behavior, including unprotected and casual sex and frequent change of sexual partners, together with drug and alcohol use, leads to negative social consequences and contributes to the spread of HIV infection and other sexually transmitted diseases. Data were obtained from 302 respondents aged 15-35, who were divided into 3 empirical groups: persons prone to risky sexual behavior, drug users, and alcohol users; and 3 control groups: individuals who are not prone to risky sexual behavior, persons who do not use drugs, and respondents who do not use alcohol. For processing, we used a qualitative method for nominative data (the chi-squared test) and quantitative methods for metric data (Student's t-test, Fisher's F-test, Pearson's r correlation test). Statistical processing was performed using Statistica 6.0 software. The study identifies two groups of factors that prevent risky behaviors. Internal factors include moral and value attitudes; the significance of existential values: love, life, self-actualization and the search for the meaning of life; understanding independence as responsibility for one's freedom and the ability to become attached to someone or something up to the point when the relationship starts restricting that freedom and becomes vital; awareness of risky behaviors as dangerous for the person and for others; and self-acknowledgement. External factors (which prevent risky behaviors in the absence of the internal ones) include the absence of risky behaviors among friends and relatives; socio-demographic characteristics (middle class, marital status); awareness of the negative consequences of risky behaviors; and inaccessibility of psychoactive substances. These factors are common to proneness to each type of risky behavior, because such proneness is usually caused by the same reasons.
It should be noted that prevention of risky behavior based only on the elimination of external factors is not as effective as it could be if more attention were paid to internal factors. The results obtained in the study can be used to develop training programs and activities for the prevention of risky behaviors, drawing on the values that prevent such behaviors and promoting a healthy lifestyle. Keywords: existential values, prevention, psychological features, risky behavior
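As a hedged illustration of the qualitative analysis named in the abstract, the following computes a Pearson chi-squared statistic for a hypothetical 2x2 table (say, risky behavior among friends versus the respondent's own behavior). The counts are invented, not the study's data, which were processed in Statistica 6.0.

```python
# Hedged sketch of the chi-squared test named in the abstract, applied to a
# hypothetical 2x2 table (risky behavior among friends vs. the respondent's
# own behavior). Counts are invented; the study itself used Statistica 6.0.

def chi_squared_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, observed_row in enumerate(table):
        for j, observed in enumerate(observed_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: friends risky / friends not risky; columns: respondent risky / not.
stat = chi_squared_2x2([(60, 40), (30, 70)])
print(stat > 3.84)  # True: exceeds the 0.05 critical value for 1 df
```

A statistic above the critical value would support an association between the external factor (friends' behavior) and the respondent's own proneness.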
Procedia PDF Downloads 256
62 Analytical Performance of Cobas C 8000 Analyzer Based on Sigma Metrics
Authors: Sairi Satari
Abstract:
Introduction: Six-sigma is a metric that quantifies the performance of processes as a rate of defects per million opportunities. Sigma methodology can be applied in the chemical pathology laboratory to evaluate process performance, providing evidence for process improvement in the quality assurance program. In the laboratory, these methods have been used to improve the timeliness of troubleshooting, reduce the cost and frequency of quality control, and minimize pre- and post-analytical errors. Aim: The aim of this study is to evaluate the sigma values of the Cobas 8000 analyzer based on the minimum requirement of the specification. Methodology: Twenty-one analytes were chosen in this study: alanine aminotransferase (ALT), albumin, alkaline phosphatase (ALP), amylase, aspartate transaminase (AST), total bilirubin, calcium, chloride, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, lactate dehydrogenase (LDH), magnesium, potassium, protein, sodium, triglyceride, uric acid and urea. Total allowable error was obtained from the Clinical Laboratory Improvement Amendments (CLIA). Bias was calculated from the end-of-cycle reports of the Royal College of Pathologists of Australasia (RCPA) cycle from July to December 2016, and the coefficient of variation (CV) from six months of internal quality control (IQC). Sigma was calculated using the formula: Sigma = (Total Error - Bias) / CV. Analytical performance was evaluated on the sigma scale: sigma > 6 is world class, sigma > 5 is excellent, sigma > 4 is good, sigma > 3 is satisfactory, and sigma < 3 is poor performance. Results: Based on the calculation, 76% of the analytes (16 of 21) are world class (ALT, albumin, ALP, amylase, AST, total bilirubin, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, LDH, magnesium, potassium, triglyceride and uric acid), 14% (calcium, protein and urea) are excellent, and 10% (chloride and sodium) require more frequent IQC to be performed per day.
Conclusion: Based on this study, we found that IQC should be performed more frequently only for chloride and sodium to ensure accurate and reliable analysis for patient management. Keywords: sigma metrics, analytical performance, total error, bias
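The sigma calculation described in the methodology can be sketched directly from the stated formula. The TEa, bias, and CV figures below are illustrative placeholders, not the study's actual CLIA, RCPA, or IQC values.

```python
# Sketch of the sigma-metric calculation from the methodology. The TEa,
# bias, and CV values below are illustrative placeholders, not the study's
# actual CLIA / RCPA / IQC figures.

def sigma_metric(tea, bias, cv):
    """Sigma = (allowable total error - bias) / CV, all in percent."""
    return (tea - bias) / cv

def classify(sigma):
    """Performance bands used in the abstract."""
    if sigma >= 6:
        return "world class"
    if sigma >= 5:
        return "excellent"
    if sigma >= 4:
        return "good"
    if sigma >= 3:
        return "satisfactory"
    return "poor"

# Hypothetical analytes: (TEa %, bias %, CV %)
for name, (tea, bias, cv) in {"ALT": (20.0, 2.0, 2.5),
                              "Sodium": (0.9, 0.3, 0.5)}.items():
    s = sigma_metric(tea, bias, cv)
    print(name, round(s, 1), classify(s))
```

A low-sigma analyte such as the hypothetical sodium entry is exactly the case the conclusion flags for more frequent IQC.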
Procedia PDF Downloads 172
61 Application of Combined Cluster and Discriminant Analysis to Make the Operation of Monitoring Networks More Economical
Authors: Norbert Magyar, Jozsef Kovacs, Peter Tanos, Balazs Trasy, Tamas Garamhegyi, Istvan Gabor Hatvani
Abstract:
Water is one of the most important common resources, and as a result of urbanization, agriculture, and industry it is becoming more and more exposed to potential pollutants. Preventing the deterioration of water quality is a crucial task for environmental scientists. To achieve this aim, the operation of monitoring networks is necessary. In general, these networks have to meet many important requirements, such as representativeness and cost efficiency. However, existing monitoring networks often include unnecessary sampling sites. By eliminating these sites, the monitoring network can be optimized and operated more economically. The aim of this study is to illustrate the applicability of CCDA (Combined Cluster and Discriminant Analysis) to the field of water quality monitoring and to optimize the monitoring networks of a river (the Danube), a wetland-lake system (Kis-Balaton & Lake Balaton), and two surface-subsurface water systems on the watershed of Lake Neusiedl/Lake Fertő and in the Szigetköz area over a period of approximately two decades. CCDA combines two multivariate data analysis methods: hierarchical cluster analysis and linear discriminant analysis. Its goal is to determine homogeneous groups of observations, in our case sampling sites, by comparing the goodness of preconceived classifications obtained from hierarchical cluster analysis with that of random classifications. The main idea behind CCDA is that if the ratio of correctly classified cases for a grouping is higher than at least 95% of the ratios for the random classifications, then at the given level of significance (α = 0.05) the sampling sites do not form a homogeneous group. Because sampling on Lake Neusiedl/Lake Fertő was conducted at the same time at all sampling sites, it was possible to visualize the differences between sampling sites belonging to the same or different groups on scatterplots.
Based on the results, the monitoring network of the Danube yields redundant information over certain sections, so that of 12 sampling sites, 3 could be eliminated without loss of information. In the case of the wetland (Kis-Balaton), one pair of sampling sites out of 12 could be discarded, and in the case of Lake Balaton, 5 out of 10. For the groundwater system of the catchment area of Lake Neusiedl/Lake Fertő, all 50 monitoring wells are necessary; there is no redundant information in the system. The number of sampling sites on Lake Neusiedl/Lake Fertő itself can be reduced to approximately half the original number. Furthermore, neighbouring sampling sites were compared pairwise using CCDA, and the results were plotted on diagrams or isoline maps showing the locations of the greatest differences. These results can help researchers decide where to place new sampling sites. CCDA proved to be a useful tool for optimizing monitoring networks for different types of water bodies. Based on the results obtained, the monitoring networks can be operated more economically. Keywords: combined cluster and discriminant analysis, cost efficiency, monitoring network optimization, water quality
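The CCDA decision rule described above can be sketched on toy data. In this simplified, hypothetical version, nearest-centroid classification stands in for linear discriminant analysis, every possible regrouping of the sites is enumerated instead of drawing random classifications, and the two water-chemistry variables per sampling site are synthetic.

```python
# Simplified, hypothetical sketch of the CCDA decision rule. Nearest-centroid
# classification stands in for linear discriminant analysis, every possible
# regrouping is enumerated instead of drawing random ones, and the two
# water-chemistry variables per sampling site are synthetic.
from itertools import combinations

def correct_ratio(samples, labels):
    """Share of sites that a nearest-centroid classifier assigns back to
    their own group."""
    groups = sorted(set(labels))
    centroids = {}
    for g in groups:
        members = [s for s, lab in zip(samples, labels) if lab == g]
        centroids[g] = [sum(col) / len(members) for col in zip(*members)]
    hits = 0
    for s, lab in zip(samples, labels):
        pred = min(groups, key=lambda g: sum((a - b) ** 2
                                             for a, b in zip(s, centroids[g])))
        hits += pred == lab
    return hits / len(samples)

# Ten sampling sites; the first five genuinely differ from the last five.
samples = [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9), (1.2, 2.0), (0.8, 2.2),
           (5.0, 6.0), (5.1, 6.2), (4.9, 5.8), (5.2, 6.1), (4.8, 5.9)]
true_labels = [0] * 5 + [1] * 5                 # the preconceived grouping

observed = correct_ratio(samples, true_labels)

# Goodness of every alternative regrouping of the sites into 5 + 5.
ratios = [correct_ratio(samples,
                        [0 if i in group0 else 1 for i in range(len(samples))])
          for group0 in combinations(range(len(samples)), 5)]

# CCDA rule: beating at least 95% of the alternative groupings means the
# sites do NOT form one homogeneous group.
threshold = sorted(ratios)[int(0.95 * len(ratios))]
print(observed > threshold)  # True: the two site groups are distinct
```

In the monitoring-network setting, sites whose grouping does not beat the 95% threshold carry overlapping information, which is what justifies eliminating redundant ones.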
Procedia PDF Downloads 350