Search results for: distribution networks

1665 Image Processing-Based Maize Disease Detection Using Mobile Application

Authors: Nathenal Thomas

Abstract:

Corn, also known as maize (scientific name Zea mays), is a widely produced agricultural product that features in the food chain and in many other agricultural products. Corn is highly adaptable: it comes in many different types, is employed in many different industrial processes, and tolerates a wide range of agro-climatic conditions. In Ethiopia, maize is among the most widely grown crops, and small-scale corn farming may be a household's only source of food. These facts show that the country's requirement for this crop is very high while, conversely, the crop's productivity is very low for a variety of reasons. The most damaging factor contributing to this imbalance between the crop's supply and demand is corn disease. The failure to diagnose diseases in maize plants until it is too late is one of the most important factors limiting crop output in Ethiopia. This study will aid in the early detection of such diseases and support farmers during the cultivation process, directly affecting the amount of maize produced. Diseases of maize plants, such as northern leaf blight and cercospora leaf spot, have distinct, visible symptoms. This study aims to detect the most frequent and damaging maize diseases using deep learning, an efficient and widely used subset of machine learning, applied to image processing. Deep learning uses networks that can be trained from unlabeled data without supervision (unsupervised learning); it simulates the processes the human brain goes through when digesting data, and its applications include speech recognition, language translation, object classification, and decision-making. The Convolutional Neural Network (CNN), also known as a ConvNet, is a deep learning architecture widely used for image classification, image detection, face recognition, and related problems. This study uses a CNN as the state-of-the-art method to detect maize diseases from photographs of maize leaves taken with a mobile phone.
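
As a hedged illustration of the approach described above (not the author's actual network), a minimal Keras sketch of a three-class maize-leaf classifier might look like the following; the class set, input size, and the data/maize_leaves directory are assumptions:

import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal CNN sketch: three assumed classes (healthy, northern leaf blight,
# cercospora leaf spot) and 128x128 RGB photographs of maize leaves.
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),   # one output per disease class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical folder of labelled phone photos, one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/maize_leaves", image_size=(128, 128), batch_size=32)
model.fit(train_ds, epochs=10)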

Keywords: CNN, zea mays subsp, leaf blight, cercospora leaf spot

Procedia PDF Downloads 44
1664 3D Numerical Study of Tsunami Loading and Inundation in a Model Urban Area

Authors: A. Bahmanpour, I. Eames, C. Klettner, A. Dimakopoulos

Abstract:

We develop a new set of diagnostic tools to analyze inundation into a model district using three-dimensional CFD simulations, with a view to generating a database against which to test simpler models. A three-dimensional model of Oregon city with different-sized groups of buildings next to the coastline is used to run calculations of the movement of a long-period wave on the shore. The initial and boundary conditions of the off-shore water are set using a nonlinear inverse method based on Eulerian spatial information matching experimental Eulerian time series measurements of water height. The water movement is followed in time, which enables the pressure distribution on every surface of each building to be followed temporally. The three-dimensional numerical data set is validated against published experimental work. In the first instance, we use the dataset as a basis to understand how successfully reduced models - including 2D shallow water models and reduced 1D models - predict water heights, flow velocity and forces. This is because models based on the shallow water equations are known to underestimate drag forces after the initial surge of water. The second component is to identify critical flow features, such as hydraulic jumps and choked states, which are flow regions where dissipation occurs and drag forces are large. Finally, we describe how future tsunami inundation models should be modified to account for the complex effects of buildings through drag and blocking. Financial support from UCL and HR Wallingford is greatly appreciated. The authors would like to thank Professor Daniel Cox and Dr. Hyoungsu Park for providing the data on the Seaside Oregon experiment.
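
To make concrete why loading matters here, the bulk drag estimate that reduced (1D/2D shallow-water) models typically rely on can be written in a few lines; this is a minimal sketch, with the drag coefficient and flow values as illustrative assumptions rather than the paper's data:

# Bulk drag estimate used by reduced tsunami-loading models:
# F = 0.5 * rho * Cd * A * u^2, with A the wetted frontal area.
RHO = 1000.0   # water density, kg/m^3
CD = 2.0       # assumed drag coefficient for a rectangular building

def drag_force(depth_m, velocity_ms, width_m, cd=CD):
    area = depth_m * width_m          # wetted frontal area of the facade
    return 0.5 * RHO * cd * area * velocity_ms ** 2

# Example: 3 m inundation depth moving at 5 m/s against a 10 m wide facade.
print(f"{drag_force(3.0, 5.0, 10.0) / 1e3:.0f} kN")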

Keywords: computational fluid dynamics, extreme events, loading, tsunami

Procedia PDF Downloads 88
1663 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data

Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates

Abstract:

Several spatial variables collected at the same locations that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account both the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on sharing common spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function, but in order to improve computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross blocks capture the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.
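
A minimal sketch of the nearest-neighbor conditioning idea underlying the Block-NNGP may help; here it is written point-wise in one dimension with an exponential covariance, whereas the block version conditions whole blocks of locations on neighboring blocks. All parameter values are illustrative assumptions:

import numpy as np

def exp_cov(x, y, sigma2=1.0, phi=1.0):
    # exponential covariance between 1D coordinate arrays x and y
    return sigma2 * np.exp(-phi * np.abs(x[:, None] - y[None, :]))

def nngp_log_density(w, coords, m=5):
    """log density of field values w under the NNGP factorization
    p(w) = prod_i p(w_i | w_{N(i)}), N(i) = m nearest earlier neighbors."""
    order = np.argsort(coords)
    w, s = w[order], coords[order]
    logp = 0.0
    for i in range(len(s)):
        nb = np.argsort(np.abs(s[:i] - s[i]))[:m]   # neighbor set among earlier points
        if nb.size == 0:
            mu, var = 0.0, exp_cov(s[[i]], s[[i]])[0, 0]
        else:
            C_nn = exp_cov(s[nb], s[nb])
            c_in = exp_cov(s[[i]], s[nb])[0]
            b = np.linalg.solve(C_nn, c_in)         # kriging weights
            mu = b @ w[nb]                          # conditional mean
            var = exp_cov(s[[i]], s[[i]])[0, 0] - b @ c_in
        logp += -0.5 * (np.log(2 * np.pi * var) + (w[i] - mu) ** 2 / var)
    return logp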

Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models

Procedia PDF Downloads 56
1662 Environmental Performance Measurement for Network-Level Pavement Management

Authors: Jessica Achebe, Susan Tighe

Abstract:

The recent Canadian infrastructure report card reveals the unhealthy state of municipal infrastructure and the intensified challenges municipalities face in maintaining adequate infrastructure performance thresholds and meeting users' required service levels. For a road agency, the huge funding gap is inflated by growing concerns about the environmental repercussions of road construction, operation and maintenance activities. Reducing material consumption and greenhouse gas emissions when maintaining and rehabilitating road networks can achieve added benefits, including improved life cycle performance of pavements, reduced climate change impacts and human health effects due to less air pollution, improved productivity due to optimal allocation of resources, and reduced road user costs. Incorporating environmental sustainability measures into pavement management is a widely cited and studied solution. However, measuring the environmental performance of a road network is still a far-fetched practice in road network management, more so because explicit agency-wide environmental sustainability or sustainable maintenance specifications are missing. To address this challenge, the present research focuses on the environmental sustainability performance of network-level pavement management. The ultimate goal is to develop a framework to incorporate environmental sustainability into pavement management systems for network-level maintenance programming. In order to achieve this goal, this study reviewed previous studies that employed environmental performance measures, as well as the suitability of environmental performance indicators for evaluating the sustainability of network-level pavement maintenance strategies. Through an industry practice survey, this paper provides a brief account of pavement managers' motivations and barriers to making more sustainable decisions, and of the data needed to support network-level environmental sustainability. The trends in network-level sustainable pavement management are also presented, existing gaps are highlighted, and ideas are proposed for sustainable network-level pavement management.

Keywords: pavement management, sustainability, network-level evaluation, environment measures

Procedia PDF Downloads 183
1661 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN) Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises the Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 -anechoic-, 1, 2, and 3 s, approximately), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected, compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
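
Since the LNQT filters are newly proposed here, the following is only a hedged sketch of what a locally-normalized quarter-tone triangular filterbank could look like; the starting frequency, number of filters, and the size of the normalization neighborhood are assumptions:

import numpy as np

def quarter_tone_centers(fmin=65.4, n=96):
    # 24 quarter tones per octave, starting near C2
    return fmin * 2.0 ** (np.arange(n) / 24.0)

def triangular_filterbank(fft_freqs, centers):
    fb = np.zeros((len(centers) - 2, len(fft_freqs)))
    for k in range(1, len(centers) - 1):
        lo, c, hi = centers[k - 1], centers[k], centers[k + 1]
        rise = (fft_freqs - lo) / (c - lo)
        fall = (hi - fft_freqs) / (hi - c)
        fb[k - 1] = np.clip(np.minimum(rise, fall), 0.0, None)
    return fb

def lnqt(spectrogram, fft_freqs, eps=1e-8):
    fb = triangular_filterbank(fft_freqs, quarter_tone_centers())
    energies = fb @ spectrogram            # shape: (bands, frames)
    # local normalization: divide each band by smoothed energy of nearby bands
    smooth = np.apply_along_axis(
        lambda col: np.convolve(col, np.ones(9) / 9.0, mode="same"),
        0, energies)
    return energies / (smooth + eps)

freqs = np.linspace(0, 8000, 1025)              # e.g. 2048-point FFT at 16 kHz
spec = np.abs(np.random.randn(1025, 100))       # placeholder magnitude spectrogram
features = lnqt(spec, freqs)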

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 193
1660 C-eXpress: A Web-Based Analysis Platform for Comparative Functional Genomics and Proteomics in Human Cancer Cell Line, NCI-60 as an Example

Authors: Chi-Ching Lee, Po-Jung Huang, Kuo-Yang Huang, Petrus Tang

Abstract:

Background: Recent advances in high-throughput research technologies such as next-generation sequencing and multi-dimensional liquid chromatography make it possible to dissect the complete transcriptome and proteome in a single run for the first time. However, it is almost impossible for many laboratories to handle and analyze these big data without the support of a bioinformatics team. We aimed to provide a web-based analysis platform for users with only limited bio-computing knowledge to study functional genomics and proteomics. Method: We use NCI-60 as an example dataset to demonstrate the power of the web-based analysis platform and data delivery system: C-eXpress takes a simple text file that contains standard NCBI gene or protein IDs and expression levels (rpkm or fold) as an input file to generate a distribution map of gene/protein expression levels in a heatmap diagram organized by color gradients. The diagram is hyper-linked to a dynamic html table that allows the users to filter the datasets based on various gene features. A dynamic summary chart is generated automatically after each filtering process. Results: We implemented an integrated database that contains pre-defined annotations such as gene/protein properties (ID, name, length, MW, pI); pathways based on KEGG and GO biological process; subcellular localization based on GO cellular component; and functional classification based on GO molecular function, kinase, peptidase and transporter. Multiple ways of sorting columns and rows are also provided for comparative analysis and visualization of multiple samples.
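
A hedged sketch of the input/output loop described in the Method section, from expression text file to color-gradient heatmap; the file name, separator, and column layout are illustrative assumptions, not the actual C-eXpress format:

import pandas as pd
import matplotlib.pyplot as plt

# assumed tab-separated input: a gene_id column of NCBI IDs, then one
# expression column (rpkm or fold) per NCI-60 cell line
expr = pd.read_csv("nci60_expression.txt", sep="\t", index_col="gene_id")

fig, ax = plt.subplots(figsize=(8, 10))
im = ax.imshow(expr.values, aspect="auto", cmap="RdBu_r")
ax.set_xticks(range(expr.shape[1]))
ax.set_xticklabels(expr.columns, rotation=90, fontsize=6)
fig.colorbar(im, ax=ax, label="expression level")
fig.savefig("expression_heatmap.png", dpi=150)

# analogue of the dynamic-table filtering step: keep only genes in one
# pre-defined annotation class (kinase_ids is a hypothetical set)
kinase_ids = {"207", "5594"}
kinases = expr[expr.index.isin(kinase_ids)]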

Keywords: cancer, visualization, database, functional annotation

Procedia PDF Downloads 583
1659 A Case Study: Social Network Analysis of Construction Design Teams

Authors: Elif D. Oguz Erkal, David Krackhardt, Erica Cochran-Hameen

Abstract:

Even though social network analysis (SNA) is an abundantly studied concept in many organizations and industries, a clear SNA approach to project teams has not yet been adopted by the construction industry. The main challenges for performing SNA in construction, and the apparent reasons for this gap, are the unique and complex structure of each construction project, the comparatively high turnover of project team members and contributing parties, and the variety of authentic problems in each project. Additionally, stakeholders from a variety of professional backgrounds collaborate in a high-stress environment fueled by time and cost constraints. Within this case study on Project RE, a design-build project performed at the Urban Design Build Studio of Carnegie Mellon University, social network analysis of the project design team is performed with the main goal of applying social network theory to construction project environments. The research objective is to determine a correlation between the network of how individuals perceive one another's professional strengths and weaknesses, the communication patterns within the team, and the group dynamics. Data are collected through a survey performed over four rounds conducted monthly, detailed follow-up interviews, and constant observation to assess the natural alteration of the network over time. The data collected are processed by means of network analytics and in the light of the qualitative data gathered through observations and individual interviews. This paper presents the full ethnography of this construction design team of fourteen architecture students, based on an elaborate social network data analysis over time. This study is expected to serve as an initial step toward refined, targeted and large-scale social network data collection in construction projects, in order to deduce the impacts of social networks on project performance and suggest better collaboration structures for construction project teams henceforth.
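
For readers unfamiliar with the network analytics mentioned above, a small sketch with networkx shows the kind of per-round metrics such a study can compute; the fourteen-student tie list is reduced here to a made-up five-edge example:

import networkx as nx

# directed ties from one monthly survey round: (rater, rated-as-strong)
round1 = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "A"), ("C", "B")]
G = nx.DiGraph(round1)

print("density:", nx.density(G))
print("in-degree centrality:", nx.in_degree_centrality(G))
print("betweenness:", nx.betweenness_centrality(G))

# repeating these metrics for each of the four survey rounds exposes how
# the team's network changes over the months of observation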

Keywords: construction design teams, construction project management, social network analysis, team collaboration, network analytics

Procedia PDF Downloads 171
1658 Impact of Modern Beehive on Income of Rural Households: Evidence from Bugina District of Northern Ethiopia

Authors: Wondmnew Derebe Yohannis

Abstract:

The enhanced utilization of modern beehives holds significant potential to improve the livelihoods of smallholder farmers who rely heavily on mixed crop-livestock farming for their income. Recognizing this, improved beehives have been distributed across various regions in Ethiopia, including the Bugina district. However, the precise impact of these improved beehives on farmers' income has received limited attention. To address this gap, this study assesses the influence of adopting upgraded beehives on rural households' income and asset accumulation. Survey data were gathered from a sample of 350 households selected through random sampling. The collected data were then analyzed using an endogenous switching regression model (ESRM) approach. The findings reveal that the adoption of improved beehives has resulted in higher annual income and asset growth for beekeepers. On average, those who adopted the improved beehives earned approximately 6,077 Ethiopian Birr (ETB) more than their counterparts who did not. However, it is worth noting that the impact of adoption would have been even greater for non-adopters, as evidenced by the negative transitional heterogeneity effect of 1,792 ETB. Furthermore, the analysis indicates that the decision to adopt or not adopt improved beehives was driven by individual self-selection. The adoption of improved beehives also led to an increase in households' fixed assets, establishing it as a viable strategy for poverty reduction. Overall, this study underscores the positive effect of adopting improved beehives on rural households' income and asset holdings, showcasing its potential to uplift smallholder farmers and serve as an alternative mechanism for reducing poverty.
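
The regime-comparison logic behind an endogenous switching regression can be sketched as follows. This simplified version fits separate income equations for adopters and non-adopters and predicts counterfactual incomes; the full ESRM additionally corrects for self-selection with inverse-Mills-ratio terms from a first-stage adoption probit, which is omitted here. All data below are synthetic:

import numpy as np
import statsmodels.api as sm

def regime_effects(X, income, adopted):
    Xc = sm.add_constant(X)
    fit_a = sm.OLS(income[adopted == 1], Xc[adopted == 1]).fit()  # adopter regime
    fit_n = sm.OLS(income[adopted == 0], Xc[adopted == 0]).fit()  # non-adopter regime
    # ATT: adopters' predicted actual vs. counterfactual (non-adoption) income
    att = np.mean(fit_a.predict(Xc[adopted == 1]) - fit_n.predict(Xc[adopted == 1]))
    # ATU: what non-adopters would have gained by adopting
    atu = np.mean(fit_a.predict(Xc[adopted == 0]) - fit_n.predict(Xc[adopted == 0]))
    return att, atu

rng = np.random.default_rng(0)
X = rng.normal(size=(350, 3))                      # household covariates
adopted = (rng.random(350) < 0.5).astype(int)
income = 2000 + X @ [300, 150, 80] + 600 * adopted + rng.normal(0, 200, 350)
print(regime_effects(X, income, adopted))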

Keywords: impact, adoption, endogenous switching regression, income, improved beehives

Procedia PDF Downloads 17
1657 Rainwater Harvesting for Household Consumption in Rural Demonstration Sites of Nong Khai Province, Thailand

Authors: Shotiros Protong

Abstract:

In recent years, Thailand has been affected by the climate change phenomenon, which is clearly seen in the shifting timing of the seasons. Violent storms, heavy rains, floods, and drought have occurred in several areas. During a long dry period, the water supply is not adequate in drought areas. It is now well recognized that rainwater use for household consumption has decreased significantly in rural areas of Thailand. Rainwater harvesting is the practice of collecting and storing rainwater in storage tanks before it is lost as surface run-off. Rooftop rainwater harvesting is used to provide drinking water, domestic water, and water for livestock. Rainwater harvesting in households is an alternative that lets people readily prepare water resources for their own consumption during the drought season; it can help mitigate flooding of flood plains and may also reduce demand on basins and wells. It also improves the availability of potable water, as rainwater is substantially free of salts. Applying rainwater harvesting in rural water systems provides substantial benefits for both water supply and wastewater subsystems by reducing the need for clean water in water distribution systems, generating less storm water in sewer systems, and reducing the storm water runoff that pollutes freshwater bodies. The combination of rainwater quality and rainfall quantity is used to determine whether rainwater harvesting for household consumption is safe and adequate. The rainwater quality analysis is compared against the drinking water standard. In terms of rainfall quantity, the observed rainfall data are interpolated in GIS 10.5 and mapped for the period 1980 to 2020, and used to assess the annual yield for household consumption.
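
The annual-yield side of the assessment reduces to simple arithmetic: yield equals rainfall depth times roof area times a runoff coefficient. A minimal sketch, with all numbers illustrative rather than measured values from Nong Khai province:

def annual_yield_liters(rainfall_mm, roof_area_m2, runoff_coeff=0.8):
    # 1 mm of rain falling on 1 m^2 equals 1 liter before losses
    return rainfall_mm * roof_area_m2 * runoff_coeff

demand = 5 * 4 * 365    # 5 L/person/day drinking demand, 4-person household, per year
supply = annual_yield_liters(rainfall_mm=1600.0, roof_area_m2=100.0)
print(f"supply {supply:.0f} L vs drinking demand {demand} L")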

Keywords: rainwater harvesting, drinking water standard, annual yield, rainfall quantity

Procedia PDF Downloads 134
1656 Epstein-Barr Virus-associated Diseases and TCM Syndrome Types: In Search of Correlation

Authors: Xu Yifei, Le Yining, Yang Qingluan, Tu Yanjie

Abstract:

Objective: This study aims to investigate the distribution features of Traditional Chinese Medicine (TCM) syndromes and syndrome elements in Epstein-Barr virus-associated diseases, and explores the relations between TCM syndromes or syndrome elements and laboratory indicators of Epstein-Barr virus-associated diseases. Methods: A cross-sectional study of 70 patients with EBV infection is described. We assessed the diagnostic information and laboratory indicators of these patients from Huashan Hospital Affiliated to Fudan University between November 2017 and July 2019. The disease diagnosis and syndrome differentiation were based on the diagnostic criteria of EBV-associated diseases and the theory of TCM, respectively. Confidence correlation analysis, logistic regression analysis, cluster analysis, and the Sankey diagram were used to analyze the correlations in the data. Results: The differentiation of the 4 primary TCM syndromes in the collected patients was correlated with indices of immune function, liver function, inflammation, and anemia, especially the relationship between Qifen syndrome and a high lactic acid dehydrogenase level. The 11 common TCM syndrome elements were associated with an increased CD3+ T cell rate, low hemoglobin level, high procalcitonin level, high lactic acid dehydrogenase level, and low albumin level. Conclusion: The changes in immune function indices, procalcitonin, and liver function-related indices in patients with EBV-associated diseases were consistent with the evolution of TCM syndromes. This study provides a reference for judging the pathological stages of these kinds of diseases, predicting their prognosis, and guiding subsequent treatment strategies based on TCM syndrome type.

Keywords: EBV-associated diseases, traditional Chinese medicine syndrome, syndrome element, diagnostics

Procedia PDF Downloads 41
1655 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa

Authors: Samy A. Khalil, U. Ali Rahoma

Abstract:

Alongside ground measurements of solar radiation, satellite data have been routinely utilized to estimate solar energy. However, the temporal coverage of satellite data has some limits. Reanalysis, also known as "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface measurements using statistical analysis. The distribution of global solar radiation (GSR) was estimated over five chosen areas in North Africa across the ten-year period from 2011 to 2020. To investigate seasonal change in dataset performance, a seasonal statistical analysis was conducted, which showed a considerable difference in errors throughout the year. Altering the temporal resolution of the data used for comparison alters the performance of the dataset: monthly mean values indicate better performance, but data accuracy is degraded. Solar resource assessment and power estimation are discussed using the ERA-5 solar radiation data. The average values of the mean bias error (MBE), root mean square error (RMSE) and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period. The correlation coefficient (R2) varies from 93% to 99% over the study period. The objective of this research is to provide a reliable representation of the world's solar radiation to aid the use of solar energy in all sectors.
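
The validation statistics quoted above are straightforward to compute once the ground and ERA-5 series are aligned; a minimal sketch, with placeholder arrays standing in for the actual measurements:

import numpy as np

def validation_stats(obs, est):
    err = est - obs
    mbe = err.mean()                                # mean bias error
    rmse = np.sqrt((err ** 2).mean())               # root mean square error
    mae = np.abs(err).mean()                        # mean absolute error
    r2 = 1.0 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
    return mbe, rmse, mae, r2

obs = np.array([5.1, 6.3, 7.0, 6.2, 5.5])   # measured GSR, e.g. kWh/m^2/day
est = np.array([5.3, 6.1, 7.2, 6.5, 5.4])   # co-located ERA-5 estimates
print(validation_stats(obs, est))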

Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa

Procedia PDF Downloads 70
1654 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach, it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, yielding a solution that is faster and still more precise than all publicly available DPM implementations.
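
The frequency-domain convolution trick mentioned above can be sketched in a few lines of numpy: zero-pad the part filter, multiply spectra (conjugating one factor turns convolution into correlation), and crop the valid region. The array sizes here are illustrative:

import numpy as np

def correlate_fft(feature_map, filt):
    """'valid' cross-correlation computed through the FFT."""
    H, W = feature_map.shape
    h, w = filt.shape
    F = np.fft.rfft2(feature_map, s=(H, W))
    G = np.conj(np.fft.rfft2(filt, s=(H, W)))   # conjugate: convolution -> correlation
    full = np.fft.irfft2(F * G, s=(H, W))
    return full[: H - h + 1, : W - w + 1]       # keep only fully-overlapping shifts

fmap = np.random.rand(64, 64)        # placeholder HOG-like feature channel
filt = np.random.rand(8, 8)          # one DPM part filter
scores = correlate_fft(fmap, filt)   # part-filter response map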

Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time

Procedia PDF Downloads 247
1653 Instructional Resources Development in Open and Distance Learning: Prospects and Challenges of Media Integration in Nigeria

Authors: Felix E. Gbenoba, Opeyemi Dahunsi

Abstract:

Self-instructional materials are at the heart of instructional delivery in Open and Distance Learning (ODL). The success of any ODL institution depends on the availability of instructional materials in quality and quantity. An ODL study material is expected to fully play the role the teacher plays in the face-to-face learning environment. In Nigeria, efforts to deliver ODL learning materials have been peculiarly challenging. Although researchers are unrelenting in hewing out ways to make ODL delivery in Africa generally, and in Nigeria in particular, meet learners' needs and acceptable global practices, the prospects of integrating instructional media into distance learning courses are largely unexplored. In the present study, we critically examine the prospects of integrating instructional media into ODL courses for the pedagogic and other benefits it portends for delivery via the distance learning mode. Although efforts to integrate media in ODL have been recorded before now, the reality has not matched the expectation so far in Nigeria. This does not mean that the existing instructional materials have not produced significant positive results in improving the overall learning (and teaching) experience in its institutions; it implies that increased integration, as suggested here, will further improve the experience as well as raise new challenges. Obstacles and problems of instructional materials and media development that could have affected the open educational resource initiatives are well established. The first aspect of this paper recalls the revolutionary strides that ODL brought to the delivery of education in Nigeria. The second aspect examines what instructional media are, and their role, prospects and challenges for ODL in Nigeria, vis-à-vis the challenges of developing, producing and distributing print instructional materials as the major format of instructional delivery at Nigeria's only single-mode ODL institution, NOUN. In the third aspect, we justify the need for and benefits of integrating instructional media into the courses and make recommendations.

Keywords: instructional delivery, instructional media, ODL, media integration, Nigeria, self-instructional materials

Procedia PDF Downloads 351
1652 PitMod: The Lorax Pit Lake Hydrodynamic and Water Quality Model

Authors: Silvano Salvador, Maryam Zarrinderakht, Alan Martin

Abstract:

Open pits, which are the result of mining, fill with water over time until the water reaches the elevation of the local water table, generating mine pit lakes. There are several specific regulations about the water quality of pit lakes, and mining operations should keep the quality of groundwater above pre-defined standards. Therefore, an accurate, acceptable numerical model predicting pit lakes' water balance and water quality is needed in advance of mine excavation. We continue to analyze and develop the model introduced by Crusius, Dunbar, et al. (2002) for pit lakes. This model, called "PitMod", simulates the physical and geochemical evolution of pit lakes over time scales ranging from a few months up to a century or more. Here, a lake is approximated as a one-dimensional stack of horizontally averaged vertical layers. PitMod calculates the time-dependent vertical distribution of physical and geochemical pit lake properties, such as temperature, salinity, conductivity, pH, trace metals, and dissolved oxygen, within each model layer. The model accounts for the effects of pit morphology, climate data, multiple surface and subsurface (groundwater) inflows/outflows, precipitation/evaporation, surface ice formation/melting, vertical mixing due to surface wind stress, convection, background turbulence, and equilibrium geochemistry computed with PHREEQC and linked to the geochemical reactions. PitMod, which has been used and validated in over 50 mine projects since 2002, incorporates physical processes like those found in other lake models such as DYRESM (Imerito 2007). However, unlike DYRESM, PitMod also includes geochemical processes, pit wall runoff, and other effects. In addition, PitMod is under active development and can be customized as required for a particular site.
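
A toy sketch of the one-dimensional layered representation may clarify the approach: horizontally averaged layers are checked top-down for unstable stratification and mixed. Real PitMod adds inflows and outflows, ice, wind-driven mixing and PHREEQC geochemistry; the equation of state and values below are crude illustrative assumptions:

import numpy as np

def density(temp_c):
    # crude fresh-water equation of state with maximum density near 4 C
    return 1000.0 * (1.0 - 6.8e-6 * (temp_c - 4.0) ** 2)

def convective_mix(temps):
    """One downward pass of convective adjustment over equal-volume layers
    (a production model iterates until the whole profile is stable)."""
    t = temps.copy()
    for i in range(len(t) - 1):
        if density(t[i]) > density(t[i + 1]):    # heavier water sits on top
            t[i] = t[i + 1] = 0.5 * (t[i] + t[i + 1])
    return t

profile = np.array([6.0, 5.0, 8.0, 9.0])         # layer temperatures, top down
print(convective_mix(profile))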

Keywords: pit lakes, mining, modeling, hydrology

Procedia PDF Downloads 107
1651 Analysis of Enhanced Built-up and Bare Land Index in the Urban Area of Yangon, Myanmar

Authors: Su Nandar Tin, Wutjanun Muttitanon

Abstract:

The availability of free global and historical satellite imagery provides a valuable opportunity for mapping and monitoring the built-up area year by year, consistently and effectively. Land distribution guidelines and the identification of changes are important in preparing and reviewing ground overview data. This study utilizes thirty years of Landsat imagery to acquire land-cover data that are extremely valuable for urban planning. The paper focuses on the basics of extracting the built-up area of the city development area from LANDSAT 5, 7 and 8 and Sentinel-2A images from USGS at five-year intervals. The purpose is to analyze the year-by-year change of the urban built-up area and to assess the accuracy of mapping built-up and bare land areas, studying the trend of urban built-up changes over the period from 1990 to 2020. GIS tools such as the raster calculator and built-up area modelling are used in this study to calculate the indices, which include the enhanced built-up and bareness index (EBBI), normalized difference built-up index (NDBI), urban index (UI), built-up index (BUI) and normalized difference bareness index (NDBAI), in order to map the urban built-up area with high accuracy. This study therefore points out a workable approach to automatically mapping enhanced built-up and bare land changes (EBBI) with simple indices. According to the index outputs, the EBBI derived from Sentinel-2A achieves 48.4% accuracy, against 15.6% for the corresponding Landsat-based index in 1990, while the urban built-up area of the study area expanded from 43.6% in 1990 to 92.5% in 2020 over the last thirty years.
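
Two of the indices named above can be sketched directly from band arrays; the Landsat 8 band assignments (NIR=B5, SWIR1=B6, TIR=B10), the epsilon guard, and the threshold are assumptions for illustration:

import numpy as np

def ndbi(swir, nir, eps=1e-6):
    # normalized difference built-up index
    return (swir - nir) / (swir + nir + eps)

def ebbi(nir, swir, tir, eps=1e-6):
    # enhanced built-up and bareness index (As-syakur et al., 2012)
    return (swir - nir) / (10.0 * np.sqrt(swir + tir) + eps)

nir, swir, tir = (np.random.rand(100, 100) for _ in range(3))  # placeholder bands
built_up_mask = ebbi(nir, swir, tir) > 0.1   # illustrative threshold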

Keywords: built-up area, EBBI, NDBI, NDBAI, urban index

Procedia PDF Downloads 121
1650 The Effects of Cost-Sharing Contracts on the Costs and Operations of E-Commerce Supply Chains

Authors: Sahani Rathnasiri, Pritee Ray, Sardar M. N. Islam, Carlos A. Vega-Mejia

Abstract:

This study develops a cooperative game theory-based cost-sharing contract model for a business-to-consumer (B2C) e-commerce supply chain to minimize the overall supply chain costs and the individual costs under information asymmetry. The objective of this study is to address the strategic interactions among the key players of the e-commerce supply chain operation, which impede optimal operational outcomes. Game theory has been applied in supply chain management to resolve strategic decision-making issues; however, most studies are limited to two echelons of the supply chain. Multi-echelon supply chain optimization based on game-theoretic models is less explored in the previous literature. This study adopts a cooperative game model to focus on the common payoff of operations and addresses the issues of information asymmetry and coordination in a three-echelon e-commerce supply chain. The cost-sharing contract model integrates operational features such as production, inventory management and distribution with the contract-related constraints. The outcomes of the model highlight the importance of all players maintaining lower operational costs in order to benefit from the cost-sharing contract. Further, the cost-sharing contract ensures true cost revelation and hence eliminates the information asymmetry issues among the players. Comparing the results of the contract model with a decentralized e-commerce supply chain operation further shows that the cost-sharing contract derives Pareto-improved outcomes and minimizes the costs of the overall e-commerce supply chain operation.
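
One standard cooperative-game device for splitting a common cost is the Shapley value; as a hedged sketch (not the paper's contract), consider a three-echelon chain with supplier S, e-tailer E and logistics provider L, and made-up coalition costs:

from itertools import permutations

# cost of each coalition operating alone; the grand coalition SEL is cheapest
cost = {frozenset(): 0, frozenset("S"): 60, frozenset("E"): 50,
        frozenset("L"): 40, frozenset("SE"): 95, frozenset("SL"): 85,
        frozenset("EL"): 80, frozenset("SEL"): 120}

def shapley(players, cost):
    share = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:                       # marginal cost of p joining
            bigger = coalition | {p}
            share[p] += (cost[bigger] - cost[coalition]) / len(perms)
            coalition = bigger
    return share

print(shapley("SEL", cost))

Each player's share averages its marginal cost over all joining orders, so the shares sum to the grand-coalition cost of 120; a player accepts the contract only when its share is below its stand-alone cost.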

Keywords: cooperative game theory, cost-sharing contract, e-commerce supply chain, information asymmetry

Procedia PDF Downloads 93
1647 Phonology and Syntax of Article Incorporation in Mauritian Creole: Evidence from Bantu Languages

Authors: Emmanuel Nikiema

Abstract:

This paper examines article incorporation in Mauritian Creole, a French-lexifier creole which exhibits three forms of article incorporation, as illustrated in (1-3). While various analyses of article incorporation have been proposed in the literature, fewer studies have explored the motivation of this widespread phenomenon in Mauritian Creole (MC) as opposed to other French-lexifier creoles spoken in the Caribbean. For example, Mauritian Creole exhibits 4 times more CV incorporation than Haitian Creole, and 40 times more than Reunion Creole. (1) Consonantal type (C): loraz ‘thunder storm’, lete ‘summer’, zwazo ‘bird’, nide ‘idea’. (2) Syllabic type (CV): lapo ‘skin’, liku ‘neck’, ledo ‘back’, leker ‘heart’, diber ‘butter’. (3) Bi-consonantal (CVC): delo ‘water’, dizef ‘egg’, lizye ‘eye’, dilwil ‘oil’. The goal of this study is twofold: 1) to uncover the rules governing the three types of article incorporation in MC, and 2) to account for its remarkable frequency in MC as opposed to its quasi-absence in Reunion Creole. We have collected a corpus of over 700 cases and organized it into three categories (C, CV and CVC). For example, there are 471 examples of CV incorporation in MC against 112 in Haitian Creole and only 12 in Reunion Creole. Two questions can be raised: 1) what are the motivation and distribution of the three types of incorporation in MC, and 2) how can one account for the high volume of incorporation in MC as opposed to its quasi-absence in Reunion Creole? We suggest that article incorporation in MC is related to the structure of nouns in Bantu languages. While previous authors have largely used population settlement data from the colonies during the creole formation period to justify their analyses, we propose an account based on the syntactic structure of Bantu nouns. This analysis will shed light on the contribution of African languages to the formation of MC, and on why MC exhibits more article incorporation than any other French-lexifier creole.

Keywords: article incorporation, creole languages, description, phonology

Procedia PDF Downloads 82
1648 Improved Performance of Mn Substituted Ceria Nanospheres for Water Gas Shift Reaction: Influence of Preparation Conditions

Authors: Bhairi Lakshminarayana, Surajit Sarker, Ch. Subrahmanyam

Abstract:

The present study reports the development of noble-metal-free nanocatalysts for low-temperature CO oxidation and the water gas shift reaction. Mn-substituted CeO2 solid solution catalysts were synthesized by co-precipitation, combustion and hydrothermal methods. The formation of a solid solution was confirmed by XRD with Rietveld refinement, and the percentage of carbon and nitrogen doping was determined with a CHNS analyzer. Raman spectroscopy confirmed the oxygen vacancies. The surface area, pore volume and pore size distribution were confirmed by N2 physisorption analysis, whereas UV-visible diffuse reflectance spectroscopy and XPS data confirmed the oxidation state of the Mn ion. The particle size and morphology (spherical shape) of the material were confirmed using FESEM and HRTEM analysis. Ce0.8Mn0.2O2-δ was calcined at 400 °C, 600 °C and 800 °C. Raman spectroscopy confirmed that the catalyst calcined at 400 °C has the best redox properties. The activity of the designed catalysts for CO oxidation (0.2 vol%) was evaluated at a GHSV of 21,000 h-1, and it was observed that co-precipitation gave the catalyst most active toward CO oxidation and the water gas shift reaction, owing to its high surface area, improved reducibility, oxygen mobility and the highest quantity of surface oxygen species. The activation energy of low-temperature CO oxidation on Ce0.8Mn0.2O2-δ (combustion) was 5.5 kcal.K-1.mole-1. The designed catalysts were also tested for the water gas shift reaction. The present study demonstrates that Mn-substituted ceria calcined at 400 °C and prepared by co-precipitation promises a green, sustainable energy production approach.

Keywords: Ce0.8Mn0.2O2-δ, CO oxidation, physicochemical characterization, water gas shift reaction (WGS)

Procedia PDF Downloads 207
1647 The Role of Heat Pumps in the Decarbonization of European Regions

Authors: Domenico M. Mongelli, Michele De Carli, Laura Carnieletto, Filippo Busato

Abstract:

Europe's dependence on imported fossil fuels has been particularly highlighted by the Russian invasion of Ukraine. The goal of this work is to limit this dependency through a massive replacement of fossil fuel boilers with heat pumps for building heating. Therefore, with the aim of diversifying energy sources and evaluating the potential use of heat pump technologies for residential buildings with a view to decarbonization, the achievable reduction in the consumption of fossil fuels was quantified for all regions of Europe. First, a general overview of energy consumption in buildings in Europe was developed, breaking consumption down by end use (heating, cooling, DHW, etc.) and by source (natural gas, oil, biomass, etc.). This analysis provides a baseline at the European level on current and future consumption, with particular attention to the future increase in cooling. A database was therefore created on the distribution of residential energy consumption for air conditioning among the various energy carriers (electricity, waste heat, gas, solid fossil fuels, liquid fossil fuels, and renewable sources) for each region in Europe. Subsequently, the energy profiles of various European cities representative of the different climates are analyzed in order to evaluate, in each European climatic region, the energy coverage that heat pumps can provide in place of natural gas and solid and liquid fossil fuels for the air conditioning of buildings, together with the environmental and economic assessments of this energy transition. This work aims to make an innovative contribution to evaluating the potential for introducing heat pump technology for decarbonization in the air conditioning of buildings in all climates of the different European regions.

Keywords: heat pumps, heating, decarbonization, energy policies

Procedia PDF Downloads 94
1646 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks

Authors: Van Trieu, Shouhuai Xu, Yusheng Feng

Abstract:

Tracking attack trajectories can be difficult given limited information about the nature of an attack, and even more difficult when the attack information is collected by Intrusion Detection Systems (IDSs), because current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event possibly caused another event to happen. It is therefore important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the attack events most likely to cause other events in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and obtaining them would cost expert human analysts significant time, if it were possible at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, more than 85% of causal pairs have an average time difference between the causal and effect events, in both computed and observed data, within 5 minutes. This result can be used as a preventive measure against future attacks. Although the forecast horizon may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
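
As a much-simplified sketch of the detection step, binned per-type event counts can be screened for lagged association before running the conditional independence tests the paper relies on; this unconditional screen, and all values in it, are illustrative:

import numpy as np

def lagged_corr(src, dst, max_lag_bins=5):
    """best correlation of src counts with dst counts shifted later in time"""
    best = 0.0, 0
    for lag in range(1, max_lag_bins + 1):
        c = np.corrcoef(src[:-lag], dst[lag:])[0, 1]
        if c > best[0]:
            best = c, lag
    return best

# counts of two malicious event types in 1-minute bins (synthetic)
rng = np.random.default_rng(0)
a = rng.poisson(2.0, 500)
b = np.roll(a, 3) + rng.poisson(0.5, 500)   # b tends to follow a by ~3 bins
corr, lag = lagged_corr(a, b)
print(f"candidate causal edge a -> b: corr={corr:.2f} at lag {lag} min")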

Keywords: causality, multilevel graph, cyber-attacks, prediction

Procedia PDF Downloads 130
1645 Structural Model on Organizational Climate, Leadership Behavior and Organizational Commitment: Work Engagement of Private Secondary School Teachers in Davao City

Authors: Genevaive Melendres

Abstract:

School administrators face the reality of teachers losing their engagement, or schools losing the teachers. This study was conducted to identify a structural model that best predicts the work engagement of private secondary teachers in Davao City. Ninety-three teachers from four sectarian schools and 56 teachers from four non-sectarian schools completed four survey instruments, namely the Organizational Climate Questionnaire, the Leader Behavior Descriptive Questionnaire, the Organizational Commitment Scales, and the Utrecht Work Engagement Scales. Data were analyzed using frequency distribution, mean, standard deviation, the t-test for independent samples, Pearson r, stepwise multiple regression analysis, and structural equation modeling. Results show that schools have a high level of organizational climate dimensions; leaders oftentimes show work-oriented and people-oriented behavior; teachers have high normative commitment, and they are very often engaged in their work. Teachers from non-sectarian schools have higher organizational commitment than those from sectarian schools. Organizational climate and leadership behavior are positively related to and predict work engagement, whereas commitment showed no relationship. This study underscores the relative effects of the three variables on the work engagement of teachers. After testing a network of relationships and evaluating several models, a best-fitting model was found between leadership behavior and work engagement. The noteworthy findings suggest that principals should pay attention to and consistently evaluate their behavior, since it best predicts the work engagement of the teachers. The study provides value to administrators who take decisions and create conditions in which teachers derive fulfillment.

Keywords: leadership behavior, organizational climate, organizational commitment, private secondary school teachers, structural model on work engagement

Procedia PDF Downloads 217
1644 Biomechanical Performance of the Synovial Capsule of the Glenohumeral Joint with a BANKART Lesion through Finite Element Analysis

Authors: Duvert A. Puentes T., Javier A. Maldonado E., Ivan Quintero., Diego F. Villegas

Abstract:

Computational mechanics is a great tool to study the performance of complex models, and the study of the human body structure is an example. This paper took advantage of different types of software to make a 3D model of the glenohumeral joint and apply a finite element analysis. The main objective was to study the change in the biomechanical properties of the joint when it presents an injury, specifically a BANKART lesion, which consists of the detachment of the anteroinferior labrum from the glenoid. The stress and strain distribution of the soft tissues was the focus of this study. First, a 3D model was made of a joint without any pathology, as a control sample, using segmentation software for the bones with the support of medical imagery, and a cadaveric model to represent the soft tissue. The joint was built to simulate a compression and external rotation test, using CAD to prepare the model in the adequate position. When the healthy model was finished, it was submitted to a finite element analysis, and the results were validated with experimental model data. With the model validated, a mesh sensitivity analysis was carried out to obtain the best mesh size. Finally, the geometry of the 3D model was changed to imitate a BANKART lesion: the contact zone of the glenoid with the labrum was slightly separated, simulating a tissue detachment. With this new geometry, the finite element analysis was applied again, and the results were compared with the control sample created initially. The data gathered in this study can be used to improve the understanding of labrum tears. Nevertheless, it is important to remember that computational analyses are approximations and that the initial data were taken from an in vitro assay.

Keywords: biomechanics, computational model, finite elements, glenohumeral joint, bankart lesion, labrum

Procedia PDF Downloads 123
1643 Investigating the Public’s Perceptions and Factors Contributing to the Management of Household Solid Waste in Rural Communities: A Case Study of Two Contrasting Rural Wards in the Greater Tzaneen Municipality

Authors: Dimakatso Machetele, Clare Kelso, Thea Schoeman

Abstract:

In developing countries such as India, China, and South Africa, the disposal of household solid waste in rural areas is of great concern. Rural communities face numerous challenges that include the absence of waste collection services and sanitation facilities. The inadequate provision of waste collection and sanitation services results in the occurrence of infectious diseases, e.g., malaria. The gap in the management of household solid waste between rural and urban communities, whereby urban communities have better waste management services than rural areas, is an environmental injustice toward rural communities. The unequal distribution of infrastructure in South Africa's waste management is a concern that stems from the spatial inequalities of the country's apartheid history. The Limpopo province has a higher proportion of households without municipal waste collection services. The present research objectives are to investigate the public's perceptions of, and the factors contributing to, the management of household solid waste in two contrasting rural Wards in the Greater Tzaneen Municipality. There are limited data, and few studies have been conducted, on the management of household solid waste in rural areas, specifically for the Greater Tzaneen Municipality located in the Limpopo province, South Africa. The findings of the study will propose recommendations to the Greater Tzaneen Municipality, to rural municipalities in South Africa and globally, to explore sustainable methods of managing household solid waste and to explore economic opportunities within the waste management sector to alleviate poverty in rural communities.

Keywords: rural, household solid waste, perceptions, waste management

Procedia PDF Downloads 66
1642 Molecular Dynamic Simulation of Cold Spray Process

Authors: Aneesh Joshi, Sagil James

Abstract:

The cold spray (CS) process is the deposition of solid particles onto a substrate at impact velocities above a certain critical value. Unlike thermal spray processes, the CS process does not melt the particles, thus retaining their original physical and chemical properties. These characteristics make the CS process ideal for various engineering applications involving metals, polymers, ceramics and composites. The bonding mechanism involved in the CS process is extremely complex considering the dynamic nature of the process. Though the CS process offers great promise for several engineering applications, the realization of its full potential is limited by the lack of understanding of the complex mechanisms involved and the effect of critical process parameters on deposition efficiency. The goal of this research is to understand the complex nanoscale mechanisms involved in the CS process. The study uses the Molecular Dynamics (MD) simulation technique to understand the material deposition phenomenon during the CS process. The impact of a single-crystal copper nanoparticle on a copper substrate is modelled under varying process conditions. The quantitative results of the impacts at different velocities, impact angles and particle sizes are evaluated using the flattening ratio, the von Mises stress distribution and the local shear strain. The study finds that the flattening ratio, and hence the quality of deposition, was highest for an impact velocity of 700 m/s, a particle size of 20 Å and an impact angle of 90°. The stress and strain analysis revealed regions of shear instabilities in the periphery of the impact and also revealed plastic deformation of the particles after the impact. The results of this study can be used to augment our existing knowledge in the field of CS processes.
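
The MD machinery behind such a study reduces to a force model plus a time integrator; the following is a toy velocity-Verlet sketch with Lennard-Jones forces. A real cold-spray simulation would use an embedded-atom copper potential, far more atoms, and a production code such as LAMMPS; everything here is in reduced units:

import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    # pairwise Lennard-Jones forces, F = 24*eps*(2*(s/r)^12 - (s/r)^6)/r^2 * r_vec
    f = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = r @ r
            inv6 = (sigma ** 2 / d2) ** 3
            fmag = 24 * eps * (2 * inv6 ** 2 - inv6) / d2
            f[i] += fmag * r
            f[j] -= fmag * r
    return f

def velocity_verlet(pos, vel, dt=1e-3, steps=1000):
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f            # half kick (unit mass assumed)
        pos += dt * vel                # drift
        f = lj_forces(pos)
        vel += 0.5 * dt * f            # second half kick
    return pos, vel

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.array([[0.0, 0.0, 0.0], [-0.5, 0.0, 0.0]])   # approaching atom
pos, vel = velocity_verlet(pos, vel)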

Keywords: cold spray process, molecular dynamics simulation, nanoparticles, particle impact

Procedia PDF Downloads 335
1641 Performance Evaluation of Solid Lubricant Characteristics at Different Sliding Conditions

Authors: Suresh Kumar Reddy Narala, Rakesh Kumar Gunda

Abstract:

In modern industry, mechanical parts are subjected to friction and wear, leading to heat generation, which affects the reliability, life and power consumption of machinery. To overcome the tribological losses due to friction and wear, a lubricant with suitably viscous properties is applied to allow very smooth relative motion between two sliding surfaces. Advancement in modern tribology has facilitated the use of solid lubricants in various industrial applications. Solid lubricant additives forming a highly viscous thin film between the sliding surfaces can adequately wet and adhere to a work surface. In the present investigation, an attempt has been made to investigate and evaluate the tribological behavior of various solid lubricants, namely MoS2, graphite, and boric acid, at different sliding conditions. The base oil used in this study was SAE 40 oil with a viscosity of 220 cSt at 40 °C. The tribological properties were measured on a pin-on-disc tribometer. An experimental set-up has been developed for the effective supply of solid lubricants to the pin-disc interface zone. The results obtained from the experiments show that the friction coefficient increases with an increase in applied load for all the considered environments. The MoS2 solid lubricant exhibits a larger load-carrying capacity than graphite and boric acid. The present research work also contributes to the understanding of the behavior of the film thickness distribution of the solid lubricant, using the potential contact technique, under different sliding conditions. The results presented in this research work are expected to form a scientific basis for selecting the best solid lubricant in various industrial applications for possible minimization of friction and wear.

Keywords: friction, wear, temperature, solid lubricant

Procedia PDF Downloads 319
1640 Application of RayMan Model in Quantifying the Impacts of the Built Environment and Surface Properties on Surrounding Temperature

Authors: Maryam Karimi, Rouzbeh Nazari

Abstract:

Introduction: Understanding the thermal distribution in the micro-urban climate has become necessary for urban planners and designers due to the impact of complex micro-scale features of the Urban Heat Island (UHI) on the built environment and public health. Hence, understanding the interrelation between urban components and the thermal pattern can assist planners in properly adding vegetation to the built environment, which can minimize the UHI impact. To characterize the need for urban green infrastructure (UGI) through better urban planning, this study proposes the use of the RayMan model to measure the impact of air quality and increased temperature based on urban morphology in selected metropolitan cities. The project measures the impact of the built environment for urban and regional planning using human-biometeorological evaluations based on the mean radiant temperature (Tmrt). Methods: We utilized the RayMan model to estimate Tmrt in an urban environment, incorporating the location and height of buildings and trees, as a supplemental tool in urban planning and street design. The estimated Tmrt values will be compared with existing surface and air temperature data to find the actual temperature felt by pedestrians. Results: Our current results suggest a strong relationship between the sky-view factor (SVF) and increased surface temperature in megacities based on current urban morphology. Conclusion: This study will help with quantifying the impacts of the built environment and surface properties on the surrounding temperature, identifying priority urban neighborhoods by analyzing Tmrt and air quality data at the pedestrian level, and characterizing the cooling potential needed from urban green infrastructure.
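
A sketch of the standard six-directional Tmrt calculation that radiation tools of this kind implement; the absorption coefficients and direction weights follow common human-biometeorology practice for a standing person, and the flux values are illustrative, not measurements:

SIGMA = 5.67e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
ALPHA_K, EPS_P = 0.7, 0.97    # shortwave absorption / emissivity of a clothed person
W = [0.22, 0.22, 0.22, 0.22, 0.06, 0.06]   # direction weights, standing person

def tmrt(shortwave, longwave):
    """shortwave/longwave fluxes in W/m^2 from N, E, S, W, up, down."""
    s_str = sum(w * (ALPHA_K * k + EPS_P * l)
                for w, k, l in zip(W, shortwave, longwave))
    return (s_str / (EPS_P * SIGMA)) ** 0.25 - 273.15   # Kelvin -> Celsius

K = [120, 300, 150, 90, 450, 60]     # example shortwave fluxes
L = [420, 430, 425, 420, 380, 470]   # example longwave fluxes
print(f"Tmrt = {tmrt(K, L):.1f} C")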

Keywords: built environment, urban planning, urban cooling, extreme heat

Procedia PDF Downloads 84
1639 On Cloud Computing: A Review of the Features

Authors: Assem Abdel Hamed Mousa

Abstract:

The Internet of Things probably already influences your life, and if it doesn't, it soon will, say computer scientists. Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by many people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives. Alan Kay of Apple calls this "Third Paradigm" computing. Ubiquitous computing is essentially the term for human interaction with computers in virtually everything. Ubiquitous computing is roughly the opposite of virtual reality: where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences. The approach: activate the world. Provide hundreds of wireless computing devices per person per office, of all scales (from 1" displays to wall-sized ones). This has required new work in operating systems, user interfaces, networks, wireless communication, displays, and many other areas. We call our work "ubiquitous computing". This is different from PDAs, dynabooks, or information at your fingertips: it is invisible, everywhere computing that does not live on a personal device of any sort but is in the woodwork everywhere. The initial incarnation of ubiquitous computing was in the form of "tabs", "pads", and "boards" built at Xerox PARC in 1988-1994. Several papers describe this work, and there are web pages for the tabs and for the boards (which are a commercial product now). Ubiquitous computing will drastically reduce the cost of digital devices and tasks for the average consumer. With labor-intensive components such as processors and hard drives stored in the remote data centers powering the cloud, and with pooled resources giving individual consumers the benefits of economies of scale, consumers will pay monthly fees, similar to a cable bill, for services that feed into their phones.

Keywords: internet, cloud computing, ubiquitous computing, big data

Procedia PDF Downloads 357
1638 Investigating Elements of Identity of Traditional Neighborhoods in Isfahan and Using These Elements in the Design of Modern Neighborhoods

Authors: Saman Keshavarzi

Abstract:

The process of planning, designing, and building neighborhoods is a complex and multidimensional part of urban planning. Understanding the elements that give a neighborhood a sense of identity can lead to successful city planning and result in a cohesive, functional community where people feel a sense of belonging. These factors are important in ensuring that the needs of the urban population are met so that people live in a safe, pleasant, and healthy society. This research paper identifies the elements of identity of traditional neighborhoods in Isfahan and analyzes ways of using these elements in the design of modern neighborhoods to increase social interaction between communities and the cultural reunification of people. The Jolfa neighborhood of Isfahan has a unique socio-cultural identity: it dates back to the Safavid Dynasty of the 16th century, and most of its inhabitants are Christian Armenians of a religious minority. The elements of Jolfa's identity were analyzed through field observations, distributed questionnaires, and qualitative analysis. The core method used to understand the Jolfa neighborhood and deconstruct the identity image residents associate with their neighborhood was qualitative: respondents filled out questionnaires answering a series of research questions. The major finding from these qualitative data was that traditional neighborhoods with embedded elements of identity have closer-knit communities whose residents maintain strong societal ties. This area of urban planning research is vital to ensuring that new neighborhoods are built with social cohesion, community, and inclusion in mind, as these are what lead to strong, connected, and prosperous societies.

Keywords: development, housing, identity, neighborhood, policy, urbanization

Procedia PDF Downloads 142
1637 Reliability Qualification Test Plan Derivation Method for Weibull Distributed Products

Authors: Ping Jiang, Yunyan Xing, Dian Zhang, Bo Guo

Abstract:

The reliability qualification test (RQT) is widely used in product development to qualify whether a product meets predetermined reliability requirements, which are mainly stated as reliability indices such as MTBF (Mean Time Between Failures). In engineering practice, RQT plans are conventionally taken from standards such as MIL-STD-781 or GJB899A-2009. These conventional plans are often unattractive: they tend to require long test times or carry high risks for both producer and consumer, because the methods in the standards use only the test data of the product itself. Moreover, the standards usually assume exponentially distributed lifetimes, which is unsuitable for complex products other than electronics. It is therefore desirable to develop an RQT plan derivation method that safely shortens test time while keeping both risks under control. To this end, an RQT plan derivation method is developed for products whose lifetimes follow a Weibull distribution. The merit of the method is that expert judgment is taken into account: a Bayesian approach translates the expert judgment into prior information on product reliability, from which the producer's risk and the consumer's risk are calculated. The procedures for deriving RQT plans are also proposed in this paper. Because extra information and expert judgment enter the derivation, the derived test plans can shorten the required test time and keep both risks satisfactorily low compared with conventional test plans. A case study demonstrates that, when expert judgment is used in deriving product test plans, the proposed method finds test plans that both reduce the two risks and shorten the required test time.
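
Note: the abstract stops short of algorithmic detail, but the two risks it controls are easy to compute for a concrete plan. The minimal Python sketch below evaluates the producer's and consumer's risks for a fixed-duration test (n units, accept if at most c failures) under Weibull lifetimes with known shape, then searches for the shortest test time meeting a 20 % cap on both risks. All parameter values (n = 20, shape 2, a "good" scale of 2000 h, a "bad" scale of 1000 h, the caps) are hypothetical, and the paper's Bayesian step, which averages such risks over an expert-elicited prior, is not reproduced here.

```python
import numpy as np
from scipy.stats import binom

def weibull_fail_prob(t, eta, beta):
    """P(failure by time t) for a Weibull(scale eta, shape beta) lifetime."""
    return 1.0 - np.exp(-(t / eta) ** beta)

def risks(n, t, c, eta0, eta1, beta):
    """Producer's and consumer's risks for a fixed-duration test:
    n units run for time t, accept if at most c failures occur."""
    p0 = weibull_fail_prob(t, eta0, beta)   # failure prob. of a good product
    p1 = weibull_fail_prob(t, eta1, beta)   # failure prob. of a bad product
    producer = 1.0 - binom.cdf(c, n, p0)    # reject although product is good
    consumer = binom.cdf(c, n, p1)          # accept although product is bad
    return producer, consumer

n, beta, eta0, eta1 = 20, 2.0, 2000.0, 1000.0
for t in range(100, 2001, 50):              # search shortest test time
    for c in range(n + 1):
        a, b = risks(n, t, c, eta0, eta1, beta)
        if a <= 0.2 and b <= 0.2:
            print(f"t = {t} h, c = {c}: producer {a:.3f}, consumer {b:.3f}")
            break
    else:
        continue
    break
```

With these illustrative numbers the search lands at roughly t = 400 h with c = 1; tightening the risk caps or narrowing the gap between the two scale parameters lengthens the required test, which is the trade-off that extra prior information is meant to ease.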

Keywords: expert judgment, reliability qualification test, test plan derivation, producer’s risk, consumer’s risk

Procedia PDF Downloads 87
1636 Special Single Mode Fiber Tests of Polarization Mode Dispersion Changes in a Harsh Environment

Authors: Jan Bohata, Stanislav Zvanovec, Matej Komanec, Jakub Jaros, David Hruby

Abstract:

Despite the rapid development of new optical networks, optical communication infrastructures still consist of thousands of kilometers of aging optical cables. Many of them are located in harsh environments, which contributes to increased attenuation or induced birefringence of the fibers, leading in turn to increased polarization mode dispersion (PMD). In this paper, we report experimental results from environmental optical cable tests and characterization in a climate chamber, focusing on the evaluation of optical network reliability in a harsh environment. For this purpose, a special thermal chamber was adopted, targeting large temperature changes between -60 °C and 160 °C with defined humidity. A 230 m single-mode optical cable containing six tubes and a total of 72 single-mode fibers was spliced into one fiber link, which was then tested in the climate chamber. The main emphasis was put on PMD changes, which were evaluated by three different PMD measurement methods (general interferometry, scrambled state-of-polarization analysis, and polarization optical time-domain reflectometry) in order to fully validate the obtained results. Moreover, attenuation and chromatic dispersion (CD), as well as PMD, were monitored using a 17 km long single-mode optical cable. The results imply a strong dependence of PMD on thermal changes: PMD exceeded 200 % of its nominal value during exposure to extreme temperatures, and the optical system experienced insertion losses of more than 20 dB. The derived statistics are provided in the paper together with an evaluation of the optical system's reliability, which could be a crucial tool for optical network designers. The environmental tests are further placed in the context of our previously published results from long-term monitoring of fundamental parameters of an optical cable installed in a harsh environment in a special outdoor testbed. Finally, we provide a correlation between the short-term and long-term monitoring campaigns and the statistics necessary for optical network safety and reliability.
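
Note: two standard PMD relations help when reading such results: the differential group delay of spliced sections adds in quadrature (root-sum-square) under random mode coupling, and thermal excursions are typically reported relative to that nominal value. A minimal Python sketch with invented numbers follows; it does not reproduce the paper's measured data.

```python
import numpy as np

def total_pmd(section_pmds_ps):
    """PMD of spliced fiber sections adds in quadrature
    (root-sum-square) under random mode coupling."""
    return float(np.sqrt(np.sum(np.asarray(section_pmds_ps, float) ** 2)))

# hypothetical per-section PMD values (ps) for a spliced link
sections = [0.11, 0.09, 0.13, 0.10, 0.12, 0.08]
nominal = total_pmd(sections)
print(f"nominal link PMD: {nominal:.3f} ps")

# hypothetical PMD readings (ps) logged during a thermal cycle
readings = np.array([0.26, 0.31, 0.44, 0.58, 0.33])
relative_pct = 100.0 * readings / nominal
print(relative_pct)   # entries above 200 mean PMD exceeded 200 % of nominal
```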

Keywords: optical fiber, polarization mode dispersion, harsh environment, aging

Procedia PDF Downloads 350