Search results for: single inductor multi output (SIMO)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9930

1560 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data

Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin

Abstract:

The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or as clusters (i.e., a colony of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools due to limitations of the tools and their associated sizing algorithms, and the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have rarely been reported in the literature, and this error is investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, defined as the error introduced during the clustering process by including or excluding a single anomaly or group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada.
Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
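As a toy illustration of the kind of rule such a classifier might encode, the sketch below compares ILI-reported lengths with field measurements and flags likely clustering-error (Type II) anomalies. The data, the relative-error threshold, and the rule itself are hypothetical assumptions, not the study's data-mining method:

```python
# Illustrative sketch (NOT the study's actual method or data): quantify the
# measurement error of ILI-reported anomaly lengths against field measurements
# and flag likely clustering-error (Type II) anomalies with a threshold rule.

def length_error(ili_mm, field_mm):
    """Measurement error = ILI-reported length minus field-measured length (mm)."""
    return ili_mm - field_mm

def classify_anomaly(ili_mm, field_mm, rel_threshold=0.5):
    """Label an anomaly Type II when the relative length error exceeds the
    threshold -- a crude stand-in for the paper's data-mining classifier."""
    err = abs(length_error(ili_mm, field_mm))
    return "Type II" if err / field_mm > rel_threshold else "Type I"

# Hypothetical (ILI length, field length) pairs in millimetres.
pairs = [(52.0, 50.0), (120.0, 45.0), (33.0, 30.0), (200.0, 80.0)]
labels = [classify_anomaly(i, f) for i, f in pairs]
```

A large positive error is the typical signature of a cluster being reported as one long anomaly, which is why the rule keys on relative over-length.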

Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline

Procedia PDF Downloads 307
1559 Cauda Equina Syndrome: An Audit on Referral Adequacy and its Impact on Delay to Surgery

Authors: David Mafullul, Jiang Lei, Edward Goacher, Jibin Francis

Abstract:

PURPOSE: Timely decompressive surgery for cauda equina syndrome (CES) is dependent on efficient referral pathways for patients presenting at local primary or secondary centres to tertiary spinal centres in the United Kingdom (UK). Identifying modifiable points of delay within this process is important, as minimising the time between presentation and surgery may improve patient outcomes. This study aims to analyse whether adequacy of referral affects time to surgery in CES. MATERIALS AND METHODS: Data from all cases of confirmed CES referred to a single tertiary UK hospital between August 2017 and December 2019, via a suspected CES e-referral pathway, were obtained retrospectively. Referral adequacy was defined by the inclusion of sufficient information to determine the presence or absence of several NICE ‘red flags’. The correlation between referral adequacy and delay from referral to surgery was then analysed. RESULTS: In total, 118 confirmed CES cases were included. Adequate documentation of saddle anaesthesia was associated with reduced delays of more than 48 hours from referral to surgery [X2(1, N=116)=7.12, p=.024], an effect partly attributable to these referrals being accepted sooner [U=16.5; n1=27, n2=4, p=.029, r=.39]. Other red flags had poor association with delay. Referral adequacy was better for somatic red flags [bilateral sciatica (97.5%); severe or progressive bilateral neurological deficit of the legs (95.8%); saddle anaesthesia (91.5%)] than for autonomic red flags [loss of anal tone (80.5%); urinary retention (79.7%); faecal incontinence or lost sensation of rectal fullness (57.6%)]. Although referral adequacy for urinary retention was 79.7%, only 47.5% of referrals documented a post-void residual numerical value. CONCLUSIONS: Adequate documentation of saddle anaesthesia in e-referrals is associated with reduced delay to surgery for confirmed CES, partly attributable to these referrals being accepted sooner.
Other red flags had poor association with delay to surgery. Referral adequacy for autonomic red flags, including documentation for post-void residuals, has significant room for improvement.
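The chi-squared association the audit reports (documentation adequacy vs. a >48 h delay) is a standard 2x2 Pearson test, sketched below. The counts in the example table are hypothetical, not the study's data, so the statistic does not reproduce the reported 7.12:

```python
# Minimal sketch of a 2x2 Pearson chi-squared test (1 df, no continuity
# correction), the kind of test behind X2(1, N=116)=7.12 in the abstract.

def chi_squared_2x2(a, b, c, d):
    """Chi-squared statistic for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical table: rows = saddle anaesthesia documented (yes/no),
# columns = surgery within 48 h (yes/no).
stat = chi_squared_2x2(40, 10, 20, 30)
```

In practice a statistics package would also return the p-value; with one degree of freedom, a statistic above 3.84 is significant at p < .05.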

Keywords: cauda equina, cauda equina syndrome, neurosurgery, spinal surgery, decompression, delay, referral, referral adequacy

Procedia PDF Downloads 37
1558 Territorialisation and Elections: Land and Politics in Benin

Authors: Kamal Donko

Abstract:

In the frontier zone of the Republic of Benin, land seems to be a fundamental political resource, as it is used as a tool for socio-political mobilization, blackmail, inclusion and exclusion, and conquest and political control. This paper seeks to examine the complex and intriguing interlinks between land, identity, and politics in central Benin. It aims to investigate what roles territorialisation and land ownership play in the electioneering process in central Benin. It employs a multi-sited ethnographic approach to data collection, including observations, interviews, and focus group discussions. Research findings reveal a complex and intriguing relationship between land ownership and politics in central Benin. Land is found to play a key role in the electioneering process in the region. The study has also discovered many emerging socio-spatial patterns of controlling and maintaining political power in the zone, which are tied to land politics. These include identity reconstruction and integration mechanisms through intermarriages, socio-political initiatives, and the construction of infrastructure of sovereignty, as well as ‘Diaspora organizations’ and identity issues, the strategic creation of administrative units, alliance-building strategies, and the gerrymandering of the local political field. These emerging socio-spatial patterns of territorialisation for maintaining political power are currently affecting relationships between migrant and native communities. The study argues that territorialisation is not only about national boundaries and the demarcation between different nation states but, more importantly, serves as a powerful tool of domination and political control at the grassroots level.
Furthermore, this study offers another perspective from which the political situation in Africa can be studied. By investigating how the dynamics of land ownership influence politics at the grassroots or micro level, this study is fundamental to understanding spatial issues in the frontier zone.

Keywords: land, migration, politics, territorialisation

Procedia PDF Downloads 360
1557 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India

Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit

Abstract:

Landslide is a geomorphic process that plays an essential role in the evolution of hill-slopes and long-term landscape evolution. But the abrupt nature of the process and its associated catastrophic forces can have undesirable socio-economic impacts, such as substantial economic losses, fatalities, and ecosystem, geomorphologic, and infrastructure disturbances. The estimated fatality rate is approximately 1 person/100 sq. km, and the average economic loss is more than 550 crores/year in the Himalayan belt due to landslides. This study presents a comparative performance assessment of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final landslide susceptibility maps (LSMs), produced with better accuracy, could be used for land-use planning to prevent future losses. Dharamshala, part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM’s Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution linear imaging self-scanning (LISS-IV) data. The thematic maps of parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in a GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use/land cover, and geology. LSMs were prepared using the information value (Info Val) and fuzzy expert system (FES) models. Info Val is a statistical bivariate method in which information values are calculated as the ratio of the landslide pixels per factor class (Si/Ni) to the total landslide pixels per parameter (S/N).
Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a ‘mean and neighbour’ strategy for constructing the fuzzifier (input) and defuzzifier (output) membership function (MF) structure, with the FR method used for formulating if-then rules. Two types of membership structures were utilized for the membership functions: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). LSIs for BG and TT were obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were spatially and statistically validated. The validation results showed that in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate.
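The Info Val computation described above, IV_i = (Si/Ni)/(S/N) per factor class, can be sketched directly; note that some formulations take the natural log of this ratio, and the pixel counts below are hypothetical:

```python
# Sketch of the information-value (Info Val) computation as stated in the
# abstract: for each factor class i, IV_i = (Si/Ni) / (S/N), where Si and Ni
# are landslide and total pixels in class i, and S and N are parameter totals.
# Pixel counts are hypothetical.

def information_values(class_counts):
    """class_counts: list of (landslide_pixels, total_pixels) per factor class."""
    S = sum(s for s, _ in class_counts)  # total landslide pixels
    N = sum(n for _, n in class_counts)  # total pixels for the parameter
    return [(s / n) / (S / N) for s, n in class_counts]

# Hypothetical slope-angle classes: IV > 1 marks classes where landslides are
# over-represented relative to the parameter as a whole.
iv = information_values([(10, 1000), (30, 1000), (60, 1000)])
```

The per-pixel LSI is then the sum, over all parameters, of the reclassified IV layer values at that pixel.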

Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique

Procedia PDF Downloads 127
1556 Optimization of Process Parameters and Modeling of Mass Transport during Hybrid Solar Drying of Paddy

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying is one of the most critical unit operations for prolonging the shelf-life of food grains in order to ensure global food security. Photovoltaic-integrated solar dryers can be a sustainable solution for replacing energy-intensive thermal dryers, as they are capable of drying during off-sunshine hours and provide better control over drying conditions. However, the performance and reliability of PV-based solar dryers depend heavily on climatic conditions, thereby drastically affecting process parameters. Therefore, to ensure the quality and prolonged shelf-life of paddy, optimization of process parameters for solar dryers is critical. Proper moisture distribution within the grains is the most decisive factor in enhancing the shelf-life of paddy; therefore, modeling of mass transport can provide better insight into moisture migration. Hence, the present work aims to optimize the process parameters and to develop a 3D finite element model (FEM) for predicting the moisture profile in paddy during solar drying. Optimization of process parameters (power level, air velocity, and moisture content) was done using the Box-Behnken design in Design-Expert software. Furthermore, COMSOL Multiphysics was employed to develop a 3D finite element model for predicting the moisture profile. The optimized conditions for drying paddy were found to be 700 W, 2.75 m/s, and 13% (wb), with an optimum temperature, milling yield, and drying time of 42 °C, 62%, and 86 min, respectively, having a desirability of 0.905. Furthermore, a 3D finite element model (FEM) predicting moisture migration in a single kernel for every time step was developed. The mean absolute error (MAE), mean relative error (MRE), and standard error (SE) were found to be 0.003, 0.0531, and 0.0007, respectively, indicating close agreement of the model with experimental results. The above optimized conditions can be used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality, and yield of the product and contribute to global food and energy security.
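The three reported model-fit metrics can be computed as sketched below. The SE convention (RMSE divided by the square root of the sample size) is an assumption, since the abstract does not define it, and the sample values are illustrative rather than the study's measurements:

```python
# Hedged sketch of the error metrics reported for the FEM moisture model.
import math

def error_metrics(predicted, observed):
    """Return (MAE, MRE, SE) for paired prediction/observation lists."""
    n = len(predicted)
    errors = [p - o for p, o in zip(predicted, observed)]
    mae = sum(abs(e) for e in errors) / n                        # mean absolute error
    mre = sum(abs(e) / o for e, o in zip(errors, observed)) / n  # mean relative error
    # One common SE convention: RMSE / sqrt(n). The paper's definition may differ.
    se = math.sqrt(sum(e * e for e in errors) / n) / math.sqrt(n)
    return mae, mre, se

# Illustrative moisture-content pairs (dimensionless, wet basis fractions).
mae, mre, se = error_metrics([1.1, 2.0, 2.9], [1.0, 2.0, 3.0])
```

Small values of all three, as in the study's 0.003 / 0.0531 / 0.0007, indicate that the FEM predictions track the experimental drying curve closely.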

Keywords: finite element modeling, hybrid solar drying, mass transport, paddy, process optimization

Procedia PDF Downloads 137
1555 A Furniture Industry Concept for a Sustainable Generative Design Platform Employing Robot Based Additive Manufacturing

Authors: Andrew Fox, Tao Zhang, Yuanhong Zhao, Qingping Yang

Abstract:

The furniture manufacturing industry has generally been slow to adopt the latest manufacturing technologies, historically relying heavily upon specialised conventional machinery. This approach not only requires high levels of specialist process knowledge, training, and capital investment but also suffers from significant subtractive manufacturing waste and high logistics costs due to the requirement for centralised manufacturing, with high levels of furniture product not recycled or re-used. This paper aims to address these problems by introducing suitable digital manufacturing technologies to create step changes in furniture manufacturing design, as traditional design practices have been reported to build in 80% of the environmental impact. In this paper, a 3D printing robot for furniture manufacturing is reported. The 3D printing robot mainly comprises a KUKA industrial robot, an Arduino microprocessor, and a self-assembled screw-fed extruder. Compared to a traditional 3D printer, the 3D printing robot has a larger motion range and can be easily upgraded to enlarge the maximum size of the printed object. Generative design is also investigated in this paper, aiming to establish a combined design methodology that allows assessment of goals, constraints, materials, and manufacturing processes simultaneously. ‘Matrixing’ for part amalgamation and product performance optimisation is enabled. The generative design goals of integrated waste reduction, increased manufacturing efficiency, optimised product performance, and reduced environmental impact institute a truly lean and innovative future design methodology. In addition, there is massive future potential to leverage Single Minute Exchange of Die (SMED) theory through generative design post-processing of geometry for robot manufacture, resulting in ‘mass customised’ furniture with virtually no setup requirements. These generatively designed products can be manufactured using robot-based additive manufacturing.
Essentially, the 3D printing robot is already functional; some initial goals have been achieved and are also presented in this paper.

Keywords: additive manufacturing, generative design, robot, sustainability

Procedia PDF Downloads 130
1554 A Study on the Measurement of Spatial Mismatch and the Influencing Factors of “Job-Housing” in Affordable Housing from the Perspective of Commuting

Authors: Daijun Chen

Abstract:

Affordable housing is subsidized by the government to meet the housing demand of low- and middle-income urban residents in the process of urbanization and to alleviate the housing inequality caused by market-based housing reforms. It is a recognized fact that the living conditions of beneficiaries have improved with the construction of subsidized housing. However, affordable housing is mostly sited in the suburbs, where the surrounding urban functions and infrastructure are incomplete, resulting in a spatial mismatch of "jobs-housing" in affordable housing. The main reason for this problem is that residents of affordable housing are more sensitive to the spatial location of their residence, yet their choice of and control over housing location are relatively weak, which leads to higher commuting costs. Their real cost of living has not been effectively reduced. In this regard, 92 subsidized housing communities in Nanjing, China, are selected as the research sample in this paper. The residents of the affordable housing and their spatio-temporal commuting behavior characteristics are identified based on LBS (location-based service) data. Based on spatial mismatch theory, spatial mismatch indicators such as commuting distance and commuting time are established to measure the spatial mismatch degree of subsidized housing in different districts of Nanjing. Furthermore, a geographically weighted regression model is used to analyze the factors influencing the spatial mismatch of affordable housing in terms of the provision of employment opportunities, traffic accessibility, and supporting service facilities, using spatial, functional, and other multi-source spatio-temporal big data. The results show that the spatial mismatch of affordable housing in Nanjing generally presents a "concentric circle" pattern, decreasing from the central urban area to the periphery.
The factors affecting the spatial mismatch of affordable housing differ across spatial zones. The main drivers are the number of enterprises within 1 km of the affordable housing district and the shortest distance to a subway station, while low spatial mismatch is associated with a diversity of services and facilities. Based on this, a spatial optimization strategy for different levels of spatial mismatch in subsidized housing is proposed, and feasible suggestions for the future site selection of subsidized housing are provided. The study hopes to avoid or mitigate the impact of "spatial mismatch," promote the "spatial adaptation" of "jobs-housing," and truly improve the overall welfare level of affordable housing residents.

Keywords: affordable housing, spatial mismatch, commuting characteristics, spatial adaptation, welfare benefits

Procedia PDF Downloads 106
1553 The Impact of Social Customer Relationship Management on Brand Loyalty and Reducing Co-Destruction of Value by Customers

Authors: Sanaz Farhangi, Habib Alipour

Abstract:

The main objective of this paper is to explore how social media, as a critical platform, can increase interactions between the tourism sector and its stakeholders. Nowadays, human interactions through social media in many areas, especially in tourism, provide various experiences and information that users share and discuss. Organizations and firms can gain customer loyalty through social media platforms, despite consumers' negative image of a product or service. Such a negative image can be reduced through constant communication between producers and consumers, especially with the availability of new technology. Therefore, effective management of customer relationships in social media creates an extraordinary opportunity for organizations to enhance value and brand loyalty. In this study, we seek to develop a conceptual model addressing how factors such as social media, SCRM, and customer engagement affect brand loyalty and diminish co-destruction. To support this model, we scanned the relevant literature using a comprehensive category of ideas in the context of marketing and customer relationship management. This allows exploring whether there is any relationship between social media, customer engagement, social customer relationship management (SCRM), co-destruction, and brand loyalty. SCRM is explored as a moderating factor in the relationship between customer engagement and social media to secure brand loyalty and diminish co-destruction of the company’s value. Although numerous studies have been conducted on the impact of social media on customers and marketing behavior, there are limited studies investigating the relationship between SCRM, brand loyalty, and negative e-WOM, which results in the reduction of the co-destruction of value by customers. This study is an important contribution to the tourism and hospitality industry in orienting customer behavior in social media using SCRM.
The study revealed that through social media platforms, management can generate discussion and engagement about products and services, which helps customers feel positively towards the firm and its products. The study also revealed that customers’ complaints through social media have a multi-purpose effect: they can degrade the value of the product, but at the same time, they motivate the firm to overcome its weaknesses and correct its shortcomings. The study also has implications for managers and practitioners, especially in the tourism and hospitality sector. Future research directions and limitations of the research are also discussed.

Keywords: brand loyalty, co-destruction, customer engagement, SCRM, tourism and hospitality

Procedia PDF Downloads 112
1552 Prevalence and Correlates of Mental Disorders in Children and Adolescents in Mendefera Community, Eritrea

Authors: Estifanos H. Zeru

Abstract:

Introduction: Epidemiological research is important for drawing up need-based, rational public health policy. However, research on child and adolescent mental health in low- and middle-income countries, where socioeconomic, political, cultural, biological, and other mental health hazards are abundant, is almost nonexistent. To the author's knowledge, there is no published research in this field in Eritrea, whose child and adolescent population constitutes 53% of its total population. Study Aims and Objectives: The objective of this study was to determine the prevalence and patterns of DSM-IV psychiatric disorders and identify their socio-demographic correlates among children and adolescents in Mendefera, Eritrea. The study aims to provide local information to public health policymakers to guide policy on service development. Methodology: In a cross-sectional, two-stage procedure, both the Parent and Child versions of the SDQ were used to screen 314 children and adolescents aged 4-17 years, recruited by a multi-stage random sampling method. All parents/adult guardians also completed a socio-demographic questionnaire. All children and adolescents who screened positive for any of the SDQ abnormality sub-classes were selected for the second-stage interview, which was conducted using the K-SADS-PL 2009 Working Draft version to generate specific DSM-IV diagnoses. All data gathered were entered into CSPro version 6.2 and then exported to and analyzed using SPSS version 20 for Windows. Results: The prevalence of DSM-IV psychiatric disorders was found to be 13.1%. Adolescents aged 11-17 years and males had a higher prevalence than children aged 4-10 years and females, respectively. Behavioral disorders were the commonest disorders (9.9%), followed by affective disorders (3.2%) and anxiety disorders (2.5%).
Chronic medical illness in the child, poor academic performance, difficulties with teachers at school, psychopathology in a family member, and parental conflict were found to be independently associated with these disorders. Conclusion: The prevalence of child and adolescent psychiatric disorders in Eritrea is high. Promotion, prevention, treatment, and rehabilitation services for child and adolescent mental health need to be made widely available in the country. The socio-demographic correlates identified by this study can be targeted for intervention. The need for further research is emphasized.

Keywords: adolescents, children, correlates, DSM-IV psychiatric disorders, Eritrea, K-SADS-PL 2009, prevalence, SDQ

Procedia PDF Downloads 263
1551 Nose Macroneedling Tie Suture Hidden Technique

Authors: Mohamed Ghoz, Hala Alsabeh

Abstract:

Context: Macroscopic Nose Macroneedling (MNM) is a new non-surgical procedure for lifting and tightening the nose. It is a minimally invasive technique that uses a needle to create micro-injuries in the skin. These injuries stimulate the production of collagen and elastin, which results in the tightening and lifting of the skin. Research Aim: The aim of this study was to investigate the efficacy and safety of MNM for the treatment of nasal deformities. Methodology: A total of 100 patients with nasal deformities were included in this study. The patients were randomly assigned to either the MNM group or the control group. The MNM group received a single treatment of MNM, while the control group received no treatment. The patients were evaluated at baseline, 6 months, and 12 months after treatment. Findings: The results of this study showed that MNM was effective in improving the appearance of the nose in patients with nasal deformities. At 6 months after treatment, the patients in the MNM group had significantly improved nasal tip projection, nasal bridge height, and nasal width compared to the patients in the control group. The improvements in nasal appearance were maintained at 12 months after treatment. Theoretical Importance: The findings of this study provide support for the use of MNM as a safe and effective treatment for nasal deformities. MNM is a non-surgical procedure that is associated with minimal downtime and no risk of scarring. This makes it an attractive option for patients who are looking for a minimally invasive treatment for their nasal deformities. Data Collection: Data were collected from the patients using a variety of methods, including clinical assessments, photographic assessments, and patient-reported outcome measures. Analysis Procedures: The data were analyzed using a variety of statistical methods, including descriptive statistics, inferential statistics, and meta-analysis.
Question Addressed: The research question addressed in this study was whether MNM is an effective and safe treatment for nasal deformities. Conclusion: The findings of this study suggest that MNM is an effective and safe treatment for nasal deformities. MNM is a non-surgical procedure that is associated with minimal downtime and no risk of scarring. This makes it an attractive option for patients who are looking for a minimally invasive treatment for their nasal deformities.

Keywords: nose, surgery, tie, suture

Procedia PDF Downloads 73
1550 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)- Graphics Processing Unit (GPU) Heterogeneous Computing

Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou

Abstract:

The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-a-chip (SoCs) have come to contain coarse-granularity multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to a heterogeneous architecture coupling these three processors. The CPU and GPU offload part of the computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator, because development approaches supporting the collaboration of heterogeneous processors face challenges. Therefore, a systematic approach is needed that takes advantage of write-once-run-anywhere portability and the high execution performance of modules mapped to various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed to abstract the cooperation of the heterogeneous processors; it supports task partition, communication, and synchronization. At its first run, the intermediate language, represented as a data flow diagram, can generate executable code for the target processor or be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between modules and computational units, including the mapping of two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed across implementations on various combinations of these processors.
The experimental results show that the heterogeneous computing system achieves performance similar to the pure-FPGA implementation with less than 35% of its resources and approximately the same energy efficiency.

Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation

Procedia PDF Downloads 116
1549 Determining Face-Validity for a Set of Preventable Drug-Related Morbidity Indicators Developed for Primary Healthcare in South Africa

Authors: D. Velayadum, P. Sthandiwe , N. Maharaj, T. Munien, S. Ndamase, G. Zulu, S. Xulu, F. Oosthuizen

Abstract:

Introduction and aims of the study: It is the responsibility of the pharmacist to manage drug-related problems in order to ensure the greatest benefit to the patient. In order to prevent drug-related morbidity, pharmacists should be aware of medicines that may contribute to certain drug-related problems due to their pharmacological action. In an attempt to assist healthcare practitioners in preventing drug-related morbidity (PDRM), indicators for prevention have been designed. There are currently no indicators available for primary health care in developing countries like South Africa, where the majority of the population accesses primary health care. There is, therefore, a need to develop such indicators, specifically with the aim of assisting healthcare practitioners in primary health care. Methods: A literature study was conducted to compile a comprehensive list of internationally developed PDRM indicators using the search engines Google Scholar and PubMed. The MeSH term used to retrieve suitable articles was 'preventable drug-related morbidity indicators'. The comprehensive list of PDRM indicators obtained from the literature study was further evaluated for face validity. Face validation was done in duplicate by two sets of independent researchers to ensure 1) no duplication of indicators when compiling a single list, 2) inclusion of only medication available in primary healthcare, and 3) inclusion of only medication currently available in South Africa. Results: The list of indicators, compiled from PDRM indicators in the USA, UK, Portugal, Australia, India, and Canada, contained 324 PDRM indicators. 184 of these were found to be duplicates and were omitted, leaving a list of 140. The 140 PDRM indicators were evaluated for face validity, and 97 were accepted as relevant to primary health care in South Africa. 43 indicators did not comply with the criteria and were omitted from the final list.
Conclusion: This study is a first step in compiling a list of PDRM indicators for South Africa. It is important to take cognizance of the fact that health systems differ vastly internationally, and it is, therefore, important to develop country-specific indicators.
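The screening arithmetic above (324 pooled indicators, 184 duplicates removed, 43 rejected at face validation, 97 retained) can be mimicked with a toy set-based sketch; the indicator names and the choice of which 43 fail are synthetic placeholders, not the study's actual list:

```python
# Toy sketch of the two-stage screening arithmetic: deduplicate the pooled
# international indicator list, then drop indicators failing face validity.
# Names are synthetic placeholders; only the counts mirror the abstract.

pooled = [f"indicator_{i % 140}" for i in range(324)]  # 324 entries, 184 duplicates
unique = sorted(set(pooled), key=pooled.index)         # keep first occurrences: 140
failed_face_validity = set(unique[:43])                # placeholder choice of 43 rejects
final = [ind for ind in unique if ind not in failed_face_validity]  # 97 retained
```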

Keywords: drug-related morbidity, primary healthcare, South Africa, developing countries

Procedia PDF Downloads 146
1548 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in a single, semi-analytic step, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. To the best of our knowledge, the proposed method fills a niche in the literature for accurate numerical risk-allocation methods; it may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. 
The limitation of this method lies in the "curse of dimensionality" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even to other risk types than credit risk.
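The inversion step described above can be sketched in a few lines. This is a minimal illustration of the COS idea, not the paper's implementation: given the characteristic function of the loss, the cosine-series coefficients are computed once, and the CDF then follows semi-analytically because each cosine term integrates in closed form (the function names and the standard-normal test case are ours).

```python
import numpy as np

def cos_cdf(char_fn, a, b, n_terms=256):
    """Recover a CDF on a truncation range [a, b] from a characteristic
    function via the COS (Fourier-cosine expansion) method."""
    k = np.arange(n_terms)
    u = k * np.pi / (b - a)
    # Cosine-series coefficients of the density; the k = 0 term is halved.
    F = (2.0 / (b - a)) * np.real(char_fn(u) * np.exp(-1j * u * a))
    F[0] *= 0.5

    def cdf(x):
        # Each cosine term integrates analytically from a to x.
        terms = np.empty(n_terms)
        terms[0] = x - a
        kk = k[1:]
        terms[1:] = (b - a) / (kk * np.pi) * np.sin(kk * np.pi * (x - a) / (b - a))
        return float(F @ terms)

    return cdf

# Sanity check with a standard normal "loss" (characteristic fn exp(-u^2/2)):
cdf = cos_cdf(lambda u: np.exp(-0.5 * u ** 2), a=-10.0, b=10.0)
```

Once `cdf` is available, quantile-based metrics such as Value-at-Risk follow by root finding on it; the exponential coefficient decay for smooth densities is what gives the fast error convergence claimed above.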

Keywords: credit portfolio, risk allocation, factor-copula model, COS method, Fourier method

Procedia PDF Downloads 165
1547 Using Daily Light Integral Concept to Construct the Ecological Plant Design Strategy of Urban Landscape

Authors: Chuang-Hung Lin, Cheng-Yuan Hsu, Jia-Yan Lin

Abstract:

Adopting a greenery approach on architectural bases is an indispensable strategy for improving ecological habitats, decreasing the heat-island effect, purifying air quality, and relieving surface runoff as well as noise pollution, all in an attempt to achieve a sustainable environment. Whether plant design attains the best visual quality and ideal carbon dioxide fixation depends on whether greenery is used appropriately according to the nature of the architectural base. To achieve this goal, architects and landscape architects need to be provided with sufficient local references. Current greenery studies focus mainly on the urban heat-island effect at a large scale, and most architects still rely on people with years of expertise regarding the adoption and disposition of plantings at the microclimate scale. Therefore, environmental design, which integrates science and aesthetics, requires fundamental research on landscape environment technology, distinct from building environment technology. By doing so, we can create mutual benefits between green buildings and the environment. This issue is extremely important for the greening design of the bases of green buildings in cities and various open spaces. The purpose of this study is to establish plant selection and allocation strategies under different building sunshade levels. Initially, with the shading of sunshine on the greened bases as the starting point, the effects of the shade produced by different building types on greening strategies were analyzed. Then, by measuring PAR (photosynthetically active radiation), the relative DLI (daily light integral) was calculated, and a DLI map was established in order to evaluate the effects of building shading on the established environmental greening, thereby serving as a reference for plant selection and allocation. 
The results were applied in the evaluation of the environmental greening of green buildings to establish a "right plant, right place" design strategy of multi-level ecological greening for use in urban design and landscape development, as well as greening criteria to feed back into eco-city green buildings.
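The PAR-to-DLI conversion mentioned above is a fixed unit conversion, which can be sketched as follows (the function name and example values are ours, for illustration only):

```python
def daily_light_integral(ppfd_umol_m2_s, hours_of_light):
    """Convert an average PPFD / PAR reading (umol m^-2 s^-1) into a
    daily light integral, DLI (mol m^-2 day^-1):
    DLI = PPFD x 3600 s/h x photoperiod (h) / 10^6 umol-per-mol."""
    return ppfd_umol_m2_s * 3600.0 * hours_of_light / 1e6

# e.g. an unshaded spot averaging 400 umol m^-2 s^-1 over a 12 h photoperiod:
dli = daily_light_integral(400.0, 12.0)  # 17.28 mol m^-2 day^-1
```

Mapping such values per grid cell of the measured bases is what produces the DLI map used for plant selection.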

Keywords: daily light integral, plant design, urban open space

Procedia PDF Downloads 508
1546 Design of Large Parallel Underground Openings in Himalayas: A Case Study of Desilting Chambers for Punatsangchhu-I, Bhutan

Authors: Kanupreiya, Rajani Sharma

Abstract:

Construction of a single underground structure is itself a challenging task, and it becomes more critical in tectonically active young mountains such as the Himalayas, which are highly anisotropic. The Himalayan geology mostly comprises incompetent and sheared rock mass in addition to folds/faults, rock burst, and water ingress. Underground tunnels form the most essential and important structures in run-of-river hydroelectric projects. Punatsangchhu-I hydroelectric project (PHEP-I), Bhutan (1200 MW) is a run-of-river scheme with four parallel underground desilting chambers. The Punatsangchhu River carries a large silt load during the monsoon season. Desilting chambers were provided to remove silt particles of size greater than or equal to 0.2 mm with 90% efficiency, thereby minimizing the rate of damage to the turbines. These chambers are 330 m long, 18 m wide at the center, and 23.87 m high, with a 5.87 m hopper portion. The geology of the desilting chambers was known from an exploratory drift, which exposed a low-dipping foliation joint and six joint sets. The RMR and Q values in this reach varied from 40 to 60 and 1 to 6, respectively. This paper describes the different rock engineering principles undertaken for safe excavation and rock support of the moderately jointed, blocky, and thinly foliated biotite gneiss. For the design of the rock support system of the desilting chambers, empirical and numerical analyses were adopted. Finite element analysis was carried out for cavern design and finalization of pillar width using Phase2, a powerful tool for simulating stage-wise excavation with simultaneous provision of the support system. As the geology of the region had seven sets of joints, in addition to the FEM-based approach, safety factors for potentially unstable wedges were checked using UnWedge. The final support recommendations were based on continuous face mapping, numerical modelling, empirical calculations, and practical experience.
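The reported RMR (40-60) and Q (1-6) ranges can be cross-checked against Bieniawski's widely used empirical correlation RMR ≈ 9 ln Q + 44; this cross-check is ours, not part of the paper's design procedure:

```python
import math

def rmr_from_q(q):
    """Bieniawski's (1976) empirical correlation between the
    Q-system rating and Rock Mass Rating: RMR = 9 ln(Q) + 44."""
    return 9.0 * math.log(q) + 44.0

# The drift-mapped Q range of 1-6 implies RMR of roughly 44-60,
# consistent with the independently reported RMR range of 40-60.
lo, hi = rmr_from_q(1.0), rmr_from_q(6.0)
```

Agreement between the two independently assessed classification ranges lends confidence to the rock-mass characterization feeding the Phase2 and UnWedge models.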

Keywords: dam siltation, Himalayan geology, hydropower, rock support, numerical modelling

Procedia PDF Downloads 89
1545 Experimental Evaluation of Contact Interface Stiffness and Damping to Sustain Transients and Resonances

Authors: Krystof Kryniski, Asa Kassman Rudolphi, Su Zhao, Per Lindholm

Abstract:

ABB offers a range of turbochargers for 500 kW to 80+ MW diesel and gas engines. These operate on ships, power stations, generator sets, diesel locomotives, and large off-highway vehicles. The units need to sustain harsh operating conditions and exposure to high speeds, temperatures, and varying loads. They are expected to work at over-critical speeds, damping effectively any transients and encountered resonances. Components are often connected via friction joints, and the designs of those interfaces need to account for surface roughness, texture, pre-stress, etc., to withstand fretting fatigue. Field experience contributed valuable input on component performance in harsh sea environments and exposure to high-temperature, high-speed, and high-load conditions. A study of the tribological interactions of oxide formations provided insight into the dynamic activities occurring between the surfaces; oxidation was recognized as the dominant wear factor. Microscopic inspection of fatigue cracks on a turbine indicated insufficient damping and unrestrained structural stress leading to catastrophic failure if not prevented in time. The contact interface exhibits a strongly non-linear mechanism, and a piecewise approach was used to describe it. A set of samples representing combinations of materials, texture, surface, and heat treatment was tested on a friction rig under a range of loads, frequencies, and excitation amplitudes. A numerical technique was developed to extract the friction coefficient, tangential contact stiffness, and damping. The vast amount of experimental data was processed with the multi-harmonic balance (MHB) method to categorize the components subjected to periodic excitations. At the pre-defined excitation level, both force and displacement formed semi-elliptical hysteresis curves having the same area and secant as the actual ones. 
By cross-correlating the in-phase and out-of-phase terms, respectively, it was possible to separate the elastic energy from the dissipation and derive the stiffness and damping characteristics.
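The separation of in-phase (elastic) and out-of-phase (dissipative) terms can be sketched with a first-harmonic balance over one steady-state cycle. This is a simplified single-harmonic version of the MHB processing described above; the function names and synthetic test values are ours:

```python
import numpy as np

def stiffness_damping_from_cycle(t, x, f, omega):
    """Extract tangential stiffness k and equivalent viscous damping c from
    one cycle of displacement x(t) and force f(t) at frequency omega.
    The in-phase force component equals k*X; the out-of-phase
    component equals c*omega*X."""
    n = len(t)
    s, co = np.sin(omega * t), np.cos(omega * t)
    a1 = 2.0 / n * np.sum(f * s)   # in-phase (elastic) force amplitude
    b1 = 2.0 / n * np.sum(f * co)  # out-of-phase (dissipative) amplitude
    X = 2.0 / n * np.sum(x * s)    # displacement amplitude
    return a1 / X, b1 / (omega * X)

# Synthetic check: k = 1e6 N/m, c = 50 N s/m, 100 Hz, 0.1 mm amplitude.
omega, X0 = 2 * np.pi * 100.0, 1e-4
t = np.linspace(0.0, 1.0 / 100.0, 1000, endpoint=False)
x = X0 * np.sin(omega * t)
f = 1e6 * x + 50.0 * omega * X0 * np.cos(omega * t)
k, c = stiffness_damping_from_cycle(t, x, f, omega)
```

The dissipated energy per cycle, i.e. the hysteresis-loop area, equals pi*c*omega*X^2, which is the quantity matched by the semi-elliptical fit mentioned above.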

Keywords: contact interface, fatigue, rotor-dynamics, torsional resonances

Procedia PDF Downloads 373
1544 Carrying Capacity Estimation for Small Hydro Plant Located in Torrential Rivers

Authors: Elena Carcano, James Ball, Betty Tiko

Abstract:

Carrying capacity refers to the maximum population that a given level of resources can sustain over a specific period. In undisturbed environments, the maximum population is determined by the availability and distribution of resources, as well as the competition for their utilization. This information is typically obtained through long-term data collection. In regulated environments, where resources are artificially modified, populations must adapt to changing conditions, which can lead to additional challenges due to fluctuations in resource availability over time and throughout development. An example of this is observed in hydropower plants, which alter water flow and impact fish migration patterns and behaviors. To assess how fish species can adapt to these changes, specialized surveys are conducted, which provide valuable information on fish populations, sample sizes, and density before and after flow modifications. In such situations, it is highly recommended to conduct hydrological and biological monitoring to gain insight into how flow reductions affect species adaptability and to prevent unfavorable exploitation conditions. This analysis involves several planned steps that help design appropriate hydropower production while simultaneously addressing environmental needs. Consequently, the study aims to strike a balance between technical assessment, biological requirements, and societal expectations. Beginning with a small hydro project that requires restoration, this analysis focuses on the lower tail of the Flow Duration Curve (FDC), where both hydrological and environmental goals can be met. The proposed approach involves determining the threshold condition that is tolerable for the most vulnerable species sampled (Telestes muticellus) by identifying a low flow value from the long-term FDC. 
The results establish a practical connection between hydrological and environmental information and simplify the process by establishing a single reference flow value that represents the minimum environmental flow that should be maintained.
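The low-flow tail of the FDC can be read off a daily-flow record as a simple exceedance quantile; a common reference is Q95, the flow exceeded 95% of the time. The sketch below (our code and toy record, not the study's eta-beta procedure) shows the idea:

```python
import numpy as np

def flow_exceeded(flows, pct_exceedance):
    """Flow value exceeded `pct_exceedance` percent of the time, read from
    the flow duration curve, i.e. the (100 - p)-th percentile of the record."""
    return float(np.percentile(flows, 100.0 - pct_exceedance))

# Toy long-term daily record: flows of 1..100 m^3/s, each equally frequent.
flows = np.arange(1.0, 101.0)
q95 = flow_exceeded(flows, 95.0)  # low-flow tail of the FDC
```

A threshold of this kind, adjusted against the tolerance of the most vulnerable sampled species, is the single reference flow value the abstract refers to.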

Keywords: carrying capacity, fish bypass ladder, long-term streamflow duration curve, eta-beta method, environmental flow

Procedia PDF Downloads 40
1543 Building a Blockchain-based Internet of Things

Authors: Rob van den Dam

Abstract:

Today’s Internet of Things (IoT) comprises more than a billion intelligent devices, connected via wired/wireless communications. The expected proliferation of hundreds of billions more places us at the threshold of a transformation sweeping across the communications industry. Yet, we found that the IoT architectures and solutions that currently work for billions of devices won’t necessarily scale to tomorrow’s hundreds of billions of devices, because of high cost, lack of privacy, lack of future-proofing, limited functional value, and broken business models. As the IoT scales exponentially, decentralized networks have the potential to reduce infrastructure and maintenance costs for manufacturers. Decentralization also promises increased robustness by removing the single points of failure that exist in traditional centralized networks. By shifting power in the network from the center to the edges, devices gain greater autonomy and can become points of transactions and economic value creation for owners and users. To validate the underlying technology vision, IBM jointly developed with Samsung Electronics an autonomous, decentralized peer-to-peer proof-of-concept (PoC). The primary objective of this PoC was to establish a foundation on which to demonstrate several capabilities that are fundamental to building a decentralized IoT. Though many commercial systems in the future will exist as hybrid centralized-decentralized models, the PoC demonstrated a fully distributed proof. The PoC (a) validated the future vision for decentralized systems to extensively augment today’s centralized solutions, (b) demonstrated foundational IoT tasks without the use of centralized control, and (c) proved that empowered devices can engage autonomously in marketplace transactions. 
The PoC opens the door for the communications and electronics industry to further explore the challenges and opportunities of potential hybrid models that can address the complexity and variety of requirements posed by the internet that continues to scale. Contents: (a) The new approach for an IoT that will be secure and scalable, (b) The three foundational technologies that are key for the future IoT, (c) The related business models and user experiences, (d) How such an IoT will create an 'Economy of Things', (e) The role of users, devices, and industries in the IoT future, (f) The winners in the IoT economy.

Keywords: IoT, internet, wired, wireless

Procedia PDF Downloads 335
1542 Next-Generation Radiation Risk Assessment and Prediction Tools Applying AI Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as-yet-unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
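A minimal sketch of the "neural network with random weights" idea: a random sigmoid hidden layer is fixed, and only the output weights are fitted by least squares, in the spirit of an extreme learning machine. The toy features below merely stand in for the radon predictors; this is our illustration, not the MATLAB models used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_random_weight_net(X, y, n_hidden=50):
    """Single-hidden-layer net with fixed random weights and sigmoid
    activation; only the output layer is fitted (by least squares)."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # random sigmoid features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit output weights only
    return W, b, beta

def predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy stand-in for radon data: two quantitative features, one target.
X = rng.uniform(-1, 1, size=(200, 2))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1
W, b, beta = train_random_weight_net(X, y)
```

Because only a linear least-squares problem is solved, training converges in one step, which is one reason random-weight networks reach high accuracy "within a reasonable number of iterations".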

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 95
1541 Revealing Single Crystal Quality by Insight Diffraction Imaging Technique

Authors: Thu Nhi Tran Caliste

Abstract:

X-ray Bragg diffraction imaging (“topography”) entered practical use when Lang designed an “easy” technical setup to characterise the defects/distortions in the high-perfection crystals produced for the microelectronics industry. The use of this technique extended to all kinds of high-quality crystals and deposited layers, and a series of publications explained, starting from the dynamical theory of diffraction, the contrast of the images of the defects. A quantitative version of monochromatic topography known as “Rocking Curve Imaging” (RCI) was implemented by using synchrotron light and taking advantage of the dramatic improvement of 2D detectors and computerised image processing. The raw data consist of a number (~300) of images recorded along the diffraction (“rocking”) curve. If the quality of the crystal is such that a one-to-one relation between a pixel of the detector and a voxel within the crystal can be established (this approximation is very well fulfilled if the local mosaic spread of the voxel is < 1 mrad), software we developed provides, from the rocking curve recorded on each pixel of the detector, not only the “voxel” integrated intensity (the only data provided by previous techniques) but also its mosaic spread (FWHM) and peak position. We will show, based on many examples, that these new data, never recorded before, open the field to a highly enhanced characterization of crystals and deposited layers. The examples include the characterization of dislocations and twins occurring during silicon growth, various growth features in Al2O3, GaN, and CdTe (where the diffraction displays the Borrmann anomalous absorption, which leads to a new type of image), and the characterisation of defects within deposited layers, or their effect on the substrate. 
We could also observe (due to the very high sensitivity of the setup installed on BM05, which allows revealing these faint effects) that, when dealing with very perfect crystals, Kato’s interference fringes predicted by dynamical theory are also associated with very small modifications of the local FWHM and peak position (of the order of a µradian). This rather unexpected (at least for us) result appears to be in keeping with preliminary dynamical theory calculations.
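The per-pixel quantities described above (integrated intensity, peak position, FWHM) can be sketched in a few lines. Moment-based estimates with a Gaussian-equivalent FWHM are used here for simplicity; that choice, and the names, are our assumptions, not the BM05 software:

```python
import numpy as np

def rocking_curve_metrics(angles, stack):
    """Per-pixel metrics from a rocking-curve image stack of shape
    (n_angles, ny, nx): integrated intensity, centroid peak position,
    and a Gaussian-equivalent FWHM from the second moment."""
    da = angles[1] - angles[0]
    ang = angles[:, None, None]
    integ = stack.sum(axis=0) * da                         # integrated intensity
    peak = (stack * ang).sum(axis=0) * da / integ          # centroid position
    var = (stack * (ang - peak) ** 2).sum(axis=0) * da / integ
    return integ, peak, 2.3548 * np.sqrt(var)              # FWHM = 2*sqrt(2 ln 2)*sigma

# Synthetic stack: every pixel a Gaussian centred at 0.3 with sigma = 0.5.
angles = np.linspace(-5.0, 5.0, 501)
curve = np.exp(-(angles - 0.3) ** 2 / (2 * 0.5 ** 2))
stack = np.broadcast_to(curve[:, None, None], (501, 2, 2)).copy()
integ, peak, fwhm = rocking_curve_metrics(angles, stack)
```

Mapping `peak` and `fwhm` over the detector is what yields the local-distortion and mosaic-spread images the abstract describes.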

Keywords: rocking curve imaging, X-ray diffraction, defect, distortion

Procedia PDF Downloads 130
1540 Characterization of Transmembrane Proteins with Five Alpha-Helical Regions

Authors: Misty Attwood, Helgi Schioth

Abstract:

Transmembrane proteins are important components in many essential cell processes such as signal transduction, cell-cell signalling, transport of solutes, structural adhesion activities, and protein trafficking. Due to their involvement in diverse critical activities, transmembrane proteins are implicated in different disease pathways and hence are the focus of intense interest in understanding functional activities, their pathogenesis in disease, and their potential as pharmaceutical targets. Further, as the structure and function of proteins are correlated, investigating a group of proteins with the same tertiary structure, i.e., the same number of transmembrane regions, may give understanding about their functional roles and potential as therapeutic targets. In this in silico bioinformatics analysis, we identify and comprehensively characterize the previously unstudied group of proteins with five transmembrane-spanning regions (5TM). We classify nearly 60 5TM proteins, of which 31 are members of ten families that contain two or more family members, with all members predicted to contain the 5TM architecture. Furthermore, nine singlet proteins that contain the 5TM architecture without paralogues detected in humans were also identified, indicating the evolution of single unique proteins with the 5TM structure. Interestingly, more than half of these proteins function in localization activities through movement or tethering of cell components, and more than one-third are involved in transport activities, particularly in the mitochondria. Surprisingly, no receptor activity was identified within this group, in sharp contrast with other TM families. 
Three major 5TM families were identified: the Tweety family, which are pore-forming subunits of the swelling-dependent volume-regulated anion channel in astrocytes; the sideroflexin family, which acts as mitochondrial amino acid transporters; and the Yip1 domain family, engaged in vesicle budding and intra-Golgi transport. About 30% of the proteins have enhanced expression in the brain, liver, or testis. Importantly, 60% of these proteins are identified as cancer prognostic markers, where they are associated with clinical outcomes of various tumour types, indicating that further investigation into the function and expression of these proteins is important. This study provides the first comprehensive analysis of proteins with 5TM regions and details their unique characteristics and potential application in pharmaceutical development.

Keywords: 5TM, cancer prognostic marker, drug targets, transmembrane protein

Procedia PDF Downloads 108
1539 A Rare Case of Synchronous Colon Adenocarcinoma

Authors: Mohamed Shafi Bin Mahboob Ali

Abstract:

Introduction: A synchronous tumor is defined as the presence of more than one primary malignant lesion in the same patient at the index diagnosis. It is a rare occurrence, especially in the spectrum of colorectal cancer, where it accounts for less than 4%. The underlying pathology of a synchronous tumor is thought to be a genomic factor, namely microsatellite instability (MSI), with the involvement of BRAF, KRAS, and the GSRM1 gene. There are no specific sites of occurrence for synchronous colorectal tumors, but many studies have shown that a synchronous tumor has about 43% predominance in the ascending colon, with rarity in the sigmoid colon. Case Report: We report a case of a young lady in her mid-30s with no family history of colorectal cancer who was diagnosed with synchronous adenocarcinoma at the descending colon and rectosigmoid region. Her presentation was quite perplexing, as she presented to the district hospital initially with simple, uncomplicated hemorrhoids and constipation. She was then referred to our center for further management as she developed a 'football'-sized right gluteal swelling with complete intestinal obstruction and bilateral lower-limb paralysis. We performed a CT scan and biopsy of the lesion, which found that the tumor engulfed the sacrococcygeal region, with more than one primary lesion in the colon as well as secondaries in the liver. The patient was operated on after a multidisciplinary meeting was held. Pelvic exenteration with tumor debulking and anterior resection were performed. Postoperatively, she was referred to the oncology team for chemotherapy. She made a tremendous recovery over eight months, with a partial regain of her lower-limb power. The patient is still under our follow-up, with an improved quality of life post-intervention. Discussion: Synchronous colon cancer is rare, with an incidence of 2.4% to 12.4%. 
It has a male predominance and is pathologically more advanced compared to a single colon lesion. Downstaging the disease by means of chemoradiotherapy has been shown to be effective in managing this tumor. It is commonly seen in the right colon, but in our case, we found it in the left colon and the rectosigmoid. Conclusion: Managing a synchronous colon tumor can be challenging for surgeons, especially in deciding the extent of resection and the postoperative functional outcomes of the bowel; thus, individual treatment strategies are needed to tackle this pathology.

Keywords: synchronous, colon, tumor, adenocarcinoma

Procedia PDF Downloads 105
1538 Stretchable and Flexible Thermoelectric Polymer Composites for Self-Powered Volatile Organic Compound Vapors Detection

Authors: Petr Slobodian, Pavel Riha, Jiri Matyas, Robert Olejnik, Nuri Karakurt

Abstract:

Thermoelectric devices generate an electrical current when there is a temperature gradient between the hot and cold junctions of two dissimilar conductive materials, typically n-type and p-type semiconductors. Consequently, polymeric semiconductors composed of a polymeric matrix filled with different forms of carbon nanotubes in a proper structural hierarchy can also have thermoelectric properties, converting a temperature difference into electricity. In spite of the lower thermoelectric efficiency of polymeric thermoelectrics in terms of the figure of merit, properties such as stretchability, flexibility, light weight, low thermal conductivity, easy processing, and low manufacturing cost are advantages in many technological and ecological applications. Polyethylene-octene copolymer-based highly elastic composites filled with multi-walled carbon nanotubes (MWCNTs) were prepared by sonication of a nanotube dispersion in a copolymer solution, followed by precipitation through pouring into a non-solvent. The electronic properties of the MWCNTs were moderated by different treatment techniques such as chemical oxidation, decoration with Ag clusters, or the addition of low-molecular-weight dopants. In this concept, for example, the oxygenated functional groups attached to the MWCNT surface by HNO₃ oxidation increase the p-type charge carriers, which can be further increased by doping with molecules of triphenylphosphine. To partially alter p-type MWCNTs into less p-type ones, Ag nanoparticles were deposited on the MWCNT surface and then doped with 7,7,8,8-tetracyanoquinodimethane. The two types of MWCNTs with the highest difference in generated thermoelectric power were combined to manufacture a polymer-based thermoelectric module generating a thermoelectric voltage when a temperature difference is applied between the hot and cold ends of the module. 
Moreover, it was found that the voltage generated by the thermoelectric module at a constant temperature gradient was significantly affected when the module was exposed to vapors of different volatile organic compounds, so the module also represents a self-powered thermoelectric sensor for chemical vapor detection.
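The module output described above follows the basic Seebeck relation V = N (S_p − S_n) ΔT for N leg pairs in series. A back-of-the-envelope sketch, with all numerical values illustrative rather than measured in the study:

```python
def module_open_circuit_voltage(n_pairs, s_p, s_n, delta_t):
    """Open-circuit voltage of a thermoelectric module with n_pairs
    p/n leg pairs wired electrically in series:
    V = N * (S_p - S_n) * dT, with Seebeck coefficients in V/K."""
    return n_pairs * (s_p - s_n) * delta_t

# e.g. 10 pairs, S_p = +40 uV/K, S_n = -20 uV/K, 30 K across the module:
v = module_open_circuit_voltage(10, 40e-6, -20e-6, 30.0)  # 18 mV
```

A VOC-induced shift in either effective Seebeck coefficient changes V at fixed ΔT, which is the sensing mechanism the abstract reports.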

Keywords: carbon nanotubes, polymer composites, thermoelectric materials, self-powered gas sensor

Procedia PDF Downloads 152
1537 Application and Utility of the RALE Score for Assessment of Clinical Severity in COVID-19 Patients

Authors: Naridchaya Aberdour, Joanna Kao, Anne Miller, Timothy Shore, Richard Maher, Zhixin Liu

Abstract:

Background: COVID-19 has been, and continues to be, a strain on healthcare globally, with the number of patients requiring hospitalization exceeding the level of medical support available in many countries. As chest x-rays are the primary respiratory radiological investigation, the Radiographic Assessment of Lung Edema (RALE) score was used to quantify the extent of pulmonary infection on baseline imaging. The reproducibility of the RALE score and its associations with clinical outcome parameters were then evaluated to determine implications for patient management and prognosis. Methods: A retrospective study was performed, including patients testing positive for COVID-19 on nasopharyngeal swab within a single Local Health District in Sydney, Australia, with baseline x-ray imaging acquired between January and June 2020. Two independent radiologists viewed the studies and calculated the RALE scores. Clinical outcome parameters were collected, and statistical analysis was performed to assess the reproducibility of the RALE score and possible associations with clinical outcomes. Results: A total of 78 patients met the inclusion criteria, with an age range of 4 to 91 years. RALE score concordance between the two independent radiologists was excellent (intraclass correlation coefficient = 0.93, 95% CI = 0.88-0.95, p<0.005). Binomial logistic regression identified a positive correlation with hospital admission (OR 1.87, 95% CI = 1.3-2.6, p<0.005), oxygen requirement (OR 1.48, 95% CI = 1.2-1.8, p<0.005), and invasive ventilation (OR 1.2, 95% CI = 1.0-1.3, p<0.005) for each 1-point increase in RALE score. For each one-year increase in age, there was a negative correlation with recovery (OR 0.95, 95% CI = 0.92-1.0, p<0.01). RALE scores above three were positively associated with hospitalization (Youden index 0.61, sensitivity 0.73, specificity 0.89), and scores above six were positively associated with ICU admission (Youden index 0.67, sensitivity 0.91, specificity 0.78). 
Conclusion: The RALE score can be used as a surrogate to quantify the extent of COVID-19 infection and has an excellent inter-observer agreement. The RALE score could be used to prognosticate and identify patients at high risk of deterioration. Threshold values may also be applied to predict the likelihood of hospital and ICU admission.
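The per-point odds ratios reported above compound multiplicatively on the odds scale. A small sketch of how an OR of 1.87 per RALE point translates into an admission probability; the 20% baseline risk is an illustrative assumption of ours, not a figure from the study:

```python
def probability_after_score(p_baseline, odds_ratio_per_point, points):
    """Apply a per-point odds ratio to a baseline probability:
    odds -> odds * OR**points, then convert back to a probability."""
    odds = p_baseline / (1.0 - p_baseline) * odds_ratio_per_point ** points
    return odds / (1.0 + odds)

# With an assumed 20% baseline admission risk, a RALE score of 3
# (OR 1.87 per point) raises the predicted risk to about 62%.
p = probability_after_score(0.20, 1.87, 3)
```

This compounding is why even modest per-point ORs produce the clinically useful score thresholds (3 for hospitalization, 6 for ICU) identified above.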

Keywords: chest radiography, coronavirus, COVID-19, RALE score

Procedia PDF Downloads 177
1536 Using a Two-Mode Network to Assess the Connections of Film Festivals

Authors: Qiankun Zhong

Abstract:

In a global cultural context, film festival awards have become authorities that define the aesthetic value of films. To study which genres and producing countries are valued by different film festivals, and how those evaluations interact with each other, this research explored the interactions between film festivals through their selection of movies and the factors that lead film festivals to nominate the same movies. To do this, the author employed a two-mode network on the movies that won the highest awards at the five international film festivals with the highest attendance in the past ten years (the Venice Film Festival, the Cannes Film Festival, the Toronto International Film Festival, the Sundance Film Festival, and the Berlin International Film Festival) and the film festivals that nominated those movies. The title, genre, producing country, and language of 50 movies, and the range (regional, national, or international) and organizing country or area of 129 film festivals were collected. This created networks connected by nominating the same films and awarding the same movies. The author then assessed the density and centrality of these networks to answer the question: which film festivals tend to share more values with other festivals? Based on the eigenvector centrality of the two-mode network, Palm Springs, the Robert Festival, Toronto, Chicago, and San Sebastian are the festivals that tend to nominate commonly appreciated movies. In contrast, the Black Movie Film Festival is unique in generally not sharing nominations with other film festivals. A homophily test was applied to assess the clustering effects of films and film festivals. The result showed that movie genre (E-I index = 0.55) and geographic location (E-I index = 0.35) are possible indicators of film festival clustering. A blockmodel was also created to examine the structural roles of the film festivals and their meaning in a real-world context. 
By analyzing the same blocks with film festival attributes, it was identified that film festivals either organized in the same area, with the same history, or with the same attitude on independent films would occupy the same structural roles in the network. Through the interpretation of the blocks, language was identified as an indicator that contributes to the role position of a film festival. Comparing the result of blockmodeling in the different periods, it is seen that international film festivals contrast with the Hollywood industry’s dominant value. The structural role dynamics provide evidence for a multi-value film festival network.
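The E-I (external-internal) index used in the homophily test above compares ties that cross a group boundary with ties that stay inside it. A minimal sketch of the calculation, on toy festival names and a made-up "region" attribute (not the study's data):

```python
def ei_index(edges, group):
    """Krackhardt-Stern E-I index: (external - internal) / (external + internal).
    +1 means all ties cross group boundaries; -1 means all ties stay within a group."""
    external = internal = 0
    for a, b in edges:
        if group[a] == group[b]:
            internal += 1
        else:
            external += 1
    return (external - internal) / (external + internal)

# Hypothetical one-mode festival network (ties = shared nominations),
# grouped by an illustrative 'region' attribute.
group = {"Fest_A": "Europe", "Fest_B": "Europe",
         "Fest_C": "N_America", "Fest_D": "N_America"}
edges = [("Fest_A", "Fest_B"), ("Fest_A", "Fest_C"),
         ("Fest_B", "Fest_C"), ("Fest_A", "Fest_D")]
print(ei_index(edges, group))  # 3 external ties, 1 internal -> 0.5
```

A value near the study's 0.55 for genre would similarly indicate that festivals nominate across genre groups more often than within them.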

Keywords: film festivals, film studies, media industry studies, network analysis

Procedia PDF Downloads 315
1535 Assessment of Water Quality of the Euphrates River at Babylon Governorate for Drinking, Irrigation, and General Use, Using the Water Quality Index (Canadian Version, CCME WQI)

Authors: Amer Obaid Saud

Abstract:

The water quality index (WQI) is an effective tool for categorizing water resources by quality and suitability for different uses. The Canadian version (CCME WQI), which is based on comparing water quality parameters to regulatory standards and gives a single value for the water quality of a source, was applied in this study to assess the water quality of the Euphrates River in Iraq at Babylon Governorate, north of Baghdad, and to determine its suitability for the aquatic environment (GWQI), drinking water (PWSI), and irrigation (IWQI). Five stations were selected on the river in Babylon (Euphrates River/AL-Musiab, Hindia Barrage, two stations at Hilla city, and the fifth station at Al-Hshmeya north of Hilla). Fifteen water samples were collected every month from August 2013 to July 2014 at the study sites and analyzed for physico-chemical parameters (temperature, pH, electrical conductivity, total dissolved solids (TDS), total suspended solids (TSS), total alkalinity, total hardness, and calcium and magnesium concentrations), nutrients (nitrite, nitrate, and phosphate), and the concentrations of some heavy metals (Fe, Pb, Zn, Cu, Mn, and Cd); the measurements were compared against benchmarks such as guidelines and objectives to assess changes in water quality. Applying the Canadian index (CCME WQI) for irrigation water quality (IWQI), the highest value was 83 (Good) at site one during the second seasonal period, while the lowest was 66 (Fair) at the second station during the fourth seasonal period. For the potable water supply index (PWSI), the highest value was 68 (Fair) at the fifth site during the second period, while the lowest was 42 (Poor) at the second site during the first seasonal period. For general water quality (GWQI), the highest value was 74 (Fair) at site five during the second seasonal period, and the lowest was 48 (Marginal) at the second site during the first seasonal period.
It was observed that the main causes of deterioration in water quality were unprotected river sites, high anthropogenic activities, and the direct discharge of industrial effluent.
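The CCME WQI combines three factors: scope (F1, share of variables that fail their objective), frequency (F2, share of failed tests), and amplitude (F3, how far failed tests exceed their objectives). A minimal sketch with made-up readings and limits (not the study's data), assuming "must not exceed" guidelines:

```python
from math import sqrt

def ccme_wqi(tests, objectives):
    """CCME WQI: tests maps variable -> list of measured values;
    objectives maps variable -> maximum allowed value."""
    failed_vars = sum(1 for v, vals in tests.items()
                      if any(x > objectives[v] for x in vals))
    all_vals = [(v, x) for v, vals in tests.items() for x in vals]
    excursions = [x / objectives[v] - 1 for v, x in all_vals if x > objectives[v]]
    f1 = 100.0 * failed_vars / len(tests)            # scope
    f2 = 100.0 * len(excursions) / len(all_vals)     # frequency
    nse = sum(excursions) / len(all_vals)            # normalized sum of excursions
    f3 = nse / (0.01 * nse + 0.01)                   # amplitude
    # 1.732 rescales the 3-D vector length so the index spans 0-100.
    return 100.0 - sqrt(f1**2 + f2**2 + f3**2) / 1.732

# Toy example: every variable exceeds its limit at least once,
# so the score falls into the low ('Poor') range.
tests = {"TDS": [480, 520, 610], "NO3": [8, 12, 9], "Pb": [0.005, 0.02, 0.008]}
objectives = {"TDS": 500, "NO3": 10, "Pb": 0.01}
print(round(ccme_wqi(tests, objectives), 1))
```

The categories reported in the abstract follow the standard CCME bands: Good 80-94, Fair 65-79, Marginal 45-64, Poor 0-44.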

Keywords: Babylon governorate, Canadian version, water quality, Euphrates river

Procedia PDF Downloads 397
1534 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach

Authors: Jianli Jiang, Bai-Chen Xie

Abstract:

The climate issue has aroused global concern. Achieving sustainable development is a promising path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions, so evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating energy and environmental pressures. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats the decision-making units (DMUs) as independent, which neglects the interactions between them. Ignoring these inter-regional links may introduce a systematic bias into the efficiency analysis; for instance, renewable power generated in a certain region may benefit adjacent regions, while its SO2 and CO2 emissions act oppositely. This study proposes a spatial network DEA (SNDEA) with a slack measure that captures the spatial spillover effects of inputs/outputs among DMUs. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables show a significant spatial spillover effect. Compared with the classic network DEA, the SNDEA result shows an obvious difference, as tested by the global Moran's I index. From a dynamic perspective, the EE of the power system rises visibly from 2015 and then declines sharply from 2019, following the same trend as the power transmission department.
The rise reflects the market-oriented reform of the Chinese power grid enacted in 2015. The rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the COVID-19 epidemic, which seriously hindered economic development. The EE of the power generation department shows an overall declining trend, which is reasonable once RE power is taken into account: the installed capacity of RE power in 2020 was 4.40 times that of 2014, while power generation was only 3.97 times higher; in other words, generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power rises rapidly as RE generation increases. These two aspects explain the declining EE of the power generation department. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework that sheds light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industries and countries.
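The global Moran's I statistic mentioned above measures whether similar values cluster among spatial neighbors. A minimal sketch on a toy four-region example with binary contiguity weights (the efficiency scores and weight matrix are illustrative, not the study's data):

```python
def morans_i(values, weights):
    """Global Moran's I: I = (n / S0) * sum_ij(w_ij * z_i * z_j) / sum_i(z_i^2),
    where z_i are mean-centered observations and S0 is the sum of all weights."""
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * z[i] * z[j]
              for i in range(n) for j in range(n))
    den = sum(zi * zi for zi in z)
    return (n / s0) * (num / den)

# Hypothetical efficiency scores for four regions arranged in a chain;
# neighbors have similar scores, so I should be positive.
eff = [0.9, 0.8, 0.3, 0.2]
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(round(morans_i(eff, w), 3))
```

A significantly positive I indicates spatial clustering of efficiency (or of spillover-prone variables such as RE power), which is what motivates replacing classic DEA with the spatial formulation.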

Keywords: spatial network DEA, environmental efficiency, sustainable development, power system

Procedia PDF Downloads 107
1533 Wearable Antenna for Diagnosis of Parkinson’s Disease Using a Deep Learning Pipeline on Accelerated Hardware

Authors: Subham Ghosh, Banani Basu, Marami Das

Abstract:

Background: The development of compact, low-power antenna sensors has led to hardware restructuring, allowing for wireless ubiquitous sensing. Antenna sensors can form wireless body-area networks (WBANs) by linking wireless nodes across the human body. WBAN and IoT applications, such as remote health and fitness monitoring and rehabilitation, are becoming increasingly important. In particular, Parkinson's disease (PD), a common neurodegenerative disorder, presents clinical features that are easily misdiagnosed. As a movement disorder, its diagnosis may greatly benefit from the antenna's near-field sensing of a variety of activities, using WBAN and IoT technologies to increase diagnostic accuracy and improve patient monitoring. Methodology: This study investigates the feasibility of using a single patch antenna, mounted on the wrist dorsum with cloth, to differentiate actual Parkinson's disease from false PD on a small hardware platform. The semi-flexible antenna operates in the 2.4 GHz ISM band and collects reflection coefficient (Γ) data from patients performing five exercises designed to distinguish PD from other disorders, such as essential tremor (ET) or physiological tremors caused by anxiety or stress. The data are normalized and converted into 2-D representations using the Gabor wavelet transform (GWT), and data augmentation is then used to expand the dataset. A lightweight deep-learning (DL) model, developed to run on the GPU-enabled NVIDIA Jetson Nano platform, processes the 2-D images for feature extraction and classification. Findings: The DL model was trained and tested on both the original and augmented datasets, thus doubling the dataset size. To ensure robustness, 5-fold stratified cross-validation (5-FSCV) was used.
The proposed framework, utilizing a DL model with 1.356 million parameters on the NVIDIA Jetson Nano, achieved an accuracy of 88.64%, an F1-score of 88.54%, and a recall of 90.46%, with a latency of 33 seconds per epoch.
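The 5-fold stratified cross-validation described above splits the samples so that each fold preserves the class proportions of the full dataset. A minimal dependency-free sketch of the idea, on toy labels (the class names and sample counts are illustrative only):

```python
from collections import defaultdict

def stratified_kfold(labels, k=5):
    """Split sample indices into k folds that preserve class proportions,
    as in 5-fold stratified cross-validation (5-FSCV)."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        for pos, idx in enumerate(idxs):
            folds[pos % k].append(idx)  # deal each class out round-robin
    return folds

# Toy dataset: 10 'PD' and 10 'non-PD' samples.
labels = ["PD"] * 10 + ["non-PD"] * 10
folds = stratified_kfold(labels, k=5)
for f in folds:
    print(sorted(labels[i] for i in f))  # each fold: 2 'PD' and 2 'non-PD'
```

In practice a library routine (e.g. scikit-learn's `StratifiedKFold`, with shuffling) would be used, but the invariant is the same: every validation fold sees the same PD/non-PD ratio as the training data.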

Keywords: antenna, deep-learning, GPU-hardware, Parkinson’s disease

Procedia PDF Downloads 4
1532 A Diurnal Light Based CO₂ Elevation Strategy for Up-Scaling Chlorella sp. Production by Minimizing Oxygen Accumulation

Authors: Venkateswara R. Naira, Debasish Das, Soumen K. Maiti

Abstract:

Achieving high cell densities of microalgae under the obligatory light-limiting and high-light conditions of diurnal sunlight (low-high-low variation of daylight intensity) is further limited by CO₂ supply and dissolved oxygen (DO) accumulation in large-scale photobioreactors. High DO levels suppress growth through photoinhibition and/or photorespiration. Hence, scalable elevated CO₂ levels (% in air) and their effect on DO accumulation in a 10 L cylindrical membrane photobioreactor (a vertical tubular type) are studied here. Three CO₂ elevation strategies, namely biomass-based, pH-control-based (types I and II), and diurnal-light-based, were explored to study the growth of Chlorella sp. FC2 IITG under single-sided LED lighting in the laboratory, mimicking diurnal sunlight. All experiments were conducted in fed-batch mode, maintaining the N and P sources at no less than 50% of their initial concentrations in the optimized BG-11 medium. The biomass-based strategy (2% on day 1, 2.5% on day 2, and 3% thereafter) and the well-known pH-control-based type-I strategy (pH 5.8 throughout) were found lethal for FC2 growth. In both strategies, a peak DO accumulation of 150% air saturation resulted from the high photosynthetic activity caused by the higher CO₂ levels. In the type-I pH-control strategy, the CO₂ levels automatically applied for pH control were beyond the inhibitory range (5%). The pH-control-based type-II strategy (pH 5.8 for 2 days, 6.3 for 3 days, and 6.7 thereafter), however, achieved a final biomass titer of 4.45 ± 0.05 g L⁻¹ with a peak DO of 122% air saturation, although CO₂ levels beyond 5% (in air) were recorded thereafter; it thus proved sustainable for obtaining high biomass. Finally, a diurnal-light-based strategy (2% at low light, 2.5% at medium light, and 3% at high light) was applied on the basis that photosynthesis increases and decreases with diurnal light intensity.
This strategy yielded a maximum final biomass titer of 5.33 ± 0.12 g L⁻¹ and a total biomass productivity of 0.59 ± 0.01 g L⁻¹ day⁻¹, remarkably higher than with a constant 2% CO₂ level (final biomass titer: 4.26 ± 0.09 g L⁻¹; biomass productivity: 0.27 ± 0.005 g L⁻¹ day⁻¹). However, a peak DO of 135% air saturation was observed, so the diurnal-light-based elevation could be further improved by using CO₂-enriched N₂ instead of air. To the best of the authors' knowledge, a light-based CO₂ elevation strategy has not been reported elsewhere.
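Volumetric biomass productivity is simply the biomass gained per unit culture volume divided by the run time. A minimal sketch of the arithmetic; the 9-day run length and the ~0 g/L inoculum titer are assumptions for illustration (the abstract reports titer and productivity, not duration):

```python
def volumetric_productivity(final_titer, initial_titer, days):
    """Total volumetric biomass productivity in g L^-1 day^-1."""
    return (final_titer - initial_titer) / days

# Assumed 9-day fed-batch run starting from a negligible inoculum titer;
# this reproduces a value consistent with the reported ~0.59 g/L/day
# for the 5.33 g/L final titer.
p = volumetric_productivity(5.33, 0.0, 9.0)
print(round(p, 2))
```

Note that the constant-2% control (4.26 g/L at 0.27 g/L/day) implies a longer run under the same formula, so titers and productivities should not be compared without the corresponding cultivation times.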

Keywords: Chlorella sp., CO₂ elevation strategy, dissolved oxygen accumulation, diurnal light based CO₂ elevation, high cell density, microalgae, scale-up

Procedia PDF Downloads 124
1531 Performance Improvement of Long-Reach Optical Access Systems Using Hybrid Optical Amplifiers

Authors: Shreyas Srinivas Rangan, Jurgis Porins

Abstract:

Internet traffic has increased exponentially due to users' demand for high data rates, and the constantly growing metro and access networks have focused attention on extending the maximum transmission distance of long-reach optical networks. A common component-level method for extending this distance is the use of broadband optical amplifiers. The erbium-doped fiber amplifier (EDFA) provides high amplification with a low noise figure, but its operation is limited to the C-band and L-band. In contrast, the Raman amplifier exhibits a wide amplification spectrum, and negative effective noise figure values can be achieved; obtaining such results, however, requires high-power pump sources, which may pose fire hazards and damage the optical system. In this paper, we implement a hybrid optical amplifier configuration that combines an EDFA and a Raman amplifier to exploit the advantages of both and improve the reach of the system. Using this setup, we analyze the maximum transmission distance of the network by obtaining a correlation diagram between the length of the single-mode fiber (SMF) and the bit error rate (BER). The hybrid amplifier configuration is implemented in a wavelength division multiplexing (WDM) system targeting a BER of 10⁻⁹ with the NRZ modulation format, and the gain uniformity, signal-to-noise ratio (SNR), pump-source efficiency, and optical signal gain efficiency of the amplifier are studied in a mathematical modelling environment. Numerical simulations were implemented in RSoft OptSim based on the nonlinear Schrödinger equation, using the split-step Fourier method and the Monte Carlo method for estimating the BER.
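The 10⁻⁹ BER target above maps onto a received signal quality via the standard Gaussian-noise relation for NRZ on-off keying, BER = (1/2)·erfc(Q/√2). A minimal sketch of that relation (a textbook formula, not the paper's simulation model):

```python
from math import erfc, sqrt

def ber_from_q(q):
    """BER for an NRZ on-off-keyed signal under Gaussian noise:
    BER = 0.5 * erfc(Q / sqrt(2)), where Q is the quality factor."""
    return 0.5 * erfc(q / sqrt(2))

# Common rule of thumb: Q of about 6 corresponds to a BER near 1e-9,
# the threshold used in the WDM system described above.
print(f"{ber_from_q(6.0):.2e}")
```

In a simulated link, Q degrades with SMF length as amplifier noise and nonlinearity accumulate, which is what the correlation diagram of fiber length versus BER captures.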

Keywords: Raman amplifier, erbium doped fibre amplifier, bit error rate, hybrid optical amplifiers

Procedia PDF Downloads 68