Search results for: optimal binary linear codes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7150


1810 Designing Web Application to Simulate Agricultural Management for Smart Farmer: Land Development Department’s Integrated Management Farm

Authors: Panasbodee Thachaopas, Duangdorm Gamnerdsap, Waraporn Inthip, Arissara Pungpa

Abstract:

LDD’s IM Farm, the Land Development Department’s Integrated Management Farm, is an agricultural simulation application developed by the Land Development Department. The simulation game relies on actual data for growing 12 cash crops: rice, corn, cassava, sugarcane, soybean, rubber tree, oil palm, pineapple, longan, rambutan, durian, and mangosteen. On launching the game, players select their preferred cropping areas from a base map or an orthophoto map at scale 1:4,000. Farm management is simulated from field preparation to harvesting. The system uses soil group and present land use databases to tell the player which crops are suitable for each soil group, and integrates LDD’s data with data from other agencies: soil types, soil properties, soil problems, climate, cultivation cost, fertilizer use, fertilizer price, socio-economic data, plant diseases, weeds, pests, the interest rate for loans from the Bank for Agriculture and Agricultural Cooperatives (BAAC), labor cost, and market prices. These data affect the cost and yield of each crop differently. After completing a season, the player knows the yield, income, expense, and profit or loss, and can switch to crops better suited to the soil group for optimal yields and profits.
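The core bookkeeping the simulation performs per crop can be sketched as a simple income-minus-expense calculation. All figures and cost categories below are hypothetical placeholders, not LDD data:

```python
# Minimal sketch of the per-crop profit calculation a farm simulation performs.
# Every number and cost category here is a hypothetical placeholder.

def crop_profit(yield_kg_per_rai, price_per_kg, costs):
    """Return (income, expense, profit) for one simulated season."""
    income = yield_kg_per_rai * price_per_kg
    expense = sum(costs.values())  # preparation, fertilizer, labor, interest, ...
    return income, expense, income - expense

income, expense, profit = crop_profit(
    yield_kg_per_rai=450,
    price_per_kg=12.0,
    costs={"field_preparation": 800, "fertilizer": 1200,
           "labor": 2000, "loan_interest": 300},
)
print(income, expense, profit)  # 5400.0 4300 1100.0
```

A player comparing crops would simply run this calculation once per candidate crop and pick the largest profit.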

Keywords: agricultural simulation, smart farmer, web application, factors of agricultural production

Procedia PDF Downloads 187
1809 hsa-miR-1204 and hsa-miR-639 Prominent Role in Tamoxifen's Molecular Mechanisms on the EMT Phenomenon in Breast Cancer Patients

Authors: Mahsa Taghavi

Abstract:

Tamoxifen is a regularly prescribed medication in the treatment of breast cancer. We studied the effect of tamoxifen on the EMT pathways of breast cancer patients, to see whether it affects the cancer cells' resistance to tamoxifen and to identify specific miRNAs associated with EMT. We used continuous and integrated bioinformatics analysis to choose the optimal GEO datasets. After sorting the gene expression profiles, we examined the signaling mechanisms, gene ontology, and protein interactions of each gene, and used the GEPIA database to confirm the candidate genes. We then investigated critical miRNAs related to the candidate genes. Two gene expression profiles were categorized into two distinct groups. The first group was examined using the expression profile of genes downregulated in the EMT pathway; the second group represented the polar opposite of the first. A total of 253 genes from the first group and 302 genes from the second group were found in common. Several genes in the first group were linked to cell death, focal adhesion, and cellular aging, while genes in the second group were linked to distinct cell cycle stages. Finally, proteins such as MYLK, SOCS3, and STAT5B from the first group and BIRC5, PLK1, and RAPGAP1 from the second group were selected as potential candidates linked to tamoxifen's influence on the EMT pathway. hsa-miR-1204 and hsa-miR-639 have a very close relationship with the candidate genes according to node degree and the betweenness index. These results clarify the action of tamoxifen on the EMT pathway; learning more about how tamoxifen's target genes and proteins work will further our understanding of the drug.
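Ranking network nodes by degree, as done above for the miRNA candidates, can be sketched as follows. The edge list is purely illustrative, not the study's interaction network, and a real analysis would also compute betweenness centrality (e.g. with Brandes' algorithm):

```python
from collections import Counter

# Hypothetical miRNA-to-target edges; a real analysis uses the full
# protein/miRNA interaction network, not this toy list.
edges = [
    ("hsa-miR-1204", "MYLK"), ("hsa-miR-1204", "SOCS3"), ("hsa-miR-1204", "STAT5B"),
    ("hsa-miR-639", "BIRC5"), ("hsa-miR-639", "PLK1"),
    ("miR-placeholder", "PLK1"),  # a deliberately low-degree dummy node
]

# Node degree of each miRNA = number of candidate genes it targets.
degree = Counter(mirna for mirna, _ in edges)
ranked = sorted(degree, key=degree.get, reverse=True)
print(ranked[0], degree[ranked[0]])  # hsa-miR-1204 3
```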

Keywords: tamoxifen, breast cancer, bioinformatics analysis, EMT, miRNAs

Procedia PDF Downloads 117
1808 Expert System for Road Bridge Constructions

Authors: Michael Dimmer, Holger Flederer

Abstract:

The basis of any construction project is a technically flawless concept that satisfies environmental and cost conditions as well as structural requirements. The presented software system actively supports civil engineers in developing optimal designs by giving advice on durability, life-cycle costs, sustainability, and more. A major part of the boundary conditions of a design process is gathered and assimilated subconsciously by experienced engineers: it is a question of eligible building techniques and their practicability, considering the costs they entail. Planning engineers acquire much of this experience during their professional lives and use it in their daily work. Occasionally, though, the planning engineer should step back from this experience to be open to new and better solutions that also meet the functional demands. The developed expert system gives planning engineers recommendations for preferred design options for new constructions as well as for existing bridges. It can analyze construction elements and techniques with regard to sustainability and life-cycle costs, and thus provides recommendations for future constructions. Furthermore, there is an option to assess existing road bridges specifically for heavy-duty transport: a route-planning tool gives quick and reliable information on whether the bridge support structures along a transport route have sufficient capacity for a given heavy-duty transport. The use of this expert system in bridge planning companies and building authorities will save substantial costs for new and existing bridge constructions, because its planning recommendations consistently consider parameters such as life-cycle costs and sustainability.

Keywords: expert system, planning process, road bridges, software system

Procedia PDF Downloads 263
1807 Cryptocurrency as a Payment Method in the Tourism Industry: A Comparison of Volatility, Correlation and Portfolio Performance

Authors: Shu-Han Hsu, Jiho Yoon, Chwen Sheu

Abstract:

With the rapid growth of blockchain technology and cryptocurrency, various industries, including tourism, have added cryptocurrency as a payment method for their transactions. More and more tourism companies accept payment in digital currency for flights, hotel reservations, transportation, and more. For travellers and tourists, paying with cryptocurrency has become a way to reduce costs and avoid certain risks. Understanding volatility dynamics and the interdependencies between standard currencies and cryptocurrencies is important for financial risk management and can assist policy-makers and investors in making more informed decisions. The purpose of this paper is to understand and explain the risk spillover effects between six major cryptocurrencies and the ten most traded standard currencies, using daily closing prices of the cryptocurrencies and currency exchange rates from 7 August 2015 to 10 December 2019 (1,133 observations). The diagonal BEKK model was used to analyze the co-volatility spillover effects between cryptocurrency returns and exchange rate returns, that is, how shocks to returns in different assets affect each other's subsequent volatility. The empirical results show co-volatility spillover effects between cryptocurrency returns and the GBP/USD, CNY/USD, and MXN/USD exchange rate returns. Therefore, these currencies (British Pound, Chinese Yuan, and Mexican Peso) and cryptocurrencies (Bitcoin, Ethereum, Ripple, Tether, Litecoin, and Stellar) are suitable for constructing a financial portfolio from an optimal risk management perspective, and also for dynamic hedging purposes.
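The diagonal BEKK recursion behind these co-volatility estimates can be sketched for two assets. The parameters `a`, `b`, `c` below are illustrative choices, not the fitted coefficients from the study:

```python
# Two-asset diagonal BEKK(1,1) conditional covariance recursion (a sketch of the
# model's update rule, not an estimator). All parameter values are illustrative.
def bekk_step(h, eps, a, b, c):
    """One update of the 2x2 conditional covariance matrix h (nested lists)."""
    new = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            # c: constant term, a: ARCH (shock) loadings, b: GARCH (persistence)
            new[i][j] = c[i][j] + a[i] * a[j] * eps[i] * eps[j] + b[i] * b[j] * h[i][j]
    return new

h = [[1.0, 0.1], [0.1, 1.0]]           # initial conditional covariance
for eps in [(0.5, -0.2), (1.1, 0.9)]:  # two periods of return shocks
    h = bekk_step(h, eps, a=(0.3, 0.25), b=(0.9, 0.92),
                  c=[[0.05, 0.01], [0.01, 0.05]])
print(round(h[0][1], 4))  # the off-diagonal term: the co-volatility channel
```

The off-diagonal element is exactly the co-volatility spillover channel the abstract analyzes: a joint shock to both returns raises the next period's conditional covariance.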

Keywords: blockchain, co-volatility effects, cryptocurrencies, diagonal BEKK model, exchange rates, risk spillovers

Procedia PDF Downloads 128
1806 GA3C for Anomalous Radiation Source Detection

Authors: Chia-Yi Liu, Bo-Bin Xiao, Wen-Bin Lin, Hsiang-Ning Wu, Liang-Hsun Huang

Abstract:

In order to reduce the radiation damage that personnel may suffer during operations in a radiation environment, using automated guided vehicles to assist or replace on-site personnel has become a key technology and an important trend. In this paper, we demonstrate a proof of concept for an autonomous, self-learning radiation source searcher operating in an unknown environment without a map. The research uses the GPU version of the Asynchronous Advantage Actor-Critic network (GA3C), a deep reinforcement learning method, to search for radiation sources. The searcher network, based on the GA3C architecture, learned and improved how to search for an anomalous radiation source through 1 million training episodes in three simulation environments. In each training episode, the radiation source position, source intensity, and starting position are all set randomly within one simulation environment. The input to the searcher network is the fused data from a 2D laser scanner and an RGB-D camera together with the reading of the radiation detector; the output actions are the linear and angular velocities. The searcher network is trained in simulation to accelerate the learning process. The well-performing searcher network was then deployed on a real unmanned vehicle, a Dashgo E2, which mounts a YDLIDAR G4 LIDAR, an Intel D455 RGB-D camera, and a radiation detector made by the Institute of Nuclear Energy Research. In the field experiment, the unmanned vehicle was able to find an 18.5 MBq Na-22 radiation source by itself while simultaneously avoiding obstacles, without human interference.
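The advantage signal that drives actor-critic updates in A3C/GA3C can be sketched in a few lines. The rewards and value estimates below are toy numbers; the real searcher maps fused LIDAR/camera/detector inputs to velocity commands through a neural network:

```python
# The "advantage" at the heart of A3C/GA3C: how much better the discounted
# return after an action was than the critic's value estimate. Toy sketch only.
def advantages(rewards, values, gamma=0.99):
    returns, g = [], 0.0
    for r in reversed(rewards):       # discounted return, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    return [ret - v for ret, v in zip(returns, values)]

# One short episode: small step penalties, a big reward for finding the source.
adv = advantages(rewards=[-0.1, -0.1, 10.0], values=[5.0, 6.0, 9.0])
print([round(a, 3) for a in adv])  # [4.602, 3.8, 1.0]
```

Positive advantages increase the probability of the chosen actions (here, all steps turned out better than the critic expected), which is how the searcher gradually improves its policy.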

Keywords: deep reinforcement learning, GA3C, source searching, source detection

Procedia PDF Downloads 97
1805 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field in biometric technology, and more and more researchers use EEG signals as a data source for biometrics. However, biometrics based on EEG signals also has disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures were deployed as the feature set: sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE). In a silhouette calculation, the distance from each data point to every other point within the same cluster and to all data points in the closest cluster is determined. Silhouettes thus measure how well a data point was classified when it was assigned to a cluster, as well as the separation between clusters, which makes them well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance on each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); (3) there is no significant difference in authentication performance among feature sets (except feature PE).
Conclusion: The combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
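The silhouette computation described above can be sketched directly from its definition for a toy one-dimensional clustering (a hypothetical example, not the EEG data):

```python
# Silhouette score from scratch for toy 1-D data, following the definition:
# a = mean distance to one's own cluster, b = mean distance to the nearest
# other cluster, s = (b - a) / max(a, b), averaged over all points.
def silhouette(points, labels):
    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        own = [abs(p - q) for j, (q, l) in enumerate(zip(points, labels))
               if l == lab and j != i]
        others = {}
        for q, l in zip(points, labels):
            if l != lab:
                others.setdefault(l, []).append(abs(p - q))
        a = sum(own) / len(own)
        b = min(sum(d) / len(d) for d in others.values())
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

score = silhouette([1.0, 1.2, 0.8, 8.0, 8.3], [0, 0, 0, 1, 1])
print(round(score, 3))  # 0.961 -- tight, well-separated clusters
```

Scores near 1 indicate compact, well-separated clusters; scores near 0 or below indicate overlapping clusters, which is exactly the quality signal used to compare EEG feature sets.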

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 267
1804 Comparative Fragility Analysis of Shallow Tunnels Subjected to Seismic and Blast Loads

Authors: Siti Khadijah Che Osmi, Mohammed Ahmad Syed

Abstract:

Underground structures are crucial components that require detailed analysis and design. Tunnels, for instance, are built extensively as transportation infrastructure and utility networks, especially in urban environments. Given their prime importance to the economy and public safety, any instability in these tunnels will be highly detrimental to their performance. Recent experience suggests that tunnels become vulnerable during earthquake and blast scenarios, yet only a limited number of studies have examined the dynamic response and performance of underground tunnels under such unpredictable extreme hazards. In view of the importance of enhancing the resilience of these structures, the overall aim of the study is to evaluate the probabilistic future performance of shallow tunnels subjected to seismic and blast loads by developing a detailed fragility analysis. Critical non-linear time history numerical analyses were performed using the finite element software Midas GTS NX, taking into consideration structural typology, ground motion and explosive characteristics, the effect of soil conditions, and other associated uncertainties affecting tunnel integrity that may ultimately lead to catastrophic failure of the structure. The proposed fragility curves for both extreme loadings are discussed and compared; they provide significant information on the performance of the tunnel under extreme hazards, which may be beneficial for future risk assessment and loss estimation.
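Fragility curves such as those proposed here are commonly parameterized as a lognormal CDF of the intensity measure. A minimal sketch, with illustrative median capacity and dispersion values rather than results from the paper:

```python
import math

# A fragility curve as a lognormal CDF: probability of reaching a damage state
# given an intensity measure im. theta (median capacity, in g) and beta
# (dispersion) below are illustrative values, not the paper's fitted parameters.
def fragility(im, theta, beta):
    z = math.log(im / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Probability of damage at increasing peak ground acceleration (g):
curve = [round(fragility(pga, theta=0.45, beta=0.5), 3) for pga in (0.1, 0.45, 1.0)]
print(curve)  # monotonically increasing; exactly 0.5 at the median capacity
```

Seismic and blast fragilities can then be compared simply by plotting the two fitted (theta, beta) curves against their respective intensity measures.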

Keywords: fragility analysis, seismic loads, shallow tunnels, blast loads

Procedia PDF Downloads 322
1803 Optimization of Smart Beta Allocation by Momentum Exposure

Authors: J. B. Frisch, D. Evandiloff, P. Martin, N. Ouizille, F. Pires

Abstract:

Smart Beta strategies aim to be an asset management revolution relative to classical cap-weighted indices. These strategies allow better control of a portfolio's risk factors and an optimized asset allocation, by taking specific risks into account or by seeking to generate alpha by outperforming the index ('Beta'). Among the many strategies in use, this paper focuses on four: the Minimum Variance Portfolio, the Equal Risk Contribution Portfolio, the Maximum Diversification Portfolio, and the Equal-Weighted Portfolio. Their efficiency has been demonstrated under conditions such as momentum or particular market phenomena, suggesting a reconsideration of cap-weighting.
To further increase strategy return efficiency, we propose comparing their strengths and weaknesses within time intervals corresponding to specific, identifiable market phases, in order to define adapted strategies for pre-specified situations.
Results are presented as performance curves for different combinations compared to a benchmark. If a combination outperforms the applicable benchmark in well-defined actual market conditions, it is preferred. We show that such investment 'rules', based on both historical data and the evolution of Smart Beta strategies, and implemented according to available market data, provide very interesting optimal results, with higher return performance and lower risk.
Such combinations have not been fully exploited yet, which justifies the present approach aimed at identifying the relevant elements that characterize them.
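As an illustration of one of the four building blocks, the two-asset Minimum Variance Portfolio has a closed-form solution. The volatilities and correlation below are hypothetical:

```python
# Closed-form minimum variance weights for two assets, one of the four Smart
# Beta building blocks discussed above. Inputs are illustrative, not market data.
def min_variance_weights(sigma1, sigma2, rho):
    cov = rho * sigma1 * sigma2
    w1 = (sigma2**2 - cov) / (sigma1**2 + sigma2**2 - 2.0 * cov)
    return w1, 1.0 - w1

w1, w2 = min_variance_weights(sigma1=0.20, sigma2=0.10, rho=0.3)
print(round(w1, 3), round(w2, 3))  # 0.105 0.895
```

As expected, the less volatile asset receives the larger weight; with more assets the same objective is solved numerically over the full covariance matrix.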

Keywords: smart beta, minimum variance portfolio, equal risk contribution portfolio, maximum diversification portfolio, equal weighted portfolio, combinations

Procedia PDF Downloads 326
1802 Fault Analysis of Induction Machine Using Finite Element Method (FEM)

Authors: Wiem Zaabi, Yemna Bensalem, Hafedh Trabelsi

Abstract:

The paper presents a finite element (FE) based efficient analysis procedure for the induction machine (IM). Two FE formulations are used to achieve this goal: the magnetostatic formulation and the non-linear, transient, time-stepped formulation. Studies based on finite element models offer much more information on the phenomena characterizing the operation of electrical machines than classical analytical models, which explains the increasing interest in finite element investigations of electrical machines. Based on finite element models, this paper studies the influence of stator and rotor faults on the behavior of the IM. A simple dynamic model of an IM with an inter-turn winding fault and a broken bar fault is presented, and this fault model is used to study the IM under various fault conditions and severities. Simulations are conducted to validate the fault model for different levels of fault severity, and comparison of the simulation results verifies the precision of the proposed FEM model. The paper then presents a detection method based on Fast Fourier Transform (FFT) analysis of the stator current and the electromagnetic torque to detect broken rotor bar faults. The technique used and the obtained results clearly show the possibility of extracting signatures to detect and locate faults.
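The FFT-based signature extraction can be illustrated with a naive DFT on a synthetic stator-current signal (in practice numpy.fft would be used). The 50 Hz supply and the 42 Hz 'sideband' below are synthetic choices, not measurements:

```python
import math

# Broken-bar signatures appear as sidebands around the supply frequency in the
# stator-current spectrum. Naive DFT sketch on a synthetic signal: a 50 Hz
# "supply" component plus a smaller 42 Hz "sideband". Both are made up.
N, fs = 256, 512.0
signal = [math.sin(2 * math.pi * 50 * n / fs)
          + 0.3 * math.sin(2 * math.pi * 42 * n / fs) for n in range(N)]

def dft_magnitude(x, k):
    re = sum(v * math.cos(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    im = -sum(v * math.sin(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    return math.hypot(re, im)

# Bin k corresponds to frequency k * fs / N = 2k Hz: 50 Hz -> k=25, 42 Hz -> k=21.
peaks = {2 * k: round(dft_magnitude(signal, k), 1) for k in (21, 25)}
print(peaks)  # both the supply line and the fault sideband stand out
```

A monitoring scheme flags a fault when the sideband magnitude exceeds a threshold relative to the supply-frequency peak.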

Keywords: Finite element Method (FEM), Induction motor (IM), short-circuit fault, broken rotor bar, Fast Fourier Transform (FFT) analysis

Procedia PDF Downloads 282
1801 Assessment of Personal-Level Exposures to Particulate Matter among Children in Rural Primary Schools as Indoor Air Pollution Monitoring

Authors: Seyedtaghi Mirmohammadi, J. Yazdani, S. M. Asadi, M. Rokni, A. Toosi

Abstract:

Many indoor air quality studies emphasize the monitoring of indoor particulate matter (PM2.5). However, there is a lack of data on indoor PM2.5 concentrations in rural schools (especially in classrooms), even though primary school children are assumed to be more vulnerable to health hazards and spend a large part of their time in classrooms. The objective of this study was to assess indoor PM2.5 concentrations. Fifteen primary schools in the rural district of Sari city, Iran, were selected, and time-series sampling was used to evaluate indoor air quality. Indoor climate parameters (temperature, relative humidity, and wind speed) were measured with a hygrometer and thermometer. Particulate matter (PM2.5) was collected and assessed with a real-time dust monitor (MicroDust Pro, Casella, UK). The mean indoor PM2.5 concentration in the studied classrooms was 135 µg/m3. Multiple linear regression revealed correlations between PM2.5 concentration and relative humidity, distance from the city center, and classroom size. Classroom size showed a reasonable negative relationship: PM2.5 concentrations ranged from 65 to 540 µg/m3, statistically significant at the 0.05 level, while relative humidity ranged from 70 to 85% and dry bulb temperature from 28 to 29°C, statistically significant at the 0.035 and 0.05 levels, respectively. A statistical predictive model for PM2.5 and indoor psychrometric parameters was obtained from multiple regression modeling.
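The building block of such a regression model is ordinary least squares. A minimal single-predictor sketch, with hypothetical (humidity, PM2.5) pairs rather than the study's measurements:

```python
# Ordinary least squares for one predictor, the building block of the multiple
# regression model above. The (humidity, PM2.5) pairs are hypothetical toy data.
def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # slope, intercept

humidity = [70, 75, 80, 85]        # %RH
pm25 = [100, 120, 140, 160]        # µg/m3, deliberately perfectly linear
slope, intercept = ols(humidity, pm25)
print(slope, intercept)  # 4.0 -180.0
```

With several predictors (humidity, distance, classroom size) the same least-squares criterion is solved via the normal equations instead of this closed form.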

Keywords: particulate matters, classrooms, regression, concentration, humidity

Procedia PDF Downloads 297
1800 Challenges for Adopting Circular Economy Toward Business Innovation and Supply Chain

Authors: Kapil Khanna, Swee Kuik, Joowon Ban

Abstract:

The current linear economic system is unsustainable due to its dependence on the uncontrolled exploitation of diminishing natural resources. The integration of business innovation and supply chain management has brought about the redesign of business processes through the implementation of a closed-loop approach. The circular economy (CE) offers a sustainable way to improve business opportunities in the near future by following the principles of rejuvenation and reuse inspired by nature. Business owners are starting to rethink their processes and to consider using waste as raw material for new consumer products. Implementing CE helps organisations incorporate new strategic plans for decreasing the use of virgin materials and natural resources. Geographically dispersed supply chain partners rely heavily on innovative approaches to support supply chain management. Numerous studies have attempted to establish the concept of supply chain management (SCM) integrated with CE principles, commonly denoted circular SCM. While many scholars have recognised the challenges of transitioning to CE, there is still a lack of consensus on the business best practices that can help companies embrace CE across the supply chain. Hence, this paper strives to scrutinize the SCM practices utilised for CE, identify the obstacles, and recommend best practices that can enhance a company's ability to incorporate CE principles into business innovation and supply chain performance. Further, the paper proposes future research on specific technologies, such as artificial intelligence, the Internet of Things, and blockchain, as business innovation tools for supply chain management and CE adoption.

Keywords: business innovation, challenges, circular supply chain, supply chain management, technology

Procedia PDF Downloads 78
1799 IoT and Deep Learning Approach for Growth Stage Segregation and Harvest Time Prediction of Aquaponic and Vermiponic Swiss Chards

Authors: Praveen Chandramenon, Andrew Gascoyne, Fideline Tchuenbou-Magaia

Abstract:

Aquaponics offers a compelling solution to the world's food and environmental crises. The approach combines aquaculture (growing fish) with hydroponics (growing vegetables and plants without soil). Smart aquaponics explores the use of smart technology, including artificial intelligence and IoT, to assist farmers with better decision making and with online monitoring and control of the system. Identifying the growth stages of Swiss chard plants and predicting their harvest time are important for aquaponic yield management. This paper presents a comparative analysis of standard aquaponics and vermiponics (aquaponics with worms), grown in a controlled environment, by implementing IoT and deep learning-based growth stage segregation and harvest time prediction of Swiss chard before and after applying an optimal freshwater replenishment. Data collection, growth stage classification, and harvest time prediction were performed with and without water replenishment. The paper discusses the experimental design, the IoT and sensor communication architecture, the data collection process, image segmentation, the various regression and classification models, and the error estimation used in the project. It concludes with a comparison of results, including the best-performing models for growth stage segregation and harvest time prediction on the aquaponic and vermiponic testbeds with and without freshwater replenishment.

Keywords: aquaponics, deep learning, internet of things, vermiponics

Procedia PDF Downloads 53
1798 A Constructivist Grounded Theory Study on the Impact of Automation on People and Gardening

Authors: Hamilton V. Niculescu

Abstract:

Following a three-year study of eighteen Irish people who grow vegetables in various community gardens around Dublin, Republic of Ireland, it was found that the addition of automated features aimed at improving agricultural practices was regarded as potentially beneficial and as a great tool for closely monitoring climate conditions inside the greenhouses. The participants were provided with a free custom-built mobile app through which they could remotely monitor and control features such as irrigation, air ventilation, and windows to ensure optimal growing conditions for vegetables in purpose-built greenhouses. While initial interest was generally high, within weeks the participants' level of interaction with the enclosures slowly declined. Employing a constructivist grounded theory methodology, with focus group discussions, in-depth semi-structured interviews, and observations, the study revealed that participants' trust in newer technologies, and in renewables in particular, was low. There are various reasons for this, but because the participants in this study are mainly working-class people, it can be argued that lack of education and knowledge are the main barriers to the adoption of innovations. Most participants eventually decided to "set and forget" the systems in automatic working mode, indicating that introducing people to assistive technologies also introduced some unintended consequences into their lifestyle. It is argued that this also shows that people initially "read" newer technologies and only adopt those features that they find useful and less intrusive with regard to their current lifestyle.

Keywords: automation, communication, greenhouse, sustainable

Procedia PDF Downloads 106
1797 Process Monitoring Based on Parameterless Self-Organizing Map

Authors: Young Jae Choung, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a popular technique for process monitoring. A widely used tool in SPC is the control chart, which detects abnormal process states and helps maintain the process in a controlled state. Traditional control charts, such as Hotelling's T2 chart, are effective for detecting abnormal observations and monitoring processes. However, many complicated manufacturing systems exhibit nonlinearity because of changing market demands, and in such cases the indiscriminate use of a traditional linear modeling approach may not be effective. In practice, many industrial processes have nonlinear and time-varying properties because of fluctuations in raw materials, slow shifts of set points, aging of the main process components, seasonal effects, and catalyst deactivation. Using traditional SPC techniques with time-varying data degrades the performance of the monitoring scheme. To address these issues, the present study proposes a parameterless self-organizing map (PLSOM)-based control chart. The PLSOM-based control chart can not only manage situations where the distribution or parameters of the target observations change, but also address the nonlinearity of modern manufacturing systems. The control limits of the proposed PLSOM chart are established by estimating the empirical significance level on the percentile using a bootstrap method. Experimental results with simulated data and actual process data from a thin-film transistor-liquid crystal display process demonstrate the effectiveness and usefulness of the proposed chart.
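The bootstrap percentile procedure for setting a control limit can be sketched as follows. The in-control monitoring statistics are simulated here, not PLSOM outputs:

```python
import random

# Bootstrap estimate of an upper control limit: resample in-control monitoring
# statistics with replacement and average an empirical upper percentile, in the
# spirit of the PLSOM chart's limit estimation. The data below are simulated.
random.seed(0)
in_control = [abs(random.gauss(0, 1)) for _ in range(200)]

def bootstrap_limit(stats, alpha=0.01, n_boot=500):
    limits = []
    for _ in range(n_boot):
        resample = sorted(random.choices(stats, k=len(stats)))
        # empirical (1 - alpha) percentile of this bootstrap resample
        limits.append(resample[int((1 - alpha) * len(resample)) - 1])
    return sum(limits) / len(limits)

ucl = bootstrap_limit(in_control)
print(round(ucl, 2))  # upper control limit; new points above it raise an alarm
```

New observations whose monitoring statistic exceeds `ucl` are flagged as out of control; by construction only about alpha of in-control points should do so.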

Keywords: control chart, parameter-less self-organizing map, self-organizing map, time-varying property

Procedia PDF Downloads 255
1796 Pellegrini-Stieda Syndrome: A Physical Medicine and Rehabilitation Approach

Authors: Pedro Ferraz-Gameiro

Abstract:

Introduction: The Pellegrini-Stieda lesion is the result of post-traumatic calcification and/or ossification of the medial collateral ligament (MCL) of the knee. When this calcification is accompanied by gonalgia and limited knee flexion, it is called Pellegrini-Stieda syndrome. The pathogenesis is probably calcification of a post-traumatic hematoma at least three weeks after the initial trauma, or secondary to repetitive microtrauma. On anteroposterior radiographs, a Pellegrini-Stieda lesion appears as a linear vertical ossification or calcification of the proximal portion of the MCL, usually near the medial femoral condyle. Patients with Pellegrini-Stieda syndrome present knee pain associated with loss of range of motion. Treatment is usually conservative, with analgesic and anti-inflammatory drugs, either systemic or intra-articular. Physical medicine and rehabilitation techniques associated with shock wave therapy can help reduce pain and inflammation. Patients with persistent instability and significant limitation of knee mobility may require surgical excision. Methods: Research was conducted in PubMed Central using the term Pellegrini-Stieda syndrome. Discussion/conclusion: Medical treatment is the rule, with initial rest, anti-inflammatories, and physiotherapy. If left untreated, the ossification can form a significant bone mass, which can compromise the range of motion of the knee. Physical medicine and rehabilitation techniques associated with shock wave therapy are a way to reduce pain and inflammation.

Keywords: knee, Pellegrini-Stieda syndrome, rehabilitation, shock waves therapy

Procedia PDF Downloads 117
1795 Analysis of the Influence of Frequency Variation on the Characterization of Nano-Particles in the Pretreatment of Bioethanol from Oil Palm Stem (Elaeis guineensis Jacq.) Using a Sonication Method with Alkaline Peroxide Activators to Improve Cellulose Content

Authors: Luristya Nur Mahfut, Nada Mawarda Rilek, Ameiga Cautsarina Putri, Mujaroh Khotimah

Abstract:

The use of bioethanol from lignocellulosic material has begun to be developed. In Indonesia, the most abundant lignocellulosic material is oil palm stem, which contains 32.22% cellulose; Indonesia produces approximately 300,375,000 tons of palm stem each year. To produce bioethanol from lignocellulosic material, the first process is pretreatment, but current methods of lignocellulosic pretreatment are ineffective: suboptimal particle size and pretreatment methods lead to insufficient breakdown of lignin, so the increase in cellulose content is not significant, resulting in a low bioethanol yield. To solve this problem, this research implemented an ultrasonication pretreatment process in order to produce more pulp with nano-sized particles, which should give a higher ethanol yield from palm stem. The research used a randomized block design (RAK) with one factor, the ultrasonic wave frequency, at three levels (30 kHz, 40 kHz, and 50 kHz), with the NaOH concentration held constant. The analysis examined the influence of the wave frequency on the increase in cellulose content and on the change in particle size at the nanometer scale during pretreatment, using particle size analysis (PSA) and the Chesson method. ANOVA and the least significant difference (BNT) test with a 5% confidence interval were used to analyze the results and identify the best treatment. The best treatment was obtained with combination X3 (sonication frequency of 50 kHz), yielding lignin (19.6%), cellulose (59.49%), and hemicellulose (11.8%) with a particle size of 385.2 nm (18.8%).

Keywords: bioethanol, pretreatment, oil palm stem, cellulose

Procedia PDF Downloads 315
1794 Distributed Generation Connection to the Network: Obtaining Stability Using Transient Behavior

Authors: A. Hadadi, M. Abdollahi, A. Dustmohammadi

Abstract:

The growing use of DGs (distributed generators) in distribution networks provides many advantages but also causes new problems, which should be anticipated and solved with appropriate measures. One of these problems is the transient voltage drop and short circuit in the electrical network in the presence of distributed generation, which can lead to instability. A short circuit can cause loss of generator synchronism; if the generator can recover synchronism after the faulty section is removed, the system remains stable. To increase system reliability and generator lifetime, strategies should be planned that apply even in situations where a fault would otherwise force generators to separate. In this paper, a fault current limiter is installed to prevent the DGs from separating from the grid when a fault occurs. Furthermore, an innovative objective function is applied to determine the optimal impedance of the fault current limiter in order to improve the transient stability of the distributed generation. The fault current limiter prevents the sudden acceleration of the generator rotor after a fault occurs and thereby improves the network's transient stability by reducing the current flow in a fast and effective manner. In fact, by inserting the impedance created by the fault current limiter into the path of the current injected by the DG toward the fault location, the critical fault clearing time improves remarkably; the protective relay therefore has more time to clear the fault and isolate the fault zone without instability. Finally, different transient scenarios for the connection of small-scale synchronous generators to the distribution network are presented.

Keywords: critical clearing time, fault current limiter, synchronous generator, transient stability, transient states

Procedia PDF Downloads 179
1793 Physiological Normoxia and Cellular Adhesion of Diffuse Large B-Cell Lymphoma Primary Cells: Real-Time PCR and Immunohistochemistry Study

Authors: Kamila Duś-Szachniewicz, Kinga M. Walaszek, Paweł Skiba, Paweł Kołodziej, Piotr Ziółkowski

Abstract:

Cell adhesion is of fundamental importance in cell communication, signaling, and motility, and its dysfunction occurs prevalently during cancer progression. Knowledge of the molecular and cellular processes involved in abnormal cancer cell adhesion has greatly increased, focused mainly on cellular adhesion molecules (CAMs) and the tumor microenvironment. Unfortunately, most of the data regarding CAM expression relate to studies on cells maintained at a standard oxygen concentration of 21%, while emerging evidence suggests that culturing cells in ambient air is far from physiological: oxygen in human tissues in fact ranges from 1 to 11%. The aim of this study was to compare the effects of physiological lymph node normoxia (5% O2) and hyperoxia (21% O2) on the expression of cellular adhesion molecules in primary diffuse large B-cell lymphoma (DLBCL) cells isolated from 10 lymphoma patients. Quantitative RT-PCR and immunohistochemistry were used to confirm the differential, oxygen-dependent expression of several CAMs, including ICAM, CD83, CD81, and CD44. Our findings also suggest that DLBCL cells maintained at ambient O2 (21%) exhibit a reduced growth rate and migration ability compared to cells grown under normoxic conditions. Taking all these observations into account, we emphasize the need to identify optimal human cell culture conditions that mimic the physiological aspects of tumor growth and differentiation.

Keywords: adhesion molecules, diffuse large B-cell lymphoma, physiological normoxia, quantitative RT-PCR

Procedia PDF Downloads 264
1792 Impact of Drought in Farm Level Income in the United States

Authors: Anil Giri, Kyle Lovercamp, Sankalp Sharma

Abstract:

Farm level incomes fluctuate significantly due to extreme weather events such as drought. In the light of recent extreme weather events, it is important to understand the implications of flood and drought on farm level incomes. This study examines the variation in farm level incomes for the United States in drought and no-drought years. Factoring in heterogeneity across enterprises (crop, livestock) and geography, this paper analyzes the impact of drought on farm level incomes at the state and national level. Preliminary results show that the livestock industry is affected more, through the lag in production of the crops used as feed inputs. Furthermore, preliminary results also show that while crop producers were not much affected by drought, as the price and quantity effects worked in opposite directions with similar magnitude, that was not the case for livestock and horticulture enterprises. Results also showed that even when the price effect was not as high, the crop insurance component helped absorb much of the shock for crop producers. Finally, the effect was heterogeneous across states, with coastal states affected more than the Midwest region. This study should generate considerable interest from policy makers across the world, as some countries are actively seeking to increase subsidies in their agriculture sector; it shows how subsidies absorb shocks for some enterprises more than others. Finally, this paper should also give economists insight for designing and recommending policies that are optimal given the production levels of different enterprises in different countries.

Keywords: farm level income, United States, crop, livestock

Procedia PDF Downloads 265
1791 Obsession of Time and the New Musical Ontologies. The Concert for Saxophone, Daniel Kientzy and Orchestra by Myriam Marbe

Authors: Dutica Luminita

Abstract:

For the composer Myriam Marbe, musical time and memory represent two complementary phenomena with a conclusive impact on the establishment of new musical ontologies. Summarizing the most important achievements of contemporary composition techniques, her vision of the microform presented in The Concert for Daniel Kientzy, saxophone and orchestra transcends linear, unidirectional time in favour of a flexible, multi-vectorial discourse with spiral developments, where the sound substance is auto(re)generated by analogy with the fundamental processes of memory. The conceptual model is of an archetypal essence, the composer being concerned with identifying the mechanisms of the creative process, especially those specific to collective creation (of oral tradition). Hence the spontaneity of expression, the improvisatory tint, free rhythm, micro-interval intonation, and a coloristic-timbral universe dominated by multiphonics and unique sound effects. Hence, too, the atmosphere of ritual, purged however of its primary connotations and reprojected into a wonderful spectacular space. The Concert is a work of artistic maturity and commands respect, among other things, for the timbral diversity of the three species of saxophone required by the composer (baritone, sopranino, and alto); in Part III, Daniel Kientzy performs on two saxophones simultaneously. Myriam Marbe's score contains deeply spiritualized music, full of archetypal symbols, whose drama suggests a genuinely cinematographic movement.

Keywords: archetype, chronogenesis, concert, multiphonics

Procedia PDF Downloads 528
1790 The Imminent Other in Anna Deavere Smith’s Performance

Authors: Joy Shihyi Huang

Abstract:

This paper discusses the concept of community in Anna Deavere Smith’s performance, one that challenges and explores existing notions of justice and the other. In contrast to unwavering assumptions of essentialism that have helped to propel a discourse on moral agency within the black community, Smith employs postmodern ideas in which the theatrical attributes of doubling and repetition are conceptualized as part of what Marvin Carlson termed a ‘memory machine.’ Her dismissal of linear time (such as that regulated by Aristotle’s Poetics, with its concomitant ethics, values, and emotions) as the primary ontological and epistemological construct of existing African American historiography demonstrates an urgency to produce an alternative communal self, one that overrides the metanarratives within which African Americans’ lives are contained and sublated by specific historical confines. Drawing on Emmanuel Levinas’ theories in ethics, specifically his notions of ‘proximity’ and ‘the third,’ the paper argues that Smith enacts a new model of ethics by launching an acting method that eliminates the boundary between self and other. Defying psychological realism, Smith conceptualizes an approach to acting that surpasses the mere mimetic value of invoking a ‘likeness’ between an actor and a character, which, as such, resembles the mere attribution of racial or sexual attributes in identity politics. Such acting, she contends, reduces the other to a representation of, at best, an ultimate rendering of me/my experience. She instead appreciates ‘unlikeness,’ recognizing the unavoidable actor/character gap as a power that humbles the self, whose irreversible journey to the other carves out its own image.

Keywords: Anna Deavere Smith, Emmanuel Levinas, other, performance

Procedia PDF Downloads 140
1789 Correlation of Volumic Shrinkage and Conversion Degree of Dental Composites

Authors: A. Amirouche, M. Mouzali, D. C. Watts

Abstract:

During the polymerization of dental composites, the volumic shrinkage is related to the degree of conversion. The variation of the maximum volumic shrinkage (Smax) with the degree of conversion (CD) was examined for experimental composites based on BisGMA/TEGDMA mixtures (50/50, 75/25, and 25/75) filled with seven radiopaque fillers: La2O3, BaO, BaSO4, SrO, ZrO2, SrZrO3, and BaZrO3, at filler contents from 0 to 80% by weight. Whatever the filler and the monomer composition, Smax increases with CD. This variation is, in particular, linear for fillers containing only one heavy metal, whatever the monomer composition. For a given salt, increasing the BisGMA fraction leads to a significant increase in Smax, more pronounced than the increase in CD. The variation of the ratio (Smax/CD) with increasing filler content is negligible; however, fillers containing two types of heavy metals affect the volumic shrinkage more than the degree of conversion. Whatever the monomer composition and the content of fillers containing only one heavy atom, Smax increases with CD; nevertheless, Smax is more affected than CD by the viscosity of the medium. For high mineral filler loadings (≥ 70% by weight), the Smax versus CD diagrams deviate from linearity, since Smax is more affected than CD by the high filler content. The number of heavy atoms directly influences the (Smax/CD) correlation: in the case of the two mineral fillers SrZrO3 and BaZrO3, the ratio (Smax/CD) departs from proportionality. The linearity of the Smax versus CD diagrams is also less regular at high BisGMA content, due to viscosity. The Smax and CD of four commercial composites are presented and compared with the experimental composites elaborated here.
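The reported linearity of Smax against CD for single-heavy-metal fillers can be checked with an ordinary least-squares fit; the (CD, Smax) pairs below are hypothetical, since the abstract does not tabulate measured values:

```python
import numpy as np

# Hypothetical (CD, Smax) pairs for one filler series, illustrating the
# near-linear shrinkage-conversion relation described for fillers with a
# single heavy metal; real measurements are not given in the abstract.
cd = np.array([35.0, 45.0, 55.0, 65.0, 75.0])     # degree of conversion (%)
smax = np.array([2.1, 2.7, 3.3, 3.9, 4.5])        # volumic shrinkage (%)

slope, intercept = np.polyfit(cd, smax, 1)        # first-degree least squares
r = np.corrcoef(cd, smax)[0, 1]                   # Pearson correlation
print(f"Smax = {slope:.3f}*CD + {intercept:.3f}, r = {r:.3f}")
```

A correlation coefficient close to 1 on real data would confirm the proportionality claimed for these fillers; deviation from the fitted line at high filler loading would mirror the non-linearity reported above 70 wt%.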

Keywords: dental composites, degree of conversion, volumic shrinkage, photopolymerization

Procedia PDF Downloads 354
1788 Feature Analysis of Predictive Maintenance Models

Authors: Zhaoan Wang

Abstract:

Research in predictive maintenance modeling has improved in recent years to predict failures and needed maintenance with high accuracy, saving cost and improving manufacturing efficiency. However, classic prediction models provide little valuable insight into which features contribute most to failure. By analyzing and quantifying feature importance in predictive maintenance models, cost savings can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict the multiple classes of failures. Second, predictive performance with features provided by different feature selection algorithms is further analyzed. Third, features selected by different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight into the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in predictive models while others have significantly less impact on overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost savings by prioritizing sensor deployment, data collection, and data processing for the more important features over the less important ones.
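The first steps of this workflow (cross-validated evaluation, then feature ranking) can be sketched on synthetic data, since the paper's sensor dataset is not public; impurity-based importances stand in here for the SHAP attribution step to keep the example dependency-light:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic multi-feature classification problem standing in for machine
# failure data; only 3 of the 8 features carry signal.
X, y = make_classification(n_samples=400, n_features=8, n_informative=3,
                           n_redundant=1, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)             # step 1: CV performance
clf.fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]  # step 3: rank features
print(f"mean CV accuracy: {scores.mean():.2f}")
print("features by importance:", ranking.tolist())
```

On real data, a SHAP explainer applied to `clf` would additionally attribute individual predictions to features, giving the local-versus-global view the abstract describes.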

Keywords: automated supply chain, intelligent manufacturing, predictive maintenance machine learning, feature engineering, model interpretation

Procedia PDF Downloads 114
1787 Examination of State of Repair of Buildings in Private Housing Estates in Enugu Metropolis, Enugu State Nigeria

Authors: Umeora Chukwunonso Obiefuna

Abstract:

The private sector in housing provision continually takes steps towards cushioning the effect of the housing shortage in Nigeria by establishing housing estates, since the government alone cannot provide housing for everyone. This research examined and reported findings on the state of repair of buildings in private housing estates in the Enugu metropolis, Enugu State, Nigeria. The objectives of the study were to examine the physical condition of the building fabric and appraise the performance of the infrastructural services provided in the buildings. A questionnaire was used as the research instrument to elicit data from respondents. Stratified sampling of the estates based on building type was adopted as the sampling method. Findings show that most buildings require minor repairs to make them fit for habitation and sound enough to ensure the well-being of residents. In addition, four of the nine independent variables investigated significantly explained residual variation in the dependent variable, the state of repair of the buildings. These variables are: Average Monthly Income of Residents (AMIR), Length of Stay of the Residents in the estates (LSY), Type of Wall Finishes on the buildings (TWF), and Time Taken to Respond to Residents' complaints by the estate managers (TTRC). With this, a linear model was established for predicting the state of repair of buildings in private housing estates in the study area, which should assist in identifying the variables most useful for that prediction.
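A linear model of the kind described, with the four significant predictors, can be fitted by ordinary least squares; the data below are synthetic stand-ins (the survey data are not reproduced in the abstract), so the coefficients are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120

# Synthetic stand-ins for the four significant predictors named above.
amir = rng.uniform(30, 300, n)        # average monthly income (assumed units)
lsy = rng.uniform(1, 20, n)           # length of stay (years)
twf = rng.integers(0, 3, n)           # wall-finish type (coded 0-2)
ttrc = rng.uniform(1, 30, n)          # days to respond to complaints

# Assumed data-generating process: better income and finishes improve the
# repair score; longer stays and slower responses degrade it.
repair = (2.0 + 0.01 * amir - 0.05 * lsy + 0.3 * twf - 0.04 * ttrc
          + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), amir, lsy, twf, ttrc])
coef, *_ = np.linalg.lstsq(X, repair, rcond=None)
print("intercept, AMIR, LSY, TWF, TTRC:", np.round(coef, 3))
```

With enough respondents, the recovered coefficients approach the assumed ones; on real survey data the signs and magnitudes would indicate each variable's contribution to the state of repair.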

Keywords: building, housing estate, private, repair

Procedia PDF Downloads 125
1786 The Minimum Patch Size Scale for Seagrass Canopy Restoration

Authors: Aina Barcelona, Carolyn Oldham, Jordi Colomer, Teresa Serra

Abstract:

The loss of seagrass meadows worldwide is being tackled by formulating coastal restoration strategies. Seagrass loss results in a network of vegetated patches which are barely interconnected, and consequently, the ecological services they provide may be highly compromised. Hence, there is a need to optimize coastal management efforts in order to implement successful restoration strategies, not only by modifying the architecture of the canopies but also by gathering information on the hydrodynamic conditions of the seabed. To obtain information on the hydrodynamics within vegetation patches, this study presents a scale analysis of the minimum patch lengths on which management strategies can be used effectively. To this aim, a set of laboratory experiments was conducted in a flume where plant densities, patch lengths, and hydrodynamic conditions were varied to discern the vegetated patch lengths that can provide optimal ecosystem services for canopy development. Two possible patch behaviours, based on turbulent kinetic energy (TKE) production, were determined: one where plants do not interact with the flow, and another where plants interact with waves and produce TKE. Furthermore, this study determines the minimum patch lengths that allow successful restoration management. A canopy will produce TKE depending on its density, the length of the vegetated patch, and the wave velocities; therefore, under high wave velocities, a vegetated patch will produce plant-wave interaction when it is long and densely vegetated.

Keywords: seagrass, minimum patch size, turbulent kinetic energy, oscillatory flow

Procedia PDF Downloads 180
1785 Non-Linear Assessment of Chromatographic Lipophilicity and Model Ranking of Newly Synthesized Steroid Derivatives

Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Anamarija Mandic, Katarina Penov Gasi, Marija Sakac, Aleksandar Okljesa, Andrea Nikolic

Abstract:

The present paper deals with chromatographic lipophilicity prediction of newly synthesized steroid derivatives. The prediction was achieved using in silico generated molecular descriptors and quantitative structure-retention relationship (QSRR) methodology with an artificial neural network (ANN) approach. Chromatographic lipophilicity of the investigated compounds was expressed as the retention factor value logk. For QSRR modeling, a feedforward back-propagation ANN with a gradient descent learning algorithm was applied. The generated ANN models were ranked using the novel sum of ranking differences (SRD) method. The aim was to identify the most consistent QSRR model and to reveal similarities or dissimilarities between the models. In this study, SRD was performed with average retention factor values logk as reference values. An excellent correlation between the experimentally observed retention factor logk and the values predicted by the ANN was obtained, with a correlation coefficient higher than 0.9890. The statistical results show that the established ANN models can be applied for the required purpose. This article is based upon work from COST Action TD1305, supported by COST (European Cooperation in Science and Technology).
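A minimal feedforward ANN trained by back-propagation with plain gradient descent, in the spirit of the QSRR models above, can be sketched as follows; the descriptors and logk values are synthetic (the steroid dataset is not reproduced in the abstract), and the architecture is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (60, 4))                # 4 mock molecular descriptors
logk = X @ np.array([0.5, -0.3, 0.2, 0.1])     # synthetic retention factors
logk += 0.01 * rng.normal(size=60)

n_hidden, lr = 6, 0.1
W1 = rng.normal(0, 0.5, (4, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, n_hidden);      b2 = 0.0

for _ in range(5000):                          # batch gradient descent
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    pred = h @ W2 + b2                         # linear output
    err = pred - logk
    # back-propagate the mean-squared-error gradient through both layers
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)      # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

r = np.corrcoef(pred, logk)[0, 1]
print(f"correlation predicted vs observed logk: r = {r:.4f}")
```

On real descriptor data, several such models (differing in descriptors or hidden-layer size) would then be compared with SRD against the average logk as reference.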

Keywords: artificial neural networks, liquid chromatography, molecular descriptors, steroids, sum of ranking differences

Procedia PDF Downloads 300
1784 Surgical Outcomes of Lung Cancer Surgery in Tasmania

Authors: Ayeshmanthe Rathnayake, Ashutosh Hardikar

Abstract:

Introduction: Lung cancer is the most common cause of cancer death in Australia, with more than 13,000 cases per year. Until now, there has been a major deficiency of comprehensive national thoracic surgery data. The thoracic workload per surgeon, as well as the caseload per unit, is highly variable, with some centres performing fewer than 15 cases per annum, raising concerns about optimal care at low-volume sites. This is an attempt to review the outcomes of lung cancer surgery in Tasmania. Method: The objective of this study is to determine the surgical outcomes of lung cancer surgery at the Royal Hobart Hospital (RHH), with the primary outcome of surgical mortality. Four hundred and fifty-one cases from 2010 to May 2022 were analysed retrospectively. Results: A total of 451 patients underwent thoracic surgery with a primary diagnosis of lung cancer. The primary outcome of 30-day mortality was <0.5%. The mean age was 65.3 years, with male predominance, and 4.2% of patients were Indigenous Australians. The mean length of stay was 7.5 days. The surgical approach was either VATS (50.3%) or thoracotomy (49.7%), with a trend towards the former in recent years: the proportion of VATS in complex resections has increased from 18.2% to 51% (p<0.05) since 2019. A corresponding reduction in the conversion rate to open surgery was observed (18% vs. 5.5%), and there were no deaths within this subgroup. Lung resections comprised lobectomy (55.4%), wedge resection (36.8%), segmentectomy (2.9%), and pneumonectomy (4.9%). The RHH demonstrates good surgical outcomes for lung cancer and provides a sustainable service for Tasmania. Conclusion: This retrospective study reports the surgical outcomes of lung cancer surgery at the Royal Hobart Hospital, thereby providing insight into the surgical management of lung cancer in the state thus far. The state has been slow to catch up on the minimally invasive program, but the overall results have been comparable to most peers.

Keywords: lung cancer, thoracic surgery, lung resection, surgical outcomes

Procedia PDF Downloads 78
1783 Effect of Acid-Base Treatments of Lignocellulosic Forest Waste (Wild Carob) on Ethyl Violet Dye Adsorption

Authors: Abdallah Bouguettoucha, Derradji Chebli, Tariq Yahyaoui, Hichem Attout

Abstract:

The effect of acid-base treatment of a lignocellulosic material (forest waste wild carob) on Ethyl Violet adsorption was investigated. Surface chemistry was found to play an important role in Ethyl Violet (EV) adsorption. HCl treatment produces more active acidic surface groups, such as carboxylic and lactone groups, resulting in increased adsorption of the EV dye. The adsorption efficiency was higher for the HCl-treated lignocellulosic material than for the KOH-treated one: the maximum biosorption capacity at pH 6 was 170 mg/g and 130 mg/g, respectively. The time to reach equilibrium was less than 25 min for both treated materials. The adsorption of the basic dye (Ethyl Violet, or Basic Violet 4) was carried out while varying process parameters such as initial concentration, pH, and temperature. The adsorption process is well described by a pseudo-second-order kinetic model, showing that boundary-layer resistance was not the rate-limiting step; this was confirmed by intraparticle diffusion analysis, since the linear plot of Qt versus t^0.5 did not pass through the origin. In addition, the experimental data were more accurately described by the Sips equation than by the Langmuir and Freundlich isotherms. The values of ΔG° and ΔH° confirmed that the adsorption of EV on the acid-base treated forest waste wild carob was spontaneous and endothermic in nature. The positive values of ΔS° suggest an increase of randomness at the treated lignocellulosic material-solution interface during the adsorption process.
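The pseudo-second-order model mentioned above is commonly fitted in its linearized form, t/Qt = 1/(k2·Qe²) + t/Qe, so that a straight-line fit of t/Qt against t recovers Qe and k2. The sketch below uses hypothetical time points and an assumed rate constant, with Qe set to the 170 mg/g reported for the HCl-treated material:

```python
import numpy as np

k2_true, qe_true = 0.002, 170.0   # k2 assumed; Qe from the abstract (HCl case)
t = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 25.0])              # minutes
qt = qe_true**2 * k2_true * t / (1 + qe_true * k2_true * t)   # model Qt (mg/g)

# Linearized fit: slope = 1/Qe, intercept = 1/(k2*Qe^2)
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1 / slope
k2_fit = slope**2 / intercept
print(f"Qe = {qe_fit:.1f} mg/g, k2 = {k2_fit:.4f} g/(mg*min)")
```

On real kinetic data, a high R² for this line supports the pseudo-second-order mechanism, while the non-zero intercept of the companion Qt versus t^0.5 plot rules out intraparticle diffusion as the sole rate-limiting step.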

Keywords: adsorption, isotherm models, thermodynamic parameters, wild carob

Procedia PDF Downloads 259
1782 Circular Economy and Remedial Frameworks in Contract Law

Authors: Reza Beheshti

Abstract:

This paper examines remedies for defective manufactured goods in commercial circular economy transactions. The linear ‘take-make-dispose’ model fits well with the conventional remedial framework, in which damages are considered the primary remedy. Damages under English Sales Law encourage buyers to look for a substitute seller with goods broadly similar to those agreed on in the original contract, enter into a contract with this new seller, and hence terminate the original contract. By doing so, the buyer ends the contractual relationship. This seems contrary to the core principles of the circular economy: keeping products, components, and materials in longer use, which can partly be achieved by product refurbishment. This process involves returning a product to good working condition by replacing or repairing major components that are faulty or close to failure and making ‘cosmetic’ changes to update its appearance. This remedy has not been widely accepted or applied in commercial cases, which in turn flags up the secondary nature of performance-related remedies. This paper critically analyses the law concerning the seller’s duty to cure in English law and the extent to which it corresponds with the core principles of the circular economy. In addition, this paper considers the possibility of circular economy transactions being characterised as something other than sales; in such situations, the likely outcome will be a licence to use products, which may limit the choice of remedy further. Consequently, this paper suggests an outline remedial framework specifically for commercial circular economy transactions in manufactured goods.

Keywords: circular economy, contract law, remedies, English Sales Law

Procedia PDF Downloads 130
1781 Randomized, Controlled Blind Study Comparing Sacroiliac Intra-Articular Steroid Injection to Radiofrequency Denervation for Management of Sacroiliac Joint Pain

Authors: Ossama Salman

Abstract:

Background and objective: Sacroiliac joint pain is a common cause of chronic axial low back pain, with a prevalence rate of up to 20%. To date, no effective long-term treatment intervention has been established. The aim of our study was to compare steroid block to radiofrequency ablation for sacroiliac joint pain. Methods: A randomized, blind study was conducted in 30 patients with sacroiliac joint pain. Fifteen patients received radiofrequency denervation of the L4-5 primary dorsal rami and the S1-3 lateral sacral branches, and 15 patients received steroid injection under fluoroscopy. Patients in the steroid group who did not respond to the injections were offered a cross-over to radiofrequency ablation. Results: At 1, 3, and 6 months post-intervention, 73%, 60%, and 53% of patients, respectively, gained ≥ 50% pain relief in the radiofrequency (RF) ablation group. In the steroid group, only 20% gained ≥ 50% pain relief at the one-month follow-up, and no improvement was seen at the 3- and 6-month follow-ups. Conclusions: Radiofrequency ablation of the L4 and L5 primary dorsal rami and the S1-3 lateral sacral branches may provide effective and longer-lasting pain relief compared to classic intra-articular steroid injection in properly selected patients with suspected sacroiliac joint pain. Larger studies are called for to confirm our results and define the optimal patient selection and treatment parameters for this poorly understood disorder.

Keywords: lateral branch denervation, LBD, radio frequency, RF, sacroiliac joint, SIJ, visual analogue scale, VAS

Procedia PDF Downloads 205