Search results for: conventional and nonconventional methods of clustering

17441 Impact of Terrorism as an Asymmetrical Threat on the State's Conventional Security Forces

Authors: Igor Pejic

Abstract:

The main focus of this research is on analyzing the correlative links between terrorism as an asymmetrical threat and the consequences it leaves on conventional security forces. The methodology behind the research includes qualitative research methods, focusing on comparative analysis of books, scientific papers, documents and other sources in order to deduce, explore and formulate the results of the research. With the coming of the 21st century and the rising multi-polar world order, new threats quickly emerged. The realist approach in international relations deems that relations among nations are in a constant state of anarchy, since there are no definitive rules and the distribution of power varies widely. International relations are further characterized by an egoistic and self-oriented human nature, anarchy or the absence of a higher government, insecurity and a lack of morality. The asymmetry of power is also reflected in countries' security capabilities and their ability to project power, and it can be added as an important trait of the global society that has consequently brought new threats. Among various others, terrorism is probably the most well-known, well-based and well-spread asymmetric threat. In today's global political arena, terrorism is used by state and non-state actors to fulfill their political agendas: it serves as an all-inclusive tool for regime change, subversion or revolution. Although the nature of terrorist groups is somewhat inconsistent, terrorism as a security and social phenomenon has one constant, which is reflected in its political dimension. The state's security apparatus, embodied in the form of conventional armed forces, is becoming fragile, unable to tackle new threats and, to a certain extent, outdated. Conventional security forces were designed to defend against or engage an exterior threat that is more or less symmetric and visible. Terrorism as an asymmetrical threat, on the other hand, is part of hybrid, special or asymmetric warfare, in which specialized units, institutions or facilities represent the primary pillars of security. In today's global society, terrorism is probably the most acute problem, one that can paralyze entire countries and their political systems. This problem, however, cannot be engaged on an open field of battle; it requires a different approach, in which conventional armed forces cannot be used traditionally and their role must be adjusted. The research tries to shed light on the phenomenon of modern-day terrorism and to establish its correlation with the state's conventional armed forces. States are obliged to adjust their security apparatus to the new reality of a global society in which terrorism, a side-product of an unbalanced world, persists as an asymmetrical threat.

Keywords: asymmetrical warfare, conventional forces, security, terrorism

Procedia PDF Downloads 237
17440 Evaluation of DNA Microarray System in the Identification of Microorganisms Isolated from Blood

Authors: Merih Şimşek, Recep Keşli, Özgül Çetinkaya, Cengiz Demir, Adem Aslan

Abstract:

Bacteremia is a clinical entity with high morbidity and mortality rates when immediate diagnosis or treatment cannot be achieved. Microorganisms that can cause sepsis or bacteremia are easily isolated from blood cultures. Fifty-five positive blood cultures were included in this study. Microorganisms in the 55 blood cultures were isolated by conventional microbiological methods; afterwards, the microorganisms were identified phenotypically by the Vitek-2 system. The same microorganisms in all blood culture samples were then identified genotypically by the Multiplex-PCR DNA Low-Density Microarray System. At the end of the identification process, the DNA microarray system's success in identification was evaluated with the Vitek-2 system as the reference. The Vitek-2 system and the DNA microarray system identified the same microorganisms in 53 samples; in the other 2 blood cultures, different microorganisms were identified by the DNA microarray system. The microorganisms identified by the Vitek-2 system were thus identical to 96.4% of the microorganisms identified by the DNA microarray system. In addition to the bacteria identified by Vitek-2, the presence of a second bacterium was detected in 5 blood cultures by the DNA microarray system. Both the Vitek-2 and DNA microarray systems identified 18 of the 55 positive blood cultures as E. coli strains. The corresponding identification counts (Vitek-2 and DNA microarray, respectively) were 6 and 8 for Acinetobacter baumannii, 10 and 10 for K. pneumoniae, 5 and 5 for S. aureus, 7 and 11 for Enterococcus spp., 5 and 5 for P. aeruginosa, and 2 and 2 for C. albicans. According to these results, the DNA microarray system requires both a technical device and experienced staff support, and it requires more expensive kits than Vitek-2; it should therefore be used in conjunction with conventional microbiological methods. In this way, large microbiology laboratories will produce faster, more sensitive and more successful results in the identification of cultured microorganisms.

Keywords: microarray, Vitek-2, blood culture, bacteremia

Procedia PDF Downloads 320
17439 An Overview of the Islamic Banking Development in the United Kingdom, Malaysia, Saudi Arabia, Iran, Nigeria, Kenya and Uganda

Authors: Pradeep Kulshrestha, Maulana Ayoub Ali

Abstract:

The level of penetration of Islamic banking products and services has recorded reasonable growth at an exponential rate in many parts of the world. Many factors have contributed to this growth, including, but not limited to, the rapid growth in the number of Muslims who are uncomfortable with conventional ways of banking, the interest and high interest rates charged by conventional banks and financial institutions, and the financial inclusion campaigns conducted in many countries. The system faces legal challenges, which open the research door for practitioners and academicians to find solutions to those challenges. This paper investigates the development of the Islamic banking system in the United Kingdom (UK), Saudi Arabia, Malaysia, Iran, Kenya, Nigeria and Uganda in order to understand the modalities that have been employed to run an Islamic banking system in the aforementioned countries. The methodology employed in this research paper is doctrinal: legislations, policies and other legal tools have been carefully studied and analysed, and papers from academic journals, books and financial reports have been closely examined for the purpose of enriching the paper and arriving at tangible results. The paper found that, in Asia, Malaysia has created the smoothest legal platform for an Islamic banking system to work properly in the country. The United Kingdom has worked hard to smooth the banking system without affecting conventional banking methods and without favouring the operations of Islamic banks; it is also striving to make the UK an Islamic banking and finance hub in Europe. The entire banking system in Iran is Islamic, while Nigeria has undergone several legal reforms to suit the Islamic banking system in the country. Kenya and Uganda are moving at a different pace in making the Islamic banking system work alongside the conventional banking system.

Keywords: shariah, Islamic banking, law, alternative banking

Procedia PDF Downloads 125
17438 Clustering Using Cooperative Multihop Mini-Groups in Wireless Sensor Network: A Novel Approach

Authors: Virender Ranga, Mayank Dave, Anil Kumar Verma

Abstract:

Recently, wireless sensor networks (WSNs) have been used in many real-life applications such as environmental monitoring, habitat monitoring and health monitoring. Due to the power constraints of the cheap devices used in these applications, the energy consumption of each device should be kept as low as possible so that the network operates for a longer period of time. One of the techniques to prolong the network lifetime is an intelligent grouping of sensor nodes such that they can perform their operation in a cooperative and energy-efficient manner. With this motivation, we propose a novel approach that organizes the sensor nodes into cooperative multihop mini-groups so that the total global energy consumption of the network can be reduced and the network lifetime improved. Our proposed approach also reduces the number of messages transmitted inside the WSN, which further minimizes the energy consumption of the whole network. Experimental simulations show that our proposed approach outperforms the state-of-the-art approach in terms of stability period and aggregated data.
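
The abstract does not spell out the grouping procedure, so the following Python sketch only illustrates the general idea of energy-aware mini-group formation: nodes are greedily grouped by proximity and the highest-energy member of each group is elected head. The field layout, group radius, and all names are hypothetical, not the authors' algorithm.

```python
# Minimal sketch (not the authors' exact algorithm): group sensor nodes into
# small proximity-based "mini-groups" and elect the highest-energy node of
# each group as its head, so members can relay via short multihop links.
import random
import math

random.seed(1)

# Hypothetical field: 100 nodes with (x, y) positions and residual energy
nodes = [{"id": i,
          "pos": (random.uniform(0, 100), random.uniform(0, 100)),
          "energy": random.uniform(0.5, 2.0)} for i in range(100)]

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def form_mini_groups(nodes, group_radius=15.0):
    """Greedy proximity grouping: each ungrouped node seeds a mini-group
    containing all ungrouped nodes within group_radius."""
    ungrouped = sorted(nodes, key=lambda n: -n["energy"])
    groups = []
    while ungrouped:
        seed = ungrouped.pop(0)
        members = [seed] + [n for n in ungrouped
                            if distance(seed["pos"], n["pos"]) <= group_radius]
        ungrouped = [n for n in ungrouped if n not in members]
        head = max(members, key=lambda n: n["energy"])  # energy-aware head election
        groups.append({"head": head["id"], "members": [n["id"] for n in members]})
    return groups

for g in form_mini_groups(nodes)[:5]:
    print(g)
```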

Keywords: clustering, cluster-head, mini-group, stability period

Procedia PDF Downloads 326
17437 Minimally Invasive versus Conventional Sternotomy for Aortic Valve Replacement: A Systematic Review and Meta-Analysis

Authors: Ahmed Shaboub, Yusuf Jasim Althawadi, Shadi Alaa Abdelaal, Mohamed Hussein Abdalla, Hatem Amr Elzahaby, Mohamed Mohamed, Hazem S. Ghaith, Ahmed Negida

Abstract:

Objectives: We aimed to compare the safety and outcomes of minimally invasive approaches versus conventional sternotomy procedures for aortic valve replacement. Methods: We conducted a PRISMA-compliant systematic review and meta-analysis. We ran an electronic search of PubMed, Cochrane CENTRAL, Scopus, and Web of Science to identify the relevant published studies. Data were extracted and pooled as standardized mean differences (SMD) or risk ratios (RR) using StataMP version 17 for macOS. Results: Forty-one studies with a total of 15,065 patients were included in this meta-analysis (minimally invasive approaches n=7,231 vs. conventional sternotomy n=7,834). The pooled effect size showed that minimally invasive approaches had a lower mortality rate (RR 0.76, 95%CI [0.59 to 0.99]), shorter intensive care unit and hospital stays (SMD -0.16 and -0.31, respectively), shorter ventilation time (SMD -0.26, 95%CI [-0.38 to -0.15]), less 24-h chest tube drainage (SMD -1.03, 95%CI [-1.53 to -0.53]), and lower rates of RBC transfusion (RR 0.81, 95%CI [0.70 to 0.93]), wound infection (RR 0.66, 95%CI [0.47 to 0.92]) and acute renal failure (RR 0.65, 95%CI [0.46 to 0.93]). However, minimally invasive approaches had longer operative, cross-clamp, and bypass times (SMD 0.47, 95%CI [0.22 to 0.72], SMD 0.27, 95%CI [0.07 to 0.48], and SMD 0.37, 95%CI [0.20 to 0.45], respectively). There were no differences between the two groups in blood loss, endocarditis, cardiac tamponade, stroke, arrhythmias, pneumonia, pneumothorax, bleeding reoperation, tracheostomy, hemodialysis, or myocardial infarction (all P>0.05). Conclusion: Current evidence shows higher safety and better operative outcomes with minimally invasive aortic valve replacement compared to the conventional approach. Future RCTs with long-term follow-up are recommended.
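
As a worked illustration of the pooled statistics reported above, the sketch below computes a single study's risk ratio with a large-sample 95% confidence interval, the building block that a meta-analysis pools across studies. The event counts are hypothetical, not taken from the included trials.

```python
# Minimal sketch of the building block behind the pooled estimates: a risk
# ratio (RR) with a 95% CI for one hypothetical two-arm comparison.
import math

# Hypothetical 2x2 counts: events / totals in each arm (not study data)
events_mini, n_mini = 12, 400      # minimally invasive arm
events_conv, n_conv = 20, 420      # conventional sternotomy arm

rr = (events_mini / n_mini) / (events_conv / n_conv)
# Standard error of log(RR) from the usual large-sample formula
se_log_rr = math.sqrt(1/events_mini - 1/n_mini + 1/events_conv - 1/n_conv)
low, high = (math.exp(math.log(rr) + z * se_log_rr) for z in (-1.96, 1.96))
print(f"RR {rr:.2f}, 95% CI [{low:.2f} to {high:.2f}]")
```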

Keywords: aortic replacement, minimally invasive, sternotomy, mini-sternotomy, aortic valve, meta-analysis

Procedia PDF Downloads 95
17436 Multi-Response Optimization of EDM for Ti-6Al-4V Using Taguchi-Grey Relational Analysis

Authors: Ritesh Joshi, Kishan Fuse, Gopal Zinzala, Nishit Nirmal

Abstract:

Ti-6Al-4V is a titanium alloy with high strength, low weight and corrosion resistance, characteristics required of a material to be used in the aerospace industry. Titanium, being a hard alloy, is difficult to machine via conventional methods, which calls for non-conventional processes. In the present work, the effects of drilling a Ø 6 mm hole in Ti-6Al-4V using a copper (99%) electrode in the electric discharge machining (EDM) process are analyzed. The effect of various input parameters, such as peak current, pulse-on time and pulse-off time, on the output parameters, viz. material removal rate (MRR) and electrode wear rate (EWR), is studied. The multi-objective optimization technique grey relational analysis is used for process optimization. Experiments are designed using an L9 orthogonal array. ANOVA is used to find the most significant parameter, followed by confirmation tests to validate the results. An improvement of 7.45% in the grey relational grade is observed.
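
To make the grey relational workflow concrete, here is a minimal Python sketch of the standard steps: normalize each response (larger-the-better for MRR, smaller-the-better for EWR), compute grey relational coefficients, and average them into a grade whose maximum marks the best L9 run. The response values and equal weighting are illustrative assumptions, not the paper's measurements.

```python
# Minimal sketch of Taguchi-grey relational analysis: normalize responses,
# compute grey relational coefficients, and average them into a grade.
# The response values below are hypothetical, not the paper's data.
mrr = [4.1, 5.6, 6.2, 7.0, 5.1, 6.8, 4.9, 5.9, 6.5]   # larger-the-better
ewr = [0.9, 1.2, 0.7, 1.5, 1.1, 0.8, 1.3, 1.0, 0.6]   # smaller-the-better

def normalize(x, larger_better):
    lo, hi = min(x), max(x)
    return [(v - lo) / (hi - lo) if larger_better else (hi - v) / (hi - lo) for v in x]

def grey_coeff(norm, zeta=0.5):
    delta = [1.0 - v for v in norm]            # deviation from the ideal (1.0)
    dmin, dmax = min(delta), max(delta)
    return [(dmin + zeta * dmax) / (d + zeta * dmax) for d in delta]

g_mrr = grey_coeff(normalize(mrr, larger_better=True))
g_ewr = grey_coeff(normalize(ewr, larger_better=False))
grades = [(a + b) / 2 for a, b in zip(g_mrr, g_ewr)]   # equal weights assumed
best_run = max(range(9), key=lambda i: grades[i])
print(f"Best L9 run: {best_run + 1}, grade = {grades[best_run]:.3f}")
```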

Keywords: ANOVA, electric discharge machining, grey relational analysis, Ti-6Al-4V

Procedia PDF Downloads 344
17435 Low Cost Technique for Measuring Luminance in Biological Systems

Authors: N. Chetty, K. Singh

Abstract:

In this work, the relationship between the melanin content in a tissue and the subsequent absorption of light through that tissue was determined using a digital camera. This technique proved to be simple, cost-effective, efficient and reliable. Tissue phantom samples were created using milk and soy sauce to simulate the optical properties of melanin in human tissue; increasing the concentration of soy sauce in the milk corresponds to an increase in an individual's melanin content. Two methods were employed to measure the light transmitted through the sample. The first was direct measurement of the transmitted intensity using a conventional lux meter. The second method involved correctly calibrating an ordinary digital camera and using image analysis software to calculate the intensity transmitted through the phantom. The results from these methods were then graphically compared to the theoretical relationship between the intensity of transmitted light and the concentration of absorbers in the sample. Conclusions were then drawn about the effectiveness and efficiency of these low-cost methods.
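
The theoretical relationship referred to above is not written out in the abstract; for a simple absorbing medium it is commonly taken as Beer-Lambert exponential attenuation, sketched below. The incident intensity, attenuation coefficient, and path length are illustrative assumptions.

```python
# Minimal sketch of the theoretical curve the measured intensities are
# compared against: Beer-Lambert attenuation I = I0 * exp(-mu * c * d).
# All values here are illustrative, not the study's measurements.
import math

I0 = 1000.0          # incident intensity (lux), hypothetical
mu = 0.8             # specific attenuation coefficient per unit concentration
d = 1.0              # optical path length through the phantom (cm)

for c in [0.0, 0.1, 0.2, 0.4, 0.8]:     # soy-sauce concentration (a.u.)
    I = I0 * math.exp(-mu * c * d)
    print(f"c = {c:.1f} -> transmitted intensity {I:.1f} lux")
```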

Keywords: tissue phantoms, scattering coefficient, albedo, low-cost method

Procedia PDF Downloads 250
17434 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, and slight deviations arising from scale differences remain. Insufficient parameters or poor surface mesh quality is likely to occur if these small deviations are carried over to a future civil aircraft whose size differs greatly from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study examines the effect of airfoil parameterization and surface mesh quality on geometric similarity in CFD calculations, in order to establish how well different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, comprising the wing root and wingtip of a conventional civil aircraft and the wing root of the giant hybrid wing, each parameterized by three methods so that the calculation differences between different airfoil sizes can be compared. In this study, the fixed conditions are the NACA 0012 section, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study uses different numbers of edge mesh divisions with the same bias factor in the CFD simulation. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and mesh division counts should be used to preserve the accuracy of the wing's aerodynamic performance. As the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to maintain the accuracy of the airfoil's aerodynamic performance, which severely strains available computing capacity. When using the B-spline curve method, the number of control points and mesh divisions must be balanced appropriately to obtain higher accuracy; this balance cannot be defined directly but must be found iteratively by adding and subtracting control points. Lastly, when using the CST method, a limited number of control points is enough to accurately parameterize the larger-sized wing, and a high degree of accuracy and stability can be obtained even on a lower-performance computer.
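
For readers unfamiliar with the CST method named above, a minimal sketch follows: the airfoil ordinate is a class function x^0.5(1-x) multiplied by a Bernstein-polynomial shape function whose few coefficients are the design variables, which is why a limited number of control points suffices. The coefficient values here are rough, hypothetical choices, not the study's.

```python
# Minimal sketch of the CST (class/shape function transformation) idea: an
# airfoil half-thickness is C(x) * S(x), with class function C(x) = x^0.5 * (1-x)
# for a round-nose airfoil and S(x) a Bernstein-polynomial sum whose few
# coefficients are the design variables. Coefficients here are illustrative.
from math import comb

def cst_surface(x, a, n1=0.5, n2=1.0):
    """Evaluate the CST curve at x in [0, 1] for Bernstein coefficients a."""
    n = len(a) - 1
    c = (x ** n1) * ((1.0 - x) ** n2)                      # class function
    s = sum(a[i] * comb(n, i) * x**i * (1 - x)**(n - i)    # shape function
            for i in range(n + 1))
    return c * s

a = [0.17, 0.16, 0.15, 0.16]   # hypothetical coefficients, roughly NACA-0012-like
for x in [0.0, 0.1, 0.3, 0.5, 0.8, 1.0]:
    print(f"x/c = {x:.1f}, half-thickness = {cst_surface(x, a):.4f}")
```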

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 196
17433 Application of Artificial Neural Network in Assessing Fill Slope Stability

Authors: An-Jui Li, Kelvin Lim, Chien-Kuo Chiu, Benson Hsiung

Abstract:

This paper details the utilization of artificial intelligence (AI) in the field of slope stability, whereby quick and convenient solutions can be obtained using the developed tool. The AI tool used in this study is the artificial neural network (ANN), while the slope stability analyses are performed with finite element limit analysis methods. The developed tool allows for the prompt prediction of the safety factors of fill slopes and their corresponding probability of failure (depending on the degree of variation of the soil parameters), which gives the practicing engineer a reasonable basis for decision-making. In fact, the successful use of the Extreme Learning Machine (ELM) algorithm shows that slope stability analysis is no longer confined to conventional modeling methods, which at times may be tedious and repetitive during the preliminary design stage, where the focus is more on cost-saving options than on detailed design. Similar ANN-based tools can therefore be further developed to assist engineers in this aspect.
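
As a rough illustration of the ELM idea mentioned above, the sketch below fixes random hidden-layer weights and trains only the output weights with a single least-squares solve. The synthetic inputs standing in for soil parameters and the network size are assumptions, not the authors' tool.

```python
# Minimal Extreme Learning Machine sketch: random, fixed hidden-layer weights
# and a one-shot least-squares solve for the output weights, mapping slope
# parameters to a safety factor. The data is synthetic, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))                  # e.g. cohesion, friction angle, slope
y = 0.8 + 1.5 * X[:, 0] + 0.6 * X[:, 1] - 0.9 * X[:, 2]   # synthetic safety factor

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))     # random input weights (never trained)
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                          # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # least-squares "training"
pred = H @ beta
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```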

Keywords: landslide, limit analysis, artificial neural network, soil properties

Procedia PDF Downloads 178
17432 Cryptography Based Authentication Methods

Authors: Mohammad A. Alia, Abdelfatah Aref Tamimi, Omaima N. A. Al-Allaf

Abstract:

This paper reviews a comparative study of the most commonly used authentication methods, some of which are based on cryptography. In this study, we show the main cryptographic services. The study also presents a specific discussion of the authentication service, since the authentication service is classified into several categories according to its methods. In addition, the study gives real-life examples for each of the authentication methods. It covers the simplest authentication methods as well as the available biometric authentication methods, such as voice, iris, fingerprint, and face authentication.

Keywords: information security, cryptography, system access control, authentication, network security

Procedia PDF Downloads 437
17431 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor product quality and control process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location, respectively, under the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups in the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, estimators robust against the contaminations that may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function for estimating the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator and the M-estimator of location with Huber and logistic psi-functions for estimating the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, robust estimators are found to yield parameter estimates with higher efficiency against all types of contaminations, and Xbar charts constructed from robust estimators have higher power in detecting disturbances compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
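
To make the construction concrete, here is a minimal sketch using one robust pairing named above, the Hodges-Lehmann estimator for location and the average distance to the median (ADM) for dispersion, turned into trial control limits. The data, the 3-sigma-style limit form, and the omission of the unbiasing constants each estimator needs in practice are all simplifying assumptions.

```python
# Minimal sketch of robust Xbar limits from the Hodges-Lehmann estimator
# (location) and the average distance to the median, ADM (dispersion).
# Data and constants are illustrative; real charts need simulation-derived
# unbiasing constants per estimator and subgroup size.
import itertools
import statistics

subgroups = [[10.1, 9.8, 10.3, 10.0, 9.9],       # Phase I data, hypothetical
             [10.2, 10.0, 9.7, 10.1, 10.4],
             [9.9, 30.0, 10.0, 10.2, 9.8]]        # one gross outlier

def hodges_lehmann(x):
    pairs = [(a + b) / 2 for a, b in itertools.combinations_with_replacement(x, 2)]
    return statistics.median(pairs)

def adm(x):
    m = statistics.median(x)
    return sum(abs(v - m) for v in x) / len(x)

center = statistics.median(hodges_lehmann(g) for g in subgroups)
spread = statistics.median(adm(g) for g in subgroups)
n = len(subgroups[0])
ucl = center + 3 * spread / (n ** 0.5)            # illustrative 3-sigma-style limits
lcl = center - 3 * spread / (n ** 0.5)
print(f"center {center:.2f}, LCL {lcl:.2f}, UCL {ucl:.2f}")
```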

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 170
17430 Assessing Significance of Correlation with Binomial Distribution

Authors: Vijay Kumar Singh, Pooja Kushwaha, Prabhat Ranjan, Krishna Kumar Ojha, Jitendra Kumar

Abstract:

Present-day high-throughput genomic technologies, NGS and microarrays, are producing large volumes of data that require improved analysis methods to make sense of the data. The correlation between genes and samples has regularly been used to gain insight into many biological phenomena including, but not limited to, co-expression/co-regulation, gene regulatory networks, clustering and pattern identification. However, the presence of outliers and violations of the assumptions underlying Pearson correlation are frequent, and may distort the actual correlation between genes and lead to spurious conclusions. Here, we report a method to measure the strength of association between genes. The method assumes that the expression values of a gene are Bernoulli random variables whose outcome depends on the sample being probed. The method considers two genes uncorrelated if the number of samples with the same outcome for both genes (Ns) is equal to the expected number (Es). The extent of correlation depends on how far Ns deviates from Es. The method does not assume normality for the parent population, is fairly unaffected by the presence of outliers, can be applied to qualitative data, and uses the binomial distribution to assess the significance of association. At this stage, we do not claim superiority of the method over other existing correlation methods; rather, it offers another way of calculating correlation in addition to the existing ones. The method uses the binomial distribution, which has not previously been used in this way, to assess the significance of association between two variables. We are evaluating the performance of our method on NGS/microarray data, which is noisy and pierced by outliers, to see if it can differentiate between spurious and actual correlation. While working with the method, it has not escaped our notice that it could also be generalized to measure the association of more than two variables, which has proven difficult with existing methods.
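
A minimal sketch of the described test follows: each gene's expression is binarized (a Bernoulli outcome per sample), the number of matching samples Ns is compared with its expectation Es under independence, and the deviation is assessed with a binomial test. The data is synthetic, and the median cutoff is an assumed binarization rule.

```python
# Minimal sketch: binarize two genes' expression, count samples with the same
# outcome (Ns), compare with the expected count under independence (Es), and
# assess significance with the binomial distribution. Data is synthetic.
from statistics import median
from scipy.stats import binomtest

gene_a = [2.1, 8.4, 7.9, 1.5, 9.2, 2.8, 8.8, 1.9, 7.5, 2.2]
gene_b = [1.0, 9.1, 8.2, 2.0, 8.7, 1.7, 9.4, 2.5, 8.0, 1.1]

a = [v > median(gene_a) for v in gene_a]   # Bernoulli outcome per sample
b = [v > median(gene_b) for v in gene_b]

ns = sum(x == y for x, y in zip(a, b))     # samples with the same outcome
p1, p2 = sum(a) / len(a), sum(b) / len(b)
p_match = p1 * p2 + (1 - p1) * (1 - p2)    # match probability if independent
es = p_match * len(a)

result = binomtest(ns, n=len(a), p=p_match)
print(f"Ns = {ns}, Es = {es:.1f}, p-value = {result.pvalue:.4g}")
```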

Keywords: binomial distribution, correlation, microarray, outliers, transcriptome

Procedia PDF Downloads 384
17429 Fault-Detection and Self-Stabilization Protocol for Wireless Sensor Networks

Authors: Ather Saeed, Arif Khan, Jeffrey Gosper

Abstract:

Sensor devices are prone to errors and sudden node failures, which are difficult to detect in a timely manner when deployed in real-time, hazardous, large-scale harsh environments and in medical emergencies. The loss of data can therefore be life-threatening when the sensed phenomenon is not disseminated due to sudden node failure, battery depletion or temporary malfunctioning. We introduce a set of partial differential equations for localizing faults, similar to Green's and Maxwell's equations used in electrostatics and electromagnetism, together with a node organization and clustering scheme for self-stabilizing sensor networks. Green's theorem is applied to regions where the curve is closed and continuously differentiable to ensure network connectivity. Experimental results show that the proposed GTFD (Green's-theorem fault-detection and self-stabilization) protocol not only detects faulty nodes but also accurately generates network stability graphs indicating where urgent intervention is required to dynamically self-stabilize the network.

Keywords: Green’s Theorem, self-stabilization, fault-localization, RSSI, WSN, clustering

Procedia PDF Downloads 46
17428 Investigating the Environmental Impact of Additive Manufacturing Compared to Conventional Manufacturing through Life Cycle Assessment

Authors: Gustavo Menezes De Souza Melo, Arnaud Heitz, Johannes Henrich Schleifenbaum

Abstract:

Additive manufacturing is a growing market that is taking over in many industries, as it offers numerous advantages such as new design possibilities, weight-saving solutions, ease of manufacture, and simplification of assemblies. These are all unquestionable technical or financial assets. On the environmental side, it is often debated whether additive manufacturing is the best solution to decarbonize our industries or whether conventional manufacturing remains cleaner. This work presents a life cycle assessment (LCA) comparison based on the technological case of a motorbike swing-arm. We compare the original equipment manufacturer part, made with conventional manufacturing (CM) methods, to an additive manufacturing (AM) version printed using the laser powder bed fusion process. The AM version has been modified and optimized to achieve better dynamic performance without any regard to weight saving; lightweighting not being a priority in the creation of the 3D-printed part gives this study a unique perspective. To carry out the LCA, we use the open-source life cycle and sustainability software OpenLCA combined with the ReCiPe 2016 method at midpoint and endpoint levels. This allows the results to be calculated and presented through indicators such as global warming, water use, and resource scarcity. The results then show the relative impact of the AM version compared to the CM one and provide a key to understanding and answering questions about the environmental sustainability of additive manufacturing.

Keywords: additive manufacturing, environmental impact, life cycle assessment, laser powder bed fusion

Procedia PDF Downloads 230
17427 Estimation and Validation of Free Lime Analysis of Clinker by Quantitative Phase Analysis Using X-Ray Diffraction

Authors: Suresh Palla, Kalpna Sharma, Gaurav Bhatnagar, S. K. Chaturvedi, B. N. Mohapatra

Abstract:

Determining the free lime content is especially important for judging the reactivity of the raw materials and the quality of the clinker. The free lime limit is not the same for all cements; it depends on several factors, especially the temperature reached during burning and the grain size distribution of the cement after grinding. Estimation of free lime by the conventional method is influenced by the presence of portlandite and can misrepresent the actual free lime content of the clinker during quality checks. To verify whether product quality lies within the limits of the standard specifications, a reliable, precise, and very reproducible way to quantify the relative phase abundances in Portland cement clinker and Portland cements is X-ray diffraction (XRD) combined with the Rietveld method. In the present study, a methodology using XRD is proposed to validate the free lime results obtained by the conventional method. The XRD and TG/DTA results confirm the presence of portlandite in the clinker, supporting the decision made on the free lime results obtained by the conventional method.

Keywords: free lime, quantitative phase analysis, conventional method, X-ray diffraction

Procedia PDF Downloads 111
17426 Investigation and Estimation of State of Health of Battery Pack in Battery Electric Vehicles: Online Battery Characterization

Authors: Ali Mashayekh, Mahdiye Khorasani, Thomas Weyh

Abstract:

The tendency to use battery-electric vehicles (BEVs) for low, medium, and even high driving ranges has been growing. As a result, higher safety, reliability, and durability of the battery pack, a component of electric vehicles that accounts for a large share of the cost and weight of the final product, are topics to be considered and investigated. Battery aging can be considered the predominant factor regarding the reliability and durability of BEVs. To better understand the aging process, offline battery characterization has been widely used, but it is time-consuming and needs very expensive infrastructure. This paper presents a substitute for conventional battery characterization methods based on battery modular multilevel management (BM3). In this topology, the battery cells can be drained and charged according to their capacity, which allows variable battery pack structures. Due to the integration of the power electronics, the output voltage of the battery pack is no longer fixed but can be dynamically adjusted in small steps. In other words, each cell can take three different states, namely series, parallel, and bypass, in connection with the neighboring cells. With the help of MATLAB/Simulink and the BM3 modules, a battery string model is created. This model allows us to switch two cells with different SoC in parallel, which results in internal balancing of the cells. If the parallel switching lasts only a couple of milliseconds, it produces a perturbation pulse that can stimulate the cells out of the relaxation phase. By modeling the voltage response of the battery to this pulse, it is possible to characterize the cell. The online EIS method discussed in this paper can be a robust substitute for conventional battery characterization methods.
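
To illustrate the pulse-response idea, the sketch below evaluates a first-order equivalent-circuit model of the cell voltage during a current pulse, an ohmic drop plus one RC relaxation; fitting such a curve to the measured response yields the characterization parameters. The circuit values are illustrative, not BM3 measurements.

```python
# Minimal sketch of the idea behind the online characterization: a cell's
# voltage response to a short current pulse, modeled as an ohmic drop plus a
# first-order RC relaxation V(t) = OCV - I*(R0 + R1*(1 - exp(-t/(R1*C1)))).
# Parameter values are illustrative, not measured BM3 data.
import math

ocv, r0, r1, c1 = 3.70, 0.015, 0.010, 800.0   # volts, ohms, ohms, farads
pulse_current = 10.0                           # amps drawn during the short pulse

def cell_voltage(t):
    return ocv - pulse_current * (r0 + r1 * (1 - math.exp(-t / (r1 * c1))))

for t_ms in [0.1, 1, 5, 20, 50]:
    print(f"t = {t_ms:5.1f} ms, V = {cell_voltage(t_ms / 1000):.4f} V")
```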

Keywords: battery characterization, SoH estimation, RLS, BEV

Procedia PDF Downloads 124
17425 Evaluation of Paper Effluent with Two Bacterial Strain and Their Consortia

Authors: Priya Tomar, Pallavi Mittal

Abstract:

As industrialization is inevitable and progresses with rapid acceleration, the need for innovative ways to get rid of waste has increased. Recent advancements in bioresource technology pave the way for novel ideas for recycling factory waste that has been polluting the agro-industry, soil and water bodies. India has a considerable number of paper industries, where molasses and impure alcohol are still being used as raw materials for the manufacture of paper. Paper mills based on nonconventional agro residues are being encouraged due to the increased demand for paper and an acute shortage of forest-based raw materials. The colouring bodies present in the wastewater from pulp and paper mills are organic in nature and comprise wood extractives, tannin, resins, synthetic dyes, lignin and its degradation products formed by the action of chlorine on lignin, which impart an offensive colour to the water. These mills use different chemical processes for paper manufacturing, through which lignified chemicals are released into the environment. Therefore, the chemical oxygen demand (COD) of the emanating stream is quite high. This paper presents some new techniques developed to improve the efficiency of bioremediation in the paper industry, along with a short introduction to the industry and a discussion of the presently available bioremediation methods and strategies. To address this problem, two bacterial strains (Pseudomonas aeruginosa and Bacillus subtilis) and their consortium were applied to pulp and paper mill effluent, in treatments named T-1 to T-6, for the decolourisation of the effluent. The results indicated that a maximum colour reduction of 60.5% is achieved by Pseudomonas aeruginosa, a COD reduction of 88.8% by Bacillus subtilis, a maximum pH change of 4.23 by Pseudomonas aeruginosa, a TSS reduction of 2.09% by Bacillus subtilis, and a TDS reduction of 0.95% by Bacillus subtilis. When the wastewater was supplemented with carbon (glucose) and nitrogen (yeast extract) sources, the data revealed that Bacillus subtilis was more efficient with glucose than Pseudomonas aeruginosa.

Keywords: bioremediation, paper and pulp mill effluent, treated effluent, lignin

Procedia PDF Downloads 230
17424 Photon Blockade in Non-Hermitian Optomechanical Systems with Nonreciprocal Couplings

Authors: J. Y. Sun, H. Z. Shen

Abstract:

We study photon blockade at exceptional points for a non-Hermitian optomechanical system coupled to a driven whispering-gallery-mode microresonator with two nanoparticles, under the weak optomechanical coupling approximation, where exceptional points emerge periodically as the relative angle of the nanoparticles is controlled. We find that conventional photon blockade occurs at exceptional points for the eigenenergy resonance of the single-excitation subspace driven by a laser field, and we discuss the physical origin of conventional photon blockade. Under the weak driving condition, we analyze the influence of the different parameters on conventional photon blockade. We also investigate conventional photon blockade at nonexceptional points, where it exists at two optimal detunings because the eigenstates in the single-excitation subspace split from one (coalescence) at exceptional points into two at nonexceptional points. Unconventional photon blockade can occur at nonexceptional points, while it does not exist at exceptional points, since the two different quantum pathways to the two-photon state are not formed and destructive quantum interference therefore cannot occur. The realization of photon blockade in our proposal provides a viable and flexible way to prepare single-photon sources in non-Hermitian optomechanical systems.

Keywords: optomechanical systems, photon blockade, non-hermitian, exceptional points

Procedia PDF Downloads 98
17423 Development of a Robust Protein Classifier to Predict EMT Status of Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma (CESC) Tumors

Authors: Zhenlin Ju, Christopher P. Vellano, Rehan Akbani, Yiling Lu, Gordon B. Mills

Abstract:

The epithelial-mesenchymal transition (EMT) is a process by which epithelial cells acquire mesenchymal characteristics, such as profound disruption of cell-cell junctions, loss of apical-basolateral polarity, and extensive reorganization of the actin cytoskeleton to induce cell motility and invasion. A hallmark of EMT is its capacity to promote metastasis, which is due in part to the activation of several transcription factors and the subsequent downregulation of E-cadherin. Unfortunately, current approaches have yet to uncover robust protein marker sets that can classify tumors as possessing strong EMT signatures. In this study, we utilize reverse phase protein array (RPPA) data and consensus clustering methods to successfully classify a subset of cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC) tumors into an EMT protein signaling group (EMT group). The overall survival (OS) of patients in the EMT group is significantly worse than that of patients in the other hormone and PI3K/AKT signaling groups. Applying a shrinkage and selection method for linear regression (LASSO) together with training/test set and Monte Carlo resampling approaches, we identified a set of protein markers that predicts the EMT status of CESC tumors. We fit a logistic model to these protein markers and developed a classifier, which was fixed in the training set and validated in the testing set. The classifier robustly predicted the EMT status of the testing set with an area under the curve (AUC) of 0.975 by receiver operating characteristic (ROC) analysis. This method not only identifies a core set of proteins underlying an EMT signature in cervical cancer patients, but also provides a tool to examine protein predictors that drive molecular subtypes in other diseases.
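
A minimal sketch of the train/validate workflow described above, with synthetic features standing in for the RPPA protein data: an L1-penalized (LASSO) logistic classifier is fit on a training split and scored by ROC AUC on the held-out split. The feature counts and regularization strength are assumptions.

```python
# Minimal sketch of the described workflow: L1 (LASSO) logistic classifier on
# a training split, validated by ROC AUC on the test split. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))                 # 300 tumors x 50 proteins (synthetic)
w = np.zeros(50)
w[:5] = [1.5, -1.2, 1.0, 0.8, -0.7]            # 5 informative "proteins"
y = (X @ w + rng.normal(scale=0.5, size=300) > 0).astype(int)  # EMT vs non-EMT

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)

print("proteins kept by LASSO:", int((clf.coef_ != 0).sum()))
print("test AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```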

Keywords: consensus clustering, TCGA CESC, Silhouette, Monte Carlo LASSO

Procedia PDF Downloads 438
17422 Contour Defects of Face with Hyperpigmentation

Authors: Afzaal Bashir, Sunaina Afzaal

Abstract:

Background: Facial contour deformities associated with pigmentary changes are of major concern for plastic surgeons, being both important and difficult to treat. No definite ideal treatment option is available that simultaneously addresses the contour defect and the related pigmentation. Objectives: The aim of the current study is to compare the long-term effects of conventional adipose tissue grafting and ex-vivo expanded mesenchymal stem cell-enriched adipose tissue grafting for the treatment of contour deformities with pigmentary changes on the face. Material and Methods: In this study, eighty (80) patients with contour deformities of the face with hyperpigmentation were recruited after informed consent. Two techniques, i.e., conventional fat grafting (C-FG) and fat grafts enriched with expanded adipose stem cells (FG-ASCs), were used to address the pigmentation. Both techniques were explained to the patients, and the enrolled patients were divided into two groups, C-FG and FG-ASCs, according to their choice and satisfaction. Patients in the FG-ASCs group were treated with fat grafts enriched with expanded adipose stem cells, while patients in the C-FG group were treated with conventional fat grafting. Patients were followed for 12 months, and improvement in facial pigmentation was assessed clinically as well as measured objectively. Patient satisfaction was also documented as highly satisfied, satisfied, or unsatisfied. Results: The mean age of the patients was 24.42 (±4.49) years, and 66 patients were female. The forehead was involved in 61.20% of cases, the cheek in 21.20%, the chin in 11.20%, and the nose in 6.20%. In the FG-ASCs group, the integrated color density (ICD) was decreased (1.08×10⁶ ±4.64×10⁵) compared to the C-FG group (2.80×10⁵ ±1.69×10⁵). Patients treated with fat grafts enriched with expanded adipose stem cells were significantly more satisfied than patients treated with conventional fat grafting only. Conclusion: Mesenchymal stem cell-enriched autologous fat grafting is a preferred option for improving contour deformities related to increased pigmentation of facial skin.

Keywords: hyperpigmentation, color density, enriched adipose tissue graft, fat grafting, contour deformities, Image J

Procedia PDF Downloads 77
17421 Unseen Classes: The Paradigm Shift in Machine Learning

Authors: Vani Singhal, Jitendra Parmar, Satyendra Singh Chouhan

Abstract:

Unseen class discovery has become an important capability for machine-learning systems that must judge new classes. Unseen classes are the classes on which a machine learning model has not been trained. With the advancement of technology and AI replacing human effort, the amount of data has increased to the next level, so when deploying a model on real-world examples, we come across new, unseen classes. Our aim is to find the number of unseen classes by using a hierarchy-based active learning algorithm, which combines hierarchical clustering with active sampling. The number of clusters that we obtain in the end gives the number of classes, and among these clusters are those containing the unseen classes. Instead of first discovering unseen classes and then counting them, we calculate the number directly by applying the algorithm. The dataset used is for intent classification, where the target is the intent of the corresponding query. We conclude that when the machine learning model encounters real-world data, it will automatically find the number of unseen classes. In future work, we will label these unseen classes correctly.
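
As a rough sketch of the counting idea (omitting the active-sampling step), the code below hierarchically clusters synthetic query embeddings and reads the number of classes off the clusters under a distance threshold. The embeddings, threshold, and linkage choice are all assumptions, not the paper's setup.

```python
# Minimal sketch: hierarchically cluster unlabeled query embeddings and take
# the cluster count under a distance threshold as the number of (potentially
# unseen) classes. Data is synthetic; active sampling is omitted.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic "query embeddings": 4 true intents, 30 points each, in 16-D
centers = rng.normal(scale=5.0, size=(4, 16))
X = np.vstack([c + rng.normal(size=(30, 16)) for c in centers])

Z = linkage(X, method="average")           # agglomerative clustering
labels = fcluster(Z, t=10.0, criterion="distance")
print("estimated number of classes:", labels.max())
```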

Keywords: active sampling, hierarchical clustering, open world learning, unseen class discovery

Procedia PDF Downloads 142
17420 Neural Networks with Different Initialization Methods for Depression Detection

Authors: Tianle Yang

Abstract:

As a common mental disorder, depression is a leading cause of disability worldwide. Early detection and treatment of depression can dramatically promote remission and prevent relapse. However, conventional approaches to depression diagnosis require considerable human effort and impose an economic burden, while still being prone to misdiagnosis. On the other hand, recent studies report that physical characteristics are major contributors to the diagnosis of depression, which inspires us to mine the internal relationships with neural networks instead of relying on clinical experience. In this paper, neural networks are constructed to predict depression from physical characteristics. Two initialization methods are examined, Xavier and Kaiming initialization. Experimental results show that a three-layer neural network with Kaiming initialization achieves 83% accuracy.
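
For reference, a minimal sketch of the two schemes compared: Xavier scales the weight standard deviation by fan-in plus fan-out (suited to tanh/sigmoid activations), while Kaiming uses a factor of 2 over fan-in (suited to ReLU). The layer sizes are illustrative, not the paper's network.

```python
# Minimal sketch of Xavier vs. Kaiming weight initialization. Layer sizes are
# illustrative; only the variance-scaling rules are the point here.
import numpy as np

rng = np.random.default_rng(0)

def xavier(fan_in, fan_out):
    std = np.sqrt(2.0 / (fan_in + fan_out))    # Glorot & Bengio
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def kaiming(fan_in, fan_out):
    std = np.sqrt(2.0 / fan_in)                # He et al., for ReLU layers
    return rng.normal(0.0, std, size=(fan_in, fan_out))

for name, init in [("Xavier", xavier), ("Kaiming", kaiming)]:
    W = init(64, 32)
    print(f"{name}: weight std = {W.std():.4f}")
```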

Keywords: depression, neural network, Xavier initialization, Kaiming initialization

Procedia PDF Downloads 100
17419 Methodology of Islamic Economics: Scope and Prospects

Authors: Ahmad Abdulkadir Ibrahim

Abstract:

Observing the methodology laid down for Islamic economics, its methods and instruments of analysis and even some of its basic assumptions, is a matter of paramount importance in the modern world. There is a need to examine the implications of the different suggested definitions of Islamic economics, exploring its scope and attempting to outline its methodology. This paper attempts to deal with the definition of Islamic economics, its methodology, and its scope. It outlines the main methodological problem by addressing the question of whether Islamic economics calls for a methodology of its own or can proceed as an expanded economics. It also aims at drawing the attention of economists in the modern world to the need to consider the methodology of Islamic economics. The methodology adopted in this research is library research through the consultation of relevant literature, focusing on a thematic study of the subject matter, followed by an analysis and discussion of the contents of the materials used. It is concluded that there is a certain degree of inconsistency in the way assumptions that are perhaps alien to Islamic economics are incorporated. The paper also observes that there is a difference between Islamic economists and other (conventional) economists in the profession. An important conclusion is that Islamic economists need to rethink what economics is all about and whether we really have to create an alternative to economics in the form of Islamic economics or simply have an Islamic perspective on the same discipline.

Keywords: methodology, Islamic economics, conventional economics, Muslim economists, framework, knowledge

Procedia PDF Downloads 101
17418 Study for an Optimal Cable Connection within an Inner Grid of an Offshore Wind Farm

Authors: Je-Seok Shin, Wook-Won Kim, Jin-O Kim

Abstract:

An offshore wind farm needs to be designed carefully, considering both economic and reliability aspects. Among the many decision-making problems in designing an entire offshore wind farm, this paper focuses on the inner grid layout, i.e., the connections between wind turbines as well as between wind turbines and an offshore substation. The methodology proposed in this paper determines the connections and the cable type for each connection section using K-clustering, minimum spanning tree and cable selection algorithms. A cost evaluation is then performed in terms of investment, power loss and reliability. Through the cost evaluation, an optimal layout of the inner grid is determined so as to have the lowest total cost. In order to demonstrate the validity of the methodology, a case study is conducted on a 240 MW offshore wind farm, and the results show that the methodology is helpful for designing offshore wind farms optimally.
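
To illustrate one stage of the methodology, the sketch below builds a minimum spanning tree over hypothetical turbine and substation coordinates as a candidate inner-grid layout; the K-clustering and cable-type selection stages are omitted, and the positions are invented for the example.

```python
# Minimal sketch of the MST stage: the cheapest connected cable layout over
# turbine/substation positions. Coordinates are hypothetical; clustering and
# cable-type selection from the methodology are omitted.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

# Node 0 is the offshore substation; the rest are wind turbines (km grid)
pos = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.4], [1.2, 1.6],
                [2.4, 1.5], [0.6, 2.2], [1.8, 2.6]])

dist = cdist(pos, pos)                       # full cable-length matrix
mst = minimum_spanning_tree(dist).toarray()  # cheapest connected layout

for i, j in zip(*np.nonzero(mst)):
    print(f"cable {i} -> {j}: {mst[i, j]:.2f} km")
print(f"total cable length: {mst.sum():.2f} km")
```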

Keywords: offshore wind farm, optimal layout, k-clustering algorithm, minimum spanning tree algorithm, cable type selection, power loss cost, reliability cost

Procedia PDF Downloads 360
17417 Analysis of Shallow Foundation Using Conventional and Finite Element Approach

Authors: Sultan Al Shafian, Mozaher Ul Kabir, Khondoker Istiak Ahmad, Masnun Abrar, Mahfuza Khanum, Hossain M. Shahin

Abstract:

For the structural evaluation of shallow foundations, the modulus of subgrade reaction is one of the most widely used and accepted parameters, owing to its ease of calculation. To determine this parameter, one of the most common field methods is the plate load test. In this field test method, the subgrade modulus is determined for a specific location, and in its application it is assumed that displacement occurring in one place does not affect adjacent locations. Because of such assumptions, the modulus of subgrade reaction sometimes forces engineers to overdesign the underground structure, which increases construction cost and can even lead to structural failure. In the present study, the settlement of a shallow foundation has been analyzed using both conventional and numerical analysis. Around 25 plate load tests were conducted on a sand fill site in Bangladesh to determine the modulus of subgrade reaction of the ground, which was later used to design a shallow foundation at different depths. After the collection of the field data, the field condition was appropriately simulated in finite element software. Finally, the results obtained from the conventional and numerical approaches were compared, and a significant difference in settlement was observed. A correlation between the two methods is also proposed at the end of this work in order to provide the most efficient way to calculate the subgrade modulus of the ground when designing shallow foundations.
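
The parameter at the center of the comparison has a simple definition worth stating: the modulus of subgrade reaction is the bearing pressure divided by the settlement it produces, k = q/s. The sketch below evaluates it for hypothetical plate load readings, not the Bangladesh test data.

```python
# Minimal sketch of the modulus of subgrade reaction from plate load readings:
# k = q / s, applied pressure over measured settlement. Readings are hypothetical.
pressure_kpa = [50, 100, 150, 200]       # applied plate pressures q
settlement_mm = [1.2, 2.6, 4.1, 5.9]     # measured settlements s

for q, s in zip(pressure_kpa, settlement_mm):
    k = q / (s / 1000)                    # kPa / m = kN/m^3
    print(f"q = {q} kPa, s = {s} mm -> k = {k:,.0f} kN/m^3")
```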

Keywords: modulus of subgrade reaction, shallow foundation, finite element analysis, settlement, plate load test

Procedia PDF Downloads 162
17416 Comparison of Different Methods of Microorganism's Identification from a Copper Mining in Pará, Brazil

Authors: Louise H. Gracioso, Marcela P.G. Baltazar, Ingrid R. Avanzi, Bruno Karolski, Luciana J. Gimenes, Claudio O. Nascimento, Elen A. Perpetuo

Abstract:

Introduction: Higher copper concentrations exert a selection pressure on organisms such as plants, fungi and bacteria, so that only the organisms resistant to the contaminated site survive. This selective pressure keeps only the organisms most resistant to a specific condition and subsequently increases their bioremediation potential. Despite the importance of bacteria for the maintenance of the biosphere, it is estimated that only a small fraction of living microbial species has been described and characterized. With the development of molecular biology, tools based on the analysis of 16S ribosomal RNA or other specific genes are creating a new scenario for the characterization and identification of microorganisms in the environment. New identification methods have also emerged, such as the Biotyper (MALDI-TOF); this mass spectrometry method relies on the recognition of spectral patterns of conserved, characteristic proteins of different microbial species. In view of this, this study aimed to isolate copper-resistant bacteria present in a copper processing area (Sossego Mine, Canaan, PA) and to identify them by two different methods, one recent (mass spectrometry) and one conventional, with a view to their future use in the bioremediation of this mine. Material and Methods: Samples were collected at fifteen different sites over five time periods. Microorganisms were isolated from mining wastes by the culture enrichment technique; this procedure was repeated 4 times. The isolates were inoculated into MJS medium containing different concentrations of copper chloride (1 mM, 2.5 mM, 5 mM, 7.5 mM and 10 mM) and incubated on plates for 72 h at 28 ºC. The isolates were subjected to identification by mass spectrometry (Biotyper, MALDI-TOF) and by 16S gene sequencing. Results: A total of 105 strains were isolated in this area. Bacterial identification by the mass spectrometry method (MALDI-TOF) achieved 74% agreement with the conventional identification method (16S), while 31% of isolates were unsuccessful in MALDI-TOF and 2% did not yield an identification sequence by 16S. These results show that the Biotyper can be a very useful tool for the identification of bacteria isolated from environmental samples, since it offers better value for money (cheap and simple sample preparation, and the MALDI plates are reusable). Furthermore, this technique is more cost-effective because it saves time and has high throughput (the mass spectra are compared to the database in less than 2 minutes per sample).

Keywords: copper mining area, bioremediation, microorganisms, identification, MALDI/TOF, RNA 16S

Procedia PDF Downloads 355
17415 Conventional and Hybrid Network Energy Systems Optimization for Canadian Community

Authors: Mohamed Ghorab

Abstract:

Locally generated and distributed systems for thermal and electrical energy are foreseen in the near future to reduce the transmission losses of the centralized system. Distributed energy resources (DER) are designed at different sizes (small and medium) and are incorporated in the energy distribution between hubs. The energy generated by each technology at each hub should meet the local energy demands. Economic and environmental enhancement can be achieved when there is interaction and energy exchange between the hubs. Network energy system and CO2 optimization between six different hubs representing a Canadian community are investigated in this study. Three different scenarios of technology systems are studied to meet both the thermal and electrical demand loads of the six hubs. The conventional system is used as the first technology system and as the reference case study; it includes a boiler to provide the thermal energy, while the electrical energy is imported from the utility grid. The second technology system includes a combined heat and power (CHP) system to meet the thermal demand loads and part of the electrical demand load. The third scenario integrates CHP with an organic Rankine cycle (ORC), in which the waste heat from the CHP system is used by the ORC to generate electricity. The General Algebraic Modeling System (GAMS) is used to model the DER system optimization based on energy economics and CO2 emission analyses, and the results are compared with the conventional energy system. The results show that scenarios 2 and 3 provide annual total cost savings of 21.3% and 32.3%, respectively, compared to the conventional system (scenario 1). Additionally, scenario 3 (CHP and ORC systems) provides a 32.5% saving in CO2 emissions compared to the conventional system, followed by scenario 2 (CHP system) with a value of 9.3%.

Keywords: distributed energy resources, network energy system, optimization, microgeneration system

Procedia PDF Downloads 170
17414 Factors Influencing the Profitability of the Conventional and Islamic Banks in Four Asian Countries

Authors: Vijay Kumar, Ron Bird

Abstract:

The study investigates the effect of bank-specific, industry-specific and macroeconomic variables on the profitability of conventional and Islamic banks. Our sample comprises 1,781 bank-year observations of 205 banks from four countries in the Asian region for the period 2004-2014. Our results suggest that credit quality, cost management and bank size are the key factors that contribute positively to bank profitability in Asia. Banks with high non-performing loans and a high cost-to-income ratio are more likely to be exposed to losses. The impacts of the bank-specific variables are stronger than those of the industry-specific and macroeconomic variables. We find that Malaysian banks are the least profitable compared to banks in Bangladesh, Indonesia and Pakistan. There is strong evidence to suggest that conventional banks are more profitable than Islamic banks. Our results also suggest that the impacts of the capital adequacy ratio, bank size and loan-to-deposit ratio vary across Islamic and conventional banks and across different subsamples.

Keywords: capital adequacy ratio, Islamic banks, non-performing loan ratio, ownership

Procedia PDF Downloads 129
17413 Experimental Study of a Solar Still with Four Glass Covers

Authors: Zakaria Haddad, Azzedine Nahoui, Mohamed Salmi, Ali Djagham

Abstract:

Solar distillation is an effective and practical method for the production of drinking water in arid and semi-arid areas; however, its production rate is very limited. The aim of this work is to increase that rate by means of a single-slope solar still with four glass covers, without augmenting the volume and surface of a conventional solar still, using local materials and a simple design. The equipment was tested under the climatic conditions of Msila city (35°70′ N, 4°54′ E), Algeria. The performance of the four-glass-cover design was studied, and exhaustive data were collected, analyzed, and presented. To show the effectiveness of the system, its performance was compared with that of a conventional solar still. The experimental study shows that the proposed system produces 5.3 l/m²/day and 5.8 l/m²/day for the months of April and May, respectively, increases of 10% and 17% over the conventional solar still.

Keywords: drinking water, four glass cover, production, solar distillation

Procedia PDF Downloads 108
17412 A Periodogram-Based Spectral Method Approach: The Relationship between Tourism and Economic Growth in Turkey

Authors: Mesut BALIBEY, Serpil TÜRKYILMAZ

Abstract:

A popular topic in econometrics and time series analysis is the cointegrating relationships among the components of a nonstationary time series. Engle and Granger's least squares method and Johansen's conditional maximum likelihood method are the most widely used methods to determine the relationships among variables. Furthermore, a method proposed for testing a unit root based on the periodogram ordinates has certain advantages over conventional tests: periodograms can be calculated without any model specification, the exact distribution under the assumption of a unit root is obtained, and for higher-order processes the distribution remains the same asymptotically. In this study, in order to demonstrate the advantages of the periodogram method over conventional tests, we examine a possible relationship between tourism and economic growth in Turkey during the period 1999:01-2010:12, using the periodogram method, Johansen's conditional maximum likelihood method, and Engle and Granger's ordinary least squares method.
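
To show the model-free character of periodogram ordinates noted above, the sketch below computes them for a simulated random walk (unit root) and a stationary series; the unit-root series concentrates its power in the lowest ordinates. The series are simulated, not the Turkish tourism and growth data.

```python
# Minimal sketch of the tool the test is built on: periodogram ordinates of a
# series, computed without specifying any model. A unit-root (random walk)
# series concentrates its mass in the lowest ordinates; a stationary one does not.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
n = 144                                            # e.g. 12 years of monthly data
random_walk = np.cumsum(rng.normal(size=n))        # unit-root process
stationary = rng.normal(size=n)                    # white noise

for name, x in [("random walk", random_walk), ("stationary", stationary)]:
    freqs, pxx = periodogram(x)
    share = pxx[1:4].sum() / pxx[1:].sum()         # mass in the lowest ordinates
    print(f"{name}: share of power in 3 lowest ordinates = {share:.2f}")
```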

Keywords: cointegration, economic growth, periodogram ordinate, tourism

Procedia PDF Downloads 246