Search results for: undesirable outputs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 736


676 Statistical and Analytical Comparison of GIS Overlay Modelings: An Appraisal on Groundwater Prospecting in Precambrian Metamorphics

Authors: Tapas Acharya, Monalisa Mitra

Abstract:

Overlay modeling is the most widely used conventional analysis for spatial decision support systems. It requires a set of themes with weightages computed in various ways, which together give a resultant input for further integrated analysis. Despite its popularity and wide use, it yields inconsistent and erroneous results for the same inputs when processed with different GIS overlay techniques. This study compares and analyses the differences in the outputs of different overlay methods on a GIS platform, using the same set of themes, to obtain groundwater prospects in Precambrian metamorphic rocks. The objective of the study is to identify the most suitable overlay method for groundwater prospecting in older Precambrian metamorphics. Seven input thematic layers (slope, Digital Elevation Model (DEM), soil thickness, lineament intersection density, average groundwater table fluctuation, stream density and lithology) have been used in the fuzzy overlay, weighted overlay and weighted sum overlay spatial models to yield suitable groundwater prospective zones. Spatial concurrence analysis with high-yielding wells of the study area, together with statistical comparison of the outputs of the overlay models in RStudio, reveals that the weighted overlay model is the most efficient GIS overlay model for delineating groundwater prospecting zones in Precambrian metamorphic rocks.

Keywords: fuzzy overlay, GIS overlay model, groundwater prospecting, Precambrian metamorphics, weighted overlay, weighted sum overlay

Procedia PDF Downloads 127
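The weighted overlay computation described in the abstract can be sketched in a few lines. The rasters, suitability scores and weights below are purely illustrative assumptions, not values from the study:

```python
import numpy as np

def weighted_overlay(layers, weights):
    """Combine reclassified thematic rasters into one suitability map.

    layers  : list of 2-D arrays of per-cell suitability scores
              (e.g. 1-9 after reclassification)
    weights : per-layer influence values that must sum to 1.0
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    out = np.zeros_like(layers[0], dtype=float)
    for layer, w in zip(layers, weights):
        out += w * layer          # weighted sum, cell by cell
    return out

# Hypothetical 2x2 reclassified rasters for two of the seven themes.
slope = np.array([[9, 7], [3, 1]])
lineament = np.array([[5, 5], [8, 2]])
suitability = weighted_overlay([slope, lineament], [0.6, 0.4])
```

In GIS packages the reclassification and influence percentages are configured in the overlay tool itself; the arithmetic underneath is this weighted sum.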
675 Fintech Credit and Bank Efficiency Two-way Relationship: A Comparison Study Across Country Groupings

Authors: Tan Swee Liang

Abstract:

This paper studies the two-way relationship between fintech credit and banking efficiency using panel Generalized Method of Moments (GMM) estimation in structural equation modeling (SEM). Banking system efficiency, defined as the ability to produce the existing level of outputs with minimal inputs, is measured using input-oriented data envelopment analysis (DEA), where the whole banking system of an economy is treated as a single DMU. Banks are considered intermediaries between depositors and borrowers, utilizing inputs (deposits and overhead costs) to provide outputs (credit to the private sector and earnings). The interrelationship between fintech credit and bank efficiency is analysed to determine the impact in different country groupings (ASEAN, Asia and OECD), in particular the banking system's response to fintech credit platforms. Our preliminary results show that banks do respond to the greater pressure from fintech platforms by enhancing their efficiency, but differently across the groups. The author's earlier research identifying ASEAN-5's high bank overhead costs (as a share of total assets) as a determinant of economic growth suggests that expenses may not have been channeled efficiently into income-generating activities. One practical implication of the findings is that policymakers should enable alternative financing, such as fintech credit, as a warning or encouragement for banks to improve their efficiency.

Keywords: fintech lending, banking efficiency, data envelopment analysis, structural equation modeling

Procedia PDF Downloads 91
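The input-oriented DEA measure mentioned above is typically obtained by solving one linear program per decision-making unit (DMU). Below is a minimal sketch of the CCR (constant returns to scale) envelopment form using SciPy's `linprog`; the two-DMU banking example is made up for illustration, not the paper's data:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.

    X : (n_dmus, n_inputs) input matrix
    Y : (n_dmus, n_outputs) output matrix
    Returns theta in (0, 1]; 1.0 means efficient.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1..lambda_n]; minimise theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):   # inputs: sum_j lambda_j * x_ij <= theta * x_ik
        A_ub.append(np.concatenate(([-X[k, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):   # outputs: sum_j lambda_j * y_rj >= y_rk
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]

# Two hypothetical banking systems: inputs (deposits, overheads), output (credit).
X = np.array([[10.0, 2.0], [20.0, 4.0]])
Y = np.array([[5.0], [5.0]])
theta = dea_efficiency(X, Y, 1)  # DMU 1 uses twice the inputs for the same output
```

The paper treats each economy's whole banking system as one DMU; the same LP would simply be solved once per economy and year.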
674 Data Transformations in Data Envelopment Analysis

Authors: Mansour Mohammadpour

Abstract:

Data transformation refers to the modification of every point in a data set by a mathematical function. When transformations are applied, the measurement scale of the data is modified. Data transformations are commonly employed to put data into an appropriate form, which can serve various functions in the quantitative analysis of the data. This study investigates the use of data transformations in Data Envelopment Analysis (DEA). Although data transformations are important options for analysis, they fundamentally alter the nature of the variable, making the interpretation of the results somewhat more complex.

Keywords: data transformation, data envelopment analysis, undesirable data, negative data

Procedia PDF Downloads 20
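Two transformations commonly discussed in the DEA literature, translating negative data to positive values and linearly inverting an undesirable output, can be illustrated briefly. The data below are hypothetical:

```python
def translate(values, eps=1.0):
    """Shift a variable so every observation is strictly positive.

    Classical DEA models require positive data; adding a constant (a
    'translation') is a common fix, though it changes the scale on
    which efficiency is interpreted.
    """
    shift = max(0.0, eps - min(values))
    return [v + shift for v in values]

def invert_undesirable(values, eps=1.0):
    """Turn an undesirable output (e.g. pollution) into a desirable one
    via the linear transformation u -> v - u, with v chosen so that the
    result stays strictly positive (a Seiford-Zhu style translation)."""
    v = max(values) + eps
    return [v - u for u in values]

profits = [-3.0, 2.0, 5.0]            # negative data
emissions = [10.0, 4.0, 7.0]          # undesirable output
print(translate(profits))             # [1.0, 6.0, 9.0]
print(invert_undesirable(emissions))  # [1.0, 7.0, 4.0]
```

As the abstract cautions, results computed on the transformed scale must be interpreted with care: the translation constant is arbitrary, and translation invariance holds only for some DEA models.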
673 Business Continuity Risk Review for a Large Petrochemical Complex

Authors: Michel A. Thomet

Abstract:

A discrete-event simulation model was used to perform a Reliability-Availability-Maintainability (RAM) study of a large petrochemical complex which included sixteen process units and seven feeds and intermediate streams. All the feeds and intermediate streams have associated storage tanks, so that if a processing unit fails and shuts down, the downstream units can keep producing their outputs. This also helps the upstream units, which do not have to reduce their outputs but can store their excess production until the failed unit restarts. Each process unit and each pipe section carrying the feeds and intermediate streams has a probability of failure with an associated distribution and a Mean Time Between Failures (MTBF), as well as a distribution of the time to restore and a Mean Time To Restore (MTTR). The utilities supporting the process units can also fail and have their own distributions with specific MTBF and MTTR. The model runs cover ten years or more and are repeated several times to obtain statistically relevant results. One of the main results is the On-Stream Factor (OSF) of each process unit (the percentage of hours in a year when the unit is running in nominal conditions). One of the objectives of the study was to investigate whether the storage capacity for each of the feeds and intermediate streams was adequate. This was done by increasing the storage capacities in several steps and running the simulation to see whether the OSFs improved and by how much. Other objectives were to see whether utility failures were an important factor in the overall OSF, and what could be done to reduce their failure rates through redundant equipment.

Keywords: business continuity, on-stream factor, petrochemical, RAM study, simulation, MTBF

Procedia PDF Downloads 219
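The core RAM mechanic, drawing random times to failure and times to restore and accumulating uptime, can be sketched for a single unit. Exponential distributions and the parameter values below are illustrative assumptions, not the study's actual inputs:

```python
import random

def on_stream_factor(mtbf, mttr, horizon_hours=10 * 8760, seed=1):
    """Monte Carlo estimate of a unit's On-Stream Factor (OSF): the
    fraction of the horizon spent running, with exponentially
    distributed times to failure and times to restore."""
    rng = random.Random(seed)
    t, up_time = 0.0, 0.0
    while t < horizon_hours:
        run = rng.expovariate(1.0 / mtbf)        # time to next failure
        up_time += min(run, horizon_hours - t)   # truncate at horizon
        t += run
        if t >= horizon_hours:
            break
        t += rng.expovariate(1.0 / mttr)         # time to restore
    return up_time / horizon_hours

# Hypothetical unit: fails every ~1000 h on average, ~50 h to restore.
osf = on_stream_factor(mtbf=1000.0, mttr=50.0)
# Analytically, steady-state OSF is MTBF / (MTBF + MTTR), about 0.952 here.
```

A full RAM model like the one in the study adds many units, pipe sections, utilities, and storage buffers between them, but each element's failure/restore cycle follows this same pattern.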
672 Deep Well Grounded Magnetite Anode Chains Retrieval and Installation for Raslanuf Complex Impressed Current Cathodic Protection System Rectification

Authors: Mohamed Ahmed Khali

Abstract:

A number of deep well anode ground beds (GBs) have been retrieved due to non-operational anode chains. New identical magnetite anode chains (MAC) have been installed in the Raslanuf complex impressed current cathodic protection (ICCP) system, distributed across different plants (utility, ethylene and polyethylene). All problems associated with the retrieval and installation of MACs have been discussed, rectified and presented. All severely corroded GB-associated wellhead casings were maintained and/or replaced by newly fabricated and modified ones. The main cause of the wellhead casings' internal corrosion is discussed, and the remedy action conducted to overcome future corrosion problems is presented. All GB-connected anode junction boxes (AJBs) and shunts were closely inspected and maintained, and necessary replacements and/or modifications were carried out on shunts. All damaged GB concrete foundations (CF) have been inspected and completely replaced. All GB-associated Transformer-Rectifier units (TRUs) were subjected to thorough inspection, and necessary maintenance has been performed on each individual TRU. After completion of all MAC and TRU maintenance activities, each cathodic protection station (CPS) was re-operated. Alternating current (AC), direct current (DC), voltage and structure-to-soil potential (S/P) measurements have been conducted and recorded, and all obtained test results are presented. DC current outputs have been adjusted, and the DC current output of each MAC has been recorded for each GB AJB.

Keywords: magnetite anode, deep well, ground bed, cathodic protection, transformer rectifier, impressed current, junction box

Procedia PDF Downloads 112
671 Deep Well-Grounded Magnetite Anode Chains Retrieval and Installation for Raslanuf Complex Impressed Current Cathodic Protection System Rectification

Authors: Mohamed Ahmed Khalil

Abstract:

A number of deep well anode ground beds (GBs) have been retrieved due to non-operational anode chains. New identical magnetite anode chains (MAC) have been installed at the Raslanuf complex impressed current cathodic protection (ICCP) system, distributed at different plants (utility, ethylene and polyethylene). All problems associated with retrieving and installation of MACs have been discussed, rectified and presented. All GB-associated severely corroded wellhead casings were well maintained and/or replaced by new fabricated and modified ones. The main cause of the wellhead casing's severe internal corrosion was discussed and the conducted remedy action to overcome future corrosion problems is presented. All GB-connected anode junction boxes (AJBs) and shunts were closely inspected, maintained and necessary replacement and/or modifications were carried out on shunts. All damaged GB concrete foundations (CF) have been inspected and completely replaced. All GB-associated Transformer-Rectifier Units (TRU) were subjected to thorough inspection and necessary maintenance was performed on each individual TRU. After completion of all MAC and TRU maintenance activities, each cathodic protection station (CPS) was re-operated; alternating current (AC), direct current (DC), voltage and structure-to-soil potential (S/P) measurements have been conducted and recorded, and all obtained test results are presented. DC current outputs have been adjusted and the DC current output of each MAC has been recorded for each GB AJB.

Keywords: magnetite anodes, deep well, ground beds, cathodic protection, transformer rectifier, impressed current, junction boxes

Procedia PDF Downloads 119
670 Out of Order: The Rise of Stop and Search in Civil Orders Legislation

Authors: Jodie Bradshaw, Rebecca Dooley, Habib Kadiri, Holly Bird, Aaliyah Felix-West, Udit Mahalingam, Ella Thomson

Abstract:

The sharp rise of civil orders has led to an expansion of police powers, particularly in the realm of stop and search activities. The broad scope and objectives of these civil orders – addressing issues as varied as public safety, crime prevention, and counter-terrorism – have led to 'mission creep', whereby orders are imposed in a wider range of contexts than initially intended. The ever-widening purview of civil orders in practice necessitates proactive measures by law enforcement which often rely heavily on the use of stop and search, leading to an expansion of stop and search practices and the regulation of public space. Civil liberties organisations, criminal justice and legal practitioners, activist groups, and researchers have argued that civil orders dilute and undermine foundational legal principles, pose a threat to our basic rights and freedoms, facilitate dangerous criminal justice net-widening, and disproportionately target young, working-class people of colour. Many of the provisions in these orders are potentially incompatible with the right to liberty and security. The conditions of an order (whether negative restrictions or positive obligations) tend to be extremely easy to breach – and in some cases, almost impossible for the person subject to the order not to breach. When the conditions of an order are breached, the result is criminal punishment – often in the form of imprisonment. This paper argues that civil orders set people up to fail, sending them down a path towards incarceration and the ultimate deprivation of liberty. The proclaimed intentions underpinning these civil orders – to tackle purportedly 'undesirable' behaviour (which in and of itself is not a crime) committed by 'undesirable' people – pave the way for justifying violent and racially disproportionate policing practices.

Keywords: civil orders, policing, stop and search, crime, civil liberties, criminal punishment, anti-social behaviour

Procedia PDF Downloads 2
669 Application of Bayesian Model Averaging and Geostatistical Output Perturbation to Generate Calibrated Ensemble Weather Forecast

Authors: Muhammad Luthfi, Sutikno Sutikno, Purhadi Purhadi

Abstract:

Weather forecasting must continually be improved to provide communities with accurate and objective predictions. To reduce the subjectivity of forecasts, numerical weather prediction has been extensively developed. Yet Numerical Weather Prediction (NWP) outputs are unfortunately issued without taking dynamical weather behavior and local terrain features into account, so they cannot accurately forecast weather quantities, particularly at medium and long range. The aim of this research is to aid and extend the development of ensemble forecasting for the Meteorology, Climatology, and Geophysics Agency of Indonesia. The ensemble method is an approach combining various deterministic forecasts to produce a more reliable one. However, such a forecast is biased and uncalibrated due to its underdispersive or overdispersive nature. As a parametric method, Bayesian Model Averaging (BMA) generates a calibrated ensemble forecast and constructs a predictive PDF for a specified period. BMA can utilize an ensemble of any size but does not take spatial correlation into account, even though spatial dependence between the site of interest and nearby sites is influenced by dynamic weather behavior. Meanwhile, Geostatistical Output Perturbation (GOP) accounts for spatial correlation to generate future weather quantities; though built from a single deterministic forecast, it is also able to generate an ensemble of any size. This research applies both BMA and GOP to generate calibrated ensemble forecasts of daily temperature at a few meteorological sites near an Indonesian international airport.

Keywords: Bayesian Model Averaging, ensemble forecast, geostatistical output perturbation, numerical weather prediction, temperature

Procedia PDF Downloads 280
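BMA's calibrated predictive PDF is a weighted mixture of densities, each centred on one ensemble member's (bias-corrected) forecast. A minimal sketch with Gaussian kernels; the member forecasts, weights and spread below are made up (in practice the weights and variance are fitted by EM on training data):

```python
import math

def bma_pdf(y, forecasts, weights, sigma):
    """Bayesian Model Averaging predictive density: a weighted mixture
    of Gaussians, one per ensemble member forecast."""
    def normal(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(w * normal(y, f, sigma) for w, f in zip(weights, forecasts))

# Three hypothetical member forecasts of daily temperature (deg C).
members = [27.0, 28.5, 30.0]
weights = [0.5, 0.3, 0.2]   # must sum to 1; estimated from training data
density = bma_pdf(28.0, members, weights, sigma=1.2)
```

Because the weights sum to one and each kernel is a proper density, the mixture integrates to one, which is what makes the resulting ensemble forecast calibrated in the probabilistic sense.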
668 In-Situ Synthesis of Zinc-Containing MCM-41 and Investigation of Its Capacity for Removal of Hydrogen Sulfide from Crude Oil

Authors: Nastaran Hazrati, Ali Akbar Miran Beigi, Majid Abdouss, Amir Vahid

Abstract:

Hydrogen sulfide is the most toxic gas found in crude oil. Adsorption is an energy-efficient process used to remove undesirable compounds such as H2S from gas or liquid streams by passing the stream through a bed of adsorbent. In this study, H2S in Iranian crude oil was separated via cold stripping, and zinc-incorporated MCM-41 was then synthesized via an in-situ method. The ZnO-functionalized mesoporous silica samples were characterized by XRD, N2 adsorption and TEM. The H2S adsorption results showed the superior capacity of all the materials, and adsorption increased with increasing ZnO content.

Keywords: MCM-41, ZnO, H2S removal, adsorption

Procedia PDF Downloads 467
667 The Challenge of Assessing Social AI Threats

Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi

Abstract:

The European Union (EU) Artificial Intelligence (AI) Act requires in Article 9 that risk management of AI systems include both technical and human oversight, while the NIST AI RMF (Appendix C) and ENISA AI Framework recommendations claim that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats within social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, lack of explainability (e.g. model complexity that is challenging for stakeholders to grasp) leads to misunderstandings, biases, and erroneous decisions, which in turn impact the privacy, security, and accountability of AI systems. Based on the four fundamental NIST criteria for explainability, explainability threats can be classified into four sub-categories: a) lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs; b) lack of understandability: explanations offered by systems should be comprehensible to individual users; c) lack of accuracy: the provided explanation should accurately represent the system's process of generating outputs; d) out of scope: the system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors; when present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of AI systems. Social AI threats are recognized by various initiatives (e.g., the EU Ethics Guidelines for Trustworthy AI), standards (e.g. ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems) and EU legislation (e.g. the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, the Cyber Resilience Act). Measuring social threats, estimating the risks they pose to AI systems, and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests, and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.

Keywords: social threats, artificial Intelligence, mitigation, social experiment

Procedia PDF Downloads 65
666 Analysis of Enhanced Built-up and Bare Land Index in the Urban Area of Yangon, Myanmar

Authors: Su Nandar Tin, Wutjanun Muttitanon

Abstract:

The availability of free global historical satellite imagery provides a valuable opportunity for mapping and monitoring the built-up area year by year, constantly and effectively. Land distribution guidelines and identification of changes are important in preparing and reviewing changes in ground overview data. This study utilizes thirty years of Landsat imagery to acquire significant land-cover data that are extremely valuable for urban planning. This paper focuses on extracting the built-up area of the city development area from LANDSAT 5, 7 and 8 and Sentinel-2A images from the USGS at five-year intervals. The purpose is to analyse the change in the urban built-up area year by year and to assess the accuracy of mapping built-up and bare land areas, studying the trend of urban built-up change over the period from 1990 to 2020. GIS tools such as the raster calculator and built-up area modelling are used in this study, and indices including the enhanced built-up and bareness index (EBBI), normalized difference built-up index (NDBI), urban index (UI), built-up index (BUI) and normalized difference bareness index (NDBAI) are calculated to obtain a high-accuracy urban built-up area. This study therefore points out a viable approach to automatically mapping typical enhanced built-up and bare land changes (EBBI) with simple indices based on the outputs of the indexes. The EBBI output of Sentinel-2A achieves 48.4% accuracy, compared with 15.6% for the Landsat-based indices in 1990, while the urban expansion area of the study area increased from 43.6% in 1990 to 92.5% in 2020 over the last thirty years.

Keywords: built-up area, EBBI, NDBI, NDBAI, urban index

Procedia PDF Downloads 171
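The spectral indices named in the abstract are simple band-ratio formulas. A sketch using the forms commonly given in the remote-sensing literature (the band values below are hypothetical, and the exact band assignments depend on the sensor):

```python
import math

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: (SWIR - NIR) / (SWIR + NIR).
    Built-up surfaces reflect more in SWIR than NIR, so NDBI > 0
    typically indicates built-up land."""
    return (swir - nir) / (swir + nir)

def ebbi(swir, nir, tir):
    """Enhanced Built-up and Bareness Index in the commonly cited form
    (SWIR - NIR) / (10 * sqrt(SWIR + TIR)), which adds the thermal band
    to separate built-up and bare land from vegetation."""
    return (swir - nir) / (10.0 * math.sqrt(swir + tir))

# Hypothetical per-pixel band values.
built_up = ndbi(swir=0.30, nir=0.20)   # positive: likely built-up
vegetated = ndbi(swir=0.10, nir=0.40)  # negative: likely vegetation
```

In the study's workflow these per-pixel formulas would be evaluated over whole rasters (e.g. with a GIS raster calculator) and then thresholded to map built-up versus bare land.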
665 Self-serving Anchoring of Self-judgments

Authors: Elitza Z. Ambrus, Bjoern Hartig, Ryan McKay

Abstract:

Individuals’ self-judgments might be malleable and influenced by comparison with a random value. On the one hand, self-judgments reflect our self-image, which is typically considered to be stable in adulthood. Indeed, people strive hard to maintain a fixed, positive moral image of themselves. On the other hand, research has shown the robustness of the so-called anchoring effect on judgments and decisions. The anchoring effect refers to the influence of a previously considered comparative value (anchor) on a subsequent absolute judgment and reveals that individuals’ estimates of various quantities are flexible and can be influenced by a salient random value. The present study extends the anchoring paradigm to the domain of the self. We also investigate whether participants are more susceptible to self-serving anchors, i.e., anchors that enhance the participant’s self-image, especially their moral self-image. In a pre-registered study via the online platform Prolific, 249 participants (156 females, 89 males, 3 other and 1 who preferred not to specify their gender; M = 35.88, SD = 13.91) ranked themselves on eight personality characteristics. In the anchoring conditions, respondents were asked to first indicate whether they thought they would rank higher or lower than a given anchor value before providing their estimated rank in comparison to 100 other anonymous participants. A high and a low anchor value were employed to differentiate between anchors in a desirable (self-serving) direction and anchors in an undesirable (self-diminishing) direction. In the control treatment, there was no comparison question. Subsequently, participants provided their self-rankings on the eight personality traits, with two personal characteristics for each combination of the factors desirable/undesirable and moral/non-moral. We found evidence of an anchoring effect for self-judgments. Moreover, anchoring was more effective when people were anchored in a self-serving direction: the anchoring effect was enhanced when it supported a more favorable self-view and mitigated (even reversed) when it implied a deterioration of the self-image. The self-serving anchoring was more pronounced for moral than for non-moral traits. The data also provided evidence of a better-than-average effect in general, as well as a magnified better-than-average effect for moral traits. Taken together, these results suggest that self-judgments might not be as stable in adulthood as previously thought. In addition, considerations of constructing and maintaining a positive self-image might interact with the anchoring effect on self-judgments. Potential implications of our results concern the construction and malleability of self-judgments as well as the psychological mechanisms shaping anchoring.

Keywords: anchoring, better-than-average effect, self-judgments, self-serving anchoring

Procedia PDF Downloads 180
664 Issues in Travel Demand Forecasting

Authors: Huey-Kuo Chen

Abstract:

Travel demand forecasting, including the four travel choices, i.e., trip generation, trip distribution, modal split and traffic assignment, constitutes the core of transportation planning. In current practice, travel demand forecasting is associated with three important issues: interface inconsistencies among the four travel choices, inefficiency of commonly used solution algorithms, and undesirable multiple-path solutions. In this paper, each of the three issues is extensively elaborated. An ideal unified framework for a combined model consisting of the four travel choices and variable demand functions is also suggested. A few remarks are provided at the end of the paper.

Keywords: travel choices, B algorithm, entropy maximization, dynamic traffic assignment

Procedia PDF Downloads 458
663 Valorisation of Food Waste Residue into Sustainable Bioproducts

Authors: Krishmali N. Ekanayake, Brendan J. Holland, Colin J. Barrow, Rick Wood

Abstract:

Globally, more than one-third of all food produced is lost or wasted, equating to 1.3 billion tonnes per year. Around 31.2 million tonnes of food waste are generated across the production, supply, and consumption chain in Australia. Generally, food waste management processes adopt environmentally friendly and more sustainable approaches such as composting, anaerobic digestion and energy-recovery technologies. However, unavoidable and non-recyclable food waste ends up in landfill and incineration, which involve many undesirable environmental impacts and challenges. A biorefinery approach contributes to a waste-minimising circular economy by converting food and other organic biomass waste into valuable outputs, including feeds, nutrition, fertilisers, and biomaterials. As a solution, Green Eco Technologies has developed a food waste treatment process using the WasteMaster system. The system uses charged oxygen and moderate temperatures to convert food waste, without bacteria, additives, or water, into a virtually odour-free, much-reduced quantity of reusable residual material. In the context of a biorefinery, the WasteMaster dries and mills food waste into a form suitable for storage or downstream extraction/separation/concentration to create products. The focus of this study is to determine the nutritional composition of WasteMaster-processed residue in order to potentially develop aquafeed ingredients. The global aquafeed industry is projected to reach a high market value in future, with strong demand for aquafeed products; food waste can therefore be utilized for aquaculture feed development while reducing landfill. This framework will lessen the requirement for raw crop cultivation for aquafeed development and reduce the aquaculture footprint. In the present study, the nutritional elements of the processed residue are consistent with the input food waste type, which shows that the WasteMaster does not affect the expected nutritional distribution. The macronutrient retention values of protein, lipid, and nitrogen-free extract (NFE) are above 85%, 80%, and 95%, respectively. Sensitive food components, including omega-3 and omega-6 fatty acids, amino acids, and phenolic compounds, have been found intact in each residue material. Preliminary analysis suggests price comparability with current aquafeed ingredient costs, supporting economic feasibility. The results suggest strong potential for aquafeed development, using the residue as 5 to 10% of the ingredients to replace or partially substitute other less sustainable ingredients in a biorefinery setting. Our aim is to improve the sustainability of aquaculture and reduce the environmental impacts of food waste.

Keywords: biorefinery, food waste residue, input, WasteMaster

Procedia PDF Downloads 67
662 Modeling Binomial Dependent Distribution of the Values: Synthesis Tables of Probabilities of Errors of the First and Second Kind of Biometrics-Neural Network Authentication System

Authors: B. S.Akhmetov, S. T. Akhmetova, D. N. Nadeyev, V. Yu. Yegorov, V. V. Smogoonov

Abstract:

We estimate the probabilities of errors of the first and second kind for non-ideal biometrics-neural converters with 256 outputs, and construct nomograms of the error probabilities for 'own' and 'alien' inputs based on the mathematical expectation and standard deviation of the normalized Hamming measures.

Keywords: modeling, errors, probability, biometrics, neural network, authentication

Procedia PDF Downloads 482
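Under a Gaussian approximation of the normalized Hamming-measure distributions, which is the kind of model such nomograms are built on, the errors of the first and second kind reduce to normal tail areas at the decision threshold. A sketch with assumed 'own' and 'alien' distribution parameters (the means, deviations and threshold below are hypothetical):

```python
import math

def below_threshold_prob(mean, std, threshold):
    """P(normalised Hamming measure < threshold), using a Gaussian
    approximation N(mean, std) of the Hamming distance distribution."""
    z = (threshold - mean) / std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical 256-output converter: 'own' responses cluster at a low
# normalised Hamming distance, 'alien' responses near one half.
threshold = 0.25
p_false_reject = 1.0 - below_threshold_prob(0.10, 0.05, threshold)  # error of the 1st kind
p_false_accept = below_threshold_prob(0.50, 0.06, threshold)        # error of the 2nd kind
```

Sweeping the threshold (or the 'alien' mean and deviation) over a grid of values is exactly how a nomogram of the two error probabilities would be synthesised.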
661 Cultural and Group Understandings of Disability and Sexuality

Authors: Luke Galvani

Abstract:

The cultural representations of people with disabilities are frequently biased, which can lead to a general misunderstanding of disability. Representations of disabled deviance are especially problematic given that they typify or abstract disability as abnormal, a framing which then begins to take root in the cultural mind. This study utilizes critical discourse analysis to investigate how discourses of disabled sexual deviance are promoted within two major films that portray disabled sexual subjects. The findings indicate that perceptions of disabled sexual deviance are heightened by cinematic representations of sex and disability, which characterize disabled sexual expression as undesirable due to the ephemeral and abnormal qualities ascribed to it.

Keywords: deviance, disability, discourse analysis, sexuality

Procedia PDF Downloads 168
660 Theoretical Study of Carbonic Anhydrase-Ii Inhibitors for Treatment of Glaucoma

Authors: F. Boukli Hacene, W. Soufi, S. Ghalem

Abstract:

Glaucoma is a progressive degenerative optic neuropathy with irreversible visual field deficits, high eye pressure being one of its risk factors. Sulfonamides are carbonic anhydrase-II inhibitors that decrease the secretion of aqueous humor by direct inhibition of this enzyme at the level of the ciliary processes. These drugs have undesirable effects that are difficult for patients to accept. In our study, we are interested in the inhibition of carbonic anhydrase-II by different natural ligands (curcumin analogues), using molecular modeling methods with the Molecular Operating Environment (MOE) software to predict their interactions with this enzyme.

Keywords: carbonic anhydrase-II, curcumin analogues, drug research, molecular modeling

Procedia PDF Downloads 89
659 Metal Layer Based Vertical Hall Device in a Complementary Metal Oxide Semiconductor Process

Authors: Se-Mi Lim, Won-Jae Jung, Jin-Sup Kim, Jun-Seok Park, Hyung-Il Chae

Abstract:

This paper presents a current-mode vertical Hall device (VHD) structure using metal layers in a CMOS process. The proposed metal layer based vertical Hall device (MLVHD) utilizes vertical connections among metal layers (from M1 to the top metal) to exploit the Hall effect. The vertical metal structure unit carries a bias current Ibias from top to bottom, and an external magnetic field changes the current distribution by the Lorentz force. The asymmetric current distribution can be detected by two differential-mode current outputs on each side at the bottom (M1), each output sinking Ibias/2 ± Ihall. A single vertical metal structure generates only a small Hall effect Ihall, due to the short length from M1 to the top metal as well as the low conductivity of the metal; a series connection of thousands of vertical structure units can solve the problem by providing N × Ihall. The series connection between two units is another vertical metal structure flowing current in the opposite direction, which generates a negative Hall effect. To mitigate the negative Hall effect from the series connection, the differential current outputs at the bottom (M1) of one unit merge at the top metal level of the other unit. The proposed MLVHD is simulated in a 3-dimensional model in COMSOL Multiphysics with 0.35 μm CMOS process parameters. The simulated MLVHD unit size is (W) 10 μm × (L) 6 μm × (D) 10 μm. In this paper, we use an MLVHD with 10 units; the overall Hall device size is (W) 10 μm × (L) 78 μm × (D) 10 μm. The COMSOL simulation result is as follows: the maximum Hall current is approximately 2 μA with a 12 μA bias current and a 100 mT magnetic field. This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. R7117-16-0165, Development of Hall Effect Semiconductor for Smart Car and Device).

Keywords: CMOS, vertical hall device, current mode, COMSOL

Procedia PDF Downloads 302
658 Education-Based Graphical User Interface Design for Analyzing Phase Winding Inter-Turn Faults in Permanent Magnet Synchronous Motors

Authors: Emir Alaca, Hasbi Apaydin, Rohullah Rahmatullah, Necibe Fusun Oyman Serteller

Abstract:

In recent years, Permanent Magnet Synchronous Motors (PMSMs) have found extensive applications in various industrial sectors, including electric vehicles, wind turbines, and robotics, due to their high performance and low losses. Accurate mathematical modeling of PMSMs is crucial for advanced studies of electric machines. To enhance the effectiveness of graduate-level education, incorporating virtual or real experiments becomes essential to reinforce acquired knowledge. Virtual laboratories have gained popularity as cost-effective alternatives to physical testing, mitigating the risks associated with electrical machine experiments. This study presents a MATLAB-based Graphical User Interface (GUI) for PMSMs. The GUI offers a visual interface that allows users to observe variations in motor outputs corresponding to different input parameters. It enables users to explore healthy motor conditions and the effects of inter-turn short-circuit faults in one phase winding. Additionally, the interface includes menus through which users can access the equivalent circuits of the motor and gain hands-on experience with the mathematical equations used in synchronous motor calculations. The primary objective of this paper is to enhance the learning experience of graduate and doctoral students by providing a GUI-based approach to laboratory studies. This interactive platform empowers students to examine and analyze motor outputs by manipulating input parameters, facilitating a deeper understanding of PMSM operation and control.
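The kind of input-output relationship such a GUI exposes can be illustrated with the standard dq-frame torque equation of a PMSM, Te = 1.5*p*(lam_m*iq + (Ld - Lq)*id*iq). This is a generic textbook relation, not the paper's model, and the parameter values below are illustrative assumptions:

```python
def pmsm_torque(p_pairs, lam_m, l_d, l_q, i_d, i_q):
    """Steady-state electromagnetic torque of a PMSM in the dq frame:
    Te = 1.5 * p * (lam_m * iq + (Ld - Lq) * id * iq), combining the
    magnet-alignment and reluctance torque components."""
    return 1.5 * p_pairs * (lam_m * i_q + (l_d - l_q) * i_d * i_q)

# Illustrative motor parameters (assumptions, not taken from the paper).
te = pmsm_torque(p_pairs=4, lam_m=0.1, l_d=2e-3, l_q=3e-3, i_d=0.0, i_q=10.0)
print(te)  # torque in N*m for the chosen operating point
```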

Keywords: permanent magnet synchronous motor, mathematical modelling, education tools, winding inter-turn fault

Procedia PDF Downloads 51
657 Evolution of Multimodulus Algorithm Blind Equalization Based on Recursive Least Square Algorithm

Authors: Sardar Ameer Akram Khan, Shahzad Amin Sheikh

Abstract:

Blind equalization is an important technique in the equalization family. Multimodulus blind equalization algorithms remove the undesirable effects of ISI and also handle phase recovery, saving the cost of a rotator at the receiver end. In this paper, a new algorithm named RLSMMA, combining the recursive least square and multimodulus algorithms, is proposed; under a few assumptions, fast convergence and minimum mean square error (MSE) are achieved. The merit of this technique is shown in simulations presenting MSE plots and the resulting filter responses.
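A minimal sketch of the multimodulus error term that underlies such algorithms follows. The dispersion constants and the gradient-style tap update are generic textbook forms, not the paper's RLSMMA, which replaces the gradient step with a recursive least square update:

```python
import numpy as np

def mma_error(y, r2_real, r2_imag):
    """Multimodulus error: penalizes the real and imaginary parts
    separately, which also corrects phase without a separate rotator."""
    e_r = y.real * (y.real**2 - r2_real)
    e_i = y.imag * (y.imag**2 - r2_imag)
    return e_r + 1j * e_i

def mma_update(w, x, mu, r2_real, r2_imag):
    """One stochastic-gradient tap update (the RLS variant in the paper
    would replace this step with a recursive least square recursion)."""
    y = np.vdot(w, x)                 # equalizer output w^H x
    e = mma_error(y, r2_real, r2_imag)
    return w - mu * e.conj() * x

rng = np.random.default_rng(0)
w = np.zeros(5, complex)
w[2] = 1.0                            # center-spike initialization
x = rng.normal(size=5) + 1j * rng.normal(size=5)
w = mma_update(w, x, mu=1e-3, r2_real=1.0, r2_imag=1.0)
print(w.shape)
```

For a QPSK-like constellation with unit moduli, an output of 1+1j sits exactly on both dispersion circles, so the error vanishes.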

Keywords: blind equalizations, constant modulus algorithm, multi-modulus algorithm, recursive least square algorithm, quadrature amplitude modulation (QAM)

Procedia PDF Downloads 644
656 In Silico Study of Alpha-Glucosidase Inhibitors by Flavonoids

Authors: Boukli Hacene Faiza, Soufi Wassila, Ghalem Said

Abstract:

Oral antidiabetic drugs such as the alpha-glucosidase inhibitor acarbose have undesirable side effects. Flavonoids are a class of molecules widely distributed in plants; for this reason, our work studies the in silico inhibition of alpha-glucosidase by natural ligands (flavonoid analogues) using molecular modeling methods in the MOE (Molecular Operating Environment) software, predicting their interactions with this enzyme through score energies, ADME/T tests and druglikeness property experiments. Two flavonoids, baicalein and apigenin, show high binding affinity to alpha-glucosidase with low IC50 values, suggesting they are potent inhibitors.
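The druglikeness screening mentioned above typically includes a rule-of-five check. A minimal sketch follows; the apigenin descriptor values are approximate literature values used purely for illustration, not results from this study:

```python
def lipinski_pass(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski rule-of-five druglikeness screen: a candidate passes
    if it has at most one violation of the four thresholds."""
    violations = sum([
        mol_weight > 500,   # molecular weight over 500 Da
        logp > 5,           # octanol-water partition coefficient over 5
        h_donors > 5,       # more than 5 hydrogen-bond donors
        h_acceptors > 10,   # more than 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# Apigenin descriptors (approximate literature values, for illustration).
print(lipinski_pass(mol_weight=270.24, logp=1.7, h_donors=3, h_acceptors=5))
```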

Keywords: alpha glucosidase, flavonoid analogues, drug research, molecular modeling

Procedia PDF Downloads 107
655 A Challenge of the 3ʳᵈ Millennium: The Emotional Intelligence Development

Authors: Florentina Hahaianu, Mihaela Negrescu

Abstract:

The analysis of the positive and negative effects of technology use and abuse in Generation Z comes as a necessity in order to understand their ever-changing emotional development needs. The article quantitatively analyzes the findings of a sociological questionnaire applied to a group of students in the social sciences. It aimed to identify the changes generated by the use of digital resources in the development of emotional intelligence. Among the outcomes of our study is a predilection for IT-related activities (social, learning, entertainment, etc.), which undermines the manifestation of emotional intelligence, especially through reluctance toward face-to-face interaction. In this context, the issue of emotional intelligence development comes into focus as a solution to compensate for the undesirable effects that contact with technology has on this generation.

Keywords: digital resources, emotional intelligence, generation Z, students

Procedia PDF Downloads 206
654 Inputs and Outputs of Innovation Processes in the Colombian Services Sector

Authors: Álvaro Turriago-Hoyos

Abstract:

Most research tends to see innovation as an explanatory factor in achieving high levels of competitiveness and productivity. More recent studies have begun to analyze the determinants of innovation in the services sector as opposed to the much-discussed industrial sector of a country’s economy. This research paper focuses on the services sector in Colombia, one of Latin America’s fastest growing and biggest economies. Over the past decade, much of Colombia’s economic expansion has relied on commodity exports (mainly oil and coffee) whilst the industrial sector has performed relatively poorly. Such developments highlight the potential innovative role played by the services sector of the Colombian economy and its future growth prospects. This research paper analyzes the relationship between innovation inputs, namely internal sources of innovation (such as R&D activities) and external sources improved by technology acquisition, and innovation outputs, basically the four kinds of innovation that the OECD Oslo Manual recognizes: product, process, marketing and organizational innovations. The instrument used to measure this input-output relationship is based on Knowledge Production Function approaches. We run probit models in order to identify the existing relationships between the above inputs and outputs, and also to identify spill-overs derived from interactions among the components of the value chain of the services firms analyzed: customers, suppliers, competitors, and complementary firms. Data are obtained from the Colombian National Administrative Department of Statistics for the period 2008 to 2013, as published in the II and III Colombian National Innovation Surveys. A short summary of the results leads to the conclusion that firm size and a firm’s level of technological development are important discriminating factors in describing the innovative process at the firm level.
The model’s outcomes show a positive impact of both R&D and technology acquisition investment on the probability of introducing any kind of innovation. Cooperation agreements with customers, research institutes, competitors, and suppliers are also significant. Belonging to a particular industrial group is an important determinant, but only for product and organizational innovation. Health Services, Education, Computer, Wholesale Trade, and Financial Intermediation are the ISIC sectors reporting the highest frequencies in the considered set of firms; these five of the sixteen sectors considered explain, in all cases, more than half of all kinds of innovations. Product innovation shows the highest results, followed by marketing innovation. Distinguishing the same set of firms by size and by membership of the high- or low-tech services sector shows that larger firms produce a larger number of innovations, and that high-tech firms consistently show better innovation performance.
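The probit estimation underlying the Knowledge Production Function approach can be sketched on synthetic data. The regressors standing in for R&D investment and a technology-acquisition indicator are invented for illustration, and the fit below uses direct likelihood maximization rather than the authors' software:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_negloglik(beta, X, y):
    """Negative log-likelihood of a probit model, P(y=1|x) = Phi(x'beta)."""
    p = np.clip(norm.cdf(X @ beta), 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(42)
n = 2000
# Hypothetical regressors: intercept, an R&D-investment stand-in, and a
# 0/1 technology-acquisition indicator (all invented for illustration).
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.integers(0, 2, n)])
true_beta = np.array([-0.5, 0.8, 0.6])
y = (X @ true_beta + rng.normal(size=n) > 0).astype(float)  # latent-variable form

fit = minimize(probit_negloglik, np.zeros(3), args=(X, y), method="BFGS")
print(fit.x.round(2))  # estimates should lie close to true_beta
```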

Keywords: Colombia, determinants of innovation, innovation, services sector

Procedia PDF Downloads 267
653 An Overview of Adaptive Channel Equalization Techniques and Algorithms

Authors: Navdeep Singh Randhawa

Abstract:

Wireless communication has proven highly effective for many applications. However, a wireless channel imposes undesirable effects on the information transmitted through it, such as attenuation, distortion, delays and phase shifts of the signals arriving at the receiver, caused by its band-limited and dispersive nature. One such threat is ISI (Inter Symbol Interference), a major obstacle in high-speed communication. Accurate techniques are therefore needed to remove this effect and achieve error-free communication, and different equalization techniques have been proposed in the literature. This paper presents these equalization techniques, followed by the concept of the adaptive filter equalizer, its algorithms (LMS and RLS), and applications of adaptive equalization.
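A minimal sketch of the LMS adaptive equalizer idea discussed above, on a synthetic BPSK link. The channel taps, step size and filter length are illustrative assumptions:

```python
import numpy as np

def lms_equalizer(x, d, n_taps=11, mu=0.01):
    """Adaptive FIR equalizer trained with the LMS rule w <- w + mu*e*u."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # most recent sample first
        y[n] = w @ u
        e = d[n] - y[n]                    # error vs known training symbol
        w += mu * e * u
    return w, y

rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=5000)       # BPSK training sequence
h = np.array([1.0, 0.4, 0.2])                # dispersive channel causing ISI
x = np.convolve(s, h)[:len(s)] + 0.01 * rng.normal(size=len(s))
w, y = lms_equalizer(x, s)
print(np.mean((s[-500:] - y[-500:]) ** 2))   # residual MSE after convergence
```

The channel here is minimum phase, so the equalizer can learn a causal inverse with no extra decision delay; a real system would also tune the delay.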

Keywords: channel equalization, adaptive equalizer, least mean square, recursive least square

Procedia PDF Downloads 450
652 Investigation of Oscillation Mechanism of a Large-scale Solar Photovoltaic and Wind Hybrid Power Plant

Authors: Ting Kai Chia, Ruifeng Yan, Feifei Bai, Tapan Saha

Abstract:

This research presents a real-world power system oscillation incident from 2022, originating from a hybrid solar photovoltaic (PV) and wind renewable energy farm with a rated capacity of approximately 300 MW in Australia. The voltage and reactive power outputs recorded at the point of common coupling (PCC) oscillated in a sub-synchronous frequency region, and the oscillation was sustained for approximately five hours in the network. The reactive power oscillation gradually increased over time and reached a recorded maximum of approximately 250 MVar peak-to-peak (from inductive to capacitive). The network service provider was not able to quickly identify the location of the oscillation source because the issue was widespread across the network. After the incident, the original equipment manufacturer (OEM) concluded that the oscillation problem was caused by the incorrect setting recovery of the hybrid power plant controller (HPPC) in the voltage and reactive power control loop after a loss-of-communication event. The voltage controller normally outputs a reactive power (Q) reference value to the Q controller, which controls the Q dispatch setpoint of the PV and wind plants in the hybrid farm. Meanwhile, a feed-forward (FF) configuration is used to bypass the Q controller in case of a loss of communication. Further study found that the FF control mode was still engaged when communication was re-established, which ultimately resulted in the oscillation event. However, there was no detailed explanation of why the FF control mode can cause instability in the hybrid farm, nor was the event duplicated in simulation to analyze the root cause of the oscillation. Therefore, this research aims to model and replicate the oscillation event in a simulation environment and investigate the underlying behavior of the HPPC and the consequent oscillation mechanism during the incident.
The outcome of this research will provide significant benefits to the safe operation of large-scale renewable energy generators and power networks.
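The mode-selection hazard described by the OEM can be caricatured as a tiny state machine. This is a toy illustration of the reported failure logic, not the actual HPPC implementation; removing the recovery branch in `on_comms` reproduces the latched feed-forward behaviour behind the incident:

```python
class HybridPlantController:
    """Toy sketch of the Q-reference path described in the abstract.

    The voltage loop produces q_ref; a feed-forward (FF) path bypasses
    the Q controller when communication is lost. The incident reported
    in the abstract corresponds to FF staying engaged after the link
    is re-established."""

    def __init__(self):
        self.feed_forward = False

    def on_comms(self, link_up):
        # Correct setting recovery: disengage FF once the link is back.
        self.feed_forward = not link_up

    def q_setpoint(self, q_ref, q_controller):
        # FF mode bypasses the supervisory Q controller entirely.
        return q_ref if self.feed_forward else q_controller(q_ref)

ctl = HybridPlantController()
ctl.on_comms(False)                             # loss of communication
print(ctl.q_setpoint(0.2, lambda q: 0.5 * q))   # FF bypass: prints 0.2
ctl.on_comms(True)                              # link restored, FF disengaged
print(ctl.q_setpoint(0.2, lambda q: 0.5 * q))   # closed loop: prints 0.1
```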

Keywords: PV, oscillation, modelling, wind

Procedia PDF Downloads 37
651 Automatic Generation of Census Enumeration Area and National Sampling Frame to Achieve Sustainable Development Goals

Authors: Sarchil H. Qader, Andrew Harfoot, Mathias Kuepie, Sabrina Juran, Attila Lazar, Andrew J. Tatem

Abstract:

The need for high-quality, reliable, and timely population data, including demographic information, to support the achievement of the Sustainable Development Goals (SDGs) in all countries was recognized by the United Nations' 2030 Agenda for Sustainable Development. However, many low- and middle-income countries lack reliable and recent census data. To achieve reliable and accurate census and survey outputs, up-to-date census enumeration areas and digital national sampling frames are critical. Census enumeration areas (EAs) are the smallest geographic units for collecting, disseminating, and analyzing census data and are often used as a national sampling frame to serve various socio-economic surveys. Even for countries that are wealthy and stable, creating and updating EAs is a difficult yet crucial step in preparing for a national census. Such a process is commonly done manually, either by digitizing small geographic units on high-resolution satellite imagery or by walking the boundaries of units, both of which are extremely expensive. We have developed a user-friendly tool that can be employed to generate draft EA boundaries automatically. The tool is based on high-resolution gridded population and settlement datasets, GPS household locations, and building footprints, and uses publicly available natural, man-made and administrative boundaries. Initial outputs were produced in Burkina Faso, Paraguay, Somalia, Togo, Niger, Guinea, and Zimbabwe. The results indicate that the EAs are in line with international standards: their boundaries are easily identifiable and follow ground features, have no overlaps, are compact and free of pockets and disjoints, and are nested within administrative boundaries.
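A heavily simplified sketch of how gridded population data can be grouped into draft EAs of a target population size follows. It is a greedy row-major pass over the raster; the real tool also honours natural, man-made and administrative boundaries, which this sketch ignores:

```python
import numpy as np

def group_cells_into_eas(pop_grid, target_pop):
    """Greedy row-major grouping of gridded population cells into
    enumeration areas of roughly target_pop people each."""
    labels = np.zeros(pop_grid.shape, dtype=int)
    ea_id, running = 0, 0.0
    for idx in np.ndindex(pop_grid.shape):
        labels[idx] = ea_id
        running += pop_grid[idx]
        if running >= target_pop:   # close this EA, start the next one
            ea_id += 1
            running = 0.0
    return labels

grid = np.full((4, 4), 25.0)        # toy gridded population raster
eas = group_cells_into_eas(grid, target_pop=100)
print(len(np.unique(eas)))          # number of draft EAs produced
```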

Keywords: enumeration areas, national sampling frame, gridded population data, preEA tool

Procedia PDF Downloads 144
650 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency band, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on representing the EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of each sub-band as scores, organizes them into a single vector, and uses that vector to train a global SVM classifier.
Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (its dimension is 68% smaller than the original signal), the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results show an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall classification rate of the system compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of the FFT in EEG signal filtering applied to MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
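The FFT-based sub-band decomposition at the heart of the proposal can be sketched by masking FFT coefficients per sub-band instead of running one IIR filter per band. The sampling rate, band edges and test signal below are illustrative assumptions, not the paper's exact 33-band configuration:

```python
import numpy as np

def fft_subbands(signal, fs, bands):
    """Decompose a signal into sub-bands by masking FFT coefficients,
    a cheap alternative to one IIR band-pass filter per sub-band."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    out = []
    for lo, hi in bands:
        masked = np.where((freqs >= lo) & (freqs < hi), spec, 0)
        out.append(np.fft.irfft(masked, n=len(signal)))
    return out

fs = 250.0                          # typical EEG sampling rate (assumption)
t = np.arange(0, 2, 1 / fs)
# Synthetic "EEG": two motor-imagery rhythms at 10 Hz (mu) and 22 Hz (beta).
eeg = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 22 * t)
bands = [(8, 12), (20, 24)]         # two example sub-bands
mu_band, beta_band = fft_subbands(eeg, fs, bands)
print(round(np.std(mu_band), 3), round(np.std(beta_band), 3))
```

Each reconstructed band isolates one rhythm, so its standard deviation matches a unit sine's RMS of about 0.707.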

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 128
649 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems

Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille

Abstract:

Developing better solutions for train rescheduling problems has been drawing the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects; it focuses on timetables, rolling stock and crew duties, but does not take infrastructure limits into account. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to share the available power optimally among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as the train speed profiles, voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a sensitivity analysis phase in order to analyze the behavior of the system and support the decision-making process and/or more precise optimization. This approach is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the output variation. Factor fixing then allows calibrating the input variables which do not influence the outputs. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing.
The approach is tested in the case of a simple railway system, with nominal traffic running on a single-track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the trains' departure times, the train speed reduction at a given position, and the number of trains (cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. A Pareto front is also built.
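Factor prioritization with first-order Sobol indices can be sketched with a standard pick-freeze Monte Carlo estimator. The toy model below is a cheap stand-in for the railway simulator, with input 0 playing the role of the dominant departure-time-spacing variable:

```python
import numpy as np

def first_order_sobol(model, dim, n, rng):
    """Pick-freeze estimator of first-order Sobol indices:
    S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f), inputs uniform on [0,1]."""
    A, B = rng.random((n, dim)), rng.random((n, dim))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    s = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]           # resample only input i
        s[i] = np.mean(fB * (model(ABi) - fA)) / var
    return s

# Toy stand-in for the simulator: output dominated by input 0.
model = lambda x: 4 * x[:, 0] + x[:, 1]
s = first_order_sobol(model, dim=2, n=100_000, rng=np.random.default_rng(0))
print(s.round(2))  # input 0 should explain ~94% of the output variance
```

For this additive model the indices sum to one; in the paper's setting the dominant index flags the departure-time spacing for factor prioritization, and near-zero indices flag candidates for factor fixing.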

Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable

Procedia PDF Downloads 399
648 Internal Node Stabilization for Voltage Sense Amplifiers in Multi-Channel Systems

Authors: Sanghoon Park, Ki-Jin Kim, Kwang-Ho Ahn

Abstract:

This paper discusses the undesirable charge transfer caused by the parasitic capacitances of the input transistors in a voltage sense amplifier. Due to its intrinsic rail-to-rail voltage transition, the input sides are inevitably disturbed, which can compromise the stability of the reference voltage levels. The problem becomes serious in multi-channel systems, where the disturbance alters the reference levels of other channels and so degrades the linearity of the system. In order to alleviate the internal node voltage transition, an internal node stabilization technique utilizing an additional biasing circuit is proposed. It achieves 47% and 43% improvements in node stabilization and input-referred disturbance, respectively.
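The size of the disturbance coupled onto a reference node can be estimated from simple charge sharing. The capacitance and swing values below are illustrative assumptions, not taken from the paper:

```python
def coupled_disturbance(c_par, c_ref, dv_internal):
    """Voltage kick coupled onto a reference node via charge sharing
    through a parasitic capacitance: dV = dv * Cpar / (Cpar + Cref)."""
    return dv_internal * c_par / (c_par + c_ref)

# Illustrative values (assumptions): 5 fF input-device parasitic,
# 1 pF reference decoupling, 1.8 V rail-to-rail internal transition.
dv = coupled_disturbance(5e-15, 1e-12, 1.8)
print(round(dv * 1e3, 2))  # kick on the reference line, in mV
```

Even a few femtofarads of parasitic capacitance against a picofarad-scale reference yields millivolt-level kicks, which accumulate across channels; hence the motivation for the stabilization circuit.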

Keywords: voltage sense amplifier, voltage transition, node stabilization, biasing circuits

Procedia PDF Downloads 478
647 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine-learning-assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs this can yield an order of magnitude or more of speed-up. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in SchNet and MEGNet, for example. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and bond angles. The key to accuracy in multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on four different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at five different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO levels.
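The Δ-ML strategy described above (high fidelity = low fidelity + learned correction) can be sketched on synthetic data. Here a simple polynomial least-squares model stands in for the paper's GCN, and all quantities are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: x plays the role of molecular descriptors, e_lo a
# cheap method's output, and e_hi the expensive reference it misses by a
# smooth systematic correction.
n = 500
x = rng.random((n, 3))
e_lo = x @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=n)
e_hi = e_lo + 0.3 * x[:, 0] ** 2 + 0.1   # smooth, learnable correction

# Delta-ML: regress the *difference* e_hi - e_lo on the descriptors
# (a polynomial least-squares fit standing in for the GCN).
feats = np.column_stack([np.ones(n), x, x ** 2])
coef, *_ = np.linalg.lstsq(feats[:400], (e_hi - e_lo)[:400], rcond=None)
pred_hi = e_lo[400:] + feats[400:] @ coef   # low fidelity + learned correction
rmse = np.sqrt(np.mean((pred_hi - e_hi[400:]) ** 2))
print(rmse)  # near zero: the correction lies in the feature span
```

The point of the construction is that only the correction must be learned, so far fewer high-fidelity training points are needed than for learning e_hi directly.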

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 39