Search results for: digital supply chain
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1988

248 Experiment Study on the Influence of Tool Materials on the Drilling of Thick Stacked Plate of 2219 Aluminum Alloy

Authors: G. H. Li, M. Liu, H. J. Qi, Q. Zhu, W. Z. He

Abstract:

Drilling and riveting are widely used in the assembly of carrier rockets, which makes the efficiency and quality of drilling an important factor in the assembly process. Motivated by the problems encountered in drilling thick stacked plates (thickness larger than 10 mm) of carrier rockets, such as drill breakage, loud noise and burrs, an experimental study of the influence of tool material on drilling was carried out. The cutting force was measured with a piezoelectric dynamometer, the hole diameter with a profile projector, and the burrs were observed and measured with a digital stereo microscope. From these measurements, the effects of tool material on drilling were analyzed in terms of drilling force, hole diameter, and burr formation. The results show that, compared with uncoated and coated carbide drills, the drilling force of high-speed steel is larger. However, high-speed steel also has several advantages: a larger number of holes can be drilled, the burr height is small, the hole exit is smooth with fewer slim burrs, and the tool wears rather than fractures. Therefore, high-speed steel tools are suitable for the drilling of thick stacked plates of 2219 aluminum alloy.

Keywords: 2219 aluminum alloy, thick stacked plate, drilling, tool material.

247 A Novel VLSI Architecture of Hybrid Image Compression Model based on Reversible Blockade Transform

Authors: C. Hemasundara Rao, M. Madhavi Latha

Abstract:

Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. The discrete cosine transform has emerged as the state-of-the-art standard for image compression. In this paper, a hybrid image compression technique based on reversible blockade transform coding is proposed. The technique, implemented over regions of interest (ROIs), is based on selecting, for each region, the coefficients that belong to different transforms. This method allows: (1) codification of multiple kernels at various degrees of interest, (2) an arbitrarily shaped spectrum, and (3) flexible adjustment of the compression quality of the image and the background. No modification of the standard JPEG2000 decoder is required. The method was applied to different types of images. Results show a better performance for the selected regions than when image coding methods were employed for the whole set of images. We believe that this method is an excellent tool for future image compression research, mainly for images where region-based coding is of interest, such as medical imaging modalities and several multimedia applications. Finally, a VLSI implementation of the proposed method is shown. It is also shown that the Hartley and cosine transform kernels give better performance than the other models.
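
As a hedged illustration of the DCT stage that such transform-based coders build on, the sketch below applies an orthonormal 2D DCT to an 8×8 block and reconstructs it from a thresholded subset of coefficients. The block values and the retention threshold are hypothetical, and the paper's ROI selection and reversible blockade transform are not reproduced here.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical 8x8 image block (values 0-255).
block = np.arange(64, dtype=float).reshape(8, 8)

coeffs = dctn(block, norm="ortho")           # forward 2D DCT
mask = np.abs(coeffs) >= 1.0                 # keep only "significant" coefficients (assumed threshold)
reconstructed = idctn(coeffs * mask, norm="ortho")

print("retained coefficients:", int(mask.sum()), "of 64")
print("max reconstruction error:", np.abs(block - reconstructed).max())
```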

Keywords: VLSI, Discrete Cosine Transform, JPEG, Hartley transform, Radon Transform

246 Impact of Changes of the Conceptual Framework for Financial Reporting on the Indicators of the Financial Statement

Authors: Nadezhda Kvatashidze

Abstract:

The International Accounting Standards Board has updated the conceptual framework for financial reporting. The main reason is to resolve accounting tasks arising from market development and business transactions of new economic content. In addition, investors call for higher transparency of information and responsibility for results in order to make more accurate risk assessments and forecasts. All of this makes it necessary to further develop the conceptual framework for financial reporting so that users receive useful information. Market development and certain shortcomings of the conceptual framework revealed in practice require its reconsideration and new solutions. Some issues and concepts, such as the disclosure and supply of information, its qualitative characteristics, assessment, and measurement uncertainty, had to be supplemented and refined. The criteria for recognition of certain elements of reporting (assets and liabilities) also had to be updated. All of this is set out in the updated edition of the conceptual framework for financial reporting, a comprehensive collection of concepts underlying the preparation of financial statements. The main objective of the revision is to improve financial reporting by developing a clear package of concepts. This will help the International Accounting Standards Board (IASB) to set a common “Approach & Reflection” for similar transactions on the basis of mutually accepted concepts. As a result, companies will be able to develop coherent accounting policies for transactions or events to which no standard applies or for which a standard allows a choice of accounting policy.

Keywords: Conceptual framework, measurement basis, measurement uncertainty, neutrality, prudence, stewardship.

245 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV

Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran

Abstract:

In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic steps of the real-time ortho-rectification are: (1) decompiling the video stream into individual frames; (2) establishing the interior camera orientation parameters; (3) determining the relative orientation parameters of each video frame with respect to the others; and (4) finding the absolute orientation parameters using a self-calibrating bundle adjustment with the aid of a mathematical model. The ortho-rectified video frames are then mosaicked together to produce a mosaic image of the test area, which is merged with a well-referenced existing digital map for the purposes of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to evaluate our method. Video and telemetry data were collected for about fifteen minutes and processed using the four-step ortho-rectification procedure. The results demonstrate that geometric measurements of the control field from ortho-images are more accurate than those from the original perspective images when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, compared with the six control points measured by a GPS receiver, is between 3 and 5 metres.
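
A minimal sketch of the first step (decompiling the video stream into individual frames) using OpenCV; the file name and frame-sampling interval are assumptions, and the orientation and bundle-adjustment steps are only indicated in the final comment, not implemented.

```python
import cv2

VIDEO_PATH = "uav_survey.mp4"   # hypothetical telemetry-synchronized video file

cap = cv2.VideoCapture(VIDEO_PATH)
frames = []
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % 30 == 0:           # keep roughly one frame per second at 30 fps (assumption)
        frames.append(frame)
    idx += 1
cap.release()

# The extracted frames would then feed the interior, relative and absolute orientation steps.
print(f"extracted {len(frames)} frames")
```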

Keywords: Geo-referencing, ortho-rectification, video frame, self-calibration, UAV, target tracking.

244 Star-Hexagon Transformer Supported UPQC

Authors: Yash Pal, A. Swarup, Bhim Singh

Abstract:

A new topology of unified power quality conditioner (UPQC) is proposed for different power quality (PQ) improvements in a three-phase four-wire (3P-4W) distribution system. For neutral current mitigation, a star-hexagon transformer is connected in shunt near the load along with a three-leg voltage source inverter (VSI) based UPQC. For the mitigation of the source neutral current, the use of passive elements is advantageous over active compensation due to its ruggedness and simpler control. In addition, connecting a star-hexagon transformer for neutral current mitigation reduces the overall rating of the UPQC. The performance of the proposed 3P-4W UPQC topology is evaluated for power-factor correction, load balancing, neutral current mitigation and mitigation of voltage and current harmonics. A simple control algorithm based on the Unit Vector Template (UVT) technique is used as the UPQC control strategy for mitigating the different PQ problems. In this control scheme, the current/voltage control is applied to the fundamental supply currents/voltages instead of the fast-changing APF currents/voltages, thereby reducing the computational delay. Moreover, no extra control is required for source neutral current compensation; hence the number of current sensors is reduced. The performance of the proposed UPQC topology is analyzed through simulation results obtained using MATLAB with its Simulink and Power System Blockset toolboxes.
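
A hedged sketch of the Unit Vector Template idea the control scheme relies on: a clean, unit-amplitude sinusoid in phase with the supply fundamental is extracted from the distorted phase voltage and then scaled to form a reference signal. The waveform, the FFT-based fundamental extraction (a stand-in for the low-pass/PLL stage of a practical UVT scheme), and the reference magnitude are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

f, fs = 50.0, 10_000.0                      # assumed supply frequency and sampling rate
t = np.arange(0, 0.2, 1 / fs)
v_a = 325 * np.sin(2 * np.pi * f * t) + 40 * np.sin(2 * np.pi * 5 * f * t)  # distorted phase-a voltage

# Keep only the 50 Hz component to recover the fundamental.
spectrum = np.fft.rfft(v_a)
freqs = np.fft.rfftfreq(v_a.size, 1 / fs)
fundamental = np.zeros_like(spectrum)
fundamental[np.isclose(freqs, f)] = spectrum[np.isclose(freqs, f)]
v_fund = np.fft.irfft(fundamental, n=v_a.size)

u_a = v_fund / np.abs(v_fund).max()         # unit vector template, in phase with the supply
i_a_ref = 10.0 * u_a                        # reference current for a hypothetical 10 A peak demand
print("template peak:", round(np.abs(u_a).max(), 3))
```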

Keywords: Power-factor correction, Load balancing, UPQC, Voltage and Current harmonics, Neutral current mitigation, Star-hexagon transformer.

243 Simulation and Assessment of Carbon Dioxide Separation by Piperazine Blended Solutions Using E-NRTL and Peng-Robinson Models: A Study of Regeneration Heat Duty

Authors: Arash Esmaeili, Zhibang Liu, Yang Xiang, Jimmy Yun, Lei Shao

Abstract:

High-pressure carbon dioxide (CO2) absorption from a specific off-gas in a conventional column was evaluated, in view of environmental concerns, with the Aspen HYSYS simulator using a wide range of single absorbents and piperazine (PZ) blended solutions. The outlet CO2 concentration, CO2 loading, reboiler power supply and regeneration heat duty were estimated in order to choose the most efficient solution in terms of CO2 removal and required heat duty. The property package, which is compatible with all solutions applied in this simulation study, estimates the properties based on the electrolyte non-random two-liquid (E-NRTL) model for electrolyte thermodynamics and the Peng-Robinson equation of state for the vapor phase and liquid hydrocarbon phase. The simulation results indicate that PZ and the mixture of PZ and monoethanolamine (MEA) demand the highest regeneration heat duty among the studied single and blended amine solutions, respectively. The blended amine solutions with the lowest PZ concentrations (5 wt% and 10 wt%) were considered and compared to reduce the cost of the process, among which the blended solution of 10 wt% PZ + 35 wt% MDEA (methyldiethanolamine) was found to be the most appropriate in terms of CO2 content in the outlet gas, rich-CO2 loading and regeneration heat duty.

Keywords: Absorption, amine solutions, Aspen HYSYS, CO2 loading, piperazine, regeneration heat duty.

242 Endeavoring Innovation via Research and Development Management: A Case of Iranian Industrial Sector

Authors: Reihaneh Montazeri Shatouri, Rosmini Omar, Wan Khairuzzaman Wan Ismail

Abstract:

This study investigates factors in research and development (R&D) growth and explores the role of R&D management in enhancing social innovation and productivity improvement in Iran's industrial sector. It examines the common types of R&D activities and the industries that have benefited the most from active R&D units in Iran. The researchers generated qualitative analyses from primary and secondary data. The primary data were retrieved through interviews with five key players (Managing Director, Internal Manager, General Manager, Executive Manager, and Project Manager) in the industrial sector. The secondary data were acquired from an investigation of Mazandaran, a province in northern Iran. The findings highlight Iran's R&D focus on cost reduction and upgrading productivity. The industries that have benefited the most from active R&D units are the metal, machinery and equipment design, and automotive industries. We rank the primary effects of R&D on productivity improvement as follows: industry improvement, economic growth, use of professional human resources, generation of a productivity and creativity culture, creation of a competitive and innovative environment, and increase in people's knowledge. Generally, low budget allocation and an insufficient supply of highly skilled scientists and engineers are two important obstacles for R&D in Iran. While R&D has resulted in improvements in Iranian society, the transfer of contemporary knowledge to the international market is still lacking.

Keywords: Productivity, R&D, Transfer of Knowledge

241 Natural Ventilation for the Sustainable Tall Office Buildings of the Future

Authors: Ayşin Sev, Görkem Aslan

Abstract:

Sustainable tall buildings that provide comfortable, healthy and efficient indoor environments are clearly desirable as the densification of living and working space for the world's increasing population proceeds. For environmental reasons, these buildings must also be energy efficient. One component of these tasks is the provision of indoor air quality and thermal comfort, which can be enhanced by natural ventilation through the supply of fresh air. Working spaces can be naturally ventilated only through connections to the outdoors, utilizing operable windows, double facades, ventilation stacks, balconies, patios, terraces and sky gardens. Large amounts of fresh air can then be provided to indoor spaces without the mechanical air-conditioning systems that are widely employed in contemporary tall buildings. This paper presents the concept of natural ventilation for sustainable tall office buildings in order to achieve healthy, comfortable working spaces as well as energy-efficient environments. First, the historical evolution of ventilation strategies for tall buildings is presented, beginning with natural ventilation and continuing with the introduction of mechanical air-conditioning systems. Then the re-emergence of natural ventilation in tall buildings, driven by health and environmental concerns, is discussed, and strategies for implementing it are set out. In the next section, a number of case studies that utilize this strategy are investigated. Finally, how tall office buildings can benefit from this strategy is discussed.

Keywords: Tall office building, natural ventilation, energy efficiency, double-skin façade, stack ventilation, air conditioning.

240 Applications of Building Information Modeling (BIM) in Knowledge Sharing and Management in Construction

Authors: Shu-Hui Jan, Shih-Ping Ho, Hui-Ping Tserng

Abstract:

Construction knowledge can be referred to and reused among involved project managers and jobsite engineers to alleviate problems on a construction jobsite and reduce the time and cost of solving problems related to constructability. This paper proposes a new methodology for sharing construction knowledge using the Building Information Modeling (BIM) approach. The main characteristics of BIM include 3D CAD-based presentation, storage of information in a digital format, and easy updating and transfer of information in the 3D BIM environment. Using the BIM approach, project managers and engineers can gain knowledge related to 3D BIM and obtain feedback provided by jobsite engineers for future reference. This study addresses the application of knowledge sharing management in the construction phase of construction projects and proposes a BIM-based Knowledge Sharing Management (BIMKSM) system for project managers and engineers. The BIMKSM system is then applied in a selected case study of a construction project in Taiwan to verify the proposed methodology and demonstrate the effectiveness of sharing knowledge in the BIM environment. The combined results demonstrate that the BIMKSM system can be used as a visual BIM-based knowledge sharing management platform by utilizing the BIM approach and web technology.

Keywords: Construction knowledge management, building information modeling, project management, web-based information system.

239 A Test Methodology to Measure the Open-Loop Voltage Gain of an Operational Amplifier

Authors: Maninder Kaur Gill, Alpana Agarwal

Abstract:

It is not practically feasible to measure the open-loop voltage gain of an operational amplifier in the open-loop configuration, because the open-loop voltage gain is very large. To avoid saturation of the output voltage, a very small input must be applied to the operational amplifier, and such a small input cannot practically be measured with a digital multimeter. A test circuit for measuring the open-loop voltage gain of an operational amplifier is proposed and verified using simulation tools as well as experimentally on a breadboard. The main advantage of this test circuit is that it is simple, fast, accurate, cost-effective, and easy to handle even on a breadboard. The test circuit requires only the device under test (DUT) along with resistors. The circuit has been tested for the measurement of open-loop voltage gain of different operational amplifiers. The underlying goal is to design testable circuits for various analog devices that are simple to realize in VLSI systems, give accurate results and do not change the characteristics of the original system. The DUTs used are the LM741CN and UA741CP. For the LM741CN, the simulated gain and the experimentally measured gain (average) are 89.71 dB and 87.71 dB, respectively. For the UA741CP, the simulated gain and the experimentally measured gain (average) are 101.15 dB and 105.15 dB, respectively. These values are close to the datasheet values.
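
A small worked check of the decibel figures quoted above: voltage gain in dB is 20·log10(Vout/Vin), so the sketch below converts the reported gains back to dimensionless ratios. The voltage values themselves are not given in the abstract and are not assumed here.

```python
import math

def db_to_ratio(gain_db: float) -> float:
    """Convert a voltage gain in dB to a plain Vout/Vin ratio."""
    return 10 ** (gain_db / 20)

for label, gain_db in [("LM741CN simulated", 89.71),
                       ("LM741CN measured", 87.71),
                       ("UA741CP simulated", 101.15),
                       ("UA741CP measured", 105.15)]:
    print(f"{label}: {gain_db} dB  ->  Vout/Vin ~ {db_to_ratio(gain_db):,.0f}")
```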

Keywords: Device under test, open-loop voltage gain, operational amplifier, test circuit.

238 The Practice of Teaching Chemistry by the Application of Online Tests

Authors: Nikolina Ribarić

Abstract:

E-learning is most commonly defined as a set of applications and processes, such as web-based learning, computer-based learning, virtual classrooms and digital collaboration, that enable access to instructional content through a variety of electronic media. The main goal of an e-learning system is learning, and the way to evaluate the impact of an e-learning system is to examine whether students learn effectively with its help. Testmoz is a program for the online preparation of knowledge-evaluation assignments. It provides teachers with computer support for designing assignments and evaluating them, while students can review and solve assignments and check the correctness of their solutions. Research into the increase in motivation achieved by practising course content through online tests prepared in Testmoz was carried out with students of the 8th grade of Ljubo Babić Primary School in Jastrebarsko. The students took the tests in their free time, from home, an unlimited number of times. SPSS was used to process the data obtained with the research instruments. The results showed that students preferred to practise the course content, and achieved better educational results in chemistry, when they had access to online tests for revision and practice, compared with content that was checked after revision and practice in the "classical way", i.e., by solving assignments in a workbook or writing assignments on worksheets.

Keywords: Chemistry class, e-learning, online test, Testmoz.

237 Pectoral Muscles Suppression in Digital Mammograms Using Hybridization of Soft Computing Methods

Authors: I. Laurence Aroquiaraj, K. Thangavel

Abstract:

Breast region segmentation is an essential prerequisite in the computerized analysis of mammograms. It aims at separating the breast tissue from the background of the mammogram and includes two independent segmentations. The first segments the background region, which usually contains annotations, labels and frames, from the whole breast region, while the second removes the pectoral muscle portion (present in Medio-Lateral Oblique (MLO) views) from the rest of the breast tissue. In this paper we propose a hybridization of Connected Component Labeling (CCL), fuzzy, and straight-line methods. The proposed methods work well for separating the pectoral region. After removal of the pectoral muscle from the mammogram, further processing is confined to the breast region alone. To demonstrate the validity of our segmentation algorithm, it is extensively tested using over 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database. The segmentation results were evaluated using the Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand Index (PRI), Local Consistency Error (LCE) and Tanimoto Coefficient (TC). The hybridization of the fuzzy and straight-line methods gives more than 96% of the curve segmentations rated adequate or better. In addition, a comparison with similar approaches from the state of the art is given, obtaining slightly improved results. Experimental results demonstrate the effectiveness of the proposed approach.
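
As a hedged illustration of the Connected Component Labeling step used to separate the breast region from background annotations, the sketch below labels the foreground of a thresholded mammogram-like array and keeps only the largest component. The synthetic array is an assumption for demonstration; the paper's fuzzy and straight-line stages are not reproduced.

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for a thresholded mammogram: a large "breast" blob plus a small "label" artifact.
img = np.zeros((100, 100), dtype=np.uint8)
img[20:90, 10:60] = 1        # breast region
img[5:12, 80:95] = 1         # annotation/label artifact

labels, n = ndimage.label(img)                       # connected component labeling
sizes = ndimage.sum(img, labels, index=range(1, n + 1))
largest = labels == (np.argmax(sizes) + 1)           # keep only the largest component

print("components found:", n, "| kept pixels:", int(largest.sum()))
```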

Keywords: X-ray Mammography, CCL, Fuzzy, Straight line.

236 Bilingual Gaming Kit to Teach English Language through Collaborative Learning

Authors: Sarayu Agarwal

Abstract:

This paper aims to teach English (secondary language) by bridging the understanding between a regional language (primary language) and the English language (secondary language). Here the primary language is the one a person has learned from birth or within the critical period, while the secondary language is any other language one learns or speaks. The paper also focuses on evolving old teaching methods into a contemporary participatory model of learning and teaching. Pilot studies were conducted to gauge students' knowledge of the English language. Teachers and students were interviewed and their academic curriculum was assessed as part of the initial study. An extensive literature study and design-thinking principles were used to devise a solution to the problem. The objective is met using a holistic learning kit/card game to teach children word recognition, word pronunciation, word spelling and word writing. The implication of the paper is a noticeable improvement in the understanding and grasp of the English language. With the increasing usage and applicability of English as a second language (ESL) worldwide, the paper is relevant due to its easy replicability to any other primary or secondary language. The future scope of this paper is transforming the idea of participatory learning into self-regulated learning methods. With the upcoming government learning centres in rural areas and the provision of smart devices such as tablets, developing the card games into digital applications seems very feasible.

Keywords: English as a second language, vocabulary-building, learning through gamification.

235 Governance, Risk Management, and Compliance Factors Influencing the Adoption of Cloud Computing in Australia

Authors: Tim Nedyalkov

Abstract:

A business decision to move to the cloud brings fundamental changes in how an organization develops and delivers its Information Technology solutions. The accelerated pace of digital transformation across businesses and government agencies increases the reliance on cloud-based services. Collecting, managing, and retaining large amounts of data in cloud environments makes information security and data privacy protection essential. It becomes even more important to understand which key factors drive successful cloud adoption following the commencement of the Privacy Amendment (Notifiable Data Breaches) (NDB) Act 2017 in Australia, as the regulatory changes impact many organizations and industries. This quantitative correlational research investigated the governance, risk management, and compliance factors contributing to cloud security success and influencing the adoption of cloud computing within an organizational context after the commencement of the NDB scheme. The results and findings demonstrate that corporate information security policies, data storage location, management understanding of data governance responsibilities, and regular compliance assessments are the factors influencing cloud computing adoption. The research has implications for organizations, future researchers, practitioners, policymakers, and cloud computing providers seeking to meet rapidly changing regulatory and compliance requirements.

Keywords: Cloud compliance, cloud security, cloud security governance, data governance, privacy protection.

234 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. Better measurements could be achieved by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-conditions data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data in the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels for accepting or rejecting incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0 and have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system integrated with the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC.

233 Feasibility Study for a Castor oil Extraction Plant in South Africa

Authors: Mohamed Belaid, Edison Muzenda, Getrude Mitilene, Mansoor Mollagee

Abstract:

A feasibility study for the design and construction of a pilot plant for the extraction of castor oil in South Africa was conducted. The study emphasized the four critical aspects of project feasibility analysis, namely the technical, financial, market and managerial aspects. The technical aspect involved research on existing oil extraction technologies, namely mechanical pressing and solvent extraction, as well as assessment of the proposed production site for both short- and long-term viability of the project. The site is on the outskirts of Nkomazi village in the Mpumalanga province, where connections for water and electricity are currently underway; potential raw material supply proves to be reliable since the province is known for its commercial farming. The managerial aspect was evaluated on the basis that the current producer of castor oil will be fully involved in the project while receiving training and technical assistance from Sasol Technology, the TSC and SEDA. The market and financial aspects were evaluated and the project was considered financially viable, with a Net Present Value (NPV) of R2 731 687 and an Internal Rate of Return (IRR) of 18% at an annual interest rate of 10.5%. The payback time is 6 years for an analysis over the first 10 years, with a net income of R1 971 000 in the first year. The project was thus found to be feasible with a high chance of success while contributing to socio-economic development. It was recommended that laboratory tests be conducted to establish the process kinetics to be used in the initial design of the plant.
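
A hedged sketch of the NPV and simple payback calculations behind figures like those quoted above. The initial investment and the later cash flows below are hypothetical placeholders, not the study's data; only the 10.5% discount rate and the first-year net income are taken from the abstract.

```python
def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value; cashflows[0] is the (negative) initial investment at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_year(cashflows: list[float]) -> int | None:
    """First year in which the cumulative (undiscounted) cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical example: a R6 000 000 investment followed by ten years of net income.
flows = [-6_000_000] + [1_971_000] + [1_000_000] * 9
print("NPV at 10.5%:", round(npv(0.105, flows)))
print("payback year:", payback_year(flows))
```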

Keywords: Mechanical pressing, Net Present Value, Oil extraction, Project feasibility, Solvent extraction

232 Low Value Capacitance Measurement System with Adjustable Lead Capacitance Compensation

Authors: Gautam Sarkar, Anjan Rakshit, Amitava Chatterjee, Kesab Bhattacharya

Abstract:

This paper describes the development of a low-cost, highly accurate low-capacitance measurement system that can be used over a range of 0–400 pF with a resolution of 1 pF. The range of capacitance may be easily altered by a simple resistance or capacitance variation in the measurement circuit. The system uses the quad two-input NAND Schmitt trigger IC CD4093B, with hysteresis, for the measurement, and it is integrated with a PIC 18F2550 microcontroller for data acquisition. The microcontroller interacts with software on the PC through a USB connection, and an attractive graphical user interface (GUI) developed on the PC provides the user with a real-time, online display of the capacitance under measurement. The system uses a differential mode of capacitance measurement, with reference to a trimmer capacitance, that effectively compensates lead capacitances, a notorious source of error in low-capacitance measurements. The hysteresis of the Schmitt-trigger circuits enables reliable operation by greatly minimizing the possibility of false triggering due to stray interference, usually regarded as another source of significant error. Real-life testing showed that the proposed system produces highly accurate capacitance readings when compared with cutting-edge, high-end digital capacitance meters.

Keywords: Capacitance measurement, NAND Schmitt trigger, microcontroller, GUI, lead compensation, hysteresis.

231 Comparison of Compression Ability Using DCT and Fractal Technique on Different Imaging Modalities

Authors: Sumathi Poobal, G. Ravindran

Abstract:

Image compression is one of the most important applications of digital image processing. Advanced medical imaging requires the storage of large quantities of digitized clinical data. Due to constrained bandwidth and storage capacity, however, a medical image must be compressed before transmission and storage. There are two types of compression methods, lossless and lossy. In lossless compression the original image is retrieved without any distortion, while in lossy compression the reconstructed images contain some distortion. The Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are lossy compression methods. This work shows that lossy compression methods can be chosen for medical image compression without significant degradation of image quality. Here, DCT and fractal compression using Partitioned Iterated Function Systems (PIFS) are applied to different imaging modalities such as CT scan, ultrasound, angiogram, X-ray and mammogram. Approximately 20 images are considered in each modality, and the average values of the compression ratio and Peak Signal-to-Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is assessed by the PSNR value. Based on the results it can be concluded that DCT gives higher PSNR values and FIC gives a higher compression ratio. Hence, for medical image compression, DCT can be used wherever picture quality is preferred, and FIC wherever compression for storage and transmission is the priority, without diagnostically losing picture quality.
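
As a hedged illustration of the PSNR metric used to compare the two coders, the sketch below computes the PSNR between an original 8-bit image and a degraded version, plus a compression ratio from a hypothetical coded size; the random test arrays stand in for real medical images.

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB for images with the given peak value."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
degraded = np.clip(original + rng.normal(0, 5, size=original.shape), 0, 255).astype(np.uint8)

print(f"PSNR: {psnr(original, degraded):.2f} dB")
coded_size = original.nbytes // 4            # hypothetical size of the compressed bit stream
print("compression ratio:", original.nbytes / coded_size)
```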

Keywords: DCT, FIC, PIFS, PSNR.

230 MiR-200a/ZEB1 Pathway in Liver Fibrogenesis of Biliary Atresia

Authors: Hai-Ying Liu, Yi-Hao Chen, Shu-Yin Pang, Feng-Hua Wang, Xiao-Fang Peng, Li-Yuan Yang, Zheng-Rong Chen, Yi Chen, Bing Zhu

Abstract:

Objective: Biliary atresia (BA) is characterized by progressive liver fibrosis. Epithelial-mesenchymal transition (EMT) has been implicated as a key mechanism in the pathogenesis of organ fibrosis. MiR-200a has been shown to repress EMT. We aim to explore the role of miR-200a in the fibrogenesis of BA. Methods: We obtained plasma samples and liver samples from patients with BA and from controls to examine the role of miR-200a. Histological liver fibrosis was assessed using the Ishak fibrosis scores. Reverse transcription quantitative polymerase chain reaction (RT-qPCR) was performed to detect the expression of miR-200a in plasma. We also evaluated the expression of miR-200a in liver tissues using tyramide signal amplification fluorescence in situ hybridization (TSA-FISH). The expression of the EMT-related proteins zinc finger E-box-binding homeobox 1 (ZEB1), E-cadherin and α-smooth muscle actin (α-SMA) in the liver sections was detected by immunohistochemical staining. Results: We found that the expression of miR-200a was elevated in both the plasma and liver tissues from BA patients compared with the controls. The hepatic expression of ZEB1 and α-SMA was markedly increased in the liver sections from BA patients compared to the controls, whereas E-cadherin was downregulated in the BA group. Simultaneously, we noted that the hepatic expression of miR-200a, E-cadherin and α-SMA was upregulated with the progression of liver fibrosis in the BA group, while ZEB1 was downregulated with the progression of liver fibrosis in BA patients. Conclusion: These findings suggest that EMT has a critical effect on the fibrotic process of BA, and the interaction between miR-200a and ZEB1 may regulate EMT and eventually influence liver fibrogenesis of BA.

Keywords: Biliary atresia, liver fibrosis, MicroRNA, epithelial-mesenchymal transition, zinc finger E-box-binding homeobox 1.

229 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data

Authors: M. A. Meslem

Abstract:

For precise geoid determination, a reference field is used to subtract the long and medium wavelengths of the gravity field from observation data when applying the remove-compute-restore technique. A comparison study between candidate models should therefore be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models were selected for a comparison study over Northern Algeria: the Earth Gravitational Model (EGM2008) and the Global Gravity Model (GECO), the latter constructed by combining the first model with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the study area were used to compute residual data from both gravity field models, and a Digital Terrain Model (DTM) was used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions, which were fitted to a closed form in order to compare their statistical behavior in the two cases. Finally, height anomalies were computed from both geopotential models and compared with a set of GPS-levelled benchmark points using least-squares adjustment. The results described in detail in this paper point to a slight overall advantage of the GECO global model, based on a comparison of error degree variances and on ground-truth evaluation.

Keywords: Quasigeoid, gravity anomalies, covariance, GGM.

228 Embedded Semi-Fragile Signature Based Scheme for Ownership Identification and Color Image Authentication with Recovery

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a novel scheme is proposed for ownership identification and color image authentication by deploying cryptography and digital watermarking. The color image is first transformed from RGB to the YST color space, exclusively designed for watermarking. After the color space transformation, each channel is divided into 4×4 non-overlapping blocks, with selection of the central 2×2 sub-blocks. Depending upon the channel selected, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication and recovery information. The size and position of the sub-block are important for correct localization, enhanced security and fast computation. As YS ⊥ T, the T channel is suitable for embedding the recovery information apart from the ownership and authentication information; therefore the 4×4 blocks of the T channel, together with the ownership information, are hashed with SHA-160 to compute a content-based hash that is unique and invulnerable to birthday attacks or hash collisions, instead of using MD5, which may give rise to the condition H(m) = H(m'). For recovery, the intensity mean of each 4×4 block of each channel is computed and encoded in eight bits. For watermark embedding, a key-based mapping of blocks is performed using the 2D Torus Automorphism. Our scheme is oblivious, generates highly imperceptible images with correct localization of tampering within reasonable time, and has the ability to recover the original work with probability close to one.
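
A hedged sketch of the block-level bookkeeping the scheme describes: split a channel into 4×4 blocks, compute a 160-bit content hash per block (SHA-1 is used here as a stand-in for SHA-160), and record the 8-bit intensity mean that would be embedded for recovery. The synthetic channel and the truncation of the digest for display are illustrative assumptions; the actual sub-block selection, LSB layout and torus mapping are not reproduced.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(1)
channel = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)   # stand-in for the T channel

records = []
for r in range(0, channel.shape[0], 4):
    for c in range(0, channel.shape[1], 4):
        block = channel[r:r + 4, c:c + 4]
        digest = hashlib.sha1(block.tobytes()).hexdigest()       # 160-bit content hash of the block
        mean8 = int(block.mean())                                # 8-bit recovery value (0-255)
        records.append(((r, c), digest[:8], mean8))              # truncated digest kept for display only

for pos, digest, mean8 in records[:4]:
    print(pos, digest, mean8)
```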

Keywords: Hash Collision, LSB, MD5, PSNR, SHA160

227 Innovative Teaching in Systems Analysis and Design - an Action Research Project

Authors: Imelda Smit

Abstract:

Systems Analysis and Design is a key subject in Information Technology courses, but students do not find it easy to cope with, since it is not "precise" like programming and not exact like mathematics. It is a subject that works with many concepts, modeling ideas into visual representations and then translating the pictures into a real-life system. To complicate matters, users who are not necessarily familiar with computers need to give their input to ensure that they get the system they need. Systems Analysis and Design also covers two fields, namely analysis, focusing on the analysis of the existing system, and design, focusing on the design of the new system. To be able to test the analysis and design of a system, it is necessary to develop the system, or at least a prototype of it, to test the validity of the analysis and design. The skills necessary for each aspect differ vastly: project management skills, database knowledge and object-oriented principles are all necessary. In the context of a developing country, where students enter tertiary education underprepared and the digital divide is alive and well, students need to be motivated to learn the necessary skills and to get an opportunity to test them in a "live" but protected environment – within the framework of a university. The purpose of this article is to improve the learning experience in Systems Analysis and Design by reviewing the underlying teaching principles used, the teaching tools implemented, the observations made and the reflections that will influence future developments in Systems Analysis and Design. Action research principles allow the focus to be on a few problematic aspects during a particular semester.

Keywords: Action Research, Project Development, Systems Analysis and Design, Technology in Teaching.

226 AI-Driven Cloud Security: Proactive Defense Against Evolving Cyber Threats

Authors: Ashly Joseph

Abstract:

Cloud computing has become an essential component of enterprises and organizations globally in the current era of digital technology. The cloud offers a multitude of advantages, including scalability, flexibility, and cost-effectiveness, rendering it an appealing choice for data storage and processing. The increasing storage of sensitive information in cloud environments has raised significant concerns over the security of such systems. The frequency of cyber threats and attacks specifically aimed at cloud infrastructure has been increasing, presenting substantial dangers to the data, reputation, and financial stability of enterprises. Conventional security methods can become inadequate when confronted with increasingly intricate and dynamic threats. Artificial Intelligence (AI) technologies have the capacity to significantly transform cloud security through their ability to promptly identify and thwart attacks, adjust to emerging risks, and offer intelligent insights for proactive security actions. The objective of this research study is to investigate the utilization of AI technologies in augmenting the security measures of cloud computing systems. This paper aims to offer significant insights and recommendations for businesses seeking to protect their cloud-based assets by analyzing the present state of cloud security, the capabilities of AI, and the possible advantages and obstacles associated with integrating AI into cloud security policies.

Keywords: Machine Learning, Natural Language Processing, Denial-of-Service attacks, Sentiment Analysis, Cloud computing.

225 Evaluation of Coastal Erosion in the Jurisdiction of the Municipalities of Puerto Colombia and Tubará, Atlántico, Colombia in Google Earth Engine with Landsat and Sentinel 2 Images

Authors: Francisco Javier Reyes Salazar, Héctor Mauricio Ramírez

Abstract:

Coastal zones are home to mangrove swamps, coral reefs, and seagrass ecosystems, which are among the most biodiverse and fragile on the planet. These areas support a great diversity of marine life; they are also extraordinarily important for humans in the provision of food, water, wood, and other associated goods and services, and they contribute to climate regulation. The lack of an automated model that generates information on the dynamics of coastline change and coastal erosion is identified as the central problem. In this paper, coastlines from 1984 to 2020 were determined on the Google Earth Engine platform from Landsat and Sentinel-2 images. We then computed the Modified Normalized Difference Water Index (MNDWI) and used the Digital Shoreline Analysis System (DSAS) v5.0. Starting from the 2020 coastline, the 10-year prediction (year 2031) gives an erosion of 238.32 hectares and an accretion of 181.96 hectares, while the 20-year prediction (year 2041) gives an erosion of 544.04 hectares and an accretion of 133.94 hectares. The erosion and accretion of Playa Muelle, in the municipality of Puerto Colombia, were established; this beach will register the highest erosion. The land cover that presented the greatest change was artificialized territory.
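
For reference, MNDWI is commonly defined as (Green − SWIR) / (Green + SWIR). The hedged sketch below computes it for a Sentinel-2 scene with the Google Earth Engine Python API; the collection ID, band names (B3 for green, B11 for SWIR1), point coordinates and water threshold follow common Sentinel-2 conventions and are assumptions, not values taken from the paper.

```python
import ee

ee.Initialize()  # assumes Earth Engine credentials are already configured

# Hypothetical Sentinel-2 surface-reflectance scene over the Atlántico coast.
image = (ee.ImageCollection("COPERNICUS/S2_SR")
         .filterBounds(ee.Geometry.Point(-74.95, 10.99))
         .filterDate("2020-01-01", "2020-12-31")
         .sort("CLOUDY_PIXEL_PERCENTAGE")
         .first())

# MNDWI = (Green - SWIR1) / (Green + SWIR1); water pixels tend to have MNDWI > 0.
mndwi = image.normalizedDifference(["B3", "B11"]).rename("MNDWI")
water_mask = mndwi.gt(0)   # assumed threshold for separating water from land

print(mndwi.bandNames().getInfo())
```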

Keywords: Coastline, coastal erosion, MNDWI, Google Earth Engine, Colombia.

224 General Haemodynamics, Aerobic Potential and Strategy for Adaptation of Students to Team Sports

Authors: V.A. Baronenko, S.I. Bugreeva, K.R. Mekhdieva

Abstract:

A differentiated impact of team sports (basketball, indoor soccer, handball) on the general haemodynamics and aerobic potential of students who specialize in technical subjects is detected only in the fourth year of study in higher education. Those who play basketball and indoor soccer show increases in stroke and minute blood volume indices, in the pumping and contractile function of the heart, in blood oxygenation and oxygen delivery to tissues, in aerobic energy supply, and in the balance of sympathetic and parasympathetic activity in the nervous regulation of the circulatory system. In those who play handball, these indices are statistically decreased. On the whole, playing basketball and indoor soccer optimizes the students' strategy for adaptation to the studying process, whereas playing handball does the opposite. The leading factor of adaptation differs by sport: for basketball players it is an increase in minute blood volume, which determines the velocity of systemic blood circulation and timely oxygen delivery to tissues; for indoor soccer players it is an increase in the power and velocity of the contractile function of the heart; and for handball players it is an increase in the resistance of the thorax to systemic blood flow, which minimizes the contractile function of the heart, blood oxygen saturation and the delivery of oxygen to tissues.

Keywords: team sports, general haemodynamics, aerobic potential, strategy for adaptation.

223 Influence of Instructors in Engaging Online Graduate Students in Active Learning in the United States

Authors: Ehi E. Aimiuwu

Abstract:

As of 2017, many online learning professionals, institutions, and journals are still wondering how instructors can keep students engaged in the online learning environment in order to facilitate active learning effectively. The purpose of this qualitative single-case and narrative research is to explore whether online professors understand their role as mentors and facilitators of students' academic success by keeping students engaged in active learning, based on personalized experience in the field. Data collection tools used in the study included the NVivo 12 Plus qualitative analysis software, an interview protocol, a digital audiotape, an observation sheet, and transcription. Seven online professors in the United States, recruited through LinkedIn and residencies, were interviewed for this study. Eleven online teaching techniques from previous research were used as the study framework. The data analysis process, member checking, and key themes were used to achieve saturation. About 85.7% of professors agreed on rubrics as the preferred online grading technique. About 57.1% agreed on professors logging in daily, students logging in about 2-5 times weekly, knowing students to increase accountability, email as the preferred communication tool, and computer access for adequate online learning. About 42.9% agreed on a syllabus for clear class expectations, participation to show what has been learned, and energizing students for creativity.

Keywords: Class facilitation, class management, online teaching, online education, pedagogy.

222 Comparison between Higher-Order SVD and Third-order Orthogonal Tensor Product Expansion

Authors: Chiharu Okuma, Jun Murakami, Naoki Yamamoto

Abstract:

In digital signal processing it is important to approximate multi-dimensional data by rank reduction, in which the rank of multi-dimensional data is reduced from higher to lower. For 2-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. In addition, an outer product expansion extended from SVD was proposed and implemented for multi-dimensional data, and it has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion has high computational complexity and lacks orthogonality between the expansion terms. Therefore we have proposed an alternative method, the Third-order Orthogonal Tensor Product Expansion (3-OTPE). 3-OTPE uses the power method instead of a nonlinear optimization method to decrease computing time. At the same time, the group of De Lathauwer proposed the Higher-Order SVD (HOSVD), which also extends SVD to multi-dimensional data. 3-OTPE and HOSVD are similar in their approach to the rank reduction of multi-dimensional data. Using these two methods we obtain computation results that are partly the same and partly slightly different. In this paper, we compare 3-OTPE with HOSVD in terms of calculation accuracy and computing time, and clarify the difference between the two methods.
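
A minimal sketch of HOSVD for a third-order tensor using NumPy, assuming the standard formulation (SVD of each mode-n unfolding, followed by projection onto the three factor matrices to obtain the core tensor). This is a generic illustration, not the authors' implementation of either HOSVD or 3-OTPE.

```python
import numpy as np

def unfold(tensor: np.ndarray, mode: int) -> np.ndarray:
    """Mode-n unfolding: move `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_product(tensor: np.ndarray, matrix: np.ndarray, mode: int) -> np.ndarray:
    """Multiply `tensor` by `matrix` along the given mode."""
    moved = np.moveaxis(tensor, mode, 0)
    return np.moveaxis(np.tensordot(matrix, moved, axes=1), 0, mode)

def hosvd(tensor: np.ndarray):
    """Return factor matrices U1..U3 and core S such that T = S x1 U1 x2 U2 x3 U3."""
    factors = [np.linalg.svd(unfold(tensor, mode), full_matrices=False)[0]
               for mode in range(tensor.ndim)]
    core = tensor
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    return factors, core

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
factors, core = hosvd(T)

recon = core
for mode, U in enumerate(factors):
    recon = mode_product(recon, U, mode)
print("reconstruction error:", np.linalg.norm(T - recon))  # ~0, since no truncation is applied
```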

Keywords: Singular value decomposition (SVD), higher-order SVD (HOSVD), higher-order tensor, outer product expansion, power method.

221 Integrating Fast Karnough Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

Authors: Hazem M. El-Bakry

Abstract:

In this paper a new fast simplification method is presented. The method realizes Karnaugh maps with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross-correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross-correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented using a new design of neural networks. Neural networks are used because they are fault tolerant and as a result can recognize signals even with noise or distortion, which is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum number of components. This is done by using modular neural networks (MNNs) that divide the input space into several homogeneous regions. The approach is applied to implement the XOR function, 16 logic functions at the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
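
A hedged sketch of the frequency-domain idea the speed-up rests on: the cross-correlation of two sequences can be obtained by multiplying the FFT of one with the conjugate FFT of the other and inverting, which matches the direct time-domain computation. The binary row and search pattern below are arbitrary examples, not the paper's Karnaugh-map encoding.

```python
import numpy as np

x = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)   # e.g. a row of map cells
h = np.array([1, 1, 0], dtype=float)                   # pattern of ones to detect

# Zero-pad to a common length so circular correlation matches linear correlation.
n = x.size + h.size - 1
corr_freq = np.fft.irfft(np.fft.rfft(x, n) * np.conj(np.fft.rfft(h, n)), n)

corr_time = np.correlate(x, h, mode="full")[h.size - 1:]  # direct time-domain correlation, lags >= 0
print(np.allclose(corr_freq[:corr_time.size], corr_time))  # True: both give the same result
```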

Keywords: Boolean Functions, Simplification, Karnaugh Map, Implementation of Logic Functions, Modular Neural Networks.

220 Choose the Right Mutation Rate to Better Evolve Combinational Logic Circuits

Authors: Emanuele Stomeo, Tatiana Kalganova, Cyrille Lambert

Abstract:

Evolvable hardware (EHW) is a developing field that applies evolutionary algorithms (EAs) to automatically design circuits, antennas, robot controllers, etc. A lot of research has been done in this area and several different EAs have been introduced to tackle numerous problems, such as scalability and evolvability. However, every time a specific EA is chosen for solving a particular task, all its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, should be selected in order to achieve the best results. Over the last three decades, the selection of the right parameters for the EA's components for solving different "test problems" has been investigated. In this paper the behaviour of the mutation rate for designing logic circuits, which has not been studied before, is analyzed in depth. The mutation rate in an EHW system modifies the number of inputs of each logic gate, its functionality (for example from AND to NOR), and the connectivity between logic gates. The behaviour of the mutation rate is analyzed with respect to the number of generations, the genotype redundancy, and the number of logic gates of the evolved circuits. The experimental results describe the behaviour of the mutation rate during evolution for the design and optimization of simple logic circuits, and they suggest the best mutation rate to use for designing combinational logic circuits. The research presented is particularly important for those who would like to implement a dynamic mutation rate inside an evolutionary algorithm for evolving digital circuits. Research on the mutation rate over the last 40 years is also summarized.
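
A hedged sketch of the kind of mutation operator described above, applied to a toy gate-level genotype: each gate is a (function, input_a, input_b) record, and with a given per-gene mutation rate a gate's function or one of its connections is redrawn at random. The genotype encoding and the 5% rate are illustrative assumptions, not the authors' representation.

```python
import random

FUNCTIONS = ["AND", "OR", "NAND", "NOR", "XOR"]

def random_gate(index: int, n_inputs: int) -> dict:
    """A gate may connect to any primary input or to any earlier gate (feed-forward)."""
    sources = n_inputs + index
    return {"func": random.choice(FUNCTIONS),
            "a": random.randrange(sources),
            "b": random.randrange(sources)}

def mutate(genotype: list[dict], n_inputs: int, rate: float) -> list[dict]:
    """Redraw each gene (function or connection) independently with probability `rate`."""
    child = []
    for i, gate in enumerate(genotype):
        gate = dict(gate)
        sources = n_inputs + i
        if random.random() < rate:
            gate["func"] = random.choice(FUNCTIONS)
        if random.random() < rate:
            gate["a"] = random.randrange(sources)
        if random.random() < rate:
            gate["b"] = random.randrange(sources)
        child.append(gate)
    return child

random.seed(0)
parent = [random_gate(i, n_inputs=2) for i in range(6)]
child = mutate(parent, n_inputs=2, rate=0.05)   # 5% per-gene rate, an arbitrary example value
print(sum(p != c for p, c in zip(parent, child)), "gates changed")
```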

Keywords: Design of logic circuit, evolutionary computation, evolvable hardware, mutation rate.

219 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance

Authors: Loai AbdAllah, Mahmoud Kaiyal

Abstract:

Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem, and most of them replace the missing values with a fixed value computed from the observed values. In our work, we use a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, whereas when one of them is missing the distance is computed from the distribution of the known values for the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps to improve the prevention of chronic diseases such as diabetes and cancer. For Wikaya's recommendation system to work, distances between users need to be measured, and since there are missing values in the collected data, a distance function between incomplete user profiles had to be developed. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between objects when some of them contain missing values, we integrated it within the framework of a k-nearest-neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm on the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
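
A hedged, per-coordinate sketch of the idea described above: coordinates where both values are known contribute a variance-scaled squared difference, while a coordinate with a missing value contributes the expected squared deviation under the empirical distribution of the observed values for that coordinate. This is a simplified stand-in with a diagonal covariance; the paper's actual Bhattacharyya/Mahalanobis formulation is not reproduced.

```python
import numpy as np

def incomplete_distance(x: np.ndarray, y: np.ndarray, data: np.ndarray) -> float:
    """Squared distance between two profiles that may contain NaNs, given the full data matrix."""
    total = 0.0
    for j in range(data.shape[1]):
        col = data[:, j]
        mu, var = np.nanmean(col), np.nanvar(col) + 1e-9
        if not np.isnan(x[j]) and not np.isnan(y[j]):
            total += (x[j] - y[j]) ** 2 / var             # Mahalanobis-like term (diagonal covariance)
        else:
            known = x[j] if np.isnan(y[j]) else y[j]
            if np.isnan(known):                           # both missing: arbitrary unit contribution
                total += 1.0
            else:
                total += ((known - mu) ** 2 + var) / var  # expected squared deviation over the column
    return total

data = np.array([[1.0, 2.0], [2.0, np.nan], [3.0, 6.0], [4.0, 8.0]])
print(incomplete_distance(data[0], data[1], data))
```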

Keywords: Missing values, distance metric, Bhattacharyya distance.
