Search results for: SoC soft error rate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10361

9731 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key processes in the construction industry. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors that are not caught early in the design phase are potentially costly and time-consuming to correct, becoming expensive either in the construction documents or in the construction phase. The aim of this research is to identify the risk factors of architectural design errors. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questionnaire items were based on the “similar service description of study and supervision of architectural works” published by the “Vice Presidency of Strategic Planning & Supervision of I.R. Iran” as the basis of architects’ tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled by the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as “defining the current and future requirements of the project”, “studies and space planning”, and “time and cost estimation of the suggested solution” carry a higher error risk than others. Moreover, the most important causes include “unclear goals of the client”, “time pressure from the client”, and “lack of knowledge of architects about the requirements of end-users”. 
In detecting errors in the case study, the lack of criteria and standards, and the lack of coordination among them, was a barrier; nevertheless, “lack of coordination between the architectural design and the electrical and mechanical services”, “violation of standard dimensions and sizes in space design”, and “design omissions” were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 129
9730 Improving Anchor Technology for Adapting the Weak Soil

Authors: Sang Hee Shin

Abstract:

This technical improvement project adapts domestic construction technology to weak-soil conditions. The improved technology was applied directly at a local construction site at OOO, OOO. The existing anchor technology was developed for soft ground with an N-value of 10 or less. In soft ground under heavy load, the bonded length per strand is shortened because of the distributed spacing, so the number of installation points increases and the method becomes economically infeasible. In addition, under high tensile load, adhesion occurs between the wedge and the block. To solve these problems, the anchoring function of the strands is strengthened by forming a ‘bulb’ on the strands. To minimize internal damage and strengthen the removal function, a lubricating action is induced using the attached film, and a buffer structure is formed using a wedge lubricating structure and a spring. The technology was verified through in-house testing and field testing. The project improves the reliability of the standardized quality technique and, as a result, is intended to provide technical competitiveness.

Keywords: anchor, improving technology, removal anchor, soil reinforcement, weak soil

Procedia PDF Downloads 207
9729 Wet Sliding Wear and Frictional Behavior of Commercially Available Perspex

Authors: S. Reaz Ahmed, M. S. Kaiser

Abstract:

The tribological behavior of commercially available Perspex was evaluated under dry and wet sliding conditions using a pin-on-disc wear tester with applied loads ranging from 2.5 to 20 N. Experiments were conducted with sliding distances varying from 0.2 km to 4.6 km, while the sliding velocity was kept constant at 0.64 m/s. The results reveal that the weight loss increases with applied load and sliding distance. The nature of the wear rate was very similar in both sliding environments: initially the wear rate increased very rapidly with sliding distance and then progressed at a slower rate. Moreover, the wear rate in the wet sliding environment was significantly lower than that under the dry sliding condition. The worn surfaces were characterized by optical microscopy and SEM. It was found that surface modification has a significant effect on the sliding wear performance of Perspex.

Keywords: Perspex, wear, friction, SEM

Procedia PDF Downloads 269
9728 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong

Abstract:

This paper presents a novel rapid soil classification technique that combines computer vision with the four-probe soil electrical resistivity method and the cone penetration test (CPT) to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, such that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive, so a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity and CPT were combined into an innovative, non-destructive and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of the soil (ρ) is measured using a set of four probes arranged in Wenner’s array. A previous study found that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof of concept was conducted through a series of seven tests with three types of soil: “Good Earth”, “Soft Clay” and an even mix of the two. 
Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay”. It is also found that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
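As a rough sketch of the GLCM step described above, the snippet below (hypothetical images and parameter choices; not the authors' pipeline) computes a co-occurrence matrix for a single pixel offset and two common textural parameters:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey Level Co-occurrence Matrix for one pixel offset (dx, dy)."""
    # Quantise the image to `levels` grey levels.
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()                     # normalise to joint probabilities

def contrast(p):
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

def homogeneity(p):
    i, j = np.indices(p.shape)
    return float((p / (1.0 + (i - j) ** 2)).sum())

rng = np.random.default_rng(0)
smooth = np.tile(np.arange(32), (32, 1))   # gentle gradient: fine texture
rough = rng.integers(0, 255, (32, 32))     # noise: coarse texture
print(contrast(glcm(smooth)), contrast(glcm(rough)))
```

In the study such parameters feed an ANN classifier; here the contrast of the smooth gradient comes out far lower than that of the noisy patch, illustrating how texture statistics can separate soil classes.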

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 209
9727 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation

Authors: Hangsik Shin

Abstract:

The purpose of this research is to restore the feature locations of an under-sampled photoplethysmogram using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz and 10 Hz. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz and compared the feature locations with those of the 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmogram waveform. The results showed that the time differences were dramatically decreased by interpolation; the location error was less than 1 ms for both feature types. In the 10 Hz-sampled case, the location error also decreased considerably; however, it remained above 10 ms.
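The restoration idea can be illustrated with a toy signal, a plain sine standing in for one PPG beat (the 10 kHz reference rate follows the abstract; the 100 Hz intermediate rate and all signal details are illustrative):

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs_ref, fs_low = 10_000, 100          # reference and under-sampled rates (Hz)
t_ref = np.arange(0, 1, 1 / fs_ref)
sig = np.sin(2 * np.pi * 1.2 * t_ref)  # toy "beat" at an assumed 1.2 Hz

step = fs_ref // fs_low
t_low, x_low = t_ref[::step], sig[::step]   # decimate: keep every k-th sample

# Restore the 10 kHz grid with cubic-spline interpolation.
restored = CubicSpline(t_low, x_low)(t_ref)

peak_ref = t_ref[np.argmax(sig)]       # "true" upper-peak location
peak_dec = t_low[np.argmax(x_low)]     # peak location on the coarse grid
peak_int = t_ref[np.argmax(restored)]  # peak location after interpolation
print(abs(peak_dec - peak_ref), abs(peak_int - peak_ref))
```

The interpolated peak lands far closer to the reference location than the decimated one, mirroring the reduction in timing error reported above.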

Keywords: peak detection, photoplethysmography, sampling, signal reconstruction

Procedia PDF Downloads 364
9726 Enhancement of Coupler-Based Delay Line Filters Modulation Techniques Using Optical Wireless Channel and Amplifiers at 100 Gbit/s

Authors: Divya Sisodiya, Deepika Sipal

Abstract:

Optical wireless communication (OWC) is a relatively new technology in optical communication systems that allows for high-speed wireless optical communication. This research focuses on developing a cost-effective OWC system using a hybrid configuration of optical amplifiers. In addition to using EDFA amplifiers, a comparison study was conducted to determine which modulation technique is more effective for communication. This research examines the performance of an OWC system based on ASK and PSK modulation techniques by varying OWC parameters under various atmospheric conditions such as rain, mist, haze, and snow. Finally, the simulation results are discussed and analyzed.

Keywords: OWC, bit error rate, amplitude shift keying, phase shift keying, attenuation, amplifiers

Procedia PDF Downloads 127
9725 Assessment of the Radiation Absorbed Dose Produced by Lu-177, Ra-223, and Ac-225 for Metastatic Prostate Cancer in a Bone Model

Authors: Maryam Tajadod

Abstract:

The treatment of cancer is one of the main challenges of nuclear medicine; while cancer begins in an organ, such as the breast or prostate, it can spread to the bone, resulting in bone metastases. In cancer radiotherapy, determining the dose to the involved tissues is one of the important steps in the treatment protocol. The main aim of this research is to compare the absorbed doses of Lu-177, Ra-223 and Ac-225 in the bone marrow and soft tissue of a bone phantom and to evaluate the energetic particles these radionuclides emit. Using the MCNPX computer code, a bone phantom model was designed, and the absorbed doses were calculated for Ra-223 and Ac-225, which are alpha emitters, and for Lu-177, which is a beta emitter. Comparing the gamma radiation of the three radionuclides, Lu-177 delivered the highest dose to the bone marrow and Ra-223 the lowest. On the other hand, the results showed that although the absorbed doses of Ra-223 and Ac-225 in the bone marrow are close to each other, Ra-223 deposits more energy in cortical bone. Moreover, the alpha components of Ra-223 and Ac-225 have far less effect on the bone marrow and soft tissue than the beta component of Lu-177, and they deposit the highest absorbed dose in the bone where the source is located.

Keywords: bone metastases, lutetium-177, radium-223, actinium-225, absorbed dose

Procedia PDF Downloads 108
9724 The Semiotics of Soft Power; An Examination of the South Korean Entertainment Industry

Authors: Enya Trenholm-Jensen

Abstract:

This paper employs various semiotic methodologies to examine the mechanism of soft power. Soft power refers to a country’s global reputation and its ability to leverage that reputation to achieve certain aims. South Korea has invested heavily in its soft power strategy for a multitude of predominantly historical and geopolitical reasons. On account of this investment and the global prominence of its strategy, South Korea was considered the optimal candidate for the aims of this investigation. Having isolated the entertainment industry as one of the most heavily funded segments of the South Korean soft power strategy, the analysis restricted itself to this sector. Within this industry, two entertainment products were selected as case studies, chosen based on commercial success according to metrics such as streams, purchases, and subsequent revenue. This criterion was deemed the most objective and verifiable indicator of the products’ general appeal. The entertainment products which met the chosen criterion were Netflix’s “Squid Game” and BTS’ hit single “Butter”. The methodologies employed were chosen according to the medium of the entertainment products. For “Squid Game”, an aesthetic analysis was carried out to investigate how multi-layered meanings were mobilized in a show popularized by its visual grammar. To examine “Butter”, both music semiology and linguistic analysis were employed. The music section featured an analysis underpinned by denotative and connotative music semiotic theories borrowing from scholars Theo van Leeuwen and Martin Irvine. The linguistic analysis focused on stance and semantic fields according to scholarship by George Yule and John W. DuBois. The aesthetic analysis of the first case study revealed intertextual references to famous artworks, which served to augment the emotional provocation of the Squid Game narrative. 
For the second case study, the findings exposed a set of musical meaning units arranged in a patchwork of familiar and futuristic elements to achieve a song that exists on the boundary between old and new. The linguistic analysis of the song’s lyrics found a deceptively innocuous surface-level meaning that bore implications for authority, intimacy, and commercial success. Whether through visual metaphor, embedded auditory associations, or linguistic subtext, the collective findings of the three analyses exhibited a desire to conjure a form of positive arousal in the spectator. In the synthesis section, this process is likened to branding: the entertainment products can be understood as cogs in a larger operation aiming to create positive associations with Korea as a country and a concept. Limitations in the form of a timeframe-biased perspective are addressed, and directions for future research are suggested.

Keywords: BTS, cognitive semiotics, entertainment, soft power, South Korea, Squid Game

Procedia PDF Downloads 149
9723 Capturing the Stress States in Video Conferences by Photoplethysmographic Pulse Detection

Authors: Jarek Krajewski, David Daxberger

Abstract:

We propose a stress detection method based on an RGB camera using heart rate detection, also known as Photoplethysmography Imaging (PPGI). This technique measures the small changes in skin colour caused by blood perfusion. A stationary lab setting with simulated video conferences is chosen, using constant light conditions and a sampling rate of 30 fps. The ground-truth heart rate is measured with a common PPG system. Pulse peak detection follows a machine learning approach, applying brute-force feature extraction to predict heart rate pulses. The statistical analysis showed good agreement (correlation r = .79, p < 0.05) between the reference heart rate system and the proposed method. Based on these findings, the proposed method could provide a reliable, low-cost, and contactless way of measuring HR parameters in daily-life environments.
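The PPGI principle, recovering pulse rate from tiny periodic colour changes, can be sketched on a synthetic trace (the paper's actual detector is the machine-learning pipeline; here a plain spectral peak on an assumed mean green-channel signal stands in):

```python
import numpy as np

fs = 30.0                             # camera frame rate (fps), as in the study
t = np.arange(0, 20, 1 / fs)          # 20 s clip, illustrative
hr_true = 72 / 60                     # assumed 72 bpm pulse

# Toy "mean green-channel" trace: baseline + tiny perfusion ripple + noise.
rng = np.random.default_rng(1)
green = 100 + 0.5 * np.sin(2 * np.pi * hr_true * t) + 0.1 * rng.normal(size=t.size)

# Estimate heart rate as the dominant spectral peak in the 0.7-3 Hz band.
spec = np.abs(np.fft.rfft(green - green.mean()))
freqs = np.fft.rfftfreq(green.size, 1 / fs)
band = (freqs > 0.7) & (freqs < 3.0)
hr_est = freqs[band][np.argmax(spec[band])] * 60
print(round(hr_est, 1), "bpm")
```

A real pipeline must additionally cope with motion, lighting drift and skin-region tracking, which is where the learned features come in.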

Keywords: heart rate, PPGI, machine learning, brute force feature extraction

Procedia PDF Downloads 122
9722 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values

Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie

Abstract:

Iterative Learning Control (ILC) is known as a control tool for overcoming periodic disturbances in repetitive systems. The technique drives the error signal towards zero as the number of operations increases. The learning process strongly depends on the initial input which, if selected properly, makes learning more effective compared to the case where a system starts blind. ILC uses data recorded in previous executions to update the input of the following execution/trial such that a reference trajectory is followed with high accuracy. Error convergence in ILC generally depends heavily on the input applied to the plant at trial 1; a good choice of initial input therefore makes learning faster and, as a consequence, drives the error to zero faster as well. In the work presented here, an upper limit based on the Singular Values Principle (SV) is derived for the initial input applied at trial 1, such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which a system, for example a robot arm, is required to move. Simulation results illustrate the theory introduced in this paper.
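A minimal sketch of the idea, under assumptions not taken from the paper (a toy lifted FIR plant and a gradient-type ILC update; the paper's own derivation is not reproduced here), shows how the largest singular value can scale the first-trial input and how the trial error then decays:

```python
import numpy as np

# Toy lifted-system description of a repetitive plant: y = G u over one trial.
N = 50
h = 0.8 ** np.arange(N)                          # assumed impulse response
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])

ref = np.sin(np.linspace(0, np.pi, N))           # reference trajectory
sigma_max = np.linalg.svd(G, compute_uv=False)[0]

# Scale the first-trial input by sigma_max so the first response cannot be
# aggressive: ||G u1|| <= sigma_max * ||u1|| = ||ref||.
u = ref / sigma_max

errors = []
for k in range(30):                              # trials
    e = ref - G @ u                              # trial-k tracking error
    errors.append(np.linalg.norm(e))
    u = u + (1.0 / sigma_max**2) * (G.T @ e)     # gradient-type ILC update
print(errors[0], errors[-1])
```

With the learning gain bounded by 1/σ²max, the error contraction matrix I − G Gᵀ/σ²max has spectral norm below one, so the trial error norm decreases monotonically.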

Keywords: initial input, iterative learning control, maximum input, singular values

Procedia PDF Downloads 236
9721 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z

Authors: Catarina Cruz, Ana Breda

Abstract:

Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer; these tilings are related to error correcting codes for the transmission of information over a noisy channel. We focus our attention on the question ‘for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?’. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, the most difficult cases being considered those in which the radius of the Lee spheres is equal to 2. The relation between these tilings and error correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded by the central codeword. When the Lee spheres of radius r centered at the elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption that such a code M exists. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are at distance two or fewer units from it. By the definition of a PL(7, 2) code, each word at distance three units from O must be covered by a unique codeword of M, and these words have to be covered by codewords at distance five units from O. 
We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are at distance five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords at distance five units from O. There exists an extensive literature on codes in the Lee metric; here, we present a new approach to prove the non-existence of PL(7, 2) codes.
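The cardinalities that such counting arguments manipulate follow from the standard Lee-sphere formula |B(n, r)| = Σₖ 2ᵏ C(n, k) C(r, k); the sketch below (illustrative only, not the paper's proof) enumerates the relevant sphere and shell sizes for n = 7:

```python
from math import comb

def lee_sphere_size(n, r):
    """Number of points of Z^n within Lee (L1) distance r of the origin."""
    return sum(2**k * comb(n, k) * comb(r, k) for k in range(min(n, r) + 1))

def shell_size(n, d):
    """Number of points of Z^n at Lee distance exactly d from the origin."""
    return lee_sphere_size(n, d) - lee_sphere_size(n, d - 1) if d else 1

print(lee_sphere_size(7, 2))   # points each PL(7, 2) codeword would cover
print(shell_size(7, 3))        # distance-3 words that neighbours must cover
```

Counting alone does not settle the tiling question for the infinite lattice, which is why the combinatorial covering argument over the distance-5 codewords is needed.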

Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings

Procedia PDF Downloads 157
9720 Assessment of Time-Variant Work Stress for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

For an operator in a nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. The possibility of human error may be low, but the associated risk would be unimaginably enormous. Thus, for accident prevention, it is quite indispensable to analyze the influence of any factors which may raise the possibility of human error. During the past decades, many research results have shown that the performance of human operators may vary over time due to many factors. Among them, stress is known to be an indirect factor that may cause human errors and result in mental illness. Many assessment tools have been developed to assess the stress level of human workers. However, it is still questionable whether they can be utilized for anticipating human performance, which is related to human error possibility, because they were mainly developed from the viewpoint of mental health rather than industrial safety. The stress level of a person may go up or down with work time. In that sense, to be applicable in the safety aspect, a tool should at least be able to assess the variation resulting from work time. Therefore, this study aimed to compare their applicability for safety purposes. More than 10 kinds of work stress tools were analyzed with reference to assessment items, assessment and analysis methods, and follow-up measures, which are known to be factors closely related to work stress. The results showed that most tools mainly focused their weights on some common organizational factors such as demands, supports, and relationships, in that order, and their weights were broadly similar. However, they failed to recommend practical solutions; instead, they merely advised setting up overall counterplans in a PDCA cycle or risk management activities, which would be far from practical human error prevention. 
Thus, it was concluded that the application of stress assessment tools mainly developed for mental health seems impractical for safety purposes with respect to human performance anticipation, and that the development of a new assessment tool would be inevitable if one wants to assess stress level in the aspect of human performance variation and accident prevention. As a practical counterplan, this study proposed a new scheme for assessing the work stress level of a human operator that may vary over work time, which is closely related to the possibility of human errors.

Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention

Procedia PDF Downloads 668
9719 Banking Sector Development and Economic Growth: Evidence from the State of Qatar

Authors: Fekri Shawtari

Abstract:

The banking sector plays a crucial role in the economic development of a country. As a financial intermediary, it is assigned a great role in economic growth and stability. This paper aims to examine empirically the relationship between the banking industry and economic growth in the State of Qatar. We adopt the vector error correction model (VECM) along with Granger causality to address the long-run and short-run relationship between the banking sector and economic growth. It is expected that the results will give policy directions to policymakers to devise strategies conducive to boosting development and achieving the targeted economic growth in the current situation.

Keywords: economic growth, banking sector, Qatar, vector error correction model, VECM

Procedia PDF Downloads 166
9718 Discrete Element Simulations of Composite Ceramic Powders

Authors: Julia Cristina Bonaldo, Christophe L. Martin, Severine Romero Baivier, Stephane Mazerat

Abstract:

Alumina refractories are commonly used in steel and foundry industries. These refractories are prepared through a powder metallurgy route. They are a mixture of hard alumina particles and graphite platelets embedded into a soft carbonic matrix (binder). The powder can be cold pressed isostatically or uniaxially, depending on the application. The compact is then fired to obtain the final product. The quality of the product is governed by the microstructure of the composite and by the process parameters. The compaction behavior and the mechanical properties of the fired product depend greatly on the amount of each phase, on their morphology and on the initial microstructure. In order to better understand the link between these parameters and the macroscopic behavior, we use the Discrete Element Method (DEM) to simulate the compaction process and the fracture behavior of the fired composite. These simulations are coupled with well-designed experiments. Four mixes with various amounts of Al₂O₃ and binder were tested both experimentally and numerically. In DEM, each particle is modelled and the interactions between particles are taken into account through appropriate contact or bonding laws. Here, we model a bimodal mixture of large Al₂O₃ and small Al₂O₃ covered with a soft binder. This composite is itself mixed with graphite platelets. X-ray tomography images are used to analyze the morphologies of the different components. Large Al₂O₃ particles and graphite platelets are modelled in DEM as sets of particles bonded together. The binder is modelled as a soft shell that covers both large and small Al₂O₃ particles. When two particles with binder indent each other, they first interact through this soft shell. Once a critical indentation is reached (towards the end of compaction), hard Al₂O₃ - Al₂O₃ contacts appear. In accordance with experimental data, DEM simulations show that the amount of Al₂O₃ and the amount of binder play a major role for the compaction behavior. 
The graphite platelets bend and break during compaction, also contributing to the macroscopic stress. The firing step is modelled in DEM by ascribing bonds to particles which contact each other after compaction. The fracture behavior of the compacted mixture is also simulated and compared with experimental data. Both diametrical (Brazilian) tests and triaxial tests are carried out. Again, the link between the amount of Al₂O₃ particles and the fracture behavior is investigated. The methodology described here can be generalized to other particulate materials that are used in the ceramic industry.

Keywords: cold compaction, composites, discrete element method, refractory materials, x-ray tomography

Procedia PDF Downloads 136
9717 Modelling Vehicle Fuel Consumption Utilising Artificial Neural Networks

Authors: Aydin Azizi, Aburrahman Tanira

Abstract:

The main source of energy used in this modern age is fossil fuel. A myriad of problems comes with its use, of which the issues with the greatest impact are its scarcity and the cost it imposes on the planet. Fossil fuels are the only plausible option for many vital functions and processes, the most important of which is transportation. Thus, using this source of energy as wisely and efficiently as possible is a must. The aim of this work was to explore utilising mathematical modelling and artificial intelligence techniques to enhance fuel consumption in passenger cars by focusing on the speed at which cars are driven. An artificial neural network with an error of less than 0.05 was developed for practical application in predicting the rate of fuel consumption in vehicles.

Keywords: mathematical modeling, neural networks, fuel consumption, fossil fuel

Procedia PDF Downloads 403
9716 Virtual Assessment of Measurement Error in the Fractional Flow Reserve

Authors: Keltoum Chahour, Mickael Binois

Abstract:

Due to a lack of standardization during the invasive fractional flow reserve (FFR) procedure, the index is subject to many sources of uncertainty. In this paper, we investigate, through simulation, the effect of the FFR device position and configuration on the obtained FFR value. For this purpose, we use computational fluid dynamics (CFD) in a 3D domain corresponding to a diseased arterial portion. The FFR pressure captor is introduced inside it with a given length and coefficient of bending to capture the FFR value. To overcome the computational limitations (the simulation time is about 2 h 15 min for one FFR value), we generate a Gaussian process (GP) model for FFR prediction. The GP model shows good accuracy and demonstrates the effective measurement error created by the random configuration of the pressure captor.
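A minimal one-dimensional GP regression sketch, with an assumed smooth FFR-like response and a squared-exponential kernel (the paper's actual design variables and kernel are not specified here), shows how a handful of expensive CFD runs can be turned into a cheap predictor with uncertainty:

```python
import numpy as np

def rbf(a, b, ell=0.3, var=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

# Hypothetical training points standing in for the 2 h 15 min CFD runs,
# parameterised by a single normalised captor-position variable.
X = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
y = 0.8 + 0.05 * np.sin(6 * X)                 # assumed smooth FFR response
noise = 1e-6                                   # jitter for numerical stability

K = rbf(X, X) + noise * np.eye(X.size)
Xs = np.linspace(0, 1, 50)
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)              # GP posterior mean
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0, None))  # predictive uncertainty
print(float(mean[0]), float(std[0]))
```

The posterior mean interpolates the simulated values, while the predictive standard deviation quantifies where additional CFD runs would be most informative.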

Keywords: fractional flow reserve, Gaussian processes, computational fluid dynamics, drift

Procedia PDF Downloads 129
9715 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are often incomplete or incorrect due to censoring, and such data may have adverse effects if used in an estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because they iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller RMSE than those generated via the Newton-Raphson algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization algorithm performs better than the Newton-Raphson algorithm in all simulation cases under the progressive type-II censoring scheme.
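To illustrate just the Newton-Raphson step in a much simpler setting than the paper's (complete, uncensored data with the location µ treated as known; all numbers are illustrative), the iteration below converges to the closed-form MLE of the scale parameter:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, lam = 2.0, 0.5                      # assumed true location and scale
n = 2000
# Two-parameter Rayleigh via inverse CDF: F(x) = 1 - exp(-lam * (x - mu)^2).
x = mu + np.sqrt(-np.log(rng.uniform(size=n)) / lam)

# Newton-Raphson on the score for lambda (mu known, complete data):
#   dl/dlam  = n/lam - sum((x - mu)^2)
#   d2l/dlam2 = -n/lam^2
s = np.sum((x - mu) ** 2)
lam_hat = 0.1                           # starting value
for _ in range(50):
    grad = n / lam_hat - s
    hess = -n / lam_hat**2
    lam_hat -= grad / hess              # NR update
print(lam_hat, n / s)                   # NR converges to the closed form n/s
```

Under censoring no such closed form exists, which is why the paper iterates NR (or EM) on the censored likelihood instead.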

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 157
9714 Design of Decimation Filter Using Cascade Structure for Sigma Delta ADC

Authors: Misbahuddin Mahammad, P. Chandra Sekhar, Metuku Shyamsunder

Abstract:

The oversampled output of a sigma-delta modulator is decimated to the Nyquist sampling rate by decimation filters. The decimation filters serve two purposes: they decimate the sampling rate by a factor of the OSR (oversampling ratio), and they remove the out-of-band quantization noise, resulting in an increase in resolution. The speed, area and power consumption of an oversampled converter are governed largely by the decimation filters in sigma-delta A/D converters. The scope of the work is to design a decimation filter for a sigma-delta ADC and simulate it using MATLAB. The decimation filter structure is based on the cascaded integrator-comb (CIC) filter. A second decimation filter uses a CIC for the large rate change and cascaded FIR filters for small rate changes to improve the frequency response. The proposed structure is even more hardware-efficient.
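The CIC core can be sketched in a few lines (a Python stand-in for the MATLAB simulation; the rate-change factor, stage count and test tone are illustrative):

```python
import numpy as np

def cic_decimate(x, R=8, N=3):
    """N-stage CIC decimator with rate-change factor R (differential delay 1)."""
    y = x.astype(float)
    for _ in range(N):                  # N cascaded integrators at the high rate
        y = np.cumsum(y)
    y = y[::R]                          # decimate by R
    for _ in range(N):                  # N cascaded combs at the low rate
        y = np.diff(y, prepend=0.0)
    return y / R**N                     # normalise the DC gain of R^N

fs = 8000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)          # in-band tone survives decimation
y = cic_decimate(x)
print(y.size)
```

The in-band tone comes through at near-unity gain with only the CIC's sinc-shaped passband droop, which is exactly what the cascaded FIR compensation stage mentioned above is meant to flatten.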

Keywords: sigma delta modulator, CIC filter, decimation filter, compensation filter, noise shaping

Procedia PDF Downloads 455
9713 Spatial Characters Adapted to Rainwater Natural Circulation in Residential Landscape

Authors: Yun Zhang

Abstract:

Urban housing in China is typified by residential districts that occupy 25 to 40 percent of urban land. In residential districts, squares, roads, and building facades, as well as plants, usually form a four-grade spatial structure: district entrances, central landscapes, housing cluster entrances, and green spaces between dwellings. This spatial structure and its elements not only compose the visible residential landscape but also play a major role in carrying rainwater. These elements therefore carry ecological significance for urban fitness. Based upon theories of landscape ecology, the residential landscape can be understood as a pattern typified by minor soft patches of planted area and major hard patches of buildings and squares, as well as hard corridors of roads. Using five landscape districts in Hangzhou as examples, this paper finds that the size, shape, and slope direction of soft patches, the bends of roads, and the form of the four-grade spatial structure are influential in adapting to natural rainwater circulation.

Keywords: Hangzhou China, rainwater, residential landscape, spatial character, urban housing

Procedia PDF Downloads 321
9712 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data.
The reconstruction error of each image, the reduction in that error during fine-tuning, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate strong correlations between both the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. Memorability also correlates negatively with the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
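The analysis pipeline described above can be sketched once reconstructions and latent vectors are in hand. The snippet below uses hypothetical toy vectors and scores (not MemCat data or the VGG autoencoder) to show how reconstruction error, latent-space distinctiveness, and their Pearson correlations with memorability might be computed.

```python
import math

def reconstruction_error(original, reconstructed):
    """Mean squared error between an image vector and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)

def distinctiveness(latents):
    """Euclidean distance from each latent vector to its nearest neighbour
    in the autoencoder's latent space."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return [min(dist(z, other) for j, other in enumerate(latents) if j != i)
            for i, z in enumerate(latents)]

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy stand-ins: three images' errors, latent codes, and memorability scores
errors = [0.02, 0.10, 0.30]
latents = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0]]
scores = [0.55, 0.70, 0.90]
r_error = pearson(errors, scores)                   # error vs. memorability
r_dist = pearson(distinctiveness(latents), scores)  # distinctiveness vs. memorability
```

In the study itself, the same correlations would be taken over thousands of MemCat images, with the errors coming from the fine-tuned VGG-based autoencoder rather than hand-picked numbers.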

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 82
9711 Toward an Informed Capacity Development Program in Inclusive and Sustainable Agricultural and Rural Development

Authors: Maria Ana T. Quimbo

Abstract:

As the Southeast Asian Regional Center for Graduate Study and Research in Agriculture (SEARCA) approaches its 50th founding anniversary, it continues to pursue its mission of strengthening the capacities of Southeast Asian leaders and institutions under its reformulated mission of Inclusive and Sustainable Agricultural and Rural Development (ISARD). Guided by this mission, this study analyzed the desired and priority capacity development needs of institution heads and key personnel toward addressing the constraints, problems, and issues related to agricultural and rural development and achieving their institutional goals. Adopting an exploratory, descriptive research design, the study examined the competency needs at the institutional and personnel levels. A total of 35 institution heads from seven countries and 40 key personnel from eight countries served as research participants. The results showed a variety of competencies in the areas of leadership and management, agriculture, climate change, research, monitoring and evaluation, planning, and extension or community service. While a mismatch was found in a number of desired and priority competency areas as perceived by the respondents, there were also interesting concordant answers in both technical and non-technical areas. Interestingly, the competency needs, both desired and prioritized, were a combination of “hard” (technical) skills and “soft” (interpersonal) skills. Policy recommendations were put forward on the need to continue building capacities in core competencies along ISARD; balance “hard” and “soft” skills through appropriate training strategies and their explicit statement in training objectives; strengthen awareness of “soft” skills through their integration into workplace culture; build capacity in action research; continue partnerships; encourage mentoring; prioritize competencies; and build capacity in the desired and priority competency areas.

Keywords: capacity development, competency needs assessment, sustainability and development, ISARD

Procedia PDF Downloads 376
9710 Exclusive Breastfeeding Abandonment among Adolescent Mothers: A Cohort Study

Authors: Maria I. Nuñez-Hernández, Maria L. Riesco

Abstract:

Background: Exclusive breastfeeding (EBF) up to 6 months of age is considered one of the most important factors in the overall development of children. Nevertheless, as resources are scarce, it is essential to identify the most vulnerable groups at major risk of EBF abandonment in order to deliver the best strategies. Children of adolescent mothers are among these groups. Aims: To determine the EBF abandonment rate among adolescent mothers and to analyze the associated factors. Methods: Prospective cohort study of adolescent mothers in the southern area of Santiago, Chile, conducted in primary care services of the public health system. The cohort was established from 2014 to 2015, with a sample of 105 adolescent mothers and their children at 2 months of life. The inclusion criteria were: adolescent mother from 14 to 19 years old; singleton birth; mother and baby leaving the hospital together after childbirth; correct attachment of the baby to the breast; and no difficulty understanding or communicating in Spanish. Follow-up was performed at 4 and 6 months of age. Data were collected by interviews, considering EBF as breastfeeding only, without adding other milk, tea, juice, water, or any other product that is not breast milk, except medicines. Data were analyzed by descriptive and inferential statistics, using the Kaplan-Meier estimator and the log-rank test, admitting a probability of type I error of 5% (p-value = 0.05). Results: The cumulative EBF abandonment rate at 2, 4 and 6 months was 33.3%, 52.2% and 63.8%, respectively. Factors associated with EBF abandonment were maternal perception of the quality of milk as poor (p < 0.001), maternal perception that the child was not satisfied after breastfeeding (p < 0.001), use of a pacifier (p < 0.001), maternal consumption of illicit drugs after delivery (p < 0.001), the mother's return to school (p = 0.040), and presence of nipple trauma (p = 0.045).
Conclusion: The EBF abandonment rate was highest in the first 4 months of life and is higher than that of the general population of breastfeeding women. Among the EBF abandonment factors, one is related to the adolescent condition itself, and two are related to the mother's subjective perception.
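The survival analysis in the Methods can be illustrated with a minimal Kaplan-Meier estimator; the follow-up data below are invented toy values, not the cohort's records, and the cumulative abandonment rate reported in the Results corresponds to 1 - S(t).

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of S(t): at each distinct event time t with
    d events among n subjects still at risk, multiply by (1 - d/n).
    events[i] is 1 for an observed event (EBF abandonment), 0 if censored."""
    pairs = sorted(zip(times, events))
    survival, curve = 1.0, []
    n_at_risk, i = len(pairs), 0
    while i < len(pairs):
        t = pairs[i][0]
        tied = [e for tt, e in pairs if tt == t]
        d = sum(tied)                   # events observed at time t
        if d:
            survival *= 1.0 - d / n_at_risk
            curve.append((t, survival))
        n_at_risk -= len(tied)          # events and censorings leave the risk set
        i += len(tied)
    return curve

# Toy follow-up data: months until EBF abandonment (event=1) or censoring (event=0)
months = [2, 2, 4, 4, 6, 6]
events = [1, 1, 1, 0, 1, 0]
curve = kaplan_meier(months, events)
# Cumulative abandonment at time t is 1 - S(t)
abandonment = [(t, 1.0 - s) for t, s in curve]
```

The log-rank test used to compare groups (e.g., pacifier users vs. non-users) builds on the same risk-set bookkeeping, comparing observed and expected event counts at each event time.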

Keywords: adolescent, breastfeeding, midwifery, nursing

Procedia PDF Downloads 318
9709 The Effect of Heart Rate and Valence of Emotions on Perceived Intensity of Emotion

Authors: Madeleine Nicole G. Bernardo, Katrina T. Feliciano, Marcelo Nonato A. Nacionales III, Diane Frances M. Peralta, Denise Nicole V. Profeta

Abstract:

This study aims to find out whether heart rate variability and valence of emotion have an effect on the perceived intensity of emotion. Psychology undergraduates (N = 60) from the University of the Philippines Diliman were shown 10 photographs from the Japanese Female Facial Expression (JAFFE) Database, along with a corresponding questionnaire with a Likert scale on perceived intensity of emotion. In this 3 x 2 mixed-subjects factorial design, each group was made either to do a simple exercise before answering the questionnaire in order to increase the heart rate, to listen to a heart rate of 120 bpm, or to colour a drawing to keep the heart rate stable. After the activity, the participants answered the questionnaire, rating the faces according to their perceived emotional intensity. The photographs presented were of either positive or negative emotional valence. The results showed that neither an induced fast heart rate nor a perceived fast heart rate had a significant effect on the participants’ perceived intensity of emotion. There was also no interaction effect between heart rate variability and valence of emotion. The null results were explained by the Philippines’ high-context culture and the prevalence of intensely valenced positive and negative emotions in Philippine society, and were also attributed to the Cannon-Bard theory, the Schachter-Singer theory, and various methodological limitations.

Keywords: heart rate variability, perceived intensity of emotion, Philippines, valence of emotion

Procedia PDF Downloads 245
9708 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong

Abstract:

This paper presents the evaluation of soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme based on computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, such that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive, so a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as additions to the computer vision system, to further develop this non-destructive and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). A previous study found that the ANN model, coupled with the apparent electrical resistivity of soil (ρ), can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% on selected representative soil images. To further improve the technique, three items were targeted for addition to the computer vision scheme: ρ measured using a set of four probes arranged in Wenner’s array, the soil strength measured using a modified mini cone penetrometer, and w measured using a set of time-domain reflectometry (TDR) probes.
Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay”, and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w, and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementary methods to the computer vision system.
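The GLCM step of the computer vision pipeline can be sketched on a toy grey-level grid. This is a minimal illustration of co-occurrence counting and three common Haralick-style descriptors, not the authors' implementation or their parameter choices (offset, quantization levels, and the descriptor set are assumptions here).

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Grey Level Co-occurrence Matrix: P[i][j] is the probability that a
    pixel of grey level i has a neighbour of level j at offset (dx, dy)."""
    P = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                P[img[y][x]][img[ny][nx]] += 1
                total += 1
    return [[v / total for v in row] for row in P]

def glcm_features(P):
    """Contrast, homogeneity, and energy: texture descriptors of the kind
    typically fed to a downstream classifier such as an ANN."""
    idx = range(len(P))
    contrast = sum(P[i][j] * (i - j) ** 2 for i in idx for j in idx)
    homogeneity = sum(P[i][j] / (1 + abs(i - j)) for i in idx for j in idx)
    energy = sum(v * v for row in P for v in row)
    return contrast, homogeneity, energy

# A flat patch maximises homogeneity/energy; a checkerboard maximises contrast
flat = [[0, 0], [0, 0]]
checker = [[0, 1], [1, 0]]
flat_feats = glcm_features(glcm(flat, levels=2))
checker_feats = glcm_features(glcm(checker, levels=2))
```

In the actual scheme, such descriptors computed from camera images of excavated soil would form the input vector of the ANN that outputs the “Good Earth” / “Soft Clay” label.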

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 233
9707 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder

Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen

Abstract:

Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm as well as for multiple historical control arms. Here, we examine the performance of the MAP and the MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and the negative binomial models. We conducted an extensive simulation study to assess the performance of these Bayesian approaches, and we illustrate them on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms differ, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters differ, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
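The power-prior idea of down-weighting the historical likelihood can be sketched in the conjugate Poisson-gamma case with a fixed power delta. This is a deliberate simplification: the MPP studied here treats delta as random with its own prior (and the MAP uses a hierarchical meta-analytic model), both richer than this toy; the episode counts below are hypothetical.

```python
def poisson_power_prior_posterior(y_hist, y_curr, delta, a=0.001, b=0.001):
    """Conjugate posterior for a Poisson rate when the historical likelihood
    is raised to the power delta in [0, 1] (delta=0 discards the historical
    arm, delta=1 pools it fully), starting from a vague Gamma(a, b) prior.
    Returns the (shape, rate) parameters of the Gamma posterior."""
    shape = a + delta * sum(y_hist) + sum(y_curr)
    rate = b + delta * len(y_hist) + len(y_curr)
    return shape, rate

# Hypothetical incontinence-episode counts per patient:
# historical control arm vs. current control arm
y_hist, y_curr = [3, 4, 5], [2, 2]
no_borrow = poisson_power_prior_posterior(y_hist, y_curr, delta=0.0)
full_borrow = poisson_power_prior_posterior(y_hist, y_curr, delta=1.0)
# Posterior mean = shape / rate; borrowing pulls it toward the historical rate
```

The negative binomial case handled in the paper has no such closed form for the dispersion parameter, which is why a dedicated computational method for the MPP is needed there.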

Keywords: count data, meta-analytic-predictive prior, negative binomial, Poisson

Procedia PDF Downloads 113
9706 Vulnerability Risk Assessment of Non-Engineered Houses Based on Damage Data of the 2009 Padang Earthquake in Padang City, Indonesia

Authors: Rusnardi Rahmat Putra, Junji Kiyono, Aiko Furukawa

Abstract:

Several powerful earthquakes have struck Padang in recent years, one of the largest of which was an M 7.6 event that occurred on September 30, 2009 and caused more than 1,000 casualties. Following the event, we conducted a 12-site microtremor array investigation to obtain a representative determination of the soil condition of subsurface structures in Padang. From the dispersion curves of the array observations, the central business district of Padang corresponds to a relatively soft soil condition with Vs30 less than 400 m/s. Because only one accelerometer record existed, we simulated the 2009 Padang earthquake to obtain the peak ground acceleration for all sites in Padang city. Considering the damage data of the 2009 Padang earthquake, we produced seismic vulnerability estimates of non-engineered houses for rock, medium, and soft soil conditions. We estimated the loss ratio based on the ground response, the seismic hazard of Padang, and the damage data for non-engineered houses from the 2009 Padang earthquake, for several return periods of earthquake events.

Keywords: profile, Padang earthquake, microtremor array, seismic vulnerability

Procedia PDF Downloads 404
9705 Effects of Soil-Structure Interaction on Seismic Performance of Steel Structures Equipped with Viscous Fluid Dampers

Authors: Faramarz Khoshnoudian, Saeed Vosoughiyan

Abstract:

The main goal of this article is to clarify the effects of soil-structure interaction (SSI) on the seismic performance of steel moment resisting frame (SMRF) buildings that rest on soft soil and are equipped with viscous fluid dampers (VFDs). For this purpose, detailed structural models of a ten-story SMRF with VFDs, excluding and including SSI, are constructed first. To simulate the dynamic response of the foundation, the simple cone model is applied. Then, nonlinear time-history analysis of the models is conducted using three earthquake excitations with different intensities. The analysis results demonstrate that the SSI effects on the seismic performance of a structure equipped with VFDs and supported by a rigid foundation on soft soil need to be considered. Moreover, VFDs designed based on the rigid-foundation hypothesis fail to achieve the expected seismic objective once SSI is taken into account.

Keywords: nonlinear time-history analysis, soil-structure interaction, steel moment resisting frame building, viscous fluid dampers

Procedia PDF Downloads 332
9704 Optimal ECG Sampling Frequency for Multiscale Entropy-Based HRV

Authors: Manjit Singh

Abstract:

Multiscale entropy (MSE) is an extensively used index that provides a general understanding of the multiscale complexity of the physiologic mechanisms underlying heart rate variability (HRV), which operate over a wide range of time scales. Accurate selection of the electrocardiogram (ECG) sampling frequency is an essential concern for clinically significant HRV quantification: a high ECG sampling rate increases memory requirements and processing time, whereas a low sampling rate degrades signal quality and results in clinically misinterpreted HRV. In this work, the impact of ECG sampling frequency on MSE-based HRV has been quantified. MSE measures are found to be sensitive to the ECG sampling frequency, and the effect of the sampling frequency is a function of the time scale.
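A minimal sketch of the MSE computation (coarse-graining followed by sample entropy at each scale) is given below. The parameters m, r, and the scale set are illustrative defaults, and the template counting is a simplified variant of the standard SampEn definition, not the authors' pipeline, which would operate on RR-interval series extracted from the ECG.

```python
import math
import statistics

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(len(x) // scale)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln of the ratio of (m+1)-point template matches to
    m-point template matches, with matches judged by Chebyshev distance <= r."""
    def matches(mm):
        templates = [tuple(x[i:i + mm]) for i in range(len(x) - mm + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales=(1, 2, 4), m=2, r_factor=0.2):
    """Sample entropy of the coarse-grained series at each scale; the
    tolerance r is fixed at r_factor times the SD of the original series."""
    r = r_factor * statistics.pstdev(x)
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]

# A strictly periodic series is highly regular, so its SampEn is near zero
periodic = [0.0, 1.0] * 50
```

Because coarse-graining at scale s averages s consecutive samples, any timing jitter introduced by a low ECG sampling rate propagates differently to different scales, which is consistent with the scale-dependent sensitivity reported above.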

Keywords: ECG (electrocardiogram), heart rate variability (HRV), multiscale entropy, sampling frequency

Procedia PDF Downloads 268
9703 Financial Inclusion for Inclusive Growth in an Emerging Economy

Authors: Godwin Chigozie Okpara, William Chimee Nwaoha

Abstract:

The paper sets out to show how a financial inclusion index can be calculated and investigates the impact of inclusive finance on inclusive growth in an emerging economy. In light of these objectives, the chi-wins method was used to calculate indexes of financial inclusion, while co-integration and an error correction model were used to evaluate the impact of financial inclusion on inclusive growth. The result of the analysis revealed that financial inclusion, while having a long-run relationship with GDP growth, is an insignificant function of the growth of the economy. The speed of adjustment is correctly signed and significant. On the basis of these results, the researchers call for tireless efforts by government and the banking sector to promote financial inclusion in developing countries.

Keywords: chi-wins index, co-integration, error correction model, financial inclusion

Procedia PDF Downloads 649
9702 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, such as the underestimation of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d = ta, the estimate of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation errors follow an exponential probability density function; 3) every very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these relationships should make it possible to improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
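The aggregation effect can be reproduced with a toy example: the true Hd scans a sliding window over fine-resolution data, while an estimate from pre-aggregated data can only start its window on block boundaries. The rainfall series below is invented to show the worst case d = ta, where a burst split across a boundary yields exactly the 50% underestimation bound mentioned above.

```python
def true_hd(rain, d):
    """Hd from continuous data: maximum depth over a sliding window of width d."""
    return max(sum(rain[i:i + d]) for i in range(len(rain) - d + 1))

def aggregated_hd(rain, ta, d):
    """Hd estimated from data pre-aggregated over fixed intervals of width ta:
    candidate windows can start only on block boundaries, so a burst split
    across a boundary is undercounted (d is assumed a multiple of ta)."""
    blocks = [sum(rain[i:i + ta]) for i in range(0, len(rain) - ta + 1, ta)]
    k = d // ta  # number of consecutive blocks spanning duration d
    return max(sum(blocks[i:i + k]) for i in range(len(blocks) - k + 1))

# Worst case d = ta: a 2-step burst straddling a block boundary is halved
rain = [0, 0, 0, 10, 10, 0, 0, 0]        # hypothetical fine-resolution depths
h_true = true_hd(rain, d=2)              # the burst captured whole
h_aggr = aggregated_hd(rain, ta=2, d=2)  # the burst split across two blocks
```

Increasing d relative to ta (smaller ta/d) shrinks the worst-case gap, which is exactly the dependence on ta/d that the proposed correction relationships exploit.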

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 187