Search results for: single event upset

1997 Designing Floor Planning in 2D and 3D with an Efficient Topological Structure

Authors: V. Nagammai

Abstract:

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. Advances in technology increase the complexity of IC manufacturing, which can affect power consumption and increase chip size and latency. A topology defines the pattern of connections within a network. In this project, the NoC topology is generated using the Atlas tool, which improves performance and makes the determination of design constraints more effective. Routing is performed by the XY routing algorithm with wormhole flow control. In NoC topology generation, the values of power, area, and latency are predetermined. In previous work, placement, routing, and shortest-path evaluation were performed using the floor planning with cluster reconstruction and path allocation algorithm (FCRPA), with four 3x3 switches, six 4x4 switches, and two 5x5 switches. The use of the 4x4 and 5x5 switches increases the power consumption and area of the block. To avoid this problem, this paper uses one 8x8 switch and four 3x3 switches. The paper applies IPRCA, which comprises three steps: placement, clustering, and shortest-path evaluation. Placement is performed using min-cut placement, clustering using a cluster generation algorithm, and shortest-path evaluation using Dijkstra's algorithm. The power consumption of each block is determined. The experimental results show that area, power, and wire length are improved simultaneously.
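
Since the shortest-path step above is standard, a minimal sketch may help: the snippet below runs Dijkstra's algorithm over a hypothetical switch graph. The switch names and wire-length weights are illustrative assumptions, not values from the paper.

```python
import heapq

def dijkstra(graph, source):
    """Minimal total wire length from a source switch to every reachable switch.

    graph: dict mapping node -> list of (neighbour, wire_length) pairs.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical NoC with one 8x8 switch (S0) and four 3x3 switches (S1-S4).
noc = {
    "S0": [("S1", 2.0), ("S2", 1.5), ("S3", 2.5), ("S4", 1.0)],
    "S1": [("S0", 2.0), ("S2", 1.0)],
    "S2": [("S0", 1.5), ("S1", 1.0), ("S3", 1.2)],
    "S3": [("S0", 2.5), ("S2", 1.2)],
    "S4": [("S0", 1.0)],
}
print(dijkstra(noc, "S0"))  # minimal wire length from S0 to each switch
```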

Keywords: application-specific NoC, B*-tree representation, floor planning, T-tree representation

Procedia PDF Downloads 383
1996 Using Social Media to Amplify Social Entrepreneurial Message

Authors: Irfan Khairi

Abstract:

It is arguable that today's social media has dramatically redefined human contact, chiefly because the platforms enable unprecedented communication opportunities. Without question, billions of individuals globally engage with the media, a reality by no means lost on businesses and social entrepreneurs who wish to generate interest in a cause, movement, or other social effort. If the opportunities are immense, however, so too is the competition. Private persons and entrepreneurial concerns alike virtually saturate the popular sites of Facebook, Twitter, and Instagram, and most are intent on capturing as much external interest as possible. At the same time, the social entrepreneur possesses an advantage over the individual concerned only with the social aspects of the sites, as they express interest in, and pursue measures applicable to, important causes of which the public at large may be unaware. There is, unfortunately, no single means of assuring success in using the media outlets to generate interest. Nonetheless, a general awareness of how social media sites function, as well as of the psychological elements relevant to that functioning, is necessary. It is as important to comprehend the basic realities of the platforms and the approaches that fail as it is to develop strategy, for the latter relies on knowledge of the former. With this awareness in place, the social entrepreneur is better able to determine strategy, in terms of which sites to focus upon and how to convey their message most effectively. What is required is familiarity with the online communities, with attention to the specific advantages each provides. Ultimately, today's social entrepreneur may establish a highly effective platform of promotion and engagement, provided they fully comprehend the social investment necessary for success.

Keywords: social media, marketing, e-commerce, internet business

Procedia PDF Downloads 193
1995 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint

Authors: Juliane Spaak

Abstract:

A life-cycle assessment was performed, looking at seven retrofit projects in New Zealand using LCAQuickV3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it is only a 20% carbon investment to transform and extend a building's life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings and at resilience, using FEMA P58 and the PACT software. With the increasing interest in zero-carbon targets, significant changes in the building and construction sector are required. Emissions from buildings arise from both embodied carbon and operations. Given the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building's embodied carbon. New Zealand's building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity of moving away from constructing code-minimum structures that are designed for life safety but are frequently 'disposable' after a moderate earthquake event, especially in relation to a structure's ability to minimize damage. Weaker buildings thus sit as 'carbon liabilities', with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the building sector, as breathing new life into a building's structure is vastly more sustainable than even the highest quality 'green' new builds, which are inherently more carbon-intensive. The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society's desire for a lower-carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming an asset's revenue generation. Across the global real estate market, tenants increasingly demand that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and 'sustainable design' has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the likely natural hazards within the asset's expected lifespan, be they earthquakes, storms, bushfires, fires, and so on, with financial mitigation (e.g., insurance) part, but not all, of the picture.

Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient

Procedia PDF Downloads 58
1994 Mixed-Mode Fracture Analyses of an Edge-Cracked Heavy Spinning Annulus Pulley Using the Finite Element Method

Authors: Bijit Kalita, K. V. N. Surendra

Abstract:

The rotating disk is one of the most indispensable parts of a rotating machine and has found many applications in diverse fields of science and technology. In this paper, we consider the problem of a heavy spinning disk mounted on a rotor system and acted upon by boundary traction. Finite element modelling is used at various loading conditions to determine the mixed-mode stress intensity factors. The effect of combined shear and normal traction on the boundary is incorporated in the analysis under the action of gravity. The field near the crack tip is characterized in terms of the stress intensity factor (SIF), with the aim of finding the SIF for a wide range of parameters. The results of finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. A total of one hundred cases of the problem are solved for each of the variations in the loading arc parameter and crack orientation using finite element models of the disk under compression. Models were prepared and analyzed for the uncracked disk, for a disk with a single crack at different orientations emanating from the shaft hole, and for a disk with a pair of cracks emerging from the same center hole. Curves are plotted for various loading conditions. Finally, crack propagation paths are determined using kink angle concepts.
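
For the final step, one common way to turn mixed-mode SIFs into a kink angle is the maximum tangential stress (MTS) criterion; the sketch below applies its closed-form solution. The K values are illustrative, and the paper does not state which kink-angle criterion it uses, so this is an assumed stand-in.

```python
import math

def mts_kink_angle(k1, k2):
    """Crack kink angle (radians) from the maximum tangential stress
    criterion, K_I*sin(t) + K_II*(3*cos(t) - 1) = 0."""
    if k2 == 0.0:
        return 0.0  # pure mode I: crack grows straight ahead
    return 2.0 * math.atan((k1 - math.sqrt(k1**2 + 8.0 * k2**2)) / (4.0 * k2))

# Illustrative mixed-mode state (units MPa*sqrt(m); values hypothetical).
k1, k2 = 1.2, 0.5
print(math.degrees(mts_kink_angle(k1, k2)))  # negative kink angle for K_II > 0
```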

Keywords: crack-tip deformations, static loading, stress concentration, stress intensity factor

Procedia PDF Downloads 129
1993 Understanding Indonesian Smallholder Dairy Farmers’ Decision to Adopt Multiple Farm-Level Innovations

Authors: Rida Akzar, Risti Permani, Wahida, Wendy Umberger

Abstract:

Adoption of farm innovations may increase farm productivity and therefore improve market access and farm incomes. However, most studies that look at the level and drivers of innovation adoption focus on only a specific type of innovation. Farmers may consider multiple innovation options, as well as constraints such as budget, environment, scarcity of labour supply, and the cost of learning. Some studies have proposed different methods to combine a broad variety of innovations into a single measurable index. However, little has been done to compare these methods and assess whether they provide similar information about farmer segmentation by 'innovativeness'. Using data from a recent survey of 220 dairy farm households in West Java, Indonesia, this study first compares different methods of deriving an innovation index, including an expert-weighted innovation index; an index derived from the total number of adopted technologies; and an index of the extent of adoption that takes into account both adoption and disadoption of multiple innovations. Second, it examines the distribution of different farming systems, taking into account their innovativeness and farm characteristics. Results from this study will inform policy makers and stakeholders in the dairy industry on how to better design, target, and deliver programs to improve and encourage farm innovation, and therefore improve farm productivity and the performance of the dairy industry in Indonesia.
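
As an illustration of the index constructions being compared, the sketch below computes a simple adoption-count index and an expert-weighted index from a binary adoption matrix; the weights and adoption data are invented for the example, not taken from the survey.

```python
import numpy as np

# Rows: farm households; columns: candidate innovations (1 = adopted).
adoption = np.array([
    [1, 0, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
])
expert_weights = np.array([0.30, 0.25, 0.15, 0.20, 0.10])  # sum to 1 (assumed)

count_index = adoption.sum(axis=1) / adoption.shape[1]  # share of innovations adopted
weighted_index = adoption @ expert_weights              # expert-weighted index

print(count_index, weighted_index)
```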

Keywords: adoption, dairy, household survey, innovation index, Indonesia, multiple innovations, West Java

Procedia PDF Downloads 326
1992 Magnesium Ameliorates Lipopolysaccharide-Induced Liver Injury in Mice

Authors: D. M. El-Tanbouly, R. M. Abdelsalam, A. S. Attia, M. T. Abdel-Aziz

Abstract:

Lipopolysaccharide (LPS) endotoxin, a component of the outer membrane of Gram-negative bacteria, is involved in the pathogenesis of sepsis. LPS administration induces systemic inflammation that mimics many of the initial clinical features of sepsis, has deleterious effects on several organs including the liver, and can eventually lead to septic shock and death. The present study aimed to investigate the protective effect of magnesium, a well-known cofactor in many enzymatic reactions and a critical component of the antioxidant system, on the hepatic damage associated with LPS-induced endotoxemia in mice. Mg (20 and 40 mg/kg, po) was administered for 7 consecutive days. Systemic inflammation was induced one hour after the last dose of Mg by a single dose of LPS (2 mg/kg, ip); three hours thereafter, plasma was separated, the animals were sacrificed, and their livers were isolated. LPS-treated mice suffered from hepatic dysfunction, as revealed by histological observation and elevations in plasma transaminase activities, C-reactive protein content, and caspase-3, a critical marker of apoptosis. Liver inflammation was evident from elevated liver cytokine contents (TNF-α and IL-10) and myeloperoxidase (MPO) activity. Additionally, oxidative stress was manifested by increased liver lipoperoxidation, glutathione depletion, and elevated total nitrate/nitrite (NOx) content and glutathione peroxidase (GPx) activity. Pretreatment with Mg largely mitigated these alterations through its anti-inflammatory and antioxidant potential. Mg, therefore, could be regarded as an effective strategy for the prevention of liver damage associated with septicemia.

Keywords: LPS, liver damage, magnesium, septicemia

Procedia PDF Downloads 384
1991 Experimental Investigation on the Shear Strength Parameters of Sand-Slag Mixtures

Authors: Ayad Salih Sabbar, Amin Chegenizadeh, Hamid Nikraz

Abstract:

Utilizing waste materials in civil engineering applications has a positive influence on the environment by reducing carbon dioxide emissions and the issues associated with waste disposal. Granulated blast furnace slag (GBFS) is a by-product of the iron and steel industry, with millions of tons of slag being produced annually worldwide. Slag has been widely used in structural engineering and for stabilizing clay soils; however, studies on the effect of slag on sandy soils are scarce. This article investigates the effect of slag content on shear strength parameters through direct shear tests and unconsolidated undrained triaxial tests on mixtures of Perth sand and slag. For this purpose, sand-slag mixtures with slag contents of 2%, 4%, and 6% by weight were tested in direct shear under three normal stresses, namely 100 kPa, 150 kPa, and 200 kPa. Unconsolidated undrained triaxial tests were performed under a single confining pressure of 100 kPa and a relative density of 80%. The internal friction angles and shear stresses of the mixtures were determined from the direct shear tests, demonstrating that shear stress increased with increasing normal stress, and that the internal friction angle and cohesion increased with increasing slag content. There were no significant differences in the shear strength parameters when the slag content rose from 4% to 6%. The unconsolidated undrained triaxial tests demonstrated that shear strength increased with increasing slag content.
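
The friction angle and cohesion quoted above come from fitting the Mohr-Coulomb envelope tau = c + sigma_n * tan(phi) to the direct shear results; a minimal fit, with made-up shear stresses in place of the study's measurements, might look like this:

```python
import numpy as np

# Peak shear stress (kPa) at each applied normal stress (kPa); values illustrative.
sigma_n = np.array([100.0, 150.0, 200.0])
tau = np.array([62.0, 91.0, 118.0])

# Least-squares fit of the Mohr-Coulomb envelope tau = c + sigma_n * tan(phi).
slope, c = np.polyfit(sigma_n, tau, 1)
phi = np.degrees(np.arctan(slope))
print(f"cohesion c = {c:.1f} kPa, friction angle phi = {phi:.1f} deg")
```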

Keywords: direct shear, shear strength, slag, UU test

Procedia PDF Downloads 463
1990 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities

Authors: Shaurya Chauhan, Sagar Gupta

Abstract:

Prominent urbanizing centres across the globe, like Delhi, Dhaka, or Manila, have shown that development often struggles to bridge the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of an ever-diversifying population. When this exclusion is intertwined with rapid urbanization and a diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as automatic responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application through its prototypical nature and an inclusive approach that mediates between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a 3-tiered model: user needs – design ideology – adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter referred to as OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region's cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools and references from recognized literature. This framework, based on a vision – feedback – execution loop, is used for hypothetical development at the five prevalent scales of design: master planning, urban design, architecture, tectonics, and modularity, in chronological order. At each of these scales, the possible approaches and avenues for open-sourcing are identified, validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Bernard Tschumi's Space, Event, and Movement and Kevin Lynch's five-point mental map, among others, are deeply rooted in the research process. Over the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for continued appraisal and refinement of the framework and the urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of 'mediator' in the development of contemporary urbanity.

Keywords: open source, public participation, urbanization, urban development

Procedia PDF Downloads 132
1989 Modelling and Control of Binary Distillation Column

Authors: Narava Manose

Abstract:

Distillation is a very old separation technology for separating liquid mixtures that can be traced back to the chemists of Alexandria in the first century A.D. Today, distillation is the most important industrial separation technology. By the eleventh century, distillation was being used in Italy to produce alcoholic beverages; at that time, distillation was probably a batch process based on the use of just a single stage, the boiler. The word distillation is derived from the Latin destillare, which means dripping or trickling down. By at least the sixteenth century, it was known that the extent of separation could be improved by providing multiple vapor-liquid contacts (stages) in a so-called Rectifactorium; the term rectification is derived from the Latin rectefacere, meaning to improve. Modern distillation derives its ability to produce almost pure products from the use of multi-stage contacting. Throughout the twentieth century, multistage distillation was by far the most widely used industrial method for separating liquid mixtures of chemical components. The basic principle behind this technique relies on the different boiling temperatures of the various components of the mixture, allowing the separation of the vapor of the most volatile component from the liquid of the other component(s). In this work: • A simple non-linear model of a binary distillation column was developed using the Skogestad equations in Simulink. • The steady-state operating point around which to base the analysis and controller design was computed. The model contains two integrators because the condenser and reboiler levels are not controlled. One particular way of stabilizing the column is the LV-configuration, where D is used to control M_D and B to control M_B; such a model is given in cola_lv.m, where two P-controllers with gains equal to 10 are used.
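
A minimal sketch of the two level loops described above: each P-controller (gain 10, as in cola_lv.m) moves a product flow in response to its holdup deviation. The nominal flows and setpoints below are placeholder values, not Skogestad's.

```python
def level_p_controller(measurement, setpoint, bias, kc=10.0):
    """Direct-acting P-controller: raise the outflow when the holdup rises."""
    return bias + kc * (measurement - setpoint)

# Placeholder steady-state flows (kmol/min) and holdup setpoints (kmol).
D0, B0 = 0.5, 0.5          # nominal distillate and bottoms flows
MD_SP, MB_SP = 0.5, 0.5    # condenser and reboiler holdup setpoints

def level_loops(md, mb):
    d = level_p_controller(md, MD_SP, D0)  # D stabilises condenser level M_D
    b = level_p_controller(mb, MB_SP, B0)  # B stabilises reboiler level M_B
    return d, b

print(level_loops(md=0.55, mb=0.48))  # (1.0, 0.3): drain condenser, hold back bottoms
```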

Keywords: modelling, distillation column, control, binary distillation

Procedia PDF Downloads 260
1988 Human Gesture Recognition for Real-Time Control of Humanoid Robot

Authors: S. Aswath, Chinmaya Krishna Tilak, Amal Suresh, Ganesh Udupa

Abstract:

There are many technologies for controlling a humanoid robot, but the use of electromyogram (EMG) electrodes has its own importance in setting up the control system. An EMG-based control system helps to control robotic devices with more fidelity and precision. In this paper, the development of an electromyogram-based interface for human gesture recognition for the control of a humanoid robot is presented. To recognize control signs in the gestures, a single-channel EMG sensor is positioned on the muscles of the human body. Instead of using a remote control unit, the humanoid robot is controlled by various gestures performed by the human. The EMG electrodes attached to the muscles generate an analog signal due to the nerve impulses produced when the muscles move. The analog signals taken from the muscles are supplied to a differential muscle sensor, which processes them to generate a signal suitable for the microcontroller to gain control over the humanoid robot. The signal from the differential muscle sensor is converted to digital form using the ADC of the microcontroller, which outputs its decision to the CM-530 humanoid robot controller through a Zigbee wireless interface. The output decision of the CM-530 processor is sent to a motor driver in order to control the servo motors in the required direction for human-like actions. This method of controlling a humanoid robot could be used to perform actions with more accuracy and ease. In addition, a study has been conducted to investigate the controllability and ease of use of the interface and the employed gestures.
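
As a rough sketch of the decision logic, a digitised EMG sample can be mapped to a robot command by amplitude windows. The thresholds, gesture labels, and 10-bit ADC range below are assumptions for illustration; the actual mapping lives in the authors' microcontroller firmware.

```python
def classify_gesture(adc_value):
    """Map a 10-bit ADC reading (0-1023) from the muscle sensor to a command.

    Thresholds and command names are hypothetical placeholders."""
    if adc_value < 200:
        return "IDLE"
    elif adc_value < 500:
        return "WALK_FORWARD"
    elif adc_value < 800:
        return "TURN_LEFT"
    return "STOP"

for sample in (120, 430, 640, 910):
    print(sample, classify_gesture(sample))
```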

Keywords: electromyogram, gesture, muscle sensor, humanoid robot, microcontroller, Zigbee

Procedia PDF Downloads 394
1987 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose an implementation of an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs); the hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the embedding and extraction of a watermark with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. Essentially, ELM provides a unified learning platform in which a feature mapping, that is, a mapping between the hidden layer and the output layer of the SLFN, is employed for watermark embedding and extraction in a cover image. ELM has widespread application, from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance at very small complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on watermarked images, even when the image is subjected to different types of geometric and conventional attacks.
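
A minimal sketch of the ELM training step may clarify the fixed-feature-mapping idea: the hidden-layer weights are drawn at random and only the output weights are solved for, here with the standard regularised least-squares solution. The data below are synthetic stand-ins, not DWT block features.

```python
import numpy as np

def elm_train(X, T, n_hidden=64, c=1e3, rng=np.random.default_rng(0)):
    """Regularised ELM: random hidden layer, least-squares output weights.

    Solves beta = (H'H + I/c)^-1 H'T, the usual regularised ELM solution."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)           # fixed random feature mapping
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / c, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy task standing in for watermark-bit recovery (synthetic features).
X = np.random.default_rng(1).standard_normal((200, 8))
T = (X[:, :1] > 0).astype(float)
W, b, beta = elm_train(X, T)
print(np.mean((elm_predict(X, W, b, beta) > 0.5) == T))  # training accuracy
```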

Keywords: BER, DWT, extreme learning machine (ELM), PSNR

Procedia PDF Downloads 295
1986 Evaluation of a Potential Metabolism-Mediated Drug-Drug Interaction between Carvedilol and Fluvoxamine in Rats

Authors: Ana-Maria Gheldiu, Bianca M. Abrudan, Maria A. Neag, Laurian Vlase, Dana M. Muntean

Abstract:

Background information: The objective of this study was to investigate the effect of multiple-dose fluvoxamine on the pharmacokinetic profile of single-dose carvedilol in rats, in order to evaluate this possible drug-drug pharmacokinetic interaction. Methods: A preclinical study in 28 white male Wistar rats was conducted. Each rat was cannulated on the femoral vein prior to being connected to the BASi Culex ABC®. Carvedilol was orally administered to rats (3.57 mg/kg body mass (b.m.)) in the absence of fluvoxamine or after pre-treatment with multiple oral doses of fluvoxamine (14.28 mg/kg b.m.). The plasma concentrations of carvedilol were determined by high-performance liquid chromatography-tandem mass spectrometry. The pharmacokinetic parameters of carvedilol were analyzed by a non-compartmental method. Results: After carvedilol co-administration with fluvoxamine, an approximately 2-fold increase in the exposure of carvedilol was observed, based on the significantly elevated value of the total area under the concentration versus time curve (AUC₀₋∞). Moreover, the peak plasma concentration increased by approximately 145%, and the half-life of carvedilol increased by approximately 230%. Conclusion: Fluvoxamine co-administration led to a significant alteration of carvedilol's pharmacokinetic profile in rats; these effects could be explained by a drug-drug interaction mediated by CYP2D6 inhibition. Acknowledgement: This work was supported by CNCS Romania – project PNII-RU-TE-2014-4-0242.
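
The non-compartmental parameters reported above (AUC₀₋∞, peak concentration, half-life) follow from standard formulas; a minimal sketch with invented concentration-time data might look like this:

```python
import numpy as np

# Invented plasma concentration-time data (not the study's measurements).
t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12])      # time, h
c = np.array([18, 35, 52, 41, 22, 12, 6.5, 1.8])  # concentration, ng/mL

cmax = c.max()
auc_last = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)  # linear trapezoidal rule

# Terminal slope lambda_z from the last three points -> t1/2 and AUC(0-inf).
lam_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
t_half = np.log(2) / lam_z
auc_inf = auc_last + c[-1] / lam_z
print(f"Cmax={cmax} ng/mL, t1/2={t_half:.2f} h, AUC(0-inf)={auc_inf:.1f} ng*h/mL")
```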

Keywords: carvedilol, fluvoxamine, drug-drug pharmacokinetic interaction, rats

Procedia PDF Downloads 259
1985 Incidence, Pattern and Risk Factors of Congenital Heart Diseases in Neonates in a Tertiary Care Hospital: An Egyptian Study

Authors: Gehan Hussein, Hams Ahmad, Baher Matta, Yasmeen Mansi, Mohamad Fawzi

Abstract:

Background: Congenital heart disease (CHD) is a common problem worldwide, with variable incidence in different countries. The exact etiology is unknown and is suggested to be multifactorial. We aimed to study the incidence of various CHDs in a neonatal intensive care unit (NICU) in a tertiary care hospital in Egypt and the possible associations with various risk factors. Methods: A prospective study was conducted over a period of one year (2013/2014) at the NICU of Kasr Al-Aini School of Medicine, Cairo University. A questionnaire about possible maternal and/or paternal risk factors for CHD was administered, and clinical examination and bedside echocardiography were performed. Cases were classified into two groups: group 1 without CHD and group 2 with CHD. Results: Of 723 neonates admitted to the NICU, 180 were found to have CHD; 58% of them were males. Patent ductus arteriosus (PDA) was the most common CHD (70%), followed by atrial septal defect (ASD, 8%), while Fallot tetralogy and single ventricle were the least common (0.45% each). CHD was found in 30% of cases with consanguineous parents. Maternal age ≥ 35 years at the time of conception was associated with an increased incidence of PDA (p = 0.45). Maternal diabetes and insulin intake were significantly associated with CHD (p = 0.02 and 0.001, respectively); maternal hypertension and hypothyroidism were both associated with VSD, but the differences did not reach statistical significance (p = 0.36 and 0.44, respectively). Maternal passive smoking was significantly associated with PDA (p = 0.03). Conclusion: The most frequent CHD in the studied population was PDA, followed by ASD. Maternal conditions such as diabetes were associated with VSD occurrence.

Keywords: NICU, risk factors, congenital heart disease, echocardiography

Procedia PDF Downloads 170
1984 An Exploratory Study into the Information-Seeking Behaviour of Egyptian Beggars

Authors: Essam Mansour

Abstract:

The key purpose of this study is to provide first-hand information about beggars in Egypt, especially from the perspective of their information-seeking behaviour, including their information needs. The researcher investigates the information-seeking behaviour of Egyptian beggars with regard to their thoughts, perceptions, motivations, attitudes, habits, and preferences, as well as the challenges that may impede their use of information. The research method used was an adapted form of snowball sampling of a heterogeneous demographic group of participants in beggary in Egypt. This sampling was used to select focus groups to explore a range of relevant issues. Data on the demographic characteristics of the Egyptian beggars showed that they tend to be men, mostly with no formal education, with an average age in their 30s, labeled as low-income persons, mostly single, and mostly Muslim. A large number of Egyptian beggars seek information to meet their basic and daily needs, although some of them were not able to identify their information needs clearly. The information-seeking behaviour profile of a very large number of Egyptian beggars indicated a preference for informal sources of information over formal ones to solve different problems and meet the challenges they face during beggary, relying on assistive devices such as mobile phones. The high degree of illiteracy and the lack of awareness of basic information rights and information needs were the most important problems Egyptian beggars face when accessing information. The study recommends further research on the role of the library in the education of beggars. It also recommends that beggars' awareness of their information rights be promoted through educational programs that help them value the role of information in their lives.

Keywords: user studies, information-seeking behaviour, information needs, information sources, beggars, Egypt

Procedia PDF Downloads 303
1983 A Protocol of Procedures and Interventions to Accelerate Post-Earthquake Reconstruction

Authors: Maria Angela Bedini, Fabio Bronzini

Abstract:

The Italian experiences of the post-earthquake period, positive and negative, are conditioned by long timescales and structural bureaucratic constraints, also motivated by the attempt to contain mafia infiltration and corruption. The transition from the operational phase of the emergency to the planning phase of the reconstruction project is thus hampered by a series of inefficiencies and delays, incompatible with the need for rapid recovery of territories in crisis. Intervening in areas affected by seismic events means associating the reconstruction plan with an urban and territorial rehabilitation project based on strategies and tools in which prevention and safety play a leading role in the regeneration of territories in crisis and the return of the population. On the contrary, the earthquakes that took place in Italy have further deprived the affected territories of the minimum requirements for habitability, in terms of accessibility and services, accentuating the depopulation process already underway before the earthquake. The objective of this work is to set out, with implementing and programmatic tools, the procedures and strategies to be put in place, today and in the future, in Italy and abroad, to face the challenge of the reconstruction of activities, sociality, services, and risk mitigation: a protocol of operational intentions and firm points, open to continuous updating and implementation. The methodology followed is a comparison, in synthetic form, between the different Italian post-earthquake experiences, based on facts and not on intentions, to highlight elements of excellence or, on the contrary, of harm. The main results obtained can be summarized in technical comparison cards on good and bad practices. With this comparison, we intend to make a concrete contribution to the reconstruction process, certainly not related only to the reconstruction of buildings but privileging the primary social and economic needs. In this context, the instrument recently applied in Italy of the strategic urban and territorial Minimal Urban Structure (SUM), together with the strategic monitoring process, becomes a dynamic tool for supporting reconstruction. The conclusions establish, point by point, a protocol of interventions and the priorities for integrated socio-economic strategies, multisectoral and multicultural, and highlight the innovative aspect of the 'inversion' of priorities in the reconstruction process, favoring the take-off of social and economic 'accelerator' interventions and a more up-to-date system of coexistence with risks. In this perspective, reconstruction as a necessary response to a calamitous event can and must become a unique opportunity to raise the level of protection from risks and of rehabilitation and development of the most fragile places in Italy and abroad.

Keywords: an operational protocol for reconstruction, operational priorities for coexistence with seismic risk, social and economic interventions accelerators of building reconstruction, the difficult post-earthquake reconstruction in Italy

Procedia PDF Downloads 112
1982 Supplier Risk Management: A Multivariate Statistical Modelling and Portfolio Optimization Based Approach for Supplier Delivery Performance Development

Authors: Jiahui Yang, John Quigley, Lesley Walls

Abstract:

In this paper, the authors develop a stochastic model of investment in supplier delivery performance development from a buyer's perspective. The authors propose a multivariate model based on a Multinomial-Dirichlet distribution within an empirical Bayesian inference framework, representing both the epistemic and aleatory uncertainties in deliveries. A closed-form solution is obtained, and the lower and upper bounds for both the optimal investment level and the expected profit under uncertainty are derived. The theoretical properties provide decision makers with useful insights into supplier delivery performance improvement problems where multiple delivery statuses are involved. The authors also extend the model from a single-supplier investment to a supplier portfolio, using a Lagrangian method to obtain a theoretical expression for the optimal investment level and overall expected profit. The model enables a buyer to know how the marginal expected profit/investment level of each supplier changes with respect to the budget and which supplier should be invested in when additional budget is available. An application of this model is illustrated in a simulation study. Overall, the main contribution of this study is to provide an optimal investment decision-making framework for supplier development, taking into account multiple delivery statuses as well as multiple projects.
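
The Multinomial-Dirichlet core of such a model reduces to a conjugate pseudo-count update; a minimal sketch, with invented prior pseudo-counts and delivery data rather than the paper's, is:

```python
import numpy as np

# Delivery statuses: on time, slightly late, very late (illustrative labels).
prior_alpha = np.array([4.0, 3.0, 1.0])   # epistemic prior as pseudo-counts
observed = np.array([46, 9, 2])           # deliveries observed from one supplier

# Conjugate Multinomial-Dirichlet update: posterior alpha = prior + counts.
posterior_alpha = prior_alpha + observed
posterior_mean = posterior_alpha / posterior_alpha.sum()
print(posterior_mean)  # expected probability of each delivery status
```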

Keywords: decision making, empirical Bayesian, portfolio optimization, supplier development, supply chain management

Procedia PDF Downloads 275
1981 Comparison of the Polyphenolic Profile of a Berry from Two Different Sources, Using an Optimized Extraction Method

Authors: G. Torabian, A. Fathi, P. Valtchev, F. Dehghani

Abstract:

The superior polyphenol content of Sambucus nigra berries gives them high potential for the production of nutraceutical products. Numerous factors influence the polyphenol content of the final product, including the berries' source and the subsequent processing steps. The aim of this study is to compare the polyphenol content of berries from two different sources and to optimise the polyphenol extraction process from elderberries. Berries from source B showed more acceptable physical properties than those from source A; a single berry from source B was double the size and weight (both wet and dry) of a source A berry. Despite the favourable physical characteristics of source B berries, their polyphenolic profile was inferior: source A berries had a 2.3-fold higher total anthocyanin content and nearly twice the total phenolic content and total flavonoid content of source B. Moreover, the results of this study showed that almost 50 percent of the phenolic content of the berries is entrapped within the skin and pulp and potentially cannot be extracted by press juicing. To address this challenge and increase the total polyphenol yield of the extract, we used a cold-shock blade grinding method to break the cell walls. The results showed that using cultivars with higher phenolic content, as well as using the whole fruit including juice, skin, and pulp, can increase polyphenol yield significantly and thus may boost the potential of using elderberries in therapeutic products.

Keywords: different sources, elderberry, grinding, juicing, polyphenols

Procedia PDF Downloads 280
1980 A Methodology for Automatic Diversification of Document Categories

Authors: Dasom Kim, Chen Liu, Myungsu Lim, Su-Hyeon Jeon, ByeoungKug Jeon, Kee-Young Kwahk, Namgyu Kim

Abstract:

Recently, numerous documents including unstructured data and text have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, categorization was performed manually; however, manual categorization not only fails to guarantee accuracy but also requires a large amount of time and cost. Many studies have been conducted towards the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics, because they assume that one document can be assigned to one category only. To overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, these are also limited in that their learning process requires training on a multi-categorized document set; they therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To remove the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we previously proposed a methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. In this paper, we design a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.
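
A toy sketch of the underlying idea, extending a single-categorized document to additional categories via topic-distribution similarity. The topic mixtures and the 0.85 threshold are invented for illustration and are not the paper's actual model.

```python
import numpy as np

# Topic mixtures (rows sum to 1): one document and three category centroids.
doc_topics = np.array([0.55, 0.05, 0.40])      # a single-categorised document
category_topics = np.array([
    [0.60, 0.10, 0.30],   # category A (the document's original label)
    [0.10, 0.80, 0.10],   # category B
    [0.30, 0.05, 0.65],   # category C
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

scores = np.array([cosine(doc_topics, c) for c in category_topics])
extra = np.where(scores >= 0.85)[0]    # assumed similarity threshold
print(scores.round(3), extra)          # categories A and C pass the threshold
```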

Keywords: big data analysis, document classification, multi-category, text mining, topic analysis

Procedia PDF Downloads 257
1979 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study

Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming

Abstract:

Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily due to its stability and robustness to model misspecification. The situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach for estimating an adjusted relative risk or risk difference in clinical trials, partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of the relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively reweighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all combinations of sample sizes (200, 1000, and 5000), outcome event rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate, or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
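
Of the listed candidates, marginal standardisation is perhaps the least familiar; a minimal sketch (simulated trial data, scikit-learn logistic model, all values invented) shows how a marginal relative risk and risk difference fall out of the fitted conditional model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                  # prognostic covariate
treat = rng.integers(0, 2, size=n)      # randomised treatment indicator
p = 1 / (1 + np.exp(-(-1.0 + 0.7 * treat + 0.5 * x)))
y = rng.binomial(1, p)                  # dichotomous outcome

# Fit the conditional logistic model, then standardise over the sample:
# predict everyone's risk under treatment and under control, and average.
model = LogisticRegression().fit(np.column_stack([treat, x]), y)
p1 = model.predict_proba(np.column_stack([np.ones(n), x]))[:, 1].mean()
p0 = model.predict_proba(np.column_stack([np.zeros(n), x]))[:, 1].mean()
print("marginal RR:", p1 / p0, "risk difference:", p1 - p0)
```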

Keywords: binary outcomes, statistical methods, clinical trials, simulation study

Procedia PDF Downloads 98
1978 Observation of the Flow Behavior for a Rising Droplet in a Mini-Slot

Authors: H. Soltani, J. Hadfield, M. Redmond, D. S. Nobes

Abstract:

The passage of oil droplets through a vertical mini-slot was investigated in this study. In oil-in-water emulsions, finer oil droplets can coalesce into droplets of a size that must be considered individually. This occurs in a number of industrial processes and has important consequences at a scale where both body and surface forces are relevant. In the study, two droplet sizes were generated: one smaller than the slot width, and a relatively larger one for which the oil droplet can interact directly with the slot wall. To monitor fluid motion, a particle shadow velocimetry (PSV) imaging technique was used to study the flow inside and around a single oil droplet rising in a net co-flow. The droplet was transparent canola oil, and the surrounding working fluid was glycerol, adjusted to match the refractive indices of the two fluids. Particles seeded in both fluids were observed with the PSV system, allowing capture of the velocity field both within the droplet and in its surroundings. The effect of droplet size on the droplet's internal circulation was observed. Part of the study related to the potential generation of flow structures, such as the von Karman vortex shedding already observed in droplets rising in infinite reservoirs, and their interaction with the mini-channel. Results show that two counter-rotating vortices exist inside the droplets as they pass through the slot. The vorticity map analysis shows that the droplet of relatively larger size has a stronger internal circulation.
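
On a regular PSV vector grid, the vorticity maps mentioned above reduce to omega_z = dv/dx - du/dy; a minimal numpy sketch on a toy rotation field (all grid sizes and velocities illustrative) is:

```python
import numpy as np

# u, v: velocity components on a regular grid, e.g. from PSV vectors.
x = np.linspace(0, 1e-3, 50)                 # 1 mm field of view (illustrative)
y = np.linspace(0, 1e-3, 50)
X, Y = np.meshgrid(x, y, indexing="xy")
u = -(Y - 5e-4)                              # toy solid-body-like rotation
v = (X - 5e-4)

# Out-of-plane vorticity: omega_z = dv/dx - du/dy.
dvdx = np.gradient(v, x, axis=1)
dudy = np.gradient(u, y, axis=0)
omega_z = dvdx - dudy
print(omega_z.mean())                        # ~2 for this toy rotation field
```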

Keywords: rising droplet, rectangular orifice, particle shadow velocimetry, refractive index matching

Procedia PDF Downloads 160
1977 High-Dimensional Single-Cell Imaging Maps Inflammatory Cell Types in Pulmonary Arterial Hypertension

Authors: Selena Ferrian, Erin Mccaffrey, Toshie Saito, Aiqin Cao, Noah Greenwald, Mark Robert Nicolls, Trevor Bruce, Roham T. Zamanian, Patricia Del Rosario, Marlene Rabinovitch, Michael Angelo

Abstract:

Recent experimental and clinical observations are advancing immunotherapies toward clinical trials in pulmonary arterial hypertension (PAH). However, comprehensive mapping of the immune landscape in pulmonary arteries (PAs) is necessary to understand how immune cell subsets interact to induce pulmonary vascular pathology. We used multiplexed ion beam imaging by time-of-flight (MIBI-TOF) to interrogate the immune landscape in PAs from idiopathic (IPAH) and hereditary (HPAH) PAH patients. Massive immune infiltration was observed in I/HPAH, with intramural infiltration linked to occlusive PA changes. The spatial context of CD11c+ DCs expressing SAMHD1, TIM-3, and IDO-1 within immune-enriched microenvironments, together with neutrophils, was associated with greater immune activation in HPAH. Furthermore, CD11c- DC3s (mo-DC-like cells) within a smooth muscle cell (SMC)-enriched microenvironment were linked to vessel score, proliferating SMCs, and inflamed endothelial cells. Experimental data from cultured cells reinforced a causal relationship between neutrophils and mo-DCs in mediating pulmonary arterial SMC proliferation. These findings merit consideration in developing effective immunotherapies for PAH.

Keywords: pulmonary arterial hypertension, vascular remodeling, indoleamine 2-3-dioxygenase 1 (IDO-1), neutrophils, monocyte-derived dendritic cells, BMPR2 mutation, interferon gamma (IFN-γ)

Procedia PDF Downloads 158
1976 Configuration as a Service in Multi-Tenant Enterprise Resource Planning System

Authors: Mona Misfer Alshardan, Djamal Ziani

Abstract:

Enterprise resource planning (ERP) systems are an organization's ticket to the global market. With the implementation of ERP, organizations can manage and coordinate all functions, processes, resources, and data from different departments within a single software system. However, many organizations consider traditional ERP to be expensive and look for alternative affordable solutions within their budget. One of these alternatives is providing ERP over a software as a service (SaaS) model, which can be considered a cost-effective solution compared to the traditional ERP system. A key feature of any SaaS system is the multi-tenancy architecture, where multiple customers (tenants) share the system software. However, different organizations have different requirements, so SaaS developers accommodate each tenant's unique requirements by allowing tenant-level customization or configuration. While customization requires source-code changes and in most cases programming experience, the configuration process allows users to change many features within a predefined scope in an easy and controlled manner. The literature provides many techniques for accomplishing the configuration process in different SaaS systems. However, the nature and complexity of SaaS ERP require more attention to the details of the configuration process, which is only briefly described in previous research. Thus, this research builds on the existing knowledge of configuration in SaaS to define precisely the configuration borders in SaaS ERP and to design a configuration service that considers the different configuration aspects. The proposed architecture ensures the ease of the configuration process by using wizard technology, while privacy and performance are supported by adopting a database isolation technique.
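
A minimal sketch of what a tenant-level configuration service with a predefined scope could look like; the option names, allowed values, and class shape are invented for illustration and are not the paper's actual schema.

```python
# Predefined configurable scope: each option and its allowed values.
ALLOWED = {
    "currency": {"USD", "EUR", "SAR"},
    "fiscal_year_start": {"01-01", "04-01", "07-01"},
    "approval_levels": {1, 2, 3},
}

class TenantConfigService:
    """Stores per-tenant settings, rejecting anything outside the scope."""

    def __init__(self):
        self._store = {}                 # tenant_id -> settings dict

    def set_option(self, tenant_id, key, value):
        if key not in ALLOWED or value not in ALLOWED[key]:
            raise ValueError(f"{key}={value!r} is outside the configurable scope")
        self._store.setdefault(tenant_id, {})[key] = value

    def get_config(self, tenant_id):
        return dict(self._store.get(tenant_id, {}))

svc = TenantConfigService()
svc.set_option("tenant-42", "currency", "SAR")
print(svc.get_config("tenant-42"))
```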

Keywords: configuration, software as a service, multi-tenancy, ERP

Procedia PDF Downloads 379
1975 Study of Aqueous Solutions: A Dielectric Spectroscopy Approach

Authors: Kumbharkhane Ashok

Abstract:

Time domain dielectric relaxation spectroscopy (TDRS) probes the interaction of a macroscopic sample with a time-dependent electrical field. The resulting complex permittivity spectrum characterizes the amplitude (voltage) and time scale of the charge-density fluctuations within the sample. These fluctuations may arise from the reorientation of the permanent dipole moments of individual molecules or from the rotation of dipolar moieties in flexible molecules, like polymers. The time scale of these fluctuations depends on the sample and its relaxation mechanism: relaxation times range from a few picoseconds in low-viscosity liquids to hours in glasses. The DRS technique therefore covers an extensive range of dynamical processes, corresponding to frequencies from 10⁻⁴ Hz to 10¹² Hz. This inherent ability to monitor the cooperative motion of a molecular ensemble distinguishes dielectric relaxation from methods like NMR or Raman spectroscopy, which yield information on the motions of individual molecules. An experimental setup for the time domain reflectometry (TDR) technique, covering 10 MHz to 30 GHz, has been developed for aqueous solutions. The technique is simple and covers a wide band of frequencies in a single measurement. Dielectric relaxation spectroscopy is especially sensitive to intermolecular interactions. The complex permittivity spectra of aqueous solutions have been fitted using the Cole-Davidson (CD) model to determine static dielectric constants and relaxation times over the entire concentration range. The heterogeneous molecular interactions in aqueous solutions are discussed through the Kirkwood correlation factor and excess properties.
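
The Cole-Davidson fit referred to above uses ε*(ω) = ε∞ + (εs − ε∞)/(1 + jωτ)^β; a minimal sketch, with water-like placeholder parameters rather than the study's fitted values, is:

```python
import numpy as np

def cole_davidson(freq_hz, eps_static, eps_inf, tau, beta):
    """Complex permittivity eps*(w) = eps_inf + (eps_s - eps_inf)/(1 + j*w*tau)**beta."""
    w = 2 * np.pi * freq_hz
    return eps_inf + (eps_static - eps_inf) / (1 + 1j * w * tau) ** beta

# Water-like placeholder parameters (not fitted values from the study).
f = np.logspace(7, 10.5, 8)   # 10 MHz to ~30 GHz, the TDR window above
eps = cole_davidson(f, eps_static=78.4, eps_inf=5.2, tau=8.3e-12, beta=0.95)
for fi, e in zip(f, eps):
    print(f"{fi:10.3e} Hz  eps' = {e.real:6.2f}  eps'' = {-e.imag:6.2f}")
```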

Keywords: liquid, aqueous solutions, time domain reflectometry

Procedia PDF Downloads 428
1974 Evaluation of DNA Oxidation and Chemical DNA Damage Using Electrochemiluminescent Enzyme/DNA Microfluidic Array

Authors: Itti Bist, Snehasis Bhakta, Di Jiang, Tia E. Keyes, Aaron Martin, Robert J. Forster, James F. Rusling

Abstract:

DNA damage from metabolites of lipophilic drugs and pollutants, generated by enzymes, represents a major toxicity pathway in humans. These metabolites can react with DNA to form either 8-oxo-7,8-dihydro-2′-deoxyguanosine (8-oxodG), the oxidative product of DNA, or covalent DNA adducts, both of which are genotoxic and hence considered important biomarkers for detecting cancer in humans. Therefore, detecting reactions of metabolites with DNA is an effective approach for the safety assessment of new chemicals and drugs. Here we describe a novel electrochemiluminescent (ECL) sensor array that can detect DNA oxidation and chemical DNA damage in a single array, facilitating a more accurate diagnostic tool for genotoxicity screening. Layer-by-layer films of DNA and enzyme are assembled on a pyrolytic graphite array, which is housed in a microfluidic device for sequential detection of the two types of DNA damage. Multiple enzyme reactions are run on test compounds using the array, generating toxic metabolites in situ. These metabolites react with DNA in the films to cause DNA oxidation and chemical DNA damage, which are detected by an ECL-generating osmium compound and a ruthenium polymer, respectively. The method is further validated by forming 8-oxodG and DNA adducts on similar DNA/enzyme films on magnetic bead biocolloid reactors, hydrolyzing the DNA, and analyzing the products by liquid chromatography-mass spectrometry (LC-MS). Hence, this combined DNA/enzyme array/LC-MS approach can efficiently explore metabolic genotoxicity pathways for drugs and environmental chemicals.

Keywords: biosensor, electrochemiluminescence, DNA damage, microfluidic array

Procedia PDF Downloads 352
1973 The Effect of Crack Size, Orientation and Number on the Elastic Modulus of a Cracked Body

Authors: Mark T. Hanson, Alan T. Varughese

Abstract:

Osteoporosis is a disease affecting bone quality, which in turn can increase the risk of low-energy fractures. Treatment of osteoporosis using bisphosphonates has the beneficial effect of increasing bone mass while at the same time having been linked to the formation of atypical femoral fractures. This has led to increased study of micro-fractures in the bones of patients receiving bisphosphonate treatment. One of the mechanics-related issues identified in this regard is the loss of stiffness in bones containing one or many micro-fractures. Different theories have been put forth using fracture mechanics to determine the effect of crack presence on elastic properties such as the modulus. However, a deterministic validation of these results has not been forthcoming. The present analysis seeks to provide this deterministic evaluation of a fracture's effect on the elastic modulus, investigating in particular the effect of crack size, crack orientation, and crack number: the finite element method is used to explicitly determine the elastic modulus reduction caused by the presence of cracks in a representative volume element. Single cracks of various lengths and orientations are examined, as well as cases of multiple cracks. Cracks in tension as well as under shear stress are considered. Although the focus is predominantly two-dimensional, some three-dimensional results are also presented. The results show the explicit reduction in modulus caused by the crack size, orientation, and number noted above, and allow the interpretation of the various theories that currently exist in the literature.
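
For context, the FE results can be compared with the classical dilute (non-interacting) crack-density estimates; the scalings in the sketch below are the standard 2D non-interacting approximations, used here as an assumed reference and not as the paper's FE model.

```python
import numpy as np

def modulus_ratio_dilute(half_lengths, area, aligned=True):
    """Dilute (non-interacting) 2D estimate of stiffness loss, E/E0.

    Crack density rho = sum(a_i^2)/A, with a_i the crack half-lengths.
    Assumed scalings: E/E0 = 1/(1 + 2*pi*rho) for cracks normal to the load,
    and E/E0 = 1/(1 + pi*rho) for randomly oriented cracks.
    """
    rho = np.sum(np.asarray(half_lengths) ** 2) / area
    factor = 2 * np.pi if aligned else np.pi
    return 1.0 / (1.0 + factor * rho)

# Ten cracks of half-length 0.05 in a unit-area element (illustrative).
print(modulus_ratio_dilute([0.05] * 10, area=1.0, aligned=True))
```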

Keywords: cracks, elastic, fracture, modulus

Procedia PDF Downloads 93
1972 Tomato-Weed Classification by RetinaNet One-Step Neural Network

Authors: Dionisio Andujar, Juan López-Correa, Hugo Moreno, Angela Ri

Abstract:

The increasing number of weeds in tomato crops greatly lowers yields. Weed identification by means of machine learning is important for carrying out site-specific control. The latest advances in computer vision are a powerful tool for facing the problem. The analysis of RGB (red, green, blue) images with artificial neural networks has developed rapidly in the past few years, providing new methods for weed classification. The development of algorithms for crop and weed species classification aims at a real-time classification system using object detection algorithms based on convolutional neural networks. The study site was located in commercial corn fields. The classification system has been tested, and the procedure can detect and classify weed seedlings in tomato fields. The input to the neural network was a set of 10,000 RGB images with a natural infestation of Cyperus rotundus L., Echinochloa crus-galli L., Setaria italica L., Portulaca oleracea L., and Solanum nigrum L. The validation process was done with a random selection of RGB images containing the aforementioned species. The mean average precision (mAP) was established as the metric for object detection. The results showed agreements higher than 95%. The system will provide the input for an online spraying system. Thus, this work plays an important role in site-specific weed management by reducing herbicide use in a single step.
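
As an illustration of the inference side of such a system (not the authors' trained network), a COCO-pretrained RetinaNet from torchvision can be run as follows; the image path and the 0.5 confidence threshold are placeholders:

```python
import torch
from torchvision.models.detection import retinanet_resnet50_fpn
from torchvision.io import read_image

# Generic COCO-pretrained RetinaNet; the paper's model would instead be
# fine-tuned on its own 10,000 annotated tomato/weed images.
model = retinanet_resnet50_fpn(weights="DEFAULT").eval()

img = read_image("field_plot.jpg").float() / 255.0   # hypothetical RGB image
with torch.no_grad():
    out = model([img])[0]

keep = out["scores"] > 0.5                           # assumed confidence cut
for box, label, score in zip(out["boxes"][keep], out["labels"][keep],
                             out["scores"][keep]):
    print(label.item(), round(score.item(), 3), box.tolist())
```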

Keywords: deep learning, object detection, CNN, tomato, weeds

Procedia PDF Downloads 89
1971 Time-Parameter-Based Detection of Catastrophic Faults in Analog Circuits

Authors: Arabi Abderrazak, Bourouba Nacerdine, Ayad Mouloud, Belaout Abdeslam

Abstract:

In this paper, a new test technique for analog circuits using time-mode simulation is proposed for the detection of single catastrophic faults. This test process is designed to overcome the problem of catastrophic faults escaping detection in the DC-mode test applied to the inverter amplifier in previous research. The circuit under test is a second-order low-pass filter constructed around this type of amplifier but performing a function that differs from that of the previous test. The test approach is based on two key elements. The first is a single square pulse selected as the input test vector to stimulate the fault effect in the circuit's output response. The second is the conversion of the filter response into a sequence of square pulses by an analog comparator; this conversion is achieved through a fixed reference threshold voltage in the comparison circuit. The durations of the first three response pulses serve, on the one hand, as the fault-detection parameter and, on the other, as a fault signature that helps to fully establish an analog circuit fault diagnosis. The results obtained so far are very promising, since the approach has raised the fault coverage ratio in both modes to over 90% and has revealed the harmful side of faults that had been masked in the DC-mode test.
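
A sketch of the pulse-duration measurement on a sampled comparator output; the decaying-oscillation test response, threshold, and timing values below are invented stand-ins for the filter's actual response.

```python
import numpy as np

def first_pulse_widths(signal, t, threshold, n_pulses=3):
    """Durations of the first n 'high' pulses of the comparator output.

    signal: sampled filter response; t: matching time vector (s);
    threshold: the comparator's fixed reference voltage.
    """
    high = signal > threshold
    edges = np.diff(high.astype(int))
    rises = t[1:][edges == 1]
    falls = t[1:][edges == -1]
    falls = falls[falls > rises[0]]          # pair each rise with its fall
    return falls[:n_pulses] - rises[:n_pulses]

# Toy decaying oscillation standing in for the filter's pulse response.
t = np.linspace(0, 5e-3, 5000)
resp = np.exp(-t / 2e-3) * np.sin(2 * np.pi * 1500 * t)
print(first_pulse_widths(resp, t, threshold=0.1))  # three-duration fault signature
```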

Keywords: analog circuits, analog faults diagnosis, catastrophic faults, fault detection

Procedia PDF Downloads 427
1970 Safety and Efficacy of Laparoscopic D2 Gastrectomy for Advanced Gastric Cancers: A Single-Unit Experience

Authors: S. M. P. Manjula, Ishara Amarathunga, Aryan Nath Koura, Jaideepraj Rao

Abstract:

Background: Laparoscopic D2 gastrectomy for non-metastatic advanced gastric cancer (AGC) has become a controversial topic, as there are conflicting views among experts in the field. The lack of consensus mainly concerns the feasibility of the dissection and its safety and efficacy. Method: Data from all D2 gastrectomies (both subtotal and total) performed in our unit from December 2009 to December 2013 were retrospectively analysed. A computer database was prospectively maintained. Patients with pathological stage IIA and above, considered advanced gastric cancer, who underwent curative-intent D2 gastrectomy were included for analysis (n=46). Four patients were excluded from the study because peritoneal fluid cytology was positive for cancer cells, and one patient was excluded because of a microscopically positive resection margin (R1) after curative resection. Thirty-day morbidity and mortality, operative time, lymph node harvest, and survival (disease-free and overall) were analyzed. Results: Complete curative resection was achieved in 40 patients. The mean age of the study population was 62.2 years (32-88), and the male-to-female ratio was 23:17. Thirty-day mortality was 1/40 and morbidity 6/40. The average operative time was 203.7 minutes (185-400), and the average lymph node harvest was 40.5 (18-91). Disease-free survival in this study population was 16.75 months (1-49). The average hospital stay was 6.8 days (3-31). Conclusion: Laparoscopic dissection is effective, feasible, and safe in AGC.

Keywords: laparoscopy, advanced gastric cancer, safety, efficacy

Procedia PDF Downloads 325
1969 Population Structure Analysis of Pakistani Indigenous Cattle Population by Using High Density SNP Array

Authors: Hamid Mustafa, Huson J. Heather, Kim Eiusoo, McClure Matt, Khalid Javed, Talat Nasser Pasha, Afzal Ali, Adeela Ajmal, Tad Sonstegard

Abstract:

Genetic differences associated with speciation, breed formation, or local adaptation can aid the preservation and effective utilization of animals in selection programs. Analyses of population structure and breed diversity have provided insight into the origin and evolution of cattle. In this study, we used a high-density panel of SNP markers to examine population structure and diversity among ten Pakistani indigenous cattle breeds. In total, 25 individuals from three cattle populations, including Achi (n=8), Bhagnari (n=4), and Cholistani (n=13), were genotyped for 777,962 single nucleotide polymorphism (SNP) markers. Population structure was examined using the linkage model in the program STRUCTURE. After characterizing SNP polymorphism in the different populations, we performed a detailed analysis of genetic structure at both the individual and population levels. The whole-genome SNP panel identified several levels of population substructure in the set of examined cattle breeds. We further searched for spatial patterns of genetic diversity among these breeds under the recently developed spatial principal component analysis framework. Overall, such high-throughput genotyping data confirmed a clear partitioning of cattle genetic diversity into distinct breeds. The complex historical origins associated with both natural and artificial selection have led to the differentiation of numerous cattle breeds displaying a broad phenotypic variety over a short period of time.
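
A common first look at such genotype data is a principal component projection; the sketch below simulates a small 0/1/2 genotype matrix in place of the real 777,962-SNP panel and projects the 25 animals onto two components.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_animals, n_snps = 25, 5000          # small stand-in for the 777,962-SNP panel

# Genotypes coded as 0/1/2 copies of the alternate allele (simulated here).
G = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)

G -= G.mean(axis=0)                   # centre each SNP column
pcs = PCA(n_components=2).fit_transform(G)
print(pcs[:5])                        # animals from one breed should cluster together
```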

Keywords: Pakistan, cattle, genetic diversity, population structure

Procedia PDF Downloads 599
1968 Optimization of the Conditions of Electrophoretic Deposition Fabrication of Graphene-Based Electrodes for Applications in Electro-Optical Sensors

Authors: Sepehr Lajevardi Esfahani, Shohre Rouhani, Zahra Ranjbar

Abstract:

Graphene has gained much attention owing to its unique optical and electrical properties. Charge carriers in graphene sheets (GS) obey a linear dispersion relation near the Fermi energy and behave as massless Dirac fermions, resulting in unusual attributes such as the quantum Hall effect and the ambipolar electric field effect. Graphene also exhibits nondispersive transport characteristics with an extremely high electron mobility (15,000 cm²/(V·s)) at room temperature. Recently, considerable progress has been achieved in the fabrication of single- or multilayer GS for functional device applications in the field of optoelectronics, such as field-effect transistors, ultrasensitive sensors, and organic photovoltaic cells. In addition to device applications, graphene can also serve as a reinforcement to enhance the mechanical, thermal, or electrical properties of composite materials. Electrophoretic deposition (EPD) is an attractive method for developing various coatings and films; it is readily applied to any powdered solid that forms a stable suspension. The deposition parameters were controlled to obtain various coating thicknesses. In this study, the graphene electrodeposition conditions were optimized. The results were obtained from SEM, electrical resistance measurements, and AFM characterization. The minimum sheet resistance of the electrodeposited reduced graphene oxide layers is achieved by deposition at 2 V for 10 s, followed by annealing at 200 °C for 1 minute.

Keywords: electrophoretic deposition (EPD), graphene oxide (GO), electrical conductivity, electro-optical devices

Procedia PDF Downloads 174