Search results for: estimations of probability distributions
152 Sovereign Debt Restructuring: A Study of the Inadequacies of the Contractual Approach
Authors: Salamah Ansari
Abstract:
In the absence of a comprehensive international legal regime for sovereign debt restructuring, the majority of complications arising from sovereign debt restructuring are frequently left to uncertain market forces. The resort to market forces for sovereign debt restructuring has led to a phenomenal increase in litigation targeting the assets of defaulting sovereign nations across jurisdictions, with the first major wave of lawsuits against sovereigns arising from the Latin American crisis of the 1980s. Recent experience substantiates that the majority of obstacles faced during the sovereign debt restructuring process are caused by inefficient creditor coordination and collective action problems. Collective action problems manifest as the grab race, the rush to exit, holdouts, the free rider problem, and the rush to the courthouse. When a nation defaults, for it to successfully restructure its debt, all the creditors involved must accept some reduction in the value of their claims. As a single holdout creditor has the potential to undermine the restructuring process, the number of holdout creditors is snowballing with the increasing probability of earning high returns through litigation. This necessitates a mechanism to avoid holdout litigation and reinforce collective action on the part of creditors. This can be done either through statutory reform or through a market-based contractual approach. In the absence of an international sovereign bankruptcy regime, the impetus is mostly on the inclusion of collective action clauses in debt contracts. The preference for contractual mechanisms vis-à-vis a statutory approach can be explained by numerous reasons, but that is only part of the puzzle in understanding the economics of the underlying system. The contractual approach proposals advocate the inclusion of certain clauses in the debt contract for an orderly debt restructuring. These include majority voting clauses, sharing clauses, non-acceleration clauses, initiation clauses, aggregation clauses, temporary stays on litigation, priority financing clauses, and the complete revelation of relevant information. However, the voluntary market-based contractual approach to debt workouts has its own complexities. It is a herculean task to enshrine clauses in debt contracts that are detailed enough to create an orderly debt restructuring mechanism while remaining attractive enough for creditors. The introduction of collective action clauses into debt contracts can reduce the barriers to efficient debt restructuring and also has the potential to improve the terms on which sovereigns are able to borrow. However, it should be borne in mind that such clauses are not a panacea for the huge institutional inadequacy that persists and may lead to worse restructuring outcomes.
Keywords: sovereign debt restructuring, collective action clauses, hold out creditors, litigations
Procedia PDF Downloads 156
151 Use of Cellulosic Fibres in Double Layer Porous Asphalt
Authors: Márcia Afonso, Marisa Dinis-Almeida, Cristina Fael
Abstract:
Climate change, namely the alteration of precipitation patterns, has led to extreme conditions such as floods and droughts. In turn, excessive construction has led to the waterproofing of the soil, increasing surface runoff and decreasing the groundwater recharge capacity. Permeable pavements used in areas with low traffic lead to a decrease in the probability of flood peaks, a reduction in sediment and pollutant transport, and an improvement in rainwater quality. This study aims to evaluate the performance of a porous asphalt, developed in the laboratory, with the addition of cellulosic fibres. One of the main objectives of using cellulosic fibres is to prevent binder drainage, avoiding its loss during storage and transport. Compared to conventional porous asphalt, the addition of cellulosic fibres improved the porous asphalt performance. The cellulosic fibres allowed an increase in bitumen content, enabling its retention and better aggregate coating and, consequently, greater mixture durability. With this solution, it is intended to develop better practices of resilience and adaptation to extreme climate change and to respond to current sustainability demands through the use of eco-friendly materials. The mix design was performed for different aggregate sizes (with fine aggregates, PA1, and with coarse aggregates, PA2). The influence of the percentage of fibres to be used was studied. It was observed that, overall, binder drainage decreases as the percentage of cellulosic fibres increases. It was found that the PA2 mixture exhibited more binder drainage than the PA1 mixture, irrespective of the fibre percentage used. Subsequently, the performance was evaluated through laboratory tests of indirect tensile stiffness modulus, water sensitivity, permeability, and permanent deformation. The stiffness modulus for the two mixture groups (with and without cellulosic fibres) presented very similar values. The water sensitivity test showed that porous asphalt containing more fine aggregates is more susceptible to the presence of water than mixtures with coarse aggregates. Porous asphalt with coarse aggregates has more air voids, which allow water to pass easily, leading to higher ITSR values. The permeability test showed that porous asphalt without cellulosic fibres had lower permeability than porous asphalt with cellulosic fibres. The permanent deformation results indicate better behaviour of the porous asphalt with cellulosic fibres, a larger rut depth being observed in the porous asphalt without cellulosic fibres. In this study, it was observed that porous asphalt with higher bitumen percentages performs better against permanent deformation. This was only possible due to the retention of the bitumen by the cellulosic fibres.
Keywords: binder drainage, cellulosic fibres, permanent deformation, porous asphalt
Procedia PDF Downloads 228
150 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence
Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang
Abstract:
Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-Net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including the vanilla FNO, implicit FNO (IFNO), and U-Net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computational speed than traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics
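To picture the implicit-recurrence idea, the sketch below applies one shared spectral-convolution layer repeatedly, deepening the network without adding weights. This is a minimal, hypothetical 1D reconstruction under stated assumptions (layer width, mode count, ReLU residual update), not the authors' 3D IU-FNO implementation.

```python
# Minimal 1D sketch of an "implicit" (weight-shared, recurrent) Fourier layer.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of low Fourier modes retained
        scale = 1.0 / (channels * channels)
        self.w = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x, dim=-1)                      # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[..., : self.modes] = torch.einsum(             # truncate + mix channels
            "bim,iom->bom", x_ft[..., : self.modes], self.w
        )
        return torch.fft.irfft(out_ft, n=x.size(-1), dim=-1)  # back to grid space

class ImplicitFNO1d(nn.Module):
    """One shared Fourier layer applied recurrently T times (weight sharing)."""
    def __init__(self, channels: int = 32, modes: int = 12, steps: int = 4):
        super().__init__()
        self.spectral = SpectralConv1d(channels, modes)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.steps):  # deepen the network without new weights
            x = x + torch.relu(self.spectral(x) + self.pointwise(x))
        return x

u = torch.randn(8, 32, 64)          # (batch, channels, grid points)
print(ImplicitFNO1d()(u).shape)     # torch.Size([8, 32, 64])
```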
Procedia PDF Downloads 74
149 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki
Abstract:
Beyond its climate and health related issues, ambient light absorbing carbonaceous particulate matter (LAC) has also recently become of great scientific interest in terms of its regulation. It has been experimentally demonstrated in recent studies that LAC is dominantly composed of traffic and wood burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method has been proposed for the apportionment of wood burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Angström Exponent (AAE). In this approach, the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. The recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter. Therefore, they also open up novel possibilities for source apportionment through the measurement of light absorption. In this study, we demonstrate an in-situ spectral characterization method of the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous particulate selective source apportionment study was performed for ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood burning aerosol has been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064nm)/AOC(266nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified to correspond to traffic and wood burning aerosols. The method offers the possibility of replacing laborious chemical analysis with the simple in-situ measurement of aerosol size distribution data. The results obtained by the proposed novel optical absorption based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol
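For context, the following is a minimal sketch of the generic two-component apportionment calculation that AAE-based splitting relies on: absorption at each wavelength is written as the sum of two power-law components and solved for the component contributions. The wavelengths and the AAE values below are illustrative assumptions, not the values derived in this study.

```python
# Two-component (fossil fuel vs. wood burning) absorption apportionment sketch.
import numpy as np

def apportion(b_abs, wavelengths, lam0, aae_ff, aae_wb):
    """Solve b_abs(lam) = b_ff(lam0)*(lam/lam0)^-AAE_ff
                        + b_wb(lam0)*(lam/lam0)^-AAE_wb
    for the component contributions at the reference wavelength lam0."""
    lam = np.asarray(wavelengths, dtype=float)
    A = np.column_stack([(lam / lam0) ** -aae_ff,
                         (lam / lam0) ** -aae_wb])
    # Least squares handles >2 wavelengths; with exactly 2 it is an exact solve.
    x, *_ = np.linalg.lstsq(A, np.asarray(b_abs, dtype=float), rcond=None)
    return {"fossil_fuel": x[0], "wood_burning": x[1]}

# Absorption coefficients (Mm^-1) at two wavelengths; all numbers are made up.
print(apportion(b_abs=[60.0, 18.0], wavelengths=[470, 950],
                lam0=880, aae_ff=1.0, aae_wb=2.0))
```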
Procedia PDF Downloads 228
148 Practicing Inclusion for Hard of Hearing and Deaf Students in Regular Schools in Ethiopia
Authors: Mesfin Abebe Molla
Abstract:
This research aims to examine the practices of inclusion of hard of hearing and deaf students in regular schools. It also focuses on exploring strategies for students who are Hard of Hearing or Deaf (HH-D) to benefit optimally from inclusion. A concurrent mixed methods research design was used to collect quantitative and qualitative data. The instruments used to gather data for this study were a questionnaire, semi-structured interviews, and observations. A total of 102 HH-D students and 42 primary and high school teachers were selected using a simple random sampling technique as participants for the quantitative data. A non-probability sampling technique was also employed to select 14 participants (4 school principals, 6 teachers, and 4 parents of HH-D students), who were interviewed to collect qualitative data. Descriptive and inferential statistical techniques (independent sample t-test, one-way ANOVA, and multiple regression) were employed to analyze the quantitative data; qualitative data were analyzed by theme analysis. The findings reported strong commitment and efforts by individual principals, teachers, and parents toward practicing inclusion of HH-D students effectively; however, most of the core values of inclusion were missing in both schools. Most of the teachers (78.6%) and HH-D students (75.5%) had negative attitudes and considerable reservations about the feasibility of inclusion of HH-D students in both schools. Furthermore, there was a statistically significant difference in attitude toward inclusion between the two schools' teachers, and between teachers who had and had not taken additional training on IE and sign language. The study also indicated a statistically significant difference in attitude toward inclusion between hard of hearing and deaf students. However, the overall contribution of the demographic variables of teachers and HH-D students to their attitude toward inclusion was not statistically significant. The findings also showed that HH-D students did not have access to a modified curriculum which would maximize their abilities and help them learn together with their hearing peers. In addition, there is no clear and adequate direction for the medium of instruction. Poor school organization and management; lack of commitment, financial resources, and collaboration; teachers' inadequate training on Inclusive Education (IE) and sign language; large class sizes; inappropriate assessment procedures; lack of trained deaf adult personnel who can serve as role models for HH-D students; and lack of involvement of parents and community members were some of the major factors that affect the practice of inclusion of HH-D students. Finally, recommendations are made, based on the findings of the study, to improve the practices of inclusion of HH-D students and to make their inclusion an integrated part of Ethiopian education.
Keywords: deaf, hard of hearing, inclusion, regular schools
Procedia PDF Downloads 343
147 A Case Study of Psycho-Social Status of Rohingya Women Refugees Settled in Delhi
Authors: Fizza Saghir
Abstract:
The Rohingyas are an ethnic minority of predominantly Buddhist Myanmar. Living for decades in ghettos in Rakhine, one of the poorest states of Myanmar, they have been marginalized, discriminated against, deprived of basic amenities, and have faced ghastly violations of their rights: politically, socially, economically, and culturally. In 2012, in the violence that erupted between ethnic Rakhine Buddhists and Rohingya Muslims, hundreds of Rohingyas were slain and many more displaced. The state does not recognize them as 'citizens', and the military and police have constantly persecuted them, pushing them to either migrate to other countries, like India and Bangladesh, or die of deprivation. Amid the deadly violence, Rohingya women are the most vulnerable; many of them have faced sexual abuse and gender-based violence. Few studies have examined the plight of Rohingya women refugees in the context of India. Thus, this paper focuses on the psycho-social status of Rohingya women refugees settled in Delhi, India. The research study used both quantitative and qualitative methods. It was explorative in nature and used non-probability sampling, purposive sampling in particular. A sample of 30 Rohingya women refugees was interviewed out of the universe of 45 Rohingya refugee families living in the Kalindi Kunj refugee camp of Delhi, and case studies were developed. The paper explores the psychological and social status of the respondents along with a deep understanding of their issues and concerns. Moreover, it assesses the impact of violence and migration on the respondents. It was found that Rohingya women refugees are deeply and severely affected by a violent past, an insecure present, and an uncertain future. The major problems they face in Delhi, India, are finding employment, a lack of identity cards to avail themselves of government services, the language barrier, and a lack of health and education facilities. All they desire is peace and shelter in India. Recommendations and suggestions have been given to the various stakeholders in the forced mass migration of Rohingya refugees, which include the Government of Myanmar, the Government of India, other nations bordering Myanmar, international NGOs and media, and the Rohingya community itself. Only an immediate, peaceful, and continuous dialogue process can help resolve the issue of the exodus of the Rohingyas. Countries, including India, must come together to help the Rohingyas, who are in need of urgent humanitarian aid and assistance.
Keywords: dialogue process, ethnic minority, forced mass migration, impact of violence and migration, psycho-social status, Rohingya women refugees, sexual abuse
Procedia PDF Downloads 177
146 Fair Federated Learning in Wireless Communications
Authors: Shayan Mohajer Hamidi
Abstract:
Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. 
The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization
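As a rough illustration of the fairness-aware aggregation and privacy-preservation ideas described above, the sketch below up-weights data-poor clients and adds Gaussian noise to each update. The inverse-size weighting rule, the noise mechanism, and all names are illustrative assumptions, not the paper's specification.

```python
# Sketch of fairness-aware federated aggregation with Gaussian-mechanism noise.
import numpy as np

def fair_aggregate(client_updates, client_sizes, noise_std=0.01, rng=None):
    """Aggregate client model updates, up-weighting data-poor clients and
    adding calibrated Gaussian noise to each update for differential privacy."""
    rng = rng or np.random.default_rng(0)
    sizes = np.asarray(client_sizes, dtype=float)
    weights = (1.0 / sizes) / np.sum(1.0 / sizes)  # fewer samples -> larger weight
    agg = np.zeros_like(client_updates[0])
    for w, update in zip(weights, client_updates):
        noisy = update + rng.normal(0.0, noise_std, size=update.shape)
        agg += w * noisy
    return agg

updates = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 4.0)]  # 3 clients
print(fair_aggregate(updates, client_sizes=[1000, 200, 50]))
```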
Procedia PDF Downloads 75
145 Thermal and Visual Comfort Assessment in Office Buildings in Relation to Space Depth
Authors: Elham Soltani Dehnavi
Abstract:
In today’s compact cities, bringing daylight and fresh air into buildings is a significant challenge, but it also presents opportunities to reduce energy consumption in buildings by reducing the need for artificial lighting and mechanical systems. Simple adjustments to building form can contribute to their efficiency. This paper examines how the relationship between the width and depth of rooms in office buildings affects visual and thermal comfort, and consequently energy savings. Based on these evaluations, we can determine the best location for sedentary areas in a room. We can also propose improvements to occupant experience and minimize the difference between the predicted and measured performance of buildings by changing other design parameters, such as natural ventilation strategies, glazing properties, and shading. This study investigates spatial daylighting and thermal comfort conditions for a range of room configurations using computer simulations, then suggests the best depth for optimizing both daylighting and thermal comfort, and consequently energy performance, for each room type. The Window-to-Wall Ratio (WWR) is 40%, with a 0.8 m window sill and a 0.4 m window head. Other fixed parameters were chosen according to building codes and standards, and the simulations are done for Seattle, USA. The simulation results are presented as evaluation grids using thresholds for different metrics: Daylight Autonomy (DA), spatial Daylight Autonomy (sDA), Annual Sunlight Exposure (ASE), and Daylight Glare Probability (DGP) for visual comfort, and Predicted Mean Vote (PMV), Predicted Percentage of Dissatisfied (PPD), occupied Thermal Comfort Percentage (occTCP), over-heated percent, under-heated percent, and Standard Effective Temperature (SET) for thermal comfort, all extracted from Grasshopper scripts. The simulation tools are Grasshopper plugins such as Ladybug, Honeybee, and EnergyPlus. According to the results, some metrics do not change much along the room depth while others change significantly, so the grids can be overlapped in order to determine the comfort zone. The overlapped grids contain 8 metrics, and the pixels that meet all 8 metrics’ thresholds define the comfort zone. With these overlapped maps, we can determine the comfort zones inside rooms and locate sedentary areas there. Other areas can be used for tasks that are not performed permanently, that need lower or higher amounts of daylight, or for which thermal comfort is less critical to user experience. The results can be compiled into a table to be used as a guideline by designers in the early stages of the design process.
Keywords: occupant experience, office buildings, space depth, thermal comfort, visual comfort
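The overlay step can be pictured with a small sketch: threshold each metric grid into pass/fail and intersect them. Only four of the eight metrics are shown, and the grids and threshold values are placeholders rather than Ladybug/Honeybee outputs.

```python
# Sketch of the comfort-zone overlay: logical AND of thresholded metric grids.
import numpy as np

rng = np.random.default_rng(1)
rows, cols = 10, 20                       # evaluation grid over the room plan

# Simulated metric grids (in practice exported from Grasshopper scripts)
metrics = {
    "sDA": (rng.uniform(0, 100, (rows, cols)), lambda v: v >= 50),    # % of hours
    "ASE": (rng.uniform(0, 500, (rows, cols)), lambda v: v <= 250),   # hours/year
    "DGP": (rng.uniform(0, 1,   (rows, cols)), lambda v: v <= 0.40),  # glare prob.
    "PPD": (rng.uniform(0, 100, (rows, cols)), lambda v: v <= 10.0),  # % dissatisfied
}

comfort = np.ones((rows, cols), dtype=bool)
for name, (grid, passes) in metrics.items():
    comfort &= passes(grid)               # a pixel must pass every metric

print(f"{comfort.sum()} of {comfort.size} grid cells fall in the comfort zone")
```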
Procedia PDF Downloads 183
144 Medication Side Effects: Implications on the Mental Health and Adherence Behaviour of Patients with Hypertension
Authors: Irene Kretchy, Frances Owusu-Daaku, Samuel Danquah
Abstract:
Hypertension is the leading risk factor for cardiovascular diseases and a major cause of death and disability worldwide. This study examined whether psychosocial variables influenced patients’ perception and experience of side effects of their medicines, how they coped with these experiences, and the impact on mental health and medication adherence to conventional hypertension therapies. Methods: A hospital-based mixed methods study, using quantitative and qualitative approaches, was conducted on hypertensive patients. Participants were asked about side effects, medication adherence, common psychological symptoms, and coping mechanisms with the aid of standard questionnaires. Information from the quantitative phase was analyzed with the Statistical Package for the Social Sciences (SPSS) version 20. The interviews from the qualitative study were recorded with a digital audio recorder, manually transcribed, and analyzed using thematic content analysis. The themes originated from the participant interviews a posteriori. Results: The experiences of side effects (such as palpitations, frequent urination, recurrent bouts of hunger, erectile dysfunction, dizziness, cough, and physical exhaustion) were categorized as no/low (39.75%), moderate (53.0%), and high (7.25%). Significant relationships between depression (χ² = 24.21, p < 0.0001), anxiety (χ² = 42.33, p < 0.0001), stress (χ² = 39.73, p < 0.0001) and side effects were observed. Adjusted results from a logistic regression model of this association are reported: depression [OR = 1.9 (1.03–3.57), p = 0.04], anxiety [OR = 1.5 (1.22–1.77), p < 0.001], and stress [OR = 1.3 (1.02–1.71), p = 0.04]. Side effects significantly increased the probability of individuals being non-adherent [OR = 4.84 (95% CI 1.07–1.85), p = 0.04], with social factors, media influences, and the attitudes of primary caregivers further explaining this relationship. The personal adoption of medication-modifying strategies, espousing the use of complementary and alternative treatments, and interventions made by clinicians were the main forms of coping with side effects. Conclusions: Results from this study show that, contrary to a purely biomedical approach, the experience of side effects has biological, social, and psychological interrelations. The results offer further support for a multi-disciplinary approach to healthcare, where all forms of expertise are incorporated into health provision and patient care. Additionally, medication side effects should be considered a possible cause of non-adherence among hypertensive patients; addressing this problem from a biopsychosocial perspective in any intervention may improve adherence and, invariably, control blood pressure.
Keywords: biopsychosocial, hypertension, medication adherence, psychological disorders
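For readers unfamiliar with the odds ratios quoted above, the sketch below shows how they fall out of a logistic regression: OR = exp(coefficient). The dataset is synthetic and the predictors are simplified stand-ins; this is not the study's data or model specification.

```python
# Sketch: odds ratios from a logistic regression on synthetic adherence data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 400
side_effects = rng.integers(0, 2, n)   # 1 = moderate/high side-effect experience
depression   = rng.integers(0, 2, n)   # 1 = depressive symptoms present
# Synthetic outcome: non-adherence more likely with side effects / depression
logit = -1.0 + 1.2 * side_effects + 0.6 * depression
non_adherent = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([side_effects, depression])
model = LogisticRegression().fit(X, non_adherent)
for name, beta in zip(["side effects", "depression"], model.coef_[0]):
    print(f"OR for {name}: {np.exp(beta):.2f}")   # exponentiated coefficient
```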
Procedia PDF Downloads 371
143 Seismic Behavior of Existing Reinforced Concrete Buildings in California under Mainshock-Aftershock Scenarios
Authors: Ahmed Mantawy, James C. Anderson
Abstract:
Numerous cases of earthquakes (main-shocks) followed by aftershocks have been recorded in California. In 1992, a pair of strong earthquakes occurred within three hours of each other in Southern California. The first shock occurred near the community of Landers and was assigned a magnitude of 7.3; the second occurred near the city of Big Bear, about 20 miles west of the initial shock, and was assigned a magnitude of 6.2. In the same year, a series of three earthquakes occurred over two days in the Cape Mendocino area of Northern California. The main-shock was assigned a magnitude of 7.0, while the second and third shocks were both assigned a value of 6.6. This paper investigates the effect of a main-shock accompanied by aftershocks of significant intensity on reinforced concrete (RC) frame buildings, characterizing their nonlinear behavior using the PERFORM-3D software. A 6-story building in San Bruno and a 20-story building in North Hollywood were selected for the study, as both have RC moment-resisting frame systems. The buildings are instrumented at multiple floor levels as part of the California Strong Motion Instrumentation Program (CSMIP). Both buildings have recorded responses during past events, such as the Loma Prieta and Northridge earthquakes, which were used in verifying the response parameters of the numerical models in PERFORM-3D. The verification of the numerical models shows good agreement between the calculated and recorded response values. Then, different scenarios of a main-shock followed by a series of aftershocks, taken from real cases in California, were applied to the building models in order to investigate the structural behavior of the moment-resisting frame system. The behavior was evaluated in terms of the lateral floor displacements, the ductility demands, and the inelastic behavior at critical locations. The analysis results showed that permanent displacements may occur due to plastic deformation during the main-shock, which can lead to higher displacements during aftershocks. Also, the inelastic response at plastic hinges during the main-shock can change the hysteretic behavior during the aftershocks. Higher ductility demands can also occur when buildings are subjected to trains of ground motions compared to individual ground motions. A general conclusion is that the occurrence of aftershocks following an earthquake can lead to increased damage within the elements of RC frame buildings. Current code provisions for seismic design do not consider the probability of significant aftershocks when designing a new building in zones of high seismic activity.
Keywords: reinforced concrete, existing buildings, aftershocks, damage accumulation
Procedia PDF Downloads 280
142 Analysis of Correlation Between Manufacturing Parameters and Mechanical Strength Followed by Uncertainty Propagation of Geometric Defects in Lattice Structures
Authors: Chetra Mang, Ahmadali Tahmasebimoradi, Xavier Lorang
Abstract:
Lattice structures are widely used in various applications, especially aeronautic, aerospace, and medical applications, because of their high-performance properties. Thanks to advances in additive manufacturing technology, lattice structures can be manufactured by different methods, such as laser beam melting. However, the presence of geometric defects in lattice structures is inevitable due to the manufacturing process, and these defects may have a high impact on the mechanical strength of the structures. This work analyzes the correlation between the manufacturing parameters and the mechanical strengths of lattice structures. To do that, two types of lattice structures, body-centered cubic with z-struts (BCCZ) structures made of Inconel 718 and body-centered cubic (BCC) structures made of Scalmalloy, are manufactured by a laser beam melting machine using a Taguchi design of experiments. Each structure is placed on the substrate with a specific position and orientation with respect to the roller direction of the deposited metal powder; the position and orientation are considered the manufacturing parameters. The geometric defects of each beam in the lattice are characterized and used to build the geometric model in order to perform simulations. Then, the mechanical strengths are defined by the homogenized response, i.e., Young's modulus and yield strength. The distribution of mechanical strengths is observed as a function of the manufacturing parameters. The mechanical response of the BCCZ structure is stretch-dominated, i.e., the mechanical strengths depend directly on the strengths of the vertical beams. As the geometric defects of the vertical beams change only slightly with their position/orientation on the manufacturing substrate, the mechanical strengths are less dispersed, and the manufacturing parameters have little influence on the mechanical strengths of the BCCZ structure. The mechanical response of the BCC structure is bending-dominated. The geometric defects of the inclined beams are highly dispersed within a structure and also depend on their position/orientation on the manufacturing substrate. For different positions/orientations on the substrate, the mechanical responses are highly dispersed as well, which shows that the mechanical strengths are directly impacted by the manufacturing parameters. In addition, this work studies the uncertainty propagation of the geometric defects onto the mechanical strength of the BCC lattice structure made of Scalmalloy. To do that, we observe the distribution of mechanical strengths of the lattice according to the distribution of the geometric defects. A probability density law is determined based on a statistical hypothesis corresponding to the geometric defects of the inclined beams. Samples of inclined beams are then randomly drawn from the density law to build the lattice structure samples, which are then used in simulations to characterize the mechanical strengths. The results reveal that the distribution of mechanical strengths of structures with the same manufacturing parameters is less dispersed than that of structures with different manufacturing parameters. Nevertheless, the dispersion of mechanical strengths among structures with the same manufacturing parameters is not negligible.
Keywords: geometric defects, lattice structure, mechanical strength, uncertainty propagation
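The uncertainty-propagation step can be pictured with a schematic Monte Carlo sketch: beam diameters are drawn from a fitted probability density and pushed through a strength model. The lognormal law and the toy stiffness surrogate below (a Gibson-Ashby-style diameter scaling) are assumptions standing in for the paper's measured defect statistics and finite-element simulations.

```python
# Monte Carlo propagation of geometric defects through a toy stiffness model.
import numpy as np

rng = np.random.default_rng(7)
n_structures, n_beams = 1000, 32             # samples of BCC-like lattices

# Fitted defect law (illustrative): beam diameter ~ lognormal around 0.5 mm
diameters = rng.lognormal(mean=np.log(0.5), sigma=0.08,
                          size=(n_structures, n_beams))

# Toy surrogate: for a bending-dominated lattice, homogenized stiffness scales
# roughly with the 4th power of beam diameter (replace with an FE solve).
d_nominal, E_nominal = 0.5, 1.0
E = E_nominal * np.mean((diameters / d_nominal) ** 4, axis=1)

print(f"mean stiffness ratio: {E.mean():.3f}, CV: {E.std() / E.mean():.3%}")
```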
Procedia PDF Downloads 123
141 Effect of Velocity-Slip in Nanoscale Electroosmotic Flows: Molecular and Continuum Transport Perspectives
Authors: Alper T. Celebi, Ali Beskok
Abstract:
Electroosmotic (EO) slip flows in nanochannels are investigated using non-equilibrium molecular dynamics (MD) simulations, and the results are compared with the analytical solution of the Poisson-Boltzmann and Stokes (PB-S) equations with a slip contribution. The ultimate objective of this study is to show that the well-known continuum flow model can accurately predict EO velocity profiles in nanochannels using the slip lengths and apparent viscosities obtained from force-driven flow simulations performed at various liquid-wall interaction strengths. EO flow of an aqueous NaCl solution in silicon nanochannels is simulated under realistic electrochemical conditions within the validity region of the Poisson-Boltzmann theory. A physical surface charge density is determined for the nanochannels based on the dissociation of silanol functional groups on the channel surfaces at known salt concentration, temperature, and local pH. First, we present density profiles and ion distributions from equilibrium MD simulations, ensuring that the desired thermodynamic state and ionic conditions are satisfied. Next, force-driven nanochannel flow simulations are performed to predict the apparent viscosity of the ionic solution between charged surfaces and the slip lengths. Parabolic velocity profiles obtained from the force-driven flow simulations are fitted to a second-order polynomial, and the viscosity and slip lengths are quantified by comparing the coefficients of the fitted polynomial with the continuum flow model. The presence of a charged surface increases the viscosity of the ionic solution while decreasing the velocity-slip at the wall. Afterwards, EO flow simulations are carried out under a uniform electric field for different liquid-wall interaction strengths. The velocity profiles present finite slip near the walls, followed by a conventional viscous flow profile in the electrical double layer that reaches a bulk flow region in the center of the channel. The EO flow is enhanced with increased slip at the walls, which depends on the wall-liquid interaction strength and the surface charge. The MD velocity profiles are compared with predictions from analytical solutions of the slip-modified PB-S equation, where the slip length and apparent viscosity values are obtained from force-driven flow simulations in charged silicon nanochannels. Our MD results show good agreement with the analytical solutions at various slip conditions, verifying the validity of the PB-S equation in nanochannels as small as 3.5 nm. In addition, the continuum model normalizes the slip length with the Debye length instead of the channel height, which implies that the enhancement in EO flow is independent of the channel height. Further MD simulations performed at different channel heights also show that the flow enhancement due to slip is independent of the channel height. This is important because slip-enhanced EO flow is observable even in microchannel experiments, using a hydrophobic channel with large slip and high-conductivity solutions with a small Debye length. The present study provides an advanced understanding of EO flows in nanochannels. Correct characterization of nanoscale EO slip flow is crucial to discovering the extent of validity of well-known continuum models, which is required for various applications spanning from ion separation to drug delivery and bio-fluidic analysis.
Keywords: electroosmotic flow, molecular dynamics, slip length, velocity-slip
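The parabolic-fit extraction step can be illustrated with a short sketch: a second-order polynomial is fitted to a force-driven velocity profile, and the apparent viscosity and slip length follow from Poiseuille theory, u(y) = (f/(2μ))(h²/4 − y²) + u_slip. The profile below is synthetic (the study fits MD-binned data), and the parameter values are placeholders.

```python
# Sketch: extract apparent viscosity and slip length from a parabolic fit.
import numpy as np

h, mu_true, Ls_true = 3.5e-9, 1.2e-3, 0.8e-9  # channel height (m), Pa.s, slip (m)
f_body = 1.0e13                                # driving body force density (N/m^3)
s = np.linspace(-0.5, 0.5, 41)                 # dimensionless coordinate y/h
u = (f_body * h**2 / (2 * mu_true) * (0.25 - s**2)
     + f_body * h * Ls_true / (2 * mu_true))   # Poiseuille profile + slip velocity

A, B, C = np.polyfit(s, u, 2)                  # u = A s^2 + B s + C, with s = y/h
mu_fit = -f_body * h**2 / (2 * A)              # curvature gives apparent viscosity
u_wall = A * 0.25 + B * 0.5 + C                # fitted velocity at the wall (s = 1/2)
Ls_fit = -u_wall * h / A                       # slip length = u_wall / |du/dy| at wall
print(f"viscosity: {mu_fit:.2e} Pa.s, slip length: {Ls_fit:.2e} m")
```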
Procedia PDF Downloads 158
140 A Comparison of qCON/qNOX to the Bispectral Index as Indices of Antinociception in Surgical Patients Undergoing General Anesthesia with Laryngeal Mask Airway
Authors: Roya Yumul, Ofelia Loani Elvir-Lazo, Sevan Komshian, Ruby Wang, Jun Tang
Abstract:
BACKGROUND: An objective means of monitoring the anti-nociceptive effects of perioperative medications has long been desired as a way to provide anesthesiologists with information regarding a patient’s level of antinociception and to preclude untoward autonomic responses and reflexive muscular movements from painful stimuli intraoperatively. To this end, electroencephalogram (EEG) based tools, including BIS and qCON, were designed to provide information about the depth of sedation, while qNOX was produced to inform on the degree of antinociception. The goal of this study was to compare the reliability of qCON/qNOX to BIS as specific indicators of response to nociceptive stimulation. METHODS: Sixty-two patients undergoing general anesthesia with LMA were included in this study. Institutional Review Board (IRB) approval was obtained, and informed consent was acquired prior to patient enrollment. Inclusion criteria were American Society of Anesthesiologists (ASA) class I-III, 18 to 80 years of age, and either gender. Exclusion criteria included the inability to consent. Withdrawal criteria included conversion to an endotracheal tube and EEG malfunction. BIS and qCON/qNOX electrodes were simultaneously placed on all patients prior to induction of anesthesia and were monitored throughout the case, along with other perioperative data, including patient response to noxious stimuli. All intraoperative decisions were made by the primary anesthesiologist without influence from qCON/qNOX. Student’s t-distribution, prediction probability (PK), and ANOVA were used to statistically compare the relative ability of each index to detect nociceptive stimuli. Twenty patients were included in the preliminary analysis. RESULTS: A comparison of overall intraoperative BIS, qCON, and qNOX indices demonstrated no significant difference between the three measures (N = 62, p > 0.05). Meanwhile, index values for qNOX (62±18) were significantly higher than those for BIS (46±14) and qCON (54±19) immediately preceding patient responses to nociceptive stimulation in a preliminary analysis (N = 20, p = 0.0408). Notably, certain hemodynamic measurements demonstrated a significant increase in response to painful stimuli (MAP increased from 74±13 mm Hg at baseline to 84±18 mm Hg during noxious stimuli [p = 0.032], and HR from 76±12 BPM at baseline to 80±13 BPM during noxious stimuli [p = 0.078], respectively). CONCLUSION: In this observational study, BIS and qCON/qNOX provided comparable information on patients’ level of sedation throughout the course of the anesthetic. Meanwhile, increases in qNOX values demonstrated a superior correlation with an imminent response to stimulation relative to all other indices.
Keywords: antinociception, BIS, general anesthesia, LMA, qCON/qNOX
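For readers unfamiliar with the prediction probability (PK) statistic used above: for a binary response event, PK reduces to pairwise concordance between index values and event labels (ties counted as one half), equivalent to the area under the ROC curve. The sketch below illustrates this computation with made-up numbers, not the study's data.

```python
# Sketch: prediction probability (PK) as pairwise concordance for a binary event.
import numpy as np

def prediction_probability(index_values, event):
    """PK: probability that the index correctly orders a random
    (event, non-event) pair; 0.5 = chance, 1.0 = perfect prediction."""
    x = np.asarray(index_values, dtype=float)
    e = np.asarray(event, dtype=bool)
    pos, neg = x[e], x[~e]
    concordant = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (concordant + 0.5 * ties) / (len(pos) * len(neg))

qnox = [70, 64, 58, 66, 49, 45, 52, 40]   # index values before stimulation
moved = [1, 1, 0, 1, 0, 0, 1, 0]          # 1 = patient responded to stimulus
print(f"PK = {prediction_probability(qnox, moved):.2f}")
```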
Procedia PDF Downloads 137
139 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario
Authors: Dipankar Saha, J. P. Singh, C. B. Pandey
Abstract:
The Indian Thar desert, the seventh largest in the world and the country's main hot sand desert, occupies nearly 385,000 km2 (about 9% of the area of the country) and harbours a flora of 682 species (63 introduced species) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar desert is 6.4 percent, which is relatively higher than that of the Sahara desert and very significant for conservationists to envisage. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a discipline of basic science with multiple applications. Aichi Target 19, an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in a rapidly changing world. The efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers, and funding agencies, through the knowledge and application of biodiversity informatics. Herbarium specimens being a vital repository for biodiversity conservation, especially in a climate change scenario, the digitization process usually aims to improve access and to preserve delicate specimens, in doing so creating large sets of images as part of the existing repository, an arid plant information facility for long-term future usage. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens. As part of this activity, laminar characterization (leaves being among the most important characters for assessing climate change impact) initially resulted in the classification of more than a thousand collections belonging to ten families, such as Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae, and Bignoniaceae. Taxonomic diversity indices have also been worked out, this being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows that incorporate automated systems, enabling the digitization effort to be expanded and sped up. The digitization workflows are built on a modular system with the potential to be scaled up; they are being developed with a geo-referencing tool and additional quality-control elements, finally placing specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics and to present the effort of database development for the existing botanical collection in the institute repository. This effort is expected to form part of various global initiatives toward an effective biodiversity information facility. It will enable access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic situations of the world.
Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface
Procedia PDF Downloads 229
138 Improving Predictions of Coastal Benthic Invertebrate Occurrence and Density Using a Multi-Scalar Approach
Authors: Stephanie Watson, Fabrice Stephenson, Conrad Pilditch, Carolyn Lundquist
Abstract:
Spatial data detailing both the distribution and density of functionally important marine species are needed to inform management decisions. Species distribution models (SDMs) have proven helpful in this regard; however, models often focus only on species occurrences derived from spatially expansive datasets and lack the resolution and detail required to inform regional management decisions. Boosted regression trees (BRT) were used to produce high-resolution SDMs (250 m) at two spatial scales predicting the probability of occurrence, abundance (count per sample unit), density (count per km2), and uncertainty for seven coastal seafloor taxa that vary in habitat usage and distribution, to examine prediction differences and implications for coastal management. We investigated whether small-scale, regionally focussed models (82,000 km2) can provide improved predictions compared to data-rich national-scale models (4.2 million km2). We explored the variability in predictions across model type (occurrence vs. abundance) and model scale to determine whether specific taxa models or model types are more robust to geographical variability. National-scale occurrence models correlated well with broad-scale environmental predictors, resulting in higher AUC (area under the receiver operating characteristic curve) and deviance explained scores; however, they tended to overpredict in the coastal environment and lacked spatially differentiated detail for some taxa. Regional models had lower overall performance, but for some taxa, spatial predictions were more differentiated at a localised ecological scale. National density models were often spatially refined and highlighted areas of ecological relevance, producing more useful outputs than regional-scale models. The utility of a two-scale approach aids the selection of the most optimal combination of models to create a spatially informative density model, as results contrasted for specific taxa between model type and scale. However, it is vital that robust predictions of occurrence and abundance are generated as inputs for the combined density model, as areas that do not spatially align between models can be discarded. This study demonstrates the variability in SDM outputs created over different geographical scales and highlights implications and opportunities for managers utilising these tools for regional conservation, particularly in data-limited environments.
Keywords: benthic ecology, spatial modelling, multi-scalar modelling, marine conservation
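To make the modelling workflow concrete, the sketch below fits a BRT-style occurrence model and evaluates it with AUC. scikit-learn's gradient boosting stands in for the R gbm/dismo BRTs typically used in this literature, and the predictors and species response are synthetic placeholders.

```python
# Sketch: boosted-tree occurrence model for a benthic taxon, evaluated by AUC.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
depth = rng.uniform(0, 200, n)      # environmental predictors (made up)
mud = rng.uniform(0, 100, n)
temp = rng.uniform(8, 20, n)
# Synthetic occurrence: a taxon preferring shallow, muddy, warmer sites
p = 1 / (1 + np.exp(-(1.5 - 0.03 * depth + 0.02 * mud + 0.1 * (temp - 14))))
occurrence = rng.random(n) < p

X = np.column_stack([depth, mud, temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, occurrence, random_state=0)
brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                 max_depth=3).fit(X_tr, y_tr)
print(f"AUC: {roc_auc_score(y_te, brt.predict_proba(X_te)[:, 1]):.3f}")
```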
Procedia PDF Downloads 77
137 Health-Related Quality of Life of Caregivers of Institution-Reared Children in Metro Manila: Effects of Role Overload and Role Distress
Authors: Ian Christopher Rocha
Abstract:
This study aimed to determine the association of the quality of life (QOL) of caregivers of children in need of special protection (CNSP) in child-caring institutions in Metro Manila with their levels of role overload (RO) and role distress (RD). The CNSP in this study covered orphaned, abandoned, abused, neglected, exploited, and mentally challenged children. The domains of QOL included physical health (PH), psychological health, social health (SH), and living conditions (LC). The study also intended to ascertain the association of the caregivers' personal and work-related characteristics with their RO and RD levels. The respondents were 130 CNSP caregivers in 17 residential child-rearing institutions in Metro Manila, selected through purposive non-probability sampling. Using a quantitative methodological approach, the survey method was utilized to gather data with a self-administered structured questionnaire. Data were analyzed using both descriptive and inferential statistics. Results revealed that the level of RO, the level of RD, and the QOL of the CNSP caregivers were all moderate. The data also suggested significant positive relationships between the RO level and caregivers' characteristics such as age, number of trainings, and years of service in the institution. At the same time, the findings revealed significant positive relationships between the RD level and caregivers' characteristics such as age and hours of care rendered to their care recipients. In addition, the findings suggested that all domains of QOL were significantly related to the RO level. For the correlations between the RO level and the QOL domains, PH and LC obtained moderate negative correlations with the RO level, while the rest of the domains obtained weak negative correlations. For the correlations between the RD level and the QOL domains, all domains except SH obtained strong negative correlations with the level of RD; SH showed a moderate negative correlation. In conclusion, caregivers who are older experience higher levels of RO and RD; caregivers who have more trainings and years of service experience higher levels of RO; and caregivers who render longer hours of care experience higher levels of RD. In addition, the study affirmed that when the levels of RO and RD are high, the QOL is low, and vice versa; therefore, the RO and RD levels are reliable predictors of the caregivers' QOL. In relation to this, the caregiving situation in the Philippines proved to be unique and distinct from that of other countries, as the levels of RO and RD and the QOL of Filipino CNSP caregivers were all moderate, in contrast with their foreign counterparts, who experience high caregiving RO and RD leading to low QOL.
Keywords: quality of life, caregivers, children in need of special protection, physical health, psychological health, social health, living conditions, role overload, role distress
Procedia PDF Downloads 211
136 Using Soil Texture Field Observations as Ordinal Qualitative Variables for Digital Soil Mapping
Authors: Anne C. Richer-De-Forges, Dominique Arrouays, Songchao Chen, Mercedes Roman Dobarco
Abstract:
Most digital soil mapping (DSM) products rely on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs. However, many other observations (often qualitative, nominal, or ordinal) could be used as proxies of lab measurements or as input data for ML or PTF predictions. DSM and ML are briefly described, with some examples taken from the literature. We then explore the potential of an ordinal qualitative variable, the hand-feel soil texture (HFST), which estimates the mineral particle-size distribution (PSD), i.e., the % of clay (0-2 µm), silt (2-50 µm), and sand (50-2000 µm), in 15 classes. The PSD can also be measured in the lab (LAST) to determine the exact proportion of these particle sizes. However, due to cost constraints, HFST observations are much more numerous and spatially dense than LAST ones. Soil texture (ST) is a very important soil parameter to map, as it controls many soil properties and functions. An essential question therefore arises: is it possible to use HFST as a proxy of LAST for the calibration and/or validation of DSM predictions of ST? To answer this question, the first step is to compare HFST with LAST on a representative set where both types of information are available. This comparison was made on ca. 17,400 samples representative of a French region (34,000 km2). The accuracy of HFST was assessed, and each HFST class was characterized by a probability distribution function (PDF) of its LAST values. This makes it possible to randomly replace HFST observations with LAST values while respecting the previously calculated PDF, resulting in a very large increase in the observations available for the calibration or validation of PTF and ML predictions. Some preliminary results are shown. First, the comparison between HFST classes and LAST analyses showed that accuracies could be considered very good compared to other studies. The causes of some inconsistencies were explored, and most of them were well explained by other soil characteristics. We then show some examples applying these relationships, and the resulting increase in data, to several issues related to DSM. The first issue is: do the established PDFs enable the use of HFST class observations to improve the prediction of LAST soil texture? For this objective, we replaced all topsoil HFST observations with values drawn from the PDFs (100 replicates). Results were promising for the PTF we tested (a PTF predicting soil water holding capacity). For the question related to the ML prediction of LAST soil texture over the region, we did the same kind of replacement but implemented a 10-fold cross-validation using points where we had LAST values. We obtained only preliminary results, but they were rather promising. We then show another example illustrating the potential of using HFST as validation data. As HFST observations are very numerous in many countries, these promising results pave the way to an important improvement of DSM products in all the countries of the world.
Keywords: digital soil mapping, improvement of digital soil mapping predictions, potential of using hand-feel soil texture, soil texture prediction
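The replacement step lends itself to a minimal sketch: each HFST class is characterized by a distribution of lab-measured clay contents, and HFST observations are replaced by random draws from their class PDF. The class names, means, and the truncated-normal form below are invented for illustration; the study fits its PDFs empirically from the 17,400-sample comparison set.

```python
# Sketch: stochastic replacement of HFST class observations by LAST-like values.
import numpy as np

rng = np.random.default_rng(11)

# Per-class clay-content PDFs (illustrative truncated normals: mean, sd in % clay)
class_pdfs = {"sandy loam": (12.0, 3.0), "loam": (20.0, 4.0), "clay loam": (32.0, 5.0)}

def draw_last(hfst_class, size=1):
    """Draw LAST-like clay contents from the PDF of the given HFST class."""
    mean, sd = class_pdfs[hfst_class]
    return np.clip(rng.normal(mean, sd, size), 0.0, 100.0)  # keep within [0, 100] %

hfst_obs = ["loam", "sandy loam", "clay loam", "loam"]
replicate = np.concatenate([draw_last(c) for c in hfst_obs])
print(np.round(replicate, 1))   # one stochastic replicate; repeat e.g. 100 times
```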
Procedia PDF Downloads 225
135 Estimation of the Dynamic Fragility of Padre Jacinto Zamora Bridge Due to Traffic Loads
Authors: Kimuel Suyat, Francis Aldrine Uy, John Paul Carreon
Abstract:
The Philippines, composed of many islands, is connected by approximately 8,030 bridges. Continuous evaluation of the structural condition of these bridges is needed to safeguard the safety of the general public. With most bridges reaching their design life, retrofitting and replacement may be needed. Concerned government agencies allocate huge costs for the periodic monitoring and maintenance of these structures. The rising volume of traffic and the aging of these infrastructures are challenging structural engineers to develop structural health monitoring techniques. Numerous techniques have already been proposed, and some are now being employed in other countries; vibration analysis is one of them. The natural frequency and vibration of a bridge are design criteria for ensuring the stability, safety, and economy of the structure. The natural frequency must not be so low that the structure is too flexible, causing discomfort, nor so high that the structure is so stiff as to be both costly and heavy. It is well known that the stiffer a member is, the more load it attracts. The frequency must also not match the vibration caused by the traffic loads; if this happens, resonance occurs. Vibration that matches a system's frequency will generate excitation, and when this exceeds the member's limit, a structural failure will happen. This study presents a method for calculating dynamic fragility through the use of a vibration-based monitoring system. Dynamic fragility is the probability that a structural system exceeds a limit state when subjected to dynamic loads. The bridge is modeled in SAP2000 based on the available construction drawings provided by the Department of Public Works and Highways, and the model was verified and adjusted based on the actual condition of the bridge. The bridge design specifications are also checked using nondestructive tests. The approach used in this method properly accounts for the uncertainty of observed values and code-based structural assumptions. The vibration response of the structure due to actual loads is monitored using sensors installed on the bridge. Once these dynamic characteristics of the system are determined, threshold criteria can be established and fragility curves can be estimated. This study, conducted as part of the research project between the Department of Science and Technology, Mapúa Institute of Technology, and the Department of Public Works and Highways, known as the Mapúa-DOST Smart Bridge Project, deploys structural health monitoring sensors at Zamora Bridge. The bridge was selected in coordination with the Department of Public Works and Highways, and its structural plans are readily available.
Keywords: structural health monitoring, dynamic characteristic, threshold criteria, traffic loads
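As background on the final step, dynamic fragility curves are commonly estimated by modeling the probability of exceeding a limit state as a lognormal CDF of the demand measure. The sketch below illustrates that standard form; the threshold, median, and dispersion values are placeholders, not the Zamora Bridge's monitored values.

```python
# Sketch: lognormal fragility curve, P(limit state exceeded | demand level).
import numpy as np
from scipy.stats import norm

def fragility(demand, median, beta):
    """Probability of exceeding the limit state at a given demand level,
    using the common lognormal model with median capacity and dispersion beta."""
    return norm.cdf(np.log(np.asarray(demand, dtype=float) / median) / beta)

demand = np.linspace(0.05, 2.0, 5)     # e.g., peak deck displacement (cm)
print(np.round(fragility(demand, median=0.8, beta=0.5), 3))
```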
Procedia PDF Downloads 270134 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to support smoking cessation due to their ability to deliver nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine, and flavoring agents). However, caution is still needed when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and the associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in the different regions of the human respiratory tract. On another note, low actuation power is beneficial in aerosol-generating devices since it reduces the emission of toxic chemicals. For e-cigarettes, heating powers below 10 W can be considered low compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Because of its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSD and velocities of e-cigarettes under a standard testing condition at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of undiluted aerosols of a latest fourth-generation e-cigarette at low powers, within 6.5 W, using a real-time particle counter (time-of-flight method). Also, the temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets are examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, applications of PDA to e-cigarette aerosol measurement are rarely reported. Preliminary results from the time-of-flight measurements of undiluted aerosols showed that increasing the heating power from 3.5 W to 6.5 W enhanced the asymmetry of the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation, and coagulation processes induced by the higher heating power. A novel mathematical expression combining exponential, Gaussian, and polynomial (EGP) distributions was proposed and successfully described the asymmetric PSD. The count median aerodynamic diameter and the geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, as the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical decay of the centerline streamwise mean velocity of the aerosol jet, along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure, and a discussion of the results will be provided. Particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity, and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices. Keywords: E-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
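For a log-normal PSD, the count median diameter and geometric standard deviation reported above follow from the count-weighted statistics of ln(d). A minimal sketch, with hypothetical binned counts chosen only to land near the reported range:

```python
import numpy as np

def count_median_and_gsd(diameters_um, counts):
    # log-normal statistics of a counted size distribution:
    # CMD = exp(count-weighted mean of ln d), GSD = exp(std of ln d)
    ln_d = np.log(diameters_um)
    mean_ln = np.average(ln_d, weights=counts)
    var_ln = np.average((ln_d - mean_ln) ** 2, weights=counts)
    return np.exp(mean_ln), np.exp(np.sqrt(var_ln))

# hypothetical bin centers (um) and particle counts around ~0.7 um
bins = np.array([0.3, 0.45, 0.6, 0.75, 0.9, 1.1])
counts = np.array([120, 480, 900, 700, 260, 60])
cmd, gsd = count_median_and_gsd(bins, counts)
print(f"CMD ~ {cmd:.2f} um, GSD ~ {gsd:.2f}")
```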
Procedia PDF Downloads 49133 Forensic Investigation: The Impact of Biometric-Based Solution in Combatting Mobile Fraud
Authors: Mokopane Charles Marakalala
Abstract:
Research shows that mobile fraud grew exponentially in South Africa during the lockdown caused by the COVID-19 pandemic. According to the South African Banking Risk Information Centre (SABRIC), fraudulent online banking and transactions led to a sharp increase in cybercrime since the beginning of the lockdown, resulting in huge losses to the banking industry in South Africa. While the Financial Intelligence Centre Act, 38 of 2001, regulates financial transactions, it is evident that criminals are using technology to their advantage. Money-laundering ranks among the major crimes, not only in South Africa but worldwide. This paper focuses on the impact of biometric-based solutions in combatting mobile fraud, drawing on the experience of SABRIC. A successful mobile fraud poses serious challenges: cybercriminals can hijack a mobile device and use it to gain access to sensitive personal data and accounts. Cybercriminals constantly trawl the depths of cyberspace in search of victims to attack. Millions of people worldwide use online banking to perform their regular bank-related transactions quickly and conveniently, yet SABRIC regularly highlights incidents of mobile fraud, corruption, and maladministration, and many customers fail to secure their online banking and are vulnerable to falling prey to scams such as mobile fraud. Criminals have exploited digital platforms ever since the technology developed. In 2017, 13,438 incidents involving banking apps, internet banking, and mobile banking caused the sector to suffer gross losses of more than R250,000,000, with the affected parties forced to point fingers at one another while the fraudster makes off with the money. Non-probability (purposive) sampling was used to select the participants, and data were collected through telephone calls and virtual interviews. The results indicate a relationship between remote online banking and the increase in money-laundering, as the system allows transactions to take place with limited verification processes. This paper highlights the importance of developing prevention mechanisms, capacity development, and strategies for both financial institutions and law enforcement agencies in South Africa to reduce crimes such as money-laundering. The researcher recommends that awareness among bank staff be increased through the provision of requisite and adequate training. Keywords: biometric-based solution, investigation, cybercrime, forensic investigation, fraud, combatting
Procedia PDF Downloads 101132 The Impact of Formulate and Implementation Strategy for an Organization to Better Financial Consequences in Malaysian Private Hospital
Authors: Naser Zouri
Abstract:
Purpose: Measures of strategy formulation and implementation reflect market-based strategic management categories such as courtesy, competence, and compliance, which build loyalty within the financial ecosystem and help correct marketplace errors relative to fair-trade practice. Finding: The findings show that the ability of executive-level management to motivate and to make better decisions helps resolve problems in the business organization, and suggest an appropriate level for each intervention policy for a hypothetical household. Methodology/design: A questionnaire was used for data collection in both the pilot test and the main study. The questionnaire was distributed among finance employees, and respondents were selected through non-probability (convenience) sampling. Administering the questionnaire in person kept costs manageable, although collecting data from the hospital proved very difficult. The survey items covered implementation strategy, environment, supply chain, and employees (the impact of implementation strategy on financial consequences), as well as strategy formulation, comprehensiveness of strategic design, and organizational performance (the impact of strategy formulation on financial consequences). Practical implication: A dynamic-capability view of strategy formulation and implementation focuses on the firm-specific processes through which firms integrate, build, or reconfigure resources, which is valuable for making a theoretical contribution. Originality/value of research: Going beyond the current discussion, we show that case studies have the potential to extend and refine theory; we shed new light on how dynamic capabilities can benefit from case study research by discovering the qualifications that shape the development of capabilities and determining the boundary conditions of the dynamic capabilities approach. Limitation of the study: The present study relies on a survey methodology for data collection, and responses from finance employees were difficult to obtain because of workplace constraints. Keywords: financial ecosystem, loyalty, Malaysian market error, dynamic capability approach, rate-market, optimization intelligence strategy, courtesy, competence, compliance
Procedia PDF Downloads 304131 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion
Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan
Abstract:
In this paper, different mathematical models that can be used as prediction tools to assess the time to cracking of reinforced concrete (RC) due to corrosion are reviewed. This review leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend upon the mechanical behaviors, chemical behaviors, electrochemical behaviors, or geometric aspects of the RC members during a corrosion process. The experimental program is designed to verify the accuracy of a mathematical model selected from a rigorous literature study. The program covers both one-dimensional chloride diffusion, using RC square slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using RC square column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c), 0.4, 0.5, and 0.6, and two cover depths, 25 mm and 50 mm. 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All the samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, the selected mathematical model includes mechanical properties, chemical and electrochemical properties, the nature of the corrosion (whether accelerated or natural), and the amount of porous area that rust products can fill before exerting expansive pressure on the surrounding concrete. The experimental results show that the selected model achieved accuracies of about ±20% for one-dimensional and ±10% for two-dimensional chloride diffusion compared to the experimental output. Half-cell potential readings are also used to assess the corrosion probability, and the experimental results show that the mass loss is proportional to the negative half-cell potential readings obtained. Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time to corrode the reinforcement in the concrete due to chloride diffusion. The factors considered for this analysis are w/c, bar diameter, and cover depth. The analysis, performed using the Minitab statistical software, shows that cover depth has a more significant effect on the time to cracking from chloride-induced corrosion than the other factors considered. Thus, time predictions can be made with the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to assess in advance the durability of RC structures vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability with respect to chloride diffusion. Keywords: accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion
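Although the selected model is more elaborate, the classical starting point for such predictions is the initiation period from Fick's second law, where the chloride profile is C(x,t) = Cs·erfc(x/(2√(Dt))) and the time for the cover-depth concentration to reach a critical fraction of the surface concentration follows in closed form. A sketch for the study's two cover depths, with an assumed diffusion coefficient and critical ratio (round illustrative numbers, not the paper's values):

```python
import numpy as np
from scipy.special import erfcinv

def initiation_time_years(cover_mm, D_mm2_per_year, c_ratio):
    """Time for chlorides to reach the critical fraction c_ratio = Ccrit/Cs
    at the cover depth, from C(x,t) = Cs*erfc(x / (2*sqrt(D*t)))."""
    return (cover_mm / (2.0 * erfcinv(c_ratio))) ** 2 / D_mm2_per_year

# the study's two cover depths; D and Ccrit/Cs are assumed round numbers
for cover in (25.0, 50.0):
    t = initiation_time_years(cover, D_mm2_per_year=30.0, c_ratio=0.1)
    print(f"cover {cover:.0f} mm -> ~{t:.1f} years to depassivation")
```

Doubling the cover quadruples the initiation time in this model, which is consistent with the study's statistical finding that cover depth dominates the time to cracking.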
Procedia PDF Downloads 218130 CT Images Based Dense Facial Soft Tissue Thickness Measurement by Open-source Tools in Chinese Population
Authors: Ye Xue, Zhenhua Deng
Abstract:
Objectives: Facial soft tissue thickness (FSTT) data can be obtained from CT scans by measuring face-to-skull distances at sparsely distributed anatomical landmarks located manually on the face and skull. However, with the development of computer-assisted imaging technologies, automated measurement over dense points on 3D facial and skull models using open-source software has become a viable option. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Establishing a comprehensive, detailed, and densely calculated FSTT database is therefore crucial for enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants originally born and residing in northern China and 80 in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Samples were also divided into three categories based on BMI. The 3D Slicer software was used to segment bone and soft tissue based on different Hounsfield unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. Subsequent procedures were performed using MeshLab, including converting the face models into hollowed, cropped surface models and automatically measuring the Hausdorff distance (taken as the FSTT) between the skull and face models. Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distribution could be viewed and subdivided into smaller increments, and all PLY files were visualized with the Hausdorff distance value of each vertex. Basic descriptive statistics (mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analyzed with respect to sex, age, BMI, and birthplace. Statistical methods employed included multiple regression analysis, ANOVA, and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the PCA results. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in the forehead, orbital, mandibular, and zygomatic regions. Specifically, there are distribution variances in the depth range of 20-30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead compared to southern males, while females show fewer distribution differences between north and south, except for the zygomatic region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that open-source tools perform well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation. Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool
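The per-vertex face-to-skull distance that MeshLab's Hausdorff filter produces can be approximated on point clouds with a nearest-neighbour query. A minimal sketch, using random point clouds in place of the reconstructed meshes (a point-to-point approximation of the point-to-surface distance MeshLab computes; all values are synthetic):

```python
import numpy as np
from scipy.spatial import cKDTree

def face_to_skull_distances(face_vertices, skull_vertices):
    """Nearest-neighbour distance from every face vertex to the skull cloud,
    taken here as the FSTT at that vertex (a point-to-point approximation of
    MeshLab's vertex-to-surface Hausdorff sampling)."""
    dist, _ = cKDTree(skull_vertices).query(face_vertices)
    return dist

# hypothetical point clouds standing in for the reconstructed surface models
rng = np.random.default_rng(1)
skull = 50.0 * rng.normal(size=(2000, 3))
face = skull + rng.normal(scale=5.0, size=skull.shape)   # ~5 mm soft tissue
fstt = face_to_skull_distances(face, skull)
print(f"mean {fstt.mean():.1f} mm, max {fstt.max():.1f} mm")
```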
Procedia PDF Downloads 58129 Monitoring of Serological Test of Blood Serum in Indicator Groups of the Population of Central Kazakhstan
Authors: Praskovya Britskaya, Fatima Shaizadina, Alua Omarova, Nessipkul Alysheva
Abstract:
The planned preventive vaccination carried out in the Republic of Kazakhstan has promoted a sustained decrease in the incidence of measles and viral hepatitis B (VHB). Young, working-age people predominate among VHB patients. Monitoring of infectious incidence, monitoring of the population's immunization coverage, and random serological control of immunity enable well-timed identification of the pathogen's distribution, assessment of the effectiveness of the measures taken, and forecasting. Serological blood analysis was conducted in indicator groups of the population of Central Kazakhstan for the purpose of identifying antibody titres for vaccine-preventable infections (measles, viral hepatitis B). Measles antibodies were determined by the enzyme-linked assay (ELA) method with the 'VektoKor' IgG test system ('Vektor-Best' JSC). Antibodies to the HBs antigen of the hepatitis B virus in blood serum were identified by the ELA method with the VektoHBsAg antibody test system ('Vektor-Best' JSC). A result was considered positive if the concentration of IgG to the measles virus in the studied sample was 0.18 IU/ml or more; the protective level of anti-HBsAg is 10 mIU/ml. The results of the study of postvaccinal measles immunity showed that seropositive people made up 87.7% of the total number surveyed. The level of postvaccinal immunity to measles differs between age groups: among people older than 56, the percentage of seropositive individuals was 95.2%; among those aged 15-25 it was 87.0%, and at ages 36-45 it was 86.6%. In the age groups 25-35 and 36-45, the share of seropositive people was approximately the same, 88.5% and 88.8%, respectively. The share of people seronegative to the measles virus was 12.3%, with the largest shares found among those aged 36-45 (13.4%) and 15-25 (13.0%). The analysis of postvaccinal immunity to viral hepatitis B showed that only 33.5% of all those examined have the protective level of anti-HBsAg of 10 mIU/ml or more. The largest share of people protected from the VHB virus is observed in the 36-45 age group, at 60%. In the indicator group above 56, seropositive people made up 4.8%. A high percentage of seronegative people was observed in all studied age groups, from 40.0% to 95.2%. The group least protected from contracting VHB is people above 56 (95.2% seronegative). The probability of contracting VHB is also high among young people aged 25-35, where the percentage of seronegative people was 80%. Thus, the results of the conducted research testify to the need for serological monitoring of postvaccinal immunity for the purpose of operational assessment of the epidemiological situation, early identification of its changes, and prediction of approaching danger. Keywords: antibodies, blood serum, immunity, immunoglobulin
Procedia PDF Downloads 255128 Movie and Theater Marketing Using the Potentials of Social Networks
Authors: Seyed Reza Naghibulsadat
Abstract:
The nature of communication encompasses various forms of media production, including film and theater. Since social networks emerged, they have brought their own communication capabilities: speed, public access, the absence of a media organization, large-scale content production, and the development of critical thinking. They also offer the ability to broaden access to all kinds of media productions, including movies and theater shows, although this works differently in different conditions and communities. In terms of scale, film reaches a more general audience, while theater has a more specialized one. The film industry is more developed, being based on more modern technologies, whereas theater, based on older modes of communication, carries more intimate and emotional aspects. In general, however, the main focus is the development of access to movies and theater shows, which those involved in this field emphasize because of the capabilities of social networks. In this research, we examine these two areas and the relevant components for each through social networks, as well as the points they have in common. The main goal of this research is to identify the strengths and weaknesses of using social networks for the marketing of movies and theater shows, while also considering the opportunities and threats in this field. With the emergence of social networks, the attractions of these two types of media production, and their capacity to change position, can provide the opportunity to become media with greater reach and higher profitability; but the main consideration is the opinions about these capabilities and the ability to use them for film and theater marketing. The main questions of the research are: what are the marketing components for movies and theaters using social media capabilities? What are their strengths and weaknesses? And what opportunities and threats face this market? The research was carried out with two methods, SWOT and meta-analysis, using non-probability purposive sampling. The results show that the recent approach is one based on eliminating threats and weaknesses, emphasizing strengths, and exploiting opportunities to develop film and theater marketing based on the capabilities of social networks, within the framework of local cultural values, while presenting achievements on an international or universal scale. This leads to the introduction of authentic Iranian culture to foreign enthusiasts through movies and theater art. Therefore, according to the results obtained from the respondents, the model for using the capabilities of social networks for movie or theater marketing is one based on SO (strengths-opportunities) strategies, in other words, offensive strategies, so that it can take advantage of internal strengths and make maximum use of external situations and opportunities to develop the reach of movies and theater performances. Keywords: marketing, movies, theatrical show, social network potentials
Procedia PDF Downloads 76127 i2kit: A Tool for Immutable Infrastructure Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and the time to market of business logic. On the other hand, these architectures also introduce challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, and data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required for ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more important disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). The system is also more secure, since linuxkit installs only the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute. Keywords: container, deployment, immutable infrastructure, microservice
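The transformation the prototype performs can be pictured as a pure function from a declarative service definition to a cloud template. The sketch below is a deliberately simplified model of that idea; the field names and template shape are illustrative assumptions, not i2kit's real schema or its CloudFormation output.

```python
def to_cloud_template(service):
    """Render one declarative microservice (a pod of containers) into a
    pseudo cloud template: an immutable VM group plus a load balancer."""
    name = service["name"]
    return {
        "vm_group": {
            # containers are baked into a minimal linuxkit machine image,
            # so the VM itself is the immutable unit of deployment
            "image": f"linuxkit-image-for-{name}",
            "count": service.get("replicas", 1),
        },
        "load_balancer": {"ports": service["ports"]},
        # the endpoint is published so other services can discover this one
        # through an environment variable
        "exports": {f"{name.upper()}_ENDPOINT": f"http://{name}.lb.example"},
    }

api = {"name": "api", "replicas": 2, "ports": [80],
       "containers": [{"image": "myorg/api:1.0"}]}
print(to_cloud_template(api))
```

A redeploy replaces the whole VM group with images built from the new definition, rather than mutating running servers, which is the immutable infrastructure pattern the paper advocates.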
Procedia PDF Downloads 179126 Transcriptomic and Translational Regulation of Peroxisome Proliferator-Activated Receptors after Different Feedings in Salmon
Authors: Mahsa Jalili, Essa Ehsan Khan, Signe Dille Lovmo, Augustine Akruwe, Egil Lien, Rolf Erik Olsen, Trygve Sigholt, Atle Magnus Bones
Abstract:
Data from the Norwegian Directorate of Fisheries show that >1.2 million tons of Atlantic salmon were produced by the Norwegian aquaculture industry in 2016. Peroxisome proliferator-activated receptors (PPARs) are one of the key transcription factor families that respond to nutritional ligands, and recent studies have shown the connection between PPARs and lipid and carbohydrate metabolism in aquaculture. To our knowledge, there are no published data on the effects of krill meal, soybean meal, Bactocell®, and butyrate feedings, compared to a control group, on PPAR gene and protein expression in Atlantic salmon. Fish (one-year-old post-smolts, average weight 250 g) were cultured for 12 weeks following a two-week acclimatization on a control commercial feed after hatchery. Water oxygen level, salinity, and temperature were monitored every second day. At the end of the trial, fish were taken randomly from the tanks, and four replicates per group were collected and stored at -80 °C until analysis. Total RNA was extracted from the posterior dorsal muscle tissue, and NanoDrop and Bioanalyzer instruments were used to check RNA quality. Gene expression of PPAR α, β, and γ was determined by RT-PCR. The expression of the genes of interest was measured relative to the control group after normalization to three reference genes. Total protein concentration was calculated by the Bradford method, and protein expression was determined with a primary PPARγ antibody by western blot. All data were analyzed by ANOVA followed by Benjamini-Hochberg and Bonferroni tests; probability values <0.05 were considered significant. The Bactocell® and butyrate groups showed significantly lower PPARα expression. PPARβ and γ did not differ significantly among groups. PPARγ mRNA expression was approximately consistent with the protein expression pattern, except that the butyrate group showed a lower mRNA level. The order of PPARγ expression was Bactocell® > soy meal > butyrate > krill meal > control, while PPARβ gene expression decreased in the order soy meal > butyrate > krill meal > Bactocell® > control. In conclusion, the increased expression of PPARγ and α is proposed to represent a tendency toward reduced lipid storage in fish fed Bactocell®, butyrate, soy, and krill meal. Keywords: aquaculture, blotting western, gene expression, krill protein extract, prebiotics, probiotics, Salmo salar
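The abstract does not name its normalization method, but a standard way to express a target gene relative to several reference genes and a control group is the Livak 2^-ΔΔCt calculation; averaging Ct values arithmetically corresponds to taking the geometric mean of expression levels, since Ct is already on a log2 scale. A sketch under that assumption, with hypothetical Ct values:

```python
import numpy as np

def fold_change(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    """Livak 2^-ddCt fold change of a target gene versus a control group,
    normalized to the mean Ct of three reference genes (mean of Cts =
    geometric mean of expression levels, because Ct is log2-scale)."""
    dct = ct_target - np.mean(ct_refs)
    dct_ctrl = ct_target_ctrl - np.mean(ct_refs_ctrl)
    return 2.0 ** -(dct - dct_ctrl)

# hypothetical Ct values: PPARalpha in a treated fish vs a control fish
print(fold_change(24.6, np.array([18.2, 19.0, 17.5]),
                  23.9, np.array([18.3, 18.9, 17.6])))  # <1: lower expression
```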
Procedia PDF Downloads 225125 Preliminary Analysis on the Distribution of Elements in Cannabis
Authors: E. Zafeiraki, P. Nisianakis, K. Machera
Abstract:
The cannabis plant contains 113 cannabinoids and is commonly known for its psychoactive substance tetrahydrocannabinol or as a source of narcotic substances. In recent years, cannabis cultivation has also increased due to its wide use for medical and industrial purposes, as well as for para-pharmaceuticals, cosmetics, and food commodities. Depending on the final product, different parts of the plant are utilized, with the leaves and seeds being the most frequently used. Cannabis can accumulate various contaminants, including heavy metals, both from the soil and from the water in which the plant grows. More specifically, metals may occur naturally in soil and water, or they can enter the environment through the fertilizers, pesticides, and fungicides commonly applied to crops. The high probability of metal accumulation in cannabis, combined with its growing use, raises concerns about potential health effects in humans and consequently leads to the need for safety measures for cannabis products, such as guidelines regulating contaminants, including metals, especially those of high toxicity. Acknowledging the above, the aim of the current study was first to investigate metal contamination in cannabis samples collected from Greece, and secondly to examine potential differences in metal accumulation among the different parts of the plant. To our best knowledge, this is the first study presenting information on elements in cannabis cultivated in Greece, and on their distribution pattern within the plant body. To this end, the leaves and the seeds of all samples were separated and dried and then digested with nitric acid (HNO₃) and hydrochloric acid (HCl). For the analysis of these samples, an inductively coupled plasma-mass spectrometry (ICP-MS) method capable of quantifying 28 elements was developed. Internal standards were added at a constant rate and concentration to all calibration standards and unknown samples, while two certified reference materials were analyzed in every batch to ensure the accuracy of the measurements. The repeatability of the method and the background contamination were controlled by the analysis of quality control (QC) standards and blank samples in every sequence, respectively. According to the results, essential metals such as Ca, Zn, and Mg were detected at high levels. On the contrary, the concentrations of the highly toxic metals As (average: 0.10 ppm), Pb (average: 0.36 ppm), Cd (average: 0.04 ppm), and Hg (average: 0.012 ppm) were very low in all samples, indicating that no harmful effects on human health should be expected from the analyzed samples. Moreover, the pattern of metal contamination appears very similar in all analyzed samples, which could be attributed to the common origin of the analyzed cannabis, i.e., the common soil composition, use of fertilizers, pesticides, etc. Finally, as far as the distribution pattern between the different parts of the plant is concerned, the leaves presented higher concentrations than the seeds for all metals examined. Keywords: cannabis, heavy metals, ICP-MS, leaves and seeds, elements
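Internal-standard normalization of the kind described divides each analyte signal by the co-measured internal-standard signal before calibration, cancelling instrument drift and matrix effects. A minimal sketch with hypothetical counts and concentrations (none of these numbers come from the study):

```python
import numpy as np

# calibration standards: known concentrations and measured count rates
conc_std = np.array([0.0, 1.0, 5.0, 10.0, 50.0])          # ug/L, assumed
analyte_counts = np.array([210.0, 1.52e3, 6.9e3, 1.36e4, 6.7e4])
is_counts_std = np.array([9.8e4, 1.01e5, 9.9e4, 1.0e5, 1.02e5])

ratio_std = analyte_counts / is_counts_std                 # IS normalization
slope, intercept = np.polyfit(conc_std, ratio_std, 1)      # linear calibration

def quantify(sample_counts, sample_is_counts):
    """Concentration of an unknown from its IS-normalized signal."""
    return ((sample_counts / sample_is_counts) - intercept) / slope

print(f"{quantify(4.1e3, 9.7e4):.2f} ug/L")
```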
Procedia PDF Downloads 99124 Prospective Relations of Childhood Maltreatment, Temperament and Delinquency among Prisoners: Moderated Mediation Effect of Age and Education
Authors: Razia Anjum, Zaqia Bano, Chan Wai
Abstract:
Temperament has been described in the literature as a multifaceted and potentially value-laden construct, but there is a scarcity of research in the area of forensic psychology, predominantly in South Asian countries. The present study explored the mediating effect of temperament on the relationship between childhood maltreatment and delinquency, and further examined the moderating effects of prisoners' age and education. Variable System for Windows (version 1.3) was used to analyze the data provided by 517 prisoners (407 males, 110 females) from four district prisons in Pakistan. A cross-sectional research design was used, and a representative sample was approached through purposive sampling. Only prisoners who had been maltreated in childhood, in the form of physical abuse, psychological abuse, sexual abuse, or emotional neglect, were included in the study. Childhood adversities were first assessed with the Child Abuse Self-Report Scale, the prisoners' temperament styles were then assessed with the Adult Temperament Scale, and finally delinquent behaviors were investigated. The findings suggest that the four temperament styles (choleric, melancholic, phlegmatic, and sanguine) mediated the childhood maltreatment-delinquency relationship in late adulthood but not in early adulthood. A notable finding was the significant moderating effect of prisoners' age and level of education on the relationships of temperament with childhood maltreatment and delinquency; in this way, the results are consistent with views on cumulative pathways to delinquency operating through the effect of childhood maltreatment. The results indicated that choleric and melancholic temperaments were positive predictors of delinquency, whereas phlegmatic and sanguine temperaments were negative predictors; thus different types of temperament leave an indelible trace on delinquency that can be worked on by modifying individual temperament. On the basis of these results, it can be concluded that the inclination toward delinquent behaviors, including theft, drug abuse, lying, noncompliant behavior, police encounters, violence, cheating, gambling, harassment, homosexuality, and heterosexuality, could be minimized if temperament is properly screened. Moreover, the study identified two other significant moderating effects: of age on involvement in delinquent behaviors, and of education on the relationship between childhood maltreatment and temperament. The findings suggest that, with a marked increase in age, the probability of involvement in delinquent behaviors decreases, consistent with the assumption that education can act as a buffer that maximizes or minimizes the effect of trauma and can shape temperament accordingly. The results are consistent with views on cumulative disadvantage and the socio-psychological fault lines of the community. Keywords: delinquent behaviors, temperament, prisoners, moderated mediation analysis
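Regression-based moderated mediation of the kind reported can be sketched with two ordinary least-squares fits: one for the a-path (maltreatment to temperament) and one for the outcome model with a mediator-by-moderator term, from which the conditional indirect effect a*(b + b3*W) is read off at chosen moderator values. The sketch below uses simulated data and assumed effect sizes, not the study's variables or software:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)                     # childhood maltreatment (simulated)
w = rng.normal(size=n)                     # moderator, e.g. standardized age
m = 0.5 * x + rng.normal(size=n)           # temperament score (mediator)
y = 0.2 * x + 0.3 * m - 0.15 * m * w + rng.normal(size=n)   # delinquency

def ols(columns, target):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(target))] + list(columns))
    return np.linalg.lstsq(X, target, rcond=None)[0]

a = ols([x], m)[1]                         # a-path: X -> M
coefs = ols([x, m, m * w], y)              # Y ~ X + M + M*W
b, b_mw = coefs[2], coefs[3]

# conditional indirect effect a*(b + b_mw*W) at +/-1 SD of the moderator
for w0 in (-1.0, 1.0):
    print(f"W = {w0:+.0f} SD: indirect effect = {a * (b + b_mw * w0):.3f}")
```

In practice the indirect effects would be accompanied by bootstrap confidence intervals, which is the usual inferential step in moderated mediation analysis.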
Procedia PDF Downloads 104123 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids
Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout
Abstract:
Graphene has emerged as a promising material for various applications ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology due to the enriched physics and novel enhanced properties they present. There are several advantages to using 2D nanomaterials in field emission based devices, including a thickness of only a few atomic layers, high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength, and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for the electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effects of plasma (CO2, O2, Ar and N2) treatment. Plasma-treated multilayer graphene shows enhanced field emission behavior, with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm2 at an applied field of 0.35 V/μm. Further, we report field emission studies of layered WS2/RGO and SnS2/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm2 is found to be 3.5, 2.3, and 2 V/μm for WS2, RGO, and the WS2/RGO composite, respectively. The enhanced field emission behavior observed for the WS2/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few-layer-thick sheets of the nanocomposite. The highest current density, ~800 µA/cm2, is drawn at an applied field of 4.1 V/μm from a few layers of the WS2/RGO nanocomposite. Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS2 and RGO, where graphene-like states appear in the region of the WS2 fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 µA/cm2 is significantly lower (almost half the value) for the SnS2/RGO nanocomposite (2.65 V/µm) compared to pristine SnS2 nanosheets (4.8 V/µm). The field enhancement factor β (~3200 for SnS2 and ~3700 for the SnS2/RGO composite) was calculated from Fowler-Nordheim (FN) plots and indicates emission from the nanometric geometry of the emitters. The field emission current versus time plot shows overall good emission stability for the SnS2/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of SnS2/RGO composites stem from a substantial lowering of the work function of SnS2 when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and 2D analogue materials thus emerge as potential candidates for future field emission applications. Keywords: graphene, layered material, field emission, plasma, doping
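The field enhancement factors quoted above come from the slope of a Fowler-Nordheim plot, ln(J/E²) versus 1/E, whose slope equals -Bφ^{3/2}/β. A sketch of that extraction on synthetic data, with an assumed work function (the J-E values and φ are illustrative, not measurements from this work):

```python
import numpy as np

B = 6.83e3        # Fowler-Nordheim constant, eV^(-3/2) V/um (for E in V/um)
phi = 5.0         # assumed emitter work function in eV

def enhancement_factor(E, J):
    """beta from the FN-plot slope: ln(J/E^2) = const - B*phi^1.5/(beta*E)."""
    slope = np.polyfit(1.0 / E, np.log(J / E**2), 1)[0]
    return -B * phi**1.5 / slope

# synthetic J-E data generated with beta = 3000 (illustration only)
E = np.linspace(2.0, 4.5, 12)                         # applied field, V/um
J = 1e-3 * E**2 * np.exp(-B * phi**1.5 / (3000.0 * E))
print(f"beta ~ {enhancement_factor(E, J):.0f}")       # recovers ~3000
```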
Procedia PDF Downloads 361