comparison gcc/except.c @ 55:77e2b8dfacca gcc-4.4.5
update it from 4.4.3 to 4.5.0
author | ryoma <e075725@ie.u-ryukyu.ac.jp>
date | Fri, 12 Feb 2010 23:39:51 +0900
parents | a06113de4d67
children | b7f97abdc517
52:c156f1bd5cd9 | 55:77e2b8dfacca
19 You should have received a copy of the GNU General Public License | 19 You should have received a copy of the GNU General Public License |
20 along with GCC; see the file COPYING3. If not see | 20 along with GCC; see the file COPYING3. If not see |
21 <http://www.gnu.org/licenses/>. */ | 21 <http://www.gnu.org/licenses/>. */ |
22 | 22 |
23 | 23 |
24 /* An exception is an event that can be signaled from within a | 24 /* An exception is an event that can be "thrown" from within a |
25 function. This event can then be "caught" or "trapped" by the | 25 function. This event can then be "caught" by the callers of |
26 callers of this function. This potentially allows program flow to | 26 the function. |
27 be transferred to any arbitrary code associated with a function call | 27 |
28 several levels up the stack. | 28 The representation of exceptions changes several times during |
29 | 29 the compilation process: |
30 The intended use for this mechanism is for signaling "exceptional | 30 |
31 events" in an out-of-band fashion, hence its name. The C++ language | 31 In the beginning, in the front end, we have the GENERIC trees |
32 (and many other OO-styled or functional languages) practically | 32 TRY_CATCH_EXPR, TRY_FINALLY_EXPR, WITH_CLEANUP_EXPR, |
33 requires such a mechanism, as otherwise it becomes very difficult | 33 CLEANUP_POINT_EXPR, CATCH_EXPR, and EH_FILTER_EXPR. |
34 or even impossible to signal failure conditions in complex | 34 |
35 situations. The traditional C++ example is when an error occurs in | 35 During initial gimplification (gimplify.c) these are lowered |
36 the process of constructing an object; without such a mechanism, it | 36 to the GIMPLE_TRY, GIMPLE_CATCH, and GIMPLE_EH_FILTER nodes. |
37 is impossible to signal that the error occurs without adding global | 37 The WITH_CLEANUP_EXPR and CLEANUP_POINT_EXPR nodes are converted |
38 state variables and error checks around every object construction. | 38 into GIMPLE_TRY_FINALLY nodes; the others are a more direct 1-1 |
39 | 39 conversion. |
40 The act of causing this event to occur is referred to as "throwing | 40 |
41 an exception". (Alternate terms include "raising an exception" or | 41 During pass_lower_eh (tree-eh.c) we record the nested structure |
42 "signaling an exception".) The term "throw" is used because control | 42 of the TRY nodes in EH_REGION nodes in CFUN->EH->REGION_TREE. |
43 is returned to the callers of the function that is signaling the | 43 We expand the lang_protect_cleanup_actions hook into MUST_NOT_THROW |
44 exception, and thus there is the concept of "throwing" the | 44 regions at this time. We can then flatten the statements within |
45 exception up the call stack. | 45 the TRY nodes to straight-line code. Statements that had been within |
46 | 46 TRY nodes that can throw are recorded within CFUN->EH->THROW_STMT_TABLE, |
47 [ Add updated documentation on how to use this. ] */ | 47 so that we may remember what action is supposed to be taken if |
48 a given statement does throw. During this lowering process, | |
49 we create an EH_LANDING_PAD node for each EH_REGION that has | |
50 some code within the function that needs to be executed if a | |
51 throw does happen. We also create RESX statements that are | |
52 used to transfer control from an inner EH_REGION to an outer | |
53 EH_REGION. We also create EH_DISPATCH statements as placeholders | |
54 for a runtime type comparison that should be made in order to | |
55 select the action to perform among different CATCH and EH_FILTER | |
56 regions. | |
57 | |
58 During pass_lower_eh_dispatch (tree-eh.c), which is run after | |
59 all inlining is complete, we are able to run assign_filter_values, | |
60 which allows us to map the set of types manipulated by all of the | |
61 CATCH and EH_FILTER regions to a set of integers. This set of integers | |
62 will be how the exception runtime communicates with the code generated | |
63 within the function. We then expand the GIMPLE_EH_DISPATCH statements | |
64 to a switch or conditional branches that use the argument provided by | |
65 the runtime (__builtin_eh_filter) and the set of integers we computed | |
66 in assign_filter_values. | |
67 | |
68 During pass_lower_resx (tree-eh.c), which is run near the end | |
69 of optimization, we expand RESX statements. If the eh region | |
70 that is outer to the RESX statement is a MUST_NOT_THROW, then | |
71 the RESX expands to some form of abort statement. If the eh | |
72 region that is outer to the RESX statement is within the current | |
73 function, then the RESX expands to a bookkeeping call | |
74 (__builtin_eh_copy_values) and a goto. Otherwise, the next | |
75 handler for the exception must be within a function somewhere | |
76 up the call chain, so we call back into the exception runtime | |
77 (__builtin_unwind_resume). | |
78 | |
79 During pass_expand (cfgexpand.c), we generate REG_EH_REGION notes | |
80 that create an rtl to eh_region mapping that corresponds to the | |
81 gimple to eh_region mapping that had been recorded in the | |
82 THROW_STMT_TABLE. | |
83 | |
84 During pass_rtl_eh (except.c), we generate the real landing pads | |
85 to which the runtime will actually transfer control. These new | |
86 landing pads perform whatever bookkeeping is needed by the target | |
87 backend in order to resume execution within the current function. | |
88 Each of these new landing pads falls through into the post_landing_pad | |
89 label which had been used within the CFG up to this point. All | |
90 exception edges within the CFG are redirected to the new landing pads. | |
91 If the target uses setjmp to implement exceptions, the various extra | |
92 calls into the runtime to register and unregister the current stack | |
93 frame are emitted at this time. | |
94 | |
95 During pass_convert_to_eh_region_ranges (except.c), we transform | |
96 the REG_EH_REGION notes attached to individual insns into | |
97 non-overlapping ranges of insns bounded by NOTE_INSN_EH_REGION_BEG | |
98 and NOTE_INSN_EH_REGION_END. Each insn within such ranges has the | |
99 same associated action within the exception region tree, meaning | |
100 that (1) the exception is caught by the same landing pad within the | |
101 current function, (2) the exception is blocked by the runtime with | |
102 a MUST_NOT_THROW region, or (3) the exception is not handled at all | |
103 within the current function. | |
104 | |
105 Finally, during assembly generation, we call | |
106 output_function_exception_table (except.c) to emit the tables with | |
107 which the exception runtime can determine if a given stack frame | |
108 handles a given exception, and if so what filter value to provide | |
109 to the function when the non-local control transfer is effected. | |
110 If the target uses dwarf2 unwinding to implement exceptions, then | |
111 output_call_frame_info (dwarf2out.c) emits the required unwind data. */ | |
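The dispatch step described in the new comment (mapping the types of the CATCH and EH_FILTER regions to integers, then branching on the value the runtime hands back) can be sketched in plain C. This is an illustrative model only: the enum values stand in for whatever assign_filter_values would compute, and `eh_dispatch` is a hypothetical name, not a GCC function.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical filter values, standing in for the integers that
   assign_filter_values would assign to two CATCH regions.  */
enum { FILTER_TYPE_A = 1, FILTER_TYPE_B = 2 };

/* Model of the switch that a GIMPLE_EH_DISPATCH statement is expanded
   into: the argument plays the role of the value the runtime provides
   (__builtin_eh_filter in the comment above).  */
static const char *
eh_dispatch (int filter)
{
  switch (filter)
    {
    case FILTER_TYPE_A: return "handler for type A";
    case FILTER_TYPE_B: return "handler for type B";
    default:            return "resume unwinding";  /* no local handler */
    }
}
```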
48 | 112 |
49 | 113 |
50 #include "config.h" | 114 #include "config.h" |
51 #include "system.h" | 115 #include "system.h" |
52 #include "coretypes.h" | 116 #include "coretypes.h" |
75 #include "langhooks.h" | 139 #include "langhooks.h" |
76 #include "cgraph.h" | 140 #include "cgraph.h" |
77 #include "diagnostic.h" | 141 #include "diagnostic.h" |
78 #include "tree-pass.h" | 142 #include "tree-pass.h" |
79 #include "timevar.h" | 143 #include "timevar.h" |
144 #include "tree-flow.h" | |
80 | 145 |
81 /* Provide defaults for stuff that may not be defined when using | 146 /* Provide defaults for stuff that may not be defined when using |
82 sjlj exceptions. */ | 147 sjlj exceptions. */ |
83 #ifndef EH_RETURN_DATA_REGNO | 148 #ifndef EH_RETURN_DATA_REGNO |
84 #define EH_RETURN_DATA_REGNO(N) INVALID_REGNUM | 149 #define EH_RETURN_DATA_REGNO(N) INVALID_REGNUM |
85 #endif | 150 #endif |
86 | 151 |
87 /* Protect cleanup actions with must-not-throw regions, with a call | 152 /* Protect cleanup actions with must-not-throw regions, with a call |
88 to the given failure handler. */ | 153 to the given failure handler. */ |
89 gimple (*lang_protect_cleanup_actions) (void); | 154 tree (*lang_protect_cleanup_actions) (void); |
90 | 155 |
91 /* Return true if type A catches type B. */ | 156 /* Return true if type A catches type B. */ |
92 int (*lang_eh_type_covers) (tree a, tree b); | 157 int (*lang_eh_type_covers) (tree a, tree b); |
93 | |
94 /* Map a type to a runtime object to match type. */ | |
95 tree (*lang_eh_runtime_type) (tree); | |
96 | |
97 /* A hash table of label to region number. */ | |
98 | |
99 struct ehl_map_entry GTY(()) | |
100 { | |
101 rtx label; | |
102 struct eh_region *region; | |
103 }; | |
104 | 158 |
105 static GTY(()) int call_site_base; | 159 static GTY(()) int call_site_base; |
106 static GTY ((param_is (union tree_node))) | 160 static GTY ((param_is (union tree_node))) |
107 htab_t type_to_runtime_map; | 161 htab_t type_to_runtime_map; |
108 | 162 |
112 static int sjlj_fc_data_ofs; | 166 static int sjlj_fc_data_ofs; |
113 static int sjlj_fc_personality_ofs; | 167 static int sjlj_fc_personality_ofs; |
114 static int sjlj_fc_lsda_ofs; | 168 static int sjlj_fc_lsda_ofs; |
115 static int sjlj_fc_jbuf_ofs; | 169 static int sjlj_fc_jbuf_ofs; |
116 | 170 |
117 /* Describes one exception region. */ | 171 |
118 struct eh_region GTY(()) | 172 struct GTY(()) call_site_record_d |
119 { | |
120 /* The immediately surrounding region. */ | |
121 struct eh_region *outer; | |
122 | |
123 /* The list of immediately contained regions. */ | |
124 struct eh_region *inner; | |
125 struct eh_region *next_peer; | |
126 | |
127 /* An identifier for this region. */ | |
128 int region_number; | |
129 | |
130 /* When a region is deleted, its parents inherit the REG_EH_REGION | |
131 numbers already assigned. */ | |
132 bitmap aka; | |
133 | |
134 /* Each region does exactly one thing. */ | |
135 enum eh_region_type | |
136 { | |
137 ERT_UNKNOWN = 0, | |
138 ERT_CLEANUP, | |
139 ERT_TRY, | |
140 ERT_CATCH, | |
141 ERT_ALLOWED_EXCEPTIONS, | |
142 ERT_MUST_NOT_THROW, | |
143 ERT_THROW | |
144 } type; | |
145 | |
146 /* Holds the action to perform based on the preceding type. */ | |
147 union eh_region_u { | |
148 /* A list of catch blocks, a surrounding try block, | |
149 and the label for continuing after a catch. */ | |
150 struct eh_region_u_try { | |
151 struct eh_region *eh_catch; | |
152 struct eh_region *last_catch; | |
153 } GTY ((tag ("ERT_TRY"))) eh_try; | |
154 | |
155 /* The list through the catch handlers, the list of type objects | |
156 matched, and the list of associated filters. */ | |
157 struct eh_region_u_catch { | |
158 struct eh_region *next_catch; | |
159 struct eh_region *prev_catch; | |
160 tree type_list; | |
161 tree filter_list; | |
162 } GTY ((tag ("ERT_CATCH"))) eh_catch; | |
163 | |
164 /* A tree_list of allowed types. */ | |
165 struct eh_region_u_allowed { | |
166 tree type_list; | |
167 int filter; | |
168 } GTY ((tag ("ERT_ALLOWED_EXCEPTIONS"))) allowed; | |
169 | |
170 /* The type given by a call to "throw foo();", or discovered | |
171 for a throw. */ | |
172 struct eh_region_u_throw { | |
173 tree type; | |
174 } GTY ((tag ("ERT_THROW"))) eh_throw; | |
175 | |
176 /* Retain the cleanup expression even after expansion so that | |
177 we can match up fixup regions. */ | |
178 struct eh_region_u_cleanup { | |
179 struct eh_region *prev_try; | |
180 } GTY ((tag ("ERT_CLEANUP"))) cleanup; | |
181 } GTY ((desc ("%0.type"))) u; | |
182 | |
183 /* Entry point for this region's handler before landing pads are built. */ | |
184 rtx label; | |
185 tree tree_label; | |
186 | |
187 /* Entry point for this region's handler from the runtime eh library. */ | |
188 rtx landing_pad; | |
189 | |
190 /* Entry point for this region's handler from an inner region. */ | |
191 rtx post_landing_pad; | |
192 | |
193 /* The RESX insn for handing off control to the next outermost handler, | |
194 if appropriate. */ | |
195 rtx resume; | |
196 | |
197 /* True if something in this region may throw. */ | |
198 unsigned may_contain_throw : 1; | |
199 }; | |
200 | |
201 typedef struct eh_region *eh_region; | |
202 | |
203 struct call_site_record GTY(()) | |
204 { | 173 { |
205 rtx landing_pad; | 174 rtx landing_pad; |
206 int action; | 175 int action; |
207 }; | 176 }; |
208 | |
209 DEF_VEC_P(eh_region); | |
210 DEF_VEC_ALLOC_P(eh_region, gc); | |
211 | |
212 /* Used to save exception status for each function. */ | |
213 struct eh_status GTY(()) | |
214 { | |
215 /* The tree of all regions for this function. */ | |
216 struct eh_region *region_tree; | |
217 | |
218 /* The same information as an indexable array. */ | |
219 VEC(eh_region,gc) *region_array; | |
220 int last_region_number; | |
221 | |
222 htab_t GTY((param_is (struct throw_stmt_node))) throw_stmt_table; | |
223 }; | |
224 | 177 |
178 static bool get_eh_region_and_lp_from_rtx (const_rtx, eh_region *, | |
179 eh_landing_pad *); | |
180 | |
225 static int t2r_eq (const void *, const void *); | 181 static int t2r_eq (const void *, const void *); |
226 static hashval_t t2r_hash (const void *); | 182 static hashval_t t2r_hash (const void *); |
227 static void add_type_for_runtime (tree); | |
228 static tree lookup_type_for_runtime (tree); | |
229 | |
230 static void remove_unreachable_regions (rtx); | |
231 | 183 |
232 static int ttypes_filter_eq (const void *, const void *); | 184 static int ttypes_filter_eq (const void *, const void *); |
233 static hashval_t ttypes_filter_hash (const void *); | 185 static hashval_t ttypes_filter_hash (const void *); |
234 static int ehspec_filter_eq (const void *, const void *); | 186 static int ehspec_filter_eq (const void *, const void *); |
235 static hashval_t ehspec_filter_hash (const void *); | 187 static hashval_t ehspec_filter_hash (const void *); |
236 static int add_ttypes_entry (htab_t, tree); | 188 static int add_ttypes_entry (htab_t, tree); |
237 static int add_ehspec_entry (htab_t, htab_t, tree); | 189 static int add_ehspec_entry (htab_t, htab_t, tree); |
238 static void assign_filter_values (void); | |
239 static void build_post_landing_pads (void); | |
240 static void connect_post_landing_pads (void); | |
241 static void dw2_build_landing_pads (void); | 190 static void dw2_build_landing_pads (void); |
242 | |
243 struct sjlj_lp_info; | |
244 static bool sjlj_find_directly_reachable_regions (struct sjlj_lp_info *); | |
245 static void sjlj_assign_call_site_values (rtx, struct sjlj_lp_info *); | |
246 static void sjlj_mark_call_sites (struct sjlj_lp_info *); | |
247 static void sjlj_emit_function_enter (rtx); | |
248 static void sjlj_emit_function_exit (void); | |
249 static void sjlj_emit_dispatch_table (rtx, struct sjlj_lp_info *); | |
250 static void sjlj_build_landing_pads (void); | |
251 | |
252 static hashval_t ehl_hash (const void *); | |
253 static int ehl_eq (const void *, const void *); | |
254 static void add_ehl_entry (rtx, struct eh_region *); | |
255 static void remove_exception_handler_label (rtx); | |
256 static void remove_eh_handler (struct eh_region *); | |
257 static int for_each_eh_label_1 (void **, void *); | |
258 | |
259 /* The return value of reachable_next_level. */ | |
260 enum reachable_code | |
261 { | |
262 /* The given exception is not processed by the given region. */ | |
263 RNL_NOT_CAUGHT, | |
264 /* The given exception may need processing by the given region. */ | |
265 RNL_MAYBE_CAUGHT, | |
266 /* The given exception is completely processed by the given region. */ | |
267 RNL_CAUGHT, | |
268 /* The given exception is completely processed by the runtime. */ | |
269 RNL_BLOCKED | |
270 }; | |
271 | |
272 struct reachable_info; | |
273 static enum reachable_code reachable_next_level (struct eh_region *, tree, | |
274 struct reachable_info *); | |
275 | 191 |
276 static int action_record_eq (const void *, const void *); | 192 static int action_record_eq (const void *, const void *); |
277 static hashval_t action_record_hash (const void *); | 193 static hashval_t action_record_hash (const void *); |
278 static int add_action_record (htab_t, int, int); | 194 static int add_action_record (htab_t, int, int); |
279 static int collect_one_action_chain (htab_t, struct eh_region *); | 195 static int collect_one_action_chain (htab_t, eh_region); |
280 static int add_call_site (rtx, int); | 196 static int add_call_site (rtx, int, int); |
281 | 197 |
282 static void push_uleb128 (varray_type *, unsigned int); | 198 static void push_uleb128 (VEC (uchar, gc) **, unsigned int); |
283 static void push_sleb128 (varray_type *, int); | 199 static void push_sleb128 (VEC (uchar, gc) **, int); |
284 #ifndef HAVE_AS_LEB128 | 200 #ifndef HAVE_AS_LEB128 |
285 static int dw2_size_of_call_site_table (void); | 201 static int dw2_size_of_call_site_table (int); |
286 static int sjlj_size_of_call_site_table (void); | 202 static int sjlj_size_of_call_site_table (void); |
287 #endif | 203 #endif |
288 static void dw2_output_call_site_table (void); | 204 static void dw2_output_call_site_table (int, int); |
289 static void sjlj_output_call_site_table (void); | 205 static void sjlj_output_call_site_table (void); |
290 | 206 |
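The push_uleb128/push_sleb128 helpers declared above emit LEB128-encoded integers into the exception tables. As a reference for the byte format only (GCC's versions push into a growable vector rather than a caller-supplied buffer), here is a minimal ULEB128 encoder: 7 data bits per byte, high bit set on every byte except the last.

```c
#include <assert.h>
#include <stddef.h>

/* Encode VALUE as ULEB128 into OUT; return the number of bytes written.
   OUT must have room for 5 bytes (enough for any 32-bit value).  */
static size_t
uleb128_encode (unsigned int value, unsigned char *out)
{
  size_t n = 0;
  do
    {
      unsigned char byte = value & 0x7f;
      value >>= 7;
      if (value != 0)
	byte |= 0x80;	/* more bytes follow */
      out[n++] = byte;
    }
  while (value != 0);
  return n;
}
```

The classic DWARF example: 624485 encodes to the three bytes 0xE5 0x8E 0x26.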
291 | 207 |
292 /* Routine to see if exception handling is turned on. | 208 /* Routine to see if exception handling is turned on. |
293 DO_WARN is nonzero if we want to inform the user that exception | 209 DO_WARN is nonzero if we want to inform the user that exception |
327 { | 243 { |
328 tree f_jbuf, f_per, f_lsda, f_prev, f_cs, f_data, tmp; | 244 tree f_jbuf, f_per, f_lsda, f_prev, f_cs, f_data, tmp; |
329 | 245 |
330 sjlj_fc_type_node = lang_hooks.types.make_type (RECORD_TYPE); | 246 sjlj_fc_type_node = lang_hooks.types.make_type (RECORD_TYPE); |
331 | 247 |
332 f_prev = build_decl (FIELD_DECL, get_identifier ("__prev"), | 248 f_prev = build_decl (BUILTINS_LOCATION, |
249 FIELD_DECL, get_identifier ("__prev"), | |
333 build_pointer_type (sjlj_fc_type_node)); | 250 build_pointer_type (sjlj_fc_type_node)); |
334 DECL_FIELD_CONTEXT (f_prev) = sjlj_fc_type_node; | 251 DECL_FIELD_CONTEXT (f_prev) = sjlj_fc_type_node; |
335 | 252 |
336 f_cs = build_decl (FIELD_DECL, get_identifier ("__call_site"), | 253 f_cs = build_decl (BUILTINS_LOCATION, |
254 FIELD_DECL, get_identifier ("__call_site"), | |
337 integer_type_node); | 255 integer_type_node); |
338 DECL_FIELD_CONTEXT (f_cs) = sjlj_fc_type_node; | 256 DECL_FIELD_CONTEXT (f_cs) = sjlj_fc_type_node; |
339 | 257 |
340 tmp = build_index_type (build_int_cst (NULL_TREE, 4 - 1)); | 258 tmp = build_index_type (build_int_cst (NULL_TREE, 4 - 1)); |
341 tmp = build_array_type (lang_hooks.types.type_for_mode | 259 tmp = build_array_type (lang_hooks.types.type_for_mode |
342 (targetm.unwind_word_mode (), 1), | 260 (targetm.unwind_word_mode (), 1), |
343 tmp); | 261 tmp); |
344 f_data = build_decl (FIELD_DECL, get_identifier ("__data"), tmp); | 262 f_data = build_decl (BUILTINS_LOCATION, |
263 FIELD_DECL, get_identifier ("__data"), tmp); | |
345 DECL_FIELD_CONTEXT (f_data) = sjlj_fc_type_node; | 264 DECL_FIELD_CONTEXT (f_data) = sjlj_fc_type_node; |
346 | 265 |
347 f_per = build_decl (FIELD_DECL, get_identifier ("__personality"), | 266 f_per = build_decl (BUILTINS_LOCATION, |
267 FIELD_DECL, get_identifier ("__personality"), | |
348 ptr_type_node); | 268 ptr_type_node); |
349 DECL_FIELD_CONTEXT (f_per) = sjlj_fc_type_node; | 269 DECL_FIELD_CONTEXT (f_per) = sjlj_fc_type_node; |
350 | 270 |
351 f_lsda = build_decl (FIELD_DECL, get_identifier ("__lsda"), | 271 f_lsda = build_decl (BUILTINS_LOCATION, |
272 FIELD_DECL, get_identifier ("__lsda"), | |
352 ptr_type_node); | 273 ptr_type_node); |
353 DECL_FIELD_CONTEXT (f_lsda) = sjlj_fc_type_node; | 274 DECL_FIELD_CONTEXT (f_lsda) = sjlj_fc_type_node; |
354 | 275 |
355 #ifdef DONT_USE_BUILTIN_SETJMP | 276 #ifdef DONT_USE_BUILTIN_SETJMP |
356 #ifdef JMP_BUF_SIZE | 277 #ifdef JMP_BUF_SIZE |
366 /* builtin_setjmp takes a pointer to 5 words. */ | 287 /* builtin_setjmp takes a pointer to 5 words. */ |
367 tmp = build_int_cst (NULL_TREE, 5 * BITS_PER_WORD / POINTER_SIZE - 1); | 288 tmp = build_int_cst (NULL_TREE, 5 * BITS_PER_WORD / POINTER_SIZE - 1); |
368 #endif | 289 #endif |
369 tmp = build_index_type (tmp); | 290 tmp = build_index_type (tmp); |
370 tmp = build_array_type (ptr_type_node, tmp); | 291 tmp = build_array_type (ptr_type_node, tmp); |
371 f_jbuf = build_decl (FIELD_DECL, get_identifier ("__jbuf"), tmp); | 292 f_jbuf = build_decl (BUILTINS_LOCATION, |
293 FIELD_DECL, get_identifier ("__jbuf"), tmp); | |
372 #ifdef DONT_USE_BUILTIN_SETJMP | 294 #ifdef DONT_USE_BUILTIN_SETJMP |
373 /* We don't know what alignment requirements the | 295 /* We don't know what alignment requirements the |
374 runtime's jmp_buf has. Overestimate. */ | 296 runtime's jmp_buf has. Overestimate. */ |
375 DECL_ALIGN (f_jbuf) = BIGGEST_ALIGNMENT; | 297 DECL_ALIGN (f_jbuf) = BIGGEST_ALIGNMENT; |
376 DECL_USER_ALIGN (f_jbuf) = 1; | 298 DECL_USER_ALIGN (f_jbuf) = 1; |
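For reference, the record type this hunk assembles field by field corresponds roughly to the following plain-C layout. Only the field order and names come from the build_decl calls above; the member types are assumptions for a typical 64-bit target (the real code sizes __data by targetm.unwind_word_mode and __jbuf by whichever setjmp variant the #ifdefs select).

```c
#include <assert.h>
#include <stddef.h>

/* Approximate C view of sjlj_fc_type_node, the per-function context
   record used by setjmp/longjmp exception handling.  Element types
   are guesses; field order mirrors the code above.  */
struct sjlj_function_context
{
  struct sjlj_function_context *__prev;	/* chain of active frames */
  int __call_site;			/* current call-site index */
  unsigned long __data[4];		/* unwind scratch words */
  void *__personality;			/* personality routine */
  void *__lsda;				/* language-specific data area */
  void *__jbuf[5];			/* __builtin_setjmp buffer */
};
```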
408 | 330 |
409 void | 331 void |
410 init_eh_for_function (void) | 332 init_eh_for_function (void) |
411 { | 333 { |
412 cfun->eh = GGC_CNEW (struct eh_status); | 334 cfun->eh = GGC_CNEW (struct eh_status); |
335 | |
336 /* Make sure zero'th entries are used. */ | |
337 VEC_safe_push (eh_region, gc, cfun->eh->region_array, NULL); | |
338 VEC_safe_push (eh_landing_pad, gc, cfun->eh->lp_array, NULL); | |
413 } | 339 } |
414 | 340 |
415 /* Routines to generate the exception tree somewhat directly. | 341 /* Routines to generate the exception tree somewhat directly. |
416 These are used from tree-eh.c when processing exception related | 342 These are used from tree-eh.c when processing exception related |
417 nodes during tree optimization. */ | 343 nodes during tree optimization. */ |
418 | 344 |
419 static struct eh_region * | 345 static eh_region |
420 gen_eh_region (enum eh_region_type type, struct eh_region *outer) | 346 gen_eh_region (enum eh_region_type type, eh_region outer) |
421 { | 347 { |
422 struct eh_region *new_eh; | 348 eh_region new_eh; |
423 | 349 |
424 #ifdef ENABLE_CHECKING | 350 #ifdef ENABLE_CHECKING |
425 gcc_assert (doing_eh (0)); | 351 gcc_assert (doing_eh (0)); |
426 #endif | 352 #endif |
427 | 353 |
428 /* Insert a new blank region as a leaf in the tree. */ | 354 /* Insert a new blank region as a leaf in the tree. */ |
429 new_eh = GGC_CNEW (struct eh_region); | 355 new_eh = GGC_CNEW (struct eh_region_d); |
430 new_eh->type = type; | 356 new_eh->type = type; |
431 new_eh->outer = outer; | 357 new_eh->outer = outer; |
432 if (outer) | 358 if (outer) |
433 { | 359 { |
434 new_eh->next_peer = outer->inner; | 360 new_eh->next_peer = outer->inner; |
438 { | 364 { |
439 new_eh->next_peer = cfun->eh->region_tree; | 365 new_eh->next_peer = cfun->eh->region_tree; |
440 cfun->eh->region_tree = new_eh; | 366 cfun->eh->region_tree = new_eh; |
441 } | 367 } |
442 | 368 |
443 new_eh->region_number = ++cfun->eh->last_region_number; | 369 new_eh->index = VEC_length (eh_region, cfun->eh->region_array); |
370 VEC_safe_push (eh_region, gc, cfun->eh->region_array, new_eh); | |
371 | |
372 /* Copy the language's notion of whether to use __cxa_end_cleanup. */ | |
373 if (targetm.arm_eabi_unwinder && lang_hooks.eh_use_cxa_end_cleanup) | |
374 new_eh->use_cxa_end_cleanup = true; | |
444 | 375 |
445 return new_eh; | 376 return new_eh; |
446 } | 377 } |
447 | 378 |
448 struct eh_region * | 379 eh_region |
449 gen_eh_region_cleanup (struct eh_region *outer, struct eh_region *prev_try) | 380 gen_eh_region_cleanup (eh_region outer) |
450 { | 381 { |
451 struct eh_region *cleanup = gen_eh_region (ERT_CLEANUP, outer); | 382 return gen_eh_region (ERT_CLEANUP, outer); |
452 cleanup->u.cleanup.prev_try = prev_try; | 383 } |
453 return cleanup; | 384 |
454 } | 385 eh_region |
455 | 386 gen_eh_region_try (eh_region outer) |
456 struct eh_region * | |
457 gen_eh_region_try (struct eh_region *outer) | |
458 { | 387 { |
459 return gen_eh_region (ERT_TRY, outer); | 388 return gen_eh_region (ERT_TRY, outer); |
460 } | 389 } |
461 | 390 |
462 struct eh_region * | 391 eh_catch |
463 gen_eh_region_catch (struct eh_region *t, tree type_or_list) | 392 gen_eh_region_catch (eh_region t, tree type_or_list) |
464 { | 393 { |
465 struct eh_region *c, *l; | 394 eh_catch c, l; |
466 tree type_list, type_node; | 395 tree type_list, type_node; |
396 | |
397 gcc_assert (t->type == ERT_TRY); | |
467 | 398 |
468 /* Ensure to always end up with a type list to normalize further | 399 /* Ensure to always end up with a type list to normalize further |
469 processing, then register each type against the runtime types map. */ | 400 processing, then register each type against the runtime types map. */ |
470 type_list = type_or_list; | 401 type_list = type_or_list; |
471 if (type_or_list) | 402 if (type_or_list) |
476 type_node = type_list; | 407 type_node = type_list; |
477 for (; type_node; type_node = TREE_CHAIN (type_node)) | 408 for (; type_node; type_node = TREE_CHAIN (type_node)) |
478 add_type_for_runtime (TREE_VALUE (type_node)); | 409 add_type_for_runtime (TREE_VALUE (type_node)); |
479 } | 410 } |
480 | 411 |
481 c = gen_eh_region (ERT_CATCH, t->outer); | 412 c = GGC_CNEW (struct eh_catch_d); |
482 c->u.eh_catch.type_list = type_list; | 413 c->type_list = type_list; |
483 l = t->u.eh_try.last_catch; | 414 l = t->u.eh_try.last_catch; |
484 c->u.eh_catch.prev_catch = l; | 415 c->prev_catch = l; |
485 if (l) | 416 if (l) |
486 l->u.eh_catch.next_catch = c; | 417 l->next_catch = c; |
487 else | 418 else |
488 t->u.eh_try.eh_catch = c; | 419 t->u.eh_try.first_catch = c; |
489 t->u.eh_try.last_catch = c; | 420 t->u.eh_try.last_catch = c; |
490 | 421 |
491 return c; | 422 return c; |
492 } | 423 } |
493 | 424 |
494 struct eh_region * | 425 eh_region |
495 gen_eh_region_allowed (struct eh_region *outer, tree allowed) | 426 gen_eh_region_allowed (eh_region outer, tree allowed) |
496 { | 427 { |
497 struct eh_region *region = gen_eh_region (ERT_ALLOWED_EXCEPTIONS, outer); | 428 eh_region region = gen_eh_region (ERT_ALLOWED_EXCEPTIONS, outer); |
498 region->u.allowed.type_list = allowed; | 429 region->u.allowed.type_list = allowed; |
499 | 430 |
500 for (; allowed ; allowed = TREE_CHAIN (allowed)) | 431 for (; allowed ; allowed = TREE_CHAIN (allowed)) |
501 add_type_for_runtime (TREE_VALUE (allowed)); | 432 add_type_for_runtime (TREE_VALUE (allowed)); |
502 | 433 |
503 return region; | 434 return region; |
504 } | 435 } |
505 | 436 |
506 struct eh_region * | 437 eh_region |
507 gen_eh_region_must_not_throw (struct eh_region *outer) | 438 gen_eh_region_must_not_throw (eh_region outer) |
508 { | 439 { |
509 return gen_eh_region (ERT_MUST_NOT_THROW, outer); | 440 return gen_eh_region (ERT_MUST_NOT_THROW, outer); |
510 } | 441 } |
511 | 442 |
512 int | 443 eh_landing_pad |
513 get_eh_region_number (struct eh_region *region) | 444 gen_eh_landing_pad (eh_region region) |
514 { | 445 { |
515 return region->region_number; | 446 eh_landing_pad lp = GGC_CNEW (struct eh_landing_pad_d); |
516 } | 447 |
517 | 448 lp->next_lp = region->landing_pads; |
518 bool | 449 lp->region = region; |
519 get_eh_region_may_contain_throw (struct eh_region *region) | 450 lp->index = VEC_length (eh_landing_pad, cfun->eh->lp_array); |
520 { | 451 region->landing_pads = lp; |
521 return region->may_contain_throw; | 452 |
522 } | 453 VEC_safe_push (eh_landing_pad, gc, cfun->eh->lp_array, lp); |
523 | 454 |
524 tree | 455 return lp; |
525 get_eh_region_tree_label (struct eh_region *region) | 456 } |
526 { | 457 |
527 return region->tree_label; | 458 eh_region |
528 } | 459 get_eh_region_from_number_fn (struct function *ifun, int i) |
529 | 460 { |
530 void | 461 return VEC_index (eh_region, ifun->eh->region_array, i); |
531 set_eh_region_tree_label (struct eh_region *region, tree lab) | 462 } |
532 { | 463 |
533 region->tree_label = lab; | 464 eh_region |
465 get_eh_region_from_number (int i) | |
466 { | |
467 return get_eh_region_from_number_fn (cfun, i); | |
468 } | |
469 | |
470 eh_landing_pad | |
471 get_eh_landing_pad_from_number_fn (struct function *ifun, int i) | |
472 { | |
473 return VEC_index (eh_landing_pad, ifun->eh->lp_array, i); | |
474 } | |
475 | |
476 eh_landing_pad | |
477 get_eh_landing_pad_from_number (int i) | |
478 { | |
479 return get_eh_landing_pad_from_number_fn (cfun, i); | |
480 } | |
481 | |
482 eh_region | |
483 get_eh_region_from_lp_number_fn (struct function *ifun, int i) | |
484 { | |
485 if (i < 0) | |
486 return VEC_index (eh_region, ifun->eh->region_array, -i); | |
487 else if (i == 0) | |
488 return NULL; | |
489 else | |
490 { | |
491 eh_landing_pad lp; | |
492 lp = VEC_index (eh_landing_pad, ifun->eh->lp_array, i); | |
493 return lp->region; | |
494 } | |
495 } | |
496 | |
497 eh_region | |
498 get_eh_region_from_lp_number (int i) | |
499 { | |
500 return get_eh_region_from_lp_number_fn (cfun, i); | |
534 } | 501 } |
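The sign convention these new accessors decode can be shown with a small self-contained model; `regions` and `pads` below are toy stand-ins for cfun->eh->region_array and cfun->eh->lp_array, not GCC data structures.

```c
#include <assert.h>
#include <stddef.h>

struct region { int index; };
struct landing_pad { struct region *region; };

/* Toy stand-ins for the region and landing-pad arrays; as in the
   real arrays, slot 0 is deliberately unused.  */
static struct region regions[3] = { {0}, {1}, {2} };
static struct landing_pad pads[2] = { {NULL}, {&regions[2]} };

/* Same decoding as get_eh_region_from_lp_number_fn: a negative number
   is a direct region index (e.g. a must-not-throw region with no
   landing pad), zero means the statement cannot throw, and a positive
   number selects a landing pad whose region we look up.  */
static struct region *
region_from_lp_number (int i)
{
  if (i < 0)
    return &regions[-i];
  else if (i == 0)
    return NULL;
  else
    return pads[i].region;
}
```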
535 | 502 |
536 void | |
537 expand_resx_expr (tree exp) | |
538 { | |
539 int region_nr = TREE_INT_CST_LOW (TREE_OPERAND (exp, 0)); | |
540 struct eh_region *reg = VEC_index (eh_region, | |
541 cfun->eh->region_array, region_nr); | |
542 | |
543 gcc_assert (!reg->resume); | |
544 do_pending_stack_adjust (); | |
545 reg->resume = emit_jump_insn (gen_rtx_RESX (VOIDmode, region_nr)); | |
546 emit_barrier (); | |
547 } | |
548 | |
549 /* Note that the current EH region (if any) may contain a throw, or a | |
550 call to a function which itself may contain a throw. */ | |
551 | |
552 void | |
553 note_eh_region_may_contain_throw (struct eh_region *region) | |
554 { | |
555 while (region && !region->may_contain_throw) | |
556 { | |
557 region->may_contain_throw = 1; | |
558 region = region->outer; | |
559 } | |
560 } | |
561 | |
562 | |
563 /* Return an rtl expression for a pointer to the exception object | |
564 within a handler. */ | |
565 | |
566 rtx | |
567 get_exception_pointer (void) | |
568 { | |
569 if (! crtl->eh.exc_ptr) | |
570 crtl->eh.exc_ptr = gen_reg_rtx (ptr_mode); | |
571 return crtl->eh.exc_ptr; | |
572 } | |
573 | |
574 /* Return an rtl expression for the exception dispatch filter | |
575 within a handler. */ | |
576 | |
577 rtx | |
578 get_exception_filter (void) | |
579 { | |
580 if (! crtl->eh.filter) | |
581 crtl->eh.filter = gen_reg_rtx (targetm.eh_return_filter_mode ()); | |
582 return crtl->eh.filter; | |
583 } | |
584 | |
585 /* This section is for the exception handling specific optimization pass. */ | |
586 | |
587 /* Random access the exception region tree. */ | |
588 | |
589 void | |
590 collect_eh_region_array (void) | |
591 { | |
592 struct eh_region *i; | |
593 | |
594 i = cfun->eh->region_tree; | |
595 if (! i) | |
596 return; | |
597 | |
598 VEC_safe_grow (eh_region, gc, cfun->eh->region_array, | |
599 cfun->eh->last_region_number + 1); | |
600 VEC_replace (eh_region, cfun->eh->region_array, 0, 0); | |
601 | |
602 while (1) | |
603 { | |
604 VEC_replace (eh_region, cfun->eh->region_array, i->region_number, i); | |
605 | |
606 /* If there are sub-regions, process them. */ | |
607 if (i->inner) | |
608 i = i->inner; | |
609 /* If there are peers, process them. */ | |
610 else if (i->next_peer) | |
611 i = i->next_peer; | |
612 /* Otherwise, step back up the tree to the next peer. */ | |
613 else | |
614 { | |
615 do { | |
616 i = i->outer; | |
617 if (i == NULL) | |
618 return; | |
619 } while (i->next_peer == NULL); | |
620 i = i->next_peer; | |
621 } | |
622 } | |
623 } | |
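The loop in collect_eh_region_array above is a stack-free pre-order traversal using only the outer/inner/next_peer links: descend into the first child, else move to the next sibling, else climb until some ancestor has a sibling. The same pattern in isolation (node and function names are illustrative, not GCC's):

```c
#include <assert.h>
#include <stddef.h>

struct node
{
  struct node *outer, *inner, *next_peer;
  int visited;
};

/* Visit every node reachable from ROOT in pre-order without a stack,
   mirroring the inner/next_peer/outer walk above; returns the count.  */
static int
visit_all (struct node *root)
{
  int count = 0;
  struct node *i = root;
  if (!i)
    return 0;
  while (1)
    {
      i->visited = 1;
      count++;
      if (i->inner)
	i = i->inner;
      else if (i->next_peer)
	i = i->next_peer;
      else
	{
	  do
	    {
	      i = i->outer;
	      if (i == NULL)
		return count;
	    }
	  while (i->next_peer == NULL);
	  i = i->next_peer;
	}
    }
}

/* Build a four-node tree (root, two children, one grandchild) and
   walk it; used below to check the traversal covers every node.  */
static int
demo_walk_count (void)
{
  static struct node r, c1, c2, g;
  r.inner = &c1;
  c1.outer = &r; c1.next_peer = &c2; c1.inner = &g;
  c2.outer = &r;
  g.outer = &c1;
  return visit_all (&r);
}
```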
624 | |
625 /* Remove all regions whose labels are not reachable from insns. */ | |
626 | |
627 static void | |
628 remove_unreachable_regions (rtx insns) | |
629 { | |
630 int i, *uid_region_num; | |
631 bool *reachable; | |
632 struct eh_region *r; | |
633 rtx insn; | |
634 | |
635 uid_region_num = XCNEWVEC (int, get_max_uid ()); | |
636 reachable = XCNEWVEC (bool, cfun->eh->last_region_number + 1); | |
637 | |
638 for (i = cfun->eh->last_region_number; i > 0; --i) | |
639 { | |
640 r = VEC_index (eh_region, cfun->eh->region_array, i); | |
641 if (!r || r->region_number != i) | |
642 continue; | |
643 | |
644 if (r->resume) | |
645 { | |
646 gcc_assert (!uid_region_num[INSN_UID (r->resume)]); | |
647 uid_region_num[INSN_UID (r->resume)] = i; | |
648 } | |
649 if (r->label) | |
650 { | |
651 gcc_assert (!uid_region_num[INSN_UID (r->label)]); | |
652 uid_region_num[INSN_UID (r->label)] = i; | |
653 } | |
654 } | |
655 | |
656 for (insn = insns; insn; insn = NEXT_INSN (insn)) | |
657 reachable[uid_region_num[INSN_UID (insn)]] = true; | |
658 | |
659 for (i = cfun->eh->last_region_number; i > 0; --i) | |
660 { | |
661 r = VEC_index (eh_region, cfun->eh->region_array, i); | |
662 if (r && r->region_number == i && !reachable[i]) | |
663 { | |
664 bool kill_it = true; | |
665 switch (r->type) | |
666 { | |
667 case ERT_THROW: | |
668 /* Don't remove ERT_THROW regions if their outer region | |
669 is reachable. */ | |
670 if (r->outer && reachable[r->outer->region_number]) | |
671 kill_it = false; | |
672 break; | |
673 | |
674 case ERT_MUST_NOT_THROW: | |
675 /* MUST_NOT_THROW regions are implementable solely in the | |
676 runtime, but their existence continues to affect calls | |
677 within that region. Never delete them here. */ | |
678 kill_it = false; | |
679 break; | |
680 | |
681 case ERT_TRY: | |
682 { | |
683 /* A TRY region is reachable if any of its CATCH | |
684 regions are reachable. */ | |
685 struct eh_region *c; | |
686 for (c = r->u.eh_try.eh_catch; c ; c = c->u.eh_catch.next_catch) | |
687 if (reachable[c->region_number]) | |
688 { | |
689 kill_it = false; | |
690 break; | |
691 } | |
692 break; | |
693 } | |
694 | |
695 default: | |
696 break; | |
697 } | |
698 | |
699 if (kill_it) | |
700 remove_eh_handler (r); | |
701 } | |
702 } | |
703 | |
704 free (reachable); | |
705 free (uid_region_num); | |
706 } | |
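The marking phase of remove_unreachable_regions reduces to two linear passes: record each region's landmark insn in a uid-to-region map, then sweep the insn stream once, marking exactly the regions whose insns survive. A simplified sketch of that second pass (insns are modeled as bare uids here, a hypothetical stand-in for the rtx stream):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_UID 32
#define MAX_REGION 8

/* Given UID_REGION_NUM mapping each insn uid to the region whose
   resume/label insn it is (0 for "no region"), mark the regions whose
   insns appear in the INSNS stream.  Mirrors the reachability sweep
   in remove_unreachable_regions.  */
static void mark_reachable (const int uid_region_num[MAX_UID],
                            const int insns[], int n_insns,
                            bool reachable[MAX_REGION])
{
  memset (reachable, 0, MAX_REGION * sizeof (bool));
  for (int i = 0; i < n_insns; ++i)
    reachable[uid_region_num[insns[i]]] = true;
}
```

Regions whose landmark insns were deleted are never marked and become candidates for removal (subject to the ERT_THROW, ERT_MUST_NOT_THROW, and ERT_TRY exceptions handled above).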
707 | |
708 /* Set up EH labels for RTL. */ | |
709 | |
710 void | |
711 convert_from_eh_region_ranges (void) | |
712 { | |
713 rtx insns = get_insns (); | |
714 int i, n = cfun->eh->last_region_number; | |
715 | |
716 /* Most of the work is already done at the tree level. All we need to | |
717 do is collect the rtl labels that correspond to the tree labels | |
718 we allocated earlier. */ | |
720 for (i = 1; i <= n; ++i) | |
721 { | |
722 struct eh_region *region; | |
723 | |
724 region = VEC_index (eh_region, cfun->eh->region_array, i); | |
725 if (region && region->tree_label) | |
726 region->label = DECL_RTL_IF_SET (region->tree_label); | |
727 } | |
728 | |
729 remove_unreachable_regions (insns); | |
730 } | |
731 | |
732 static void | |
733 add_ehl_entry (rtx label, struct eh_region *region) | |
734 { | |
735 struct ehl_map_entry **slot, *entry; | |
736 | |
737 LABEL_PRESERVE_P (label) = 1; | |
738 | |
739 entry = GGC_NEW (struct ehl_map_entry); | |
740 entry->label = label; | |
741 entry->region = region; | |
742 | |
743 slot = (struct ehl_map_entry **) | |
744 htab_find_slot (crtl->eh.exception_handler_label_map, entry, INSERT); | |
745 | |
746 /* Before landing pad creation, each exception handler has its own | |
747 label. After landing pad creation, the exception handlers may | |
748 share landing pads. This is ok, since maybe_remove_eh_handler | |
749 only requires the 1-1 mapping before landing pad creation. */ | |
750 gcc_assert (!*slot || crtl->eh.built_landing_pads); | |
751 | |
752 *slot = entry; | |
753 } | |
754 | |
755 void | |
756 find_exception_handler_labels (void) | |
757 { | |
758 int i; | |
759 | |
760 if (crtl->eh.exception_handler_label_map) | |
761 htab_empty (crtl->eh.exception_handler_label_map); | |
762 else | |
763 { | |
764 /* ??? The expansion factor here (3/2) must be greater than the htab | |
765 occupancy factor (4/3) to avoid unnecessary resizing. */ | |
766 crtl->eh.exception_handler_label_map | |
767 = htab_create_ggc (cfun->eh->last_region_number * 3 / 2, | |
768 ehl_hash, ehl_eq, NULL); | |
769 } | |
770 | |
771 if (cfun->eh->region_tree == NULL) | |
772 return; | |
773 | |
774 for (i = cfun->eh->last_region_number; i > 0; --i) | |
775 { | |
776 struct eh_region *region; | |
777 rtx lab; | |
778 | |
779 region = VEC_index (eh_region, cfun->eh->region_array, i); | |
780 if (! region || region->region_number != i) | |
781 continue; | |
782 if (crtl->eh.built_landing_pads) | |
783 lab = region->landing_pad; | |
784 else | |
785 lab = region->label; | |
786 | |
787 if (lab) | |
788 add_ehl_entry (lab, region); | |
789 } | |
790 | |
791 /* For sjlj exceptions, the return label must remain live until | |
792 after landing pad generation. */ | |
793 if (USING_SJLJ_EXCEPTIONS && ! crtl->eh.built_landing_pads) | |
794 add_ehl_entry (return_label, NULL); | |
795 } | |
796 | |
797 /* Returns true if the current function has exception handling regions. */ | 503 /* Returns true if the current function has exception handling regions. */ |
798 | 504 |
799 bool | 505 bool |
800 current_function_has_exception_handlers (void) | 506 current_function_has_exception_handlers (void) |
801 { | 507 { |
802 int i; | 508 return cfun->eh->region_tree != NULL; |
803 | |
804 for (i = cfun->eh->last_region_number; i > 0; --i) | |
805 { | |
806 struct eh_region *region; | |
807 | |
808 region = VEC_index (eh_region, cfun->eh->region_array, i); | |
809 if (region | |
810 && region->region_number == i | |
811 && region->type != ERT_THROW) | |
812 return true; | |
813 } | |
814 | |
815 return false; | |
816 } | 509 } |
817 | 510 |
818 /* A subroutine of duplicate_eh_regions. Search the region tree under O | 511 /* A subroutine of duplicate_eh_regions. Copy the eh_region tree at OLD. |
819 for the minimum and maximum region numbers. Update *MIN and *MAX. */ | 512 Root it at OUTER, and apply LP_OFFSET to the lp numbers. */ |
513 | |
514 struct duplicate_eh_regions_data | |
515 { | |
516 duplicate_eh_regions_map label_map; | |
517 void *label_map_data; | |
518 struct pointer_map_t *eh_map; | |
519 }; | |
820 | 520 |
821 static void | 521 static void |
822 duplicate_eh_regions_0 (eh_region o, int *min, int *max) | 522 duplicate_eh_regions_1 (struct duplicate_eh_regions_data *data, |
823 { | 523 eh_region old_r, eh_region outer) |
824 if (o->region_number < *min) | 524 { |
825 *min = o->region_number; | 525 eh_landing_pad old_lp, new_lp; |
826 if (o->region_number > *max) | 526 eh_region new_r; |
827 *max = o->region_number; | 527 void **slot; |
828 | 528 |
829 if (o->inner) | 529 new_r = gen_eh_region (old_r->type, outer); |
830 { | 530 slot = pointer_map_insert (data->eh_map, (void *)old_r); |
831 o = o->inner; | 531 gcc_assert (*slot == NULL); |
832 duplicate_eh_regions_0 (o, min, max); | 532 *slot = (void *)new_r; |
833 while (o->next_peer) | 533 |
834 { | 534 switch (old_r->type) |
835 o = o->next_peer; | 535 { |
836 duplicate_eh_regions_0 (o, min, max); | 536 case ERT_CLEANUP: |
837 } | 537 break; |
838 } | 538 |
839 } | 539 case ERT_TRY: |
840 | 540 { |
841 /* A subroutine of duplicate_eh_regions. Copy the region tree under OLD. | 541 eh_catch oc, nc; |
842 Root it at OUTER, and apply EH_OFFSET to the region number. Don't worry | 542 for (oc = old_r->u.eh_try.first_catch; oc ; oc = oc->next_catch) |
843 about the other internal pointers just yet, just the tree-like pointers. */ | 543 { |
844 | 544 /* We should be doing all our region duplication before and |
845 static eh_region | 545 during inlining, which is before filter lists are created. */ |
846 duplicate_eh_regions_1 (eh_region old, eh_region outer, int eh_offset) | 546 gcc_assert (oc->filter_list == NULL); |
847 { | 547 nc = gen_eh_region_catch (new_r, oc->type_list); |
848 eh_region ret, n; | 548 nc->label = data->label_map (oc->label, data->label_map_data); |
849 | 549 } |
850 ret = n = GGC_NEW (struct eh_region); | 550 } |
851 | 551 break; |
852 *n = *old; | 552 |
853 n->outer = outer; | 553 case ERT_ALLOWED_EXCEPTIONS: |
854 n->next_peer = NULL; | 554 new_r->u.allowed.type_list = old_r->u.allowed.type_list; |
855 gcc_assert (!old->aka); | 555 if (old_r->u.allowed.label) |
856 | 556 new_r->u.allowed.label |
857 n->region_number += eh_offset; | 557 = data->label_map (old_r->u.allowed.label, data->label_map_data); |
858 VEC_replace (eh_region, cfun->eh->region_array, n->region_number, n); | 558 else |
859 | 559 new_r->u.allowed.label = NULL_TREE; |
860 if (old->inner) | 560 break; |
861 { | 561 |
862 old = old->inner; | 562 case ERT_MUST_NOT_THROW: |
863 n = n->inner = duplicate_eh_regions_1 (old, ret, eh_offset); | 563 new_r->u.must_not_throw = old_r->u.must_not_throw; |
864 while (old->next_peer) | 564 break; |
865 { | 565 } |
866 old = old->next_peer; | 566 |
867 n = n->next_peer = duplicate_eh_regions_1 (old, ret, eh_offset); | 567 for (old_lp = old_r->landing_pads; old_lp ; old_lp = old_lp->next_lp) |
868 } | 568 { |
869 } | 569 /* Don't bother copying unused landing pads. */ |
870 | 570 if (old_lp->post_landing_pad == NULL) |
871 return ret; | 571 continue; |
872 } | 572 |
873 | 573 new_lp = gen_eh_landing_pad (new_r); |
874 /* Duplicate the EH regions of IFUN, rooted at COPY_REGION, into current | 574 slot = pointer_map_insert (data->eh_map, (void *)old_lp); |
875 function and root the tree below OUTER_REGION. Remap labels using MAP | 575 gcc_assert (*slot == NULL); |
876 callback. The special case of COPY_REGION of 0 means all regions. */ | 576 *slot = (void *)new_lp; |
877 | 577 |
878 int | 578 new_lp->post_landing_pad |
879 duplicate_eh_regions (struct function *ifun, duplicate_eh_regions_map map, | 579 = data->label_map (old_lp->post_landing_pad, data->label_map_data); |
880 void *data, int copy_region, int outer_region) | 580 EH_LANDING_PAD_NR (new_lp->post_landing_pad) = new_lp->index; |
881 { | 581 } |
882 eh_region cur, prev_try, outer, *splice; | 582 |
883 int i, min_region, max_region, eh_offset, cfun_last_region_number; | 583 /* Make sure to preserve the original use of __cxa_end_cleanup. */ |
884 int num_regions; | 584 new_r->use_cxa_end_cleanup = old_r->use_cxa_end_cleanup; |
885 | 585 |
886 if (!ifun->eh->region_tree) | 586 for (old_r = old_r->inner; old_r ; old_r = old_r->next_peer) |
887 return 0; | 587 duplicate_eh_regions_1 (data, old_r, new_r); |
888 | 588 } |
889 /* Find the range of region numbers to be copied. The interface we | 589 |
890 provide here mandates a single offset to find new number from old, | 590 /* Duplicate the EH regions from IFUN rooted at COPY_REGION into |
891 which means we must look at the numbers present, instead of the | 591 the current function and root the tree below OUTER_REGION. |
892 count or something else. */ | 592 The special case of COPY_REGION of NULL means all regions. |
893 if (copy_region > 0) | 593 Remap labels using MAP/MAP_DATA callback. Return a pointer map |
894 { | 594 that allows the caller to remap uses of both EH regions and |
895 min_region = INT_MAX; | 595 EH landing pads. */ |
896 max_region = 0; | 596 |
897 | 597 struct pointer_map_t * |
898 cur = VEC_index (eh_region, ifun->eh->region_array, copy_region); | 598 duplicate_eh_regions (struct function *ifun, |
899 duplicate_eh_regions_0 (cur, &min_region, &max_region); | 599 eh_region copy_region, int outer_lp, |
900 } | 600 duplicate_eh_regions_map map, void *map_data) |
601 { | |
602 struct duplicate_eh_regions_data data; | |
603 eh_region outer_region; | |
604 | |
605 #ifdef ENABLE_CHECKING | |
606 verify_eh_tree (ifun); | |
607 #endif | |
608 | |
609 data.label_map = map; | |
610 data.label_map_data = map_data; | |
611 data.eh_map = pointer_map_create (); | |
612 | |
613 outer_region = get_eh_region_from_lp_number (outer_lp); | |
614 | |
615 /* Copy all the regions in the subtree. */ | |
616 if (copy_region) | |
617 duplicate_eh_regions_1 (&data, copy_region, outer_region); | |
901 else | 618 else |
902 min_region = 1, max_region = ifun->eh->last_region_number; | 619 { |
903 num_regions = max_region - min_region + 1; | 620 eh_region r; |
904 cfun_last_region_number = cfun->eh->last_region_number; | 621 for (r = ifun->eh->region_tree; r ; r = r->next_peer) |
905 eh_offset = cfun_last_region_number + 1 - min_region; | 622 duplicate_eh_regions_1 (&data, r, outer_region); |
906 | 623 } |
907 /* If we've not yet created a region array, do so now. */ | 624 |
908 VEC_safe_grow (eh_region, gc, cfun->eh->region_array, | 625 #ifdef ENABLE_CHECKING |
909 cfun_last_region_number + 1 + num_regions); | 626 verify_eh_tree (cfun); |
910 cfun->eh->last_region_number = max_region + eh_offset; | 627 #endif |
911 | 628 |
912 /* We may have just allocated the array for the first time. | 629 return data.eh_map; |
913 Make sure that element zero is null. */ | 630 } |
914 VEC_replace (eh_region, cfun->eh->region_array, 0, 0); | 631 |
915 | 632 /* Return the region that is outer to both REGION_A and REGION_B in IFUN. */ |
916 /* Zero all entries in the range allocated. */ | 633 |
917 memset (VEC_address (eh_region, cfun->eh->region_array) | 634 eh_region |
918 + cfun_last_region_number + 1, 0, num_regions * sizeof (eh_region)); | 635 eh_region_outermost (struct function *ifun, eh_region region_a, |
919 | 636 eh_region region_b) |
920 /* Locate the spot at which to insert the new tree. */ | 637 { |
921 if (outer_region > 0) | 638 sbitmap b_outer; |
922 { | 639 |
923 outer = VEC_index (eh_region, cfun->eh->region_array, outer_region); | 640 gcc_assert (ifun->eh->region_array); |
924 splice = &outer->inner; | |
925 } | |
926 else | |
927 { | |
928 outer = NULL; | |
929 splice = &cfun->eh->region_tree; | |
930 } | |
931 while (*splice) | |
932 splice = &(*splice)->next_peer; | |
933 | |
934 /* Copy all the regions in the subtree. */ | |
935 if (copy_region > 0) | |
936 { | |
937 cur = VEC_index (eh_region, ifun->eh->region_array, copy_region); | |
938 *splice = duplicate_eh_regions_1 (cur, outer, eh_offset); | |
939 } | |
940 else | |
941 { | |
942 eh_region n; | |
943 | |
944 cur = ifun->eh->region_tree; | |
945 *splice = n = duplicate_eh_regions_1 (cur, outer, eh_offset); | |
946 while (cur->next_peer) | |
947 { | |
948 cur = cur->next_peer; | |
949 n = n->next_peer = duplicate_eh_regions_1 (cur, outer, eh_offset); | |
950 } | |
951 } | |
952 | |
953 /* Remap all the labels in the new regions. */ | |
954 for (i = cfun_last_region_number + 1; | |
955 VEC_iterate (eh_region, cfun->eh->region_array, i, cur); ++i) | |
956 if (cur && cur->tree_label) | |
957 cur->tree_label = map (cur->tree_label, data); | |
958 | |
959 /* Search for the containing ERT_TRY region to fix up | |
960 the prev_try short-cuts for ERT_CLEANUP regions. */ | |
961 prev_try = NULL; | |
962 if (outer_region > 0) | |
963 for (prev_try = VEC_index (eh_region, cfun->eh->region_array, outer_region); | |
964 prev_try && prev_try->type != ERT_TRY; | |
965 prev_try = prev_try->outer) | |
966 if (prev_try->type == ERT_MUST_NOT_THROW | |
967 || (prev_try->type == ERT_ALLOWED_EXCEPTIONS | |
968 && !prev_try->u.allowed.type_list)) | |
969 { | |
970 prev_try = NULL; | |
971 break; | |
972 } | |
973 | |
974 /* Remap all of the internal catch and cleanup linkages. Since we | |
975 duplicate entire subtrees, all of the referenced regions will have | |
976 been copied too. And since we renumbered them as a block, a simple | |
977 bit of arithmetic finds us the index for the replacement region. */ | |
978 for (i = cfun_last_region_number + 1; | |
979 VEC_iterate (eh_region, cfun->eh->region_array, i, cur); ++i) | |
980 { | |
981 if (cur == NULL) | |
982 continue; | |
983 | |
984 #define REMAP(REG) \ | |
985 (REG) = VEC_index (eh_region, cfun->eh->region_array, \ | |
986 (REG)->region_number + eh_offset) | |
987 | |
988 switch (cur->type) | |
989 { | |
990 case ERT_TRY: | |
991 if (cur->u.eh_try.eh_catch) | |
992 REMAP (cur->u.eh_try.eh_catch); | |
993 if (cur->u.eh_try.last_catch) | |
994 REMAP (cur->u.eh_try.last_catch); | |
995 break; | |
996 | |
997 case ERT_CATCH: | |
998 if (cur->u.eh_catch.next_catch) | |
999 REMAP (cur->u.eh_catch.next_catch); | |
1000 if (cur->u.eh_catch.prev_catch) | |
1001 REMAP (cur->u.eh_catch.prev_catch); | |
1002 break; | |
1003 | |
1004 case ERT_CLEANUP: | |
1005 if (cur->u.cleanup.prev_try) | |
1006 REMAP (cur->u.cleanup.prev_try); | |
1007 else | |
1008 cur->u.cleanup.prev_try = prev_try; | |
1009 break; | |
1010 | |
1011 default: | |
1012 break; | |
1013 } | |
1014 | |
1015 #undef REMAP | |
1016 } | |
1017 | |
1018 return eh_offset; | |
1019 } | |
1020 | |
1021 /* Return true if REGION_A is outer to REGION_B in IFUN. */ | |
1022 | |
1023 bool | |
1024 eh_region_outer_p (struct function *ifun, int region_a, int region_b) | |
1025 { | |
1026 struct eh_region *rp_a, *rp_b; | |
1027 | |
1028 gcc_assert (ifun->eh->last_region_number > 0); | |
1029 gcc_assert (ifun->eh->region_tree); | 641 gcc_assert (ifun->eh->region_tree); |
1030 | 642 |
1031 rp_a = VEC_index (eh_region, ifun->eh->region_array, region_a); | 643 b_outer = sbitmap_alloc (VEC_length (eh_region, ifun->eh->region_array)); |
1032 rp_b = VEC_index (eh_region, ifun->eh->region_array, region_b); | 644 sbitmap_zero (b_outer); |
1033 gcc_assert (rp_a != NULL); | |
1034 gcc_assert (rp_b != NULL); | |
1035 | 645 |
1036 do | 646 do |
1037 { | 647 { |
1038 if (rp_a == rp_b) | 648 SET_BIT (b_outer, region_b->index); |
1039 return true; | 649 region_b = region_b->outer; |
1040 rp_b = rp_b->outer; | 650 } |
1041 } | 651 while (region_b); |
1042 while (rp_b); | |
1043 | |
1044 return false; | |
1045 } | |
1046 | |
1047 /* Return region number of region that is outer to both if REGION_A and | |
1048 REGION_B in IFUN. */ | |
1049 | |
1050 int | |
1051 eh_region_outermost (struct function *ifun, int region_a, int region_b) | |
1052 { | |
1053 struct eh_region *rp_a, *rp_b; | |
1054 sbitmap b_outer; | |
1055 | |
1056 gcc_assert (ifun->eh->last_region_number > 0); | |
1057 gcc_assert (ifun->eh->region_tree); | |
1058 | |
1059 rp_a = VEC_index (eh_region, ifun->eh->region_array, region_a); | |
1060 rp_b = VEC_index (eh_region, ifun->eh->region_array, region_b); | |
1061 gcc_assert (rp_a != NULL); | |
1062 gcc_assert (rp_b != NULL); | |
1063 | |
1064 b_outer = sbitmap_alloc (ifun->eh->last_region_number + 1); | |
1065 sbitmap_zero (b_outer); | |
1066 | 652 |
1067 do | 653 do |
1068 { | 654 { |
1069 SET_BIT (b_outer, rp_b->region_number); | 655 if (TEST_BIT (b_outer, region_a->index)) |
1070 rp_b = rp_b->outer; | 656 break; |
1071 } | 657 region_a = region_a->outer; |
1072 while (rp_b); | 658 } |
1073 | 659 while (region_a); |
1074 do | |
1075 { | |
1076 if (TEST_BIT (b_outer, rp_a->region_number)) | |
1077 { | |
1078 sbitmap_free (b_outer); | |
1079 return rp_a->region_number; | |
1080 } | |
1081 rp_a = rp_a->outer; | |
1082 } | |
1083 while (rp_a); | |
1084 | 660 |
1085 sbitmap_free (b_outer); | 661 sbitmap_free (b_outer); |
1086 return -1; | 662 return region_a; |
1087 } | 663 } |
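Both versions of eh_region_outermost use the same two-pass ancestor trick: set a bit for B and every region enclosing it (the sbitmap in the real code), then walk A's outer chain and stop at the first marked region, which is the innermost region enclosing both. A self-contained sketch, with regions reduced to small integer indices and the tree given as a hypothetical parent array (`parent[i] < 0` meaning "no outer region"):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_REGIONS 64

/* Return the innermost region enclosing both A and B (possibly A or B
   itself), or -1 if they share no enclosing region.  Mirrors the
   bitmap-based walk in eh_region_outermost.  */
static int outermost_common (const int parent[], int a, int b)
{
  bool b_outer[MAX_REGIONS];
  memset (b_outer, 0, sizeof b_outer);

  /* Pass 1: mark B and all of its ancestors.  */
  for (int r = b; r >= 0; r = parent[r])
    b_outer[r] = true;

  /* Pass 2: the first marked region on A's outer chain wins.  */
  for (int r = a; r >= 0; r = parent[r])
    if (b_outer[r])
      return r;

  return -1;
}
```

This is the classic lowest-common-ancestor-by-marking approach; it costs O(depth) time and one bit per region, which is why the real code bothers with an sbitmap rather than repeated pairwise walks.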
1088 | 664 |
1089 static int | 665 static int |
1090 t2r_eq (const void *pentry, const void *pdata) | 666 t2r_eq (const void *pentry, const void *pdata) |
1091 { | 667 { |
1100 { | 676 { |
1101 const_tree const entry = (const_tree) pentry; | 677 const_tree const entry = (const_tree) pentry; |
1102 return TREE_HASH (TREE_PURPOSE (entry)); | 678 return TREE_HASH (TREE_PURPOSE (entry)); |
1103 } | 679 } |
1104 | 680 |
1105 static void | 681 void |
1106 add_type_for_runtime (tree type) | 682 add_type_for_runtime (tree type) |
1107 { | 683 { |
1108 tree *slot; | 684 tree *slot; |
685 | |
686 /* If TYPE is NOP_EXPR, it means that it already is a runtime type. */ | |
687 if (TREE_CODE (type) == NOP_EXPR) | |
688 return; | |
1109 | 689 |
1110 slot = (tree *) htab_find_slot_with_hash (type_to_runtime_map, type, | 690 slot = (tree *) htab_find_slot_with_hash (type_to_runtime_map, type, |
1111 TREE_HASH (type), INSERT); | 691 TREE_HASH (type), INSERT); |
1112 if (*slot == NULL) | 692 if (*slot == NULL) |
1113 { | 693 { |
1114 tree runtime = (*lang_eh_runtime_type) (type); | 694 tree runtime = lang_hooks.eh_runtime_type (type); |
1115 *slot = tree_cons (type, runtime, NULL_TREE); | 695 *slot = tree_cons (type, runtime, NULL_TREE); |
1116 } | 696 } |
1117 } | 697 } |
1118 | 698 |
1119 static tree | 699 tree |
1120 lookup_type_for_runtime (tree type) | 700 lookup_type_for_runtime (tree type) |
1121 { | 701 { |
1122 tree *slot; | 702 tree *slot; |
703 | |
704 /* If TYPE is NOP_EXPR, it means that it already is a runtime type. */ | |
705 if (TREE_CODE (type) == NOP_EXPR) | |
706 return type; | |
1123 | 707 |
1124 slot = (tree *) htab_find_slot_with_hash (type_to_runtime_map, type, | 708 slot = (tree *) htab_find_slot_with_hash (type_to_runtime_map, type, |
1125 TREE_HASH (type), NO_INSERT); | 709 TREE_HASH (type), NO_INSERT); |
1126 | 710 |
1127 /* We should have always inserted the data earlier. */ | 711 /* We should have always inserted the data earlier. */ |
1129 } | 713 } |
1130 | 714 |
1131 | 715 |
1132 /* Represent an entry in @TTypes for either catch actions | 716 /* Represent an entry in @TTypes for either catch actions |
1133 or exception filter actions. */ | 717 or exception filter actions. */ |
1134 struct ttypes_filter GTY(()) | 718 struct GTY(()) ttypes_filter { |
1135 { | |
1136 tree t; | 719 tree t; |
1137 int filter; | 720 int filter; |
1138 }; | 721 }; |
1139 | 722 |
1140 /* Compare ENTRY (a ttypes_filter entry in the hash table) with DATA | 723 /* Compare ENTRY (a ttypes_filter entry in the hash table) with DATA |
1183 for (list = entry->t; list ; list = TREE_CHAIN (list)) | 766 for (list = entry->t; list ; list = TREE_CHAIN (list)) |
1184 h = (h << 5) + (h >> 27) + TREE_HASH (TREE_VALUE (list)); | 767 h = (h << 5) + (h >> 27) + TREE_HASH (TREE_VALUE (list)); |
1185 return h; | 768 return h; |
1186 } | 769 } |
1187 | 770 |
1188 /* Add TYPE (which may be NULL) to crtl->eh.ttype_data, using TYPES_HASH | 771 /* Add TYPE (which may be NULL) to cfun->eh->ttype_data, using TYPES_HASH |
1189 to speed up the search. Return the filter value to be used. */ | 772 to speed up the search. Return the filter value to be used. */ |
1190 | 773 |
1191 static int | 774 static int |
1192 add_ttypes_entry (htab_t ttypes_hash, tree type) | 775 add_ttypes_entry (htab_t ttypes_hash, tree type) |
1193 { | 776 { |
1200 { | 783 { |
1201 /* Filter value is a 1 based table index. */ | 784 /* Filter value is a 1 based table index. */ |
1202 | 785 |
1203 n = XNEW (struct ttypes_filter); | 786 n = XNEW (struct ttypes_filter); |
1204 n->t = type; | 787 n->t = type; |
1205 n->filter = VEC_length (tree, crtl->eh.ttype_data) + 1; | 788 n->filter = VEC_length (tree, cfun->eh->ttype_data) + 1; |
1206 *slot = n; | 789 *slot = n; |
1207 | 790 |
1208 VEC_safe_push (tree, gc, crtl->eh.ttype_data, type); | 791 VEC_safe_push (tree, gc, cfun->eh->ttype_data, type); |
1209 } | 792 } |
1210 | 793 |
1211 return n->filter; | 794 return n->filter; |
1212 } | 795 } |
1213 | 796 |
1214 /* Add LIST to crtl->eh.ehspec_data, using EHSPEC_HASH and TYPES_HASH | 797 /* Add LIST to cfun->eh->ehspec_data, using EHSPEC_HASH and TYPES_HASH |
1215 to speed up the search. Return the filter value to be used. */ | 798 to speed up the search. Return the filter value to be used. */ |
1216 | 799 |
1217 static int | 800 static int |
1218 add_ehspec_entry (htab_t ehspec_hash, htab_t ttypes_hash, tree list) | 801 add_ehspec_entry (htab_t ehspec_hash, htab_t ttypes_hash, tree list) |
1219 { | 802 { |
1224 slot = (struct ttypes_filter **) | 807 slot = (struct ttypes_filter **) |
1225 htab_find_slot (ehspec_hash, &dummy, INSERT); | 808 htab_find_slot (ehspec_hash, &dummy, INSERT); |
1226 | 809 |
1227 if ((n = *slot) == NULL) | 810 if ((n = *slot) == NULL) |
1228 { | 811 { |
812 int len; | |
813 | |
814 if (targetm.arm_eabi_unwinder) | |
815 len = VEC_length (tree, cfun->eh->ehspec_data.arm_eabi); | |
816 else | |
817 len = VEC_length (uchar, cfun->eh->ehspec_data.other); | |
818 | |
1229 /* Filter value is a -1 based byte index into a uleb128 buffer. */ | 819 /* Filter value is a -1 based byte index into a uleb128 buffer. */ |
1230 | 820 |
1231 n = XNEW (struct ttypes_filter); | 821 n = XNEW (struct ttypes_filter); |
1232 n->t = list; | 822 n->t = list; |
1233 n->filter = -(VARRAY_ACTIVE_SIZE (crtl->eh.ehspec_data) + 1); | 823 n->filter = -(len + 1); |
1234 *slot = n; | 824 *slot = n; |
1235 | 825 |
1236 /* Generate a 0 terminated list of filter values. */ | 826 /* Generate a 0 terminated list of filter values. */ |
1237 for (; list ; list = TREE_CHAIN (list)) | 827 for (; list ; list = TREE_CHAIN (list)) |
1238 { | 828 { |
1239 if (targetm.arm_eabi_unwinder) | 829 if (targetm.arm_eabi_unwinder) |
1240 VARRAY_PUSH_TREE (crtl->eh.ehspec_data, TREE_VALUE (list)); | 830 VEC_safe_push (tree, gc, cfun->eh->ehspec_data.arm_eabi, |
831 TREE_VALUE (list)); | |
1241 else | 832 else |
1242 { | 833 { |
1243 /* Look up each type in the list and encode its filter | 834 /* Look up each type in the list and encode its filter |
1244 value as a uleb128. */ | 835 value as a uleb128. */ |
1245 push_uleb128 (&crtl->eh.ehspec_data, | 836 push_uleb128 (&cfun->eh->ehspec_data.other, |
1246 add_ttypes_entry (ttypes_hash, TREE_VALUE (list))); | 837 add_ttypes_entry (ttypes_hash, TREE_VALUE (list))); |
1247 } | 838 } |
1248 } | 839 } |
1249 if (targetm.arm_eabi_unwinder) | 840 if (targetm.arm_eabi_unwinder) |
1250 VARRAY_PUSH_TREE (crtl->eh.ehspec_data, NULL_TREE); | 841 VEC_safe_push (tree, gc, cfun->eh->ehspec_data.arm_eabi, NULL_TREE); |
1251 else | 842 else |
1252 VARRAY_PUSH_UCHAR (crtl->eh.ehspec_data, 0); | 843 VEC_safe_push (uchar, gc, cfun->eh->ehspec_data.other, 0); |
1253 } | 844 } |
1254 | 845 |
1255 return n->filter; | 846 return n->filter; |
1256 } | 847 } |
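The non-EABI branch above encodes each filter value with push_uleb128 before appending it to `ehspec_data`. The ULEB128 format emits seven payload bits per byte, least-significant group first, with the high bit set on every byte except the last. A simplified sketch of that encoder, writing into a caller-supplied buffer rather than appending to the real varray/VEC:

```c
#include <assert.h>

/* Encode VALUE as ULEB128 into BUF; return the number of bytes
   written.  Simplified stand-in for GCC's push_uleb128, which appends
   the same bytes to cfun->eh->ehspec_data.  */
static int encode_uleb128 (unsigned char *buf, unsigned int value)
{
  int n = 0;
  do
    {
      unsigned char byte = value & 0x7f;   /* Low 7 bits.  */
      value >>= 7;
      if (value)
        byte |= 0x80;                      /* More bytes follow.  */
      buf[n++] = byte;
    }
  while (value);
  return n;
}
```

Values below 128 cost a single byte, which is the common case for filter indices; 624485, the standard DWARF worked example, encodes as 0xE5 0x8E 0x26.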
1257 | 848 |
1258 /* Generate the action filter values to be used for CATCH and | 849 /* Generate the action filter values to be used for CATCH and |
1259 ALLOWED_EXCEPTIONS regions. When using dwarf2 exception regions, | 850 ALLOWED_EXCEPTIONS regions. When using dwarf2 exception regions, |
1260 we use lots of landing pads, and so every type or list can share | 851 we use lots of landing pads, and so every type or list can share |
1261 the same filter value, which saves table space. */ | 852 the same filter value, which saves table space. */ |
1262 | 853 |
1263 static void | 854 void |
1264 assign_filter_values (void) | 855 assign_filter_values (void) |
1265 { | 856 { |
1266 int i; | 857 int i; |
1267 htab_t ttypes, ehspec; | 858 htab_t ttypes, ehspec; |
1268 | 859 eh_region r; |
1269 crtl->eh.ttype_data = VEC_alloc (tree, gc, 16); | 860 eh_catch c; |
861 | |
862 cfun->eh->ttype_data = VEC_alloc (tree, gc, 16); | |
1270 if (targetm.arm_eabi_unwinder) | 863 if (targetm.arm_eabi_unwinder) |
1271 VARRAY_TREE_INIT (crtl->eh.ehspec_data, 64, "ehspec_data"); | 864 cfun->eh->ehspec_data.arm_eabi = VEC_alloc (tree, gc, 64); |
1272 else | 865 else |
1273 VARRAY_UCHAR_INIT (crtl->eh.ehspec_data, 64, "ehspec_data"); | 866 cfun->eh->ehspec_data.other = VEC_alloc (uchar, gc, 64); |
1274 | 867 |
1275 ttypes = htab_create (31, ttypes_filter_hash, ttypes_filter_eq, free); | 868 ttypes = htab_create (31, ttypes_filter_hash, ttypes_filter_eq, free); |
1276 ehspec = htab_create (31, ehspec_filter_hash, ehspec_filter_eq, free); | 869 ehspec = htab_create (31, ehspec_filter_hash, ehspec_filter_eq, free); |
1277 | 870 |
1278 for (i = cfun->eh->last_region_number; i > 0; --i) | 871 for (i = 1; VEC_iterate (eh_region, cfun->eh->region_array, i, r); ++i) |
1279 { | 872 { |
1280 struct eh_region *r; | 873 if (r == NULL) |
1281 | |
1282 r = VEC_index (eh_region, cfun->eh->region_array, i); | |
1283 | |
1284 /* Mind we don't process a region more than once. */ | |
1285 if (!r || r->region_number != i) | |
1286 continue; | 874 continue; |
1287 | 875 |
1288 switch (r->type) | 876 switch (r->type) |
1289 { | 877 { |
1290 case ERT_CATCH: | 878 case ERT_TRY: |
1291 /* Whatever type_list is (NULL or true list), we build a list | 879 for (c = r->u.eh_try.first_catch; c ; c = c->next_catch) |
1292 of filters for the region. */ | |
1293 r->u.eh_catch.filter_list = NULL_TREE; | |
1294 | |
1295 if (r->u.eh_catch.type_list != NULL) | |
1296 { | 880 { |
1297 /* Get a filter value for each of the types caught and store | 881 /* Whatever type_list is (NULL or true list), we build a list |
1298 them in the region's dedicated list. */ | 882 of filters for the region. */ |
1299 tree tp_node = r->u.eh_catch.type_list; | 883 c->filter_list = NULL_TREE; |
1300 | 884 |
1301 for (;tp_node; tp_node = TREE_CHAIN (tp_node)) | 885 if (c->type_list != NULL) |
1302 { | 886 { |
1303 int flt = add_ttypes_entry (ttypes, TREE_VALUE (tp_node)); | 887 /* Get a filter value for each of the types caught and store |
888 them in the region's dedicated list. */ | |
889 tree tp_node = c->type_list; | |
890 | |
891 for ( ; tp_node; tp_node = TREE_CHAIN (tp_node)) | |
892 { | |
893 int flt = add_ttypes_entry (ttypes, TREE_VALUE (tp_node)); | |
894 tree flt_node = build_int_cst (NULL_TREE, flt); | |
895 | |
896 c->filter_list | |
897 = tree_cons (NULL_TREE, flt_node, c->filter_list); | |
898 } | |
899 } | |
900 else | |
901 { | |
902 /* Get a filter value for the NULL list also since it | |
903 will need an action record anyway. */ | |
904 int flt = add_ttypes_entry (ttypes, NULL); | |
1304 tree flt_node = build_int_cst (NULL_TREE, flt); | 905 tree flt_node = build_int_cst (NULL_TREE, flt); |
1305 | 906 |
1306 r->u.eh_catch.filter_list | 907 c->filter_list |
1307 = tree_cons (NULL_TREE, flt_node, r->u.eh_catch.filter_list); | 908 = tree_cons (NULL_TREE, flt_node, NULL); |
1308 } | 909 } |
1309 } | 910 } |
1310 else | |
1311 { | |
1312 /* Get a filter value for the NULL list also since it will need | |
1313 an action record anyway. */ | |
1314 int flt = add_ttypes_entry (ttypes, NULL); | |
1315 tree flt_node = build_int_cst (NULL_TREE, flt); | |
1316 | |
1317 r->u.eh_catch.filter_list | |
1318 = tree_cons (NULL_TREE, flt_node, r->u.eh_catch.filter_list); | |
1319 } | |
1320 | |
1321 break; | 911 break; |
1322 | 912 |
1323 case ERT_ALLOWED_EXCEPTIONS: | 913 case ERT_ALLOWED_EXCEPTIONS: |
1324 r->u.allowed.filter | 914 r->u.allowed.filter |
1325 = add_ehspec_entry (ehspec, ttypes, r->u.allowed.type_list); | 915 = add_ehspec_entry (ehspec, ttypes, r->u.allowed.type_list); |
1359 bb = create_basic_block (seq, last, BLOCK_FOR_INSN (insn)->prev_bb); | 949 bb = create_basic_block (seq, last, BLOCK_FOR_INSN (insn)->prev_bb); |
1360 update_bb_for_insn (bb); | 950 update_bb_for_insn (bb); |
1361 bb->flags |= BB_SUPERBLOCK; | 951 bb->flags |= BB_SUPERBLOCK; |
1362 return bb; | 952 return bb; |
1363 } | 953 } |
1364 | |
1365 /* Generate the code to actually handle exceptions, which will follow the | |
1366 landing pads. */ | |
1367 | |
1368 static void | |
1369 build_post_landing_pads (void) | |
1370 { | |
1371 int i; | |
1372 | |
1373 for (i = cfun->eh->last_region_number; i > 0; --i) | |
1374 { | |
1375 struct eh_region *region; | |
1376 rtx seq; | |
1377 | |
1378 region = VEC_index (eh_region, cfun->eh->region_array, i); | |
1379 /* Mind we don't process a region more than once. */ | |
1380 if (!region || region->region_number != i) | |
1381 continue; | |
1382 | |
1383 switch (region->type) | |
1384 { | |
1385 case ERT_TRY: | |
1386 /* ??? Collect the set of all non-overlapping catch handlers | |
1387 all the way up the chain until blocked by a cleanup. */ | |
1388 /* ??? Outer try regions can share landing pads with inner | |
1389 try regions if the types are completely non-overlapping, | |
1390 and there are no intervening cleanups. */ | |
1391 | |
1392 region->post_landing_pad = gen_label_rtx (); | |
1393 | |
1394 start_sequence (); | |
1395 | |
1396 emit_label (region->post_landing_pad); | |
1397 | |
1398 /* ??? It is mighty inconvenient to call back into the | |
1399 switch statement generation code in expand_end_case. | |
1400 Rapid prototyping sez a sequence of ifs. */ | |
1401 { | |
1402 struct eh_region *c; | |
1403 for (c = region->u.eh_try.eh_catch; c ; c = c->u.eh_catch.next_catch) | |
1404 { | |
1405 if (c->u.eh_catch.type_list == NULL) | |
1406 emit_jump (c->label); | |
1407 else | |
1408 { | |
1409 /* We need one cmp/jump per type caught. Each type | |
1410 list entry has a matching entry in the filter list | |
1411 (see assign_filter_values). */ | |
1412 tree tp_node = c->u.eh_catch.type_list; | |
1413 tree flt_node = c->u.eh_catch.filter_list; | |
1414 | |
1415 for (; tp_node; ) | |
1416 { | |
1417 emit_cmp_and_jump_insns | |
1418 (crtl->eh.filter, | |
1419 GEN_INT (tree_low_cst (TREE_VALUE (flt_node), 0)), | |
1420 EQ, NULL_RTX, | |
1421 targetm.eh_return_filter_mode (), 0, c->label); | |
1422 | |
1423 tp_node = TREE_CHAIN (tp_node); | |
1424 flt_node = TREE_CHAIN (flt_node); | |
1425 } | |
1426 } | |
1427 } | |
1428 } | |
1429 | |
1430 /* We delay the generation of the _Unwind_Resume until we generate | |
1431 landing pads. We emit a marker here so as to get good control | |
1432 flow data in the meantime. */ | |
1433 region->resume | |
1434 = emit_jump_insn (gen_rtx_RESX (VOIDmode, region->region_number)); | |
1435 emit_barrier (); | |
1436 | |
1437 seq = get_insns (); | |
1438 end_sequence (); | |
1439 | |
1440 emit_to_new_bb_before (seq, region->u.eh_try.eh_catch->label); | |
1441 | |
1442 break; | |
1443 | |
1444 case ERT_ALLOWED_EXCEPTIONS: | |
1445 region->post_landing_pad = gen_label_rtx (); | |
1446 | |
1447 start_sequence (); | |
1448 | |
1449 emit_label (region->post_landing_pad); | |
1450 | |
1451 emit_cmp_and_jump_insns (crtl->eh.filter, | |
1452 GEN_INT (region->u.allowed.filter), | |
1453 EQ, NULL_RTX, | |
1454 targetm.eh_return_filter_mode (), 0, region->label); | |
1455 | |
1456 /* We delay the generation of the _Unwind_Resume until we generate | |
1457 landing pads. We emit a marker here so as to get good control | |
1458 flow data in the meantime. */ | |
1459 region->resume | |
1460 = emit_jump_insn (gen_rtx_RESX (VOIDmode, region->region_number)); | |
1461 emit_barrier (); | |
1462 | |
1463 seq = get_insns (); | |
1464 end_sequence (); | |
1465 | |
1466 emit_to_new_bb_before (seq, region->label); | |
1467 break; | |
1468 | |
1469 case ERT_CLEANUP: | |
1470 case ERT_MUST_NOT_THROW: | |
1471 region->post_landing_pad = region->label; | |
1472 break; | |
1473 | |
1474 case ERT_CATCH: | |
1475 case ERT_THROW: | |
1476 /* Nothing to do. */ | |
1477 break; | |
1478 | |
1479 default: | |
1480 gcc_unreachable (); | |
1481 } | |
1482 } | |
1483 } | |
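The "sequence of ifs" that build_post_landing_pads emits can be modeled in plain C. This is an illustrative sketch only, not GCC code: `catch_clause` and `dispatch_filter` are invented names, handler labels are stood in for by integer ids, and a NULL filter list plays the role of the catch-all case (`type_list == NULL` above).

```c
#include <stddef.h>

/* Illustrative model only: the real code walks struct eh_region and
   emits cmp/jump insns.  A NULL filter list models a catch-all
   handler.  */
struct catch_clause {
    const int *filters;   /* filter values this clause accepts */
    size_t n_filters;
    int handler_id;       /* stands in for the handler's label */
};

/* Walk the catch clauses in order and "jump" to the first match,
   like the emitted cmp/jump chain.  Returns -1 when no clause
   matches, i.e. the RESX / _Unwind_Resume path.  */
int dispatch_filter(const struct catch_clause *clauses, size_t n,
                    int filter_value)
{
    for (size_t c = 0; c < n; ++c) {
        if (clauses[c].filters == NULL)   /* catch-all clause */
            return clauses[c].handler_id;
        for (size_t i = 0; i < clauses[c].n_filters; ++i)
            if (clauses[c].filters[i] == filter_value)
                return clauses[c].handler_id;
    }
    return -1;   /* no local handler: resume unwinding */
}
```

A return of -1 corresponds to the RESX marker emitted above: no local handler matched, so unwinding must resume in an outer frame.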
1484 | |
1485 /* Replace RESX patterns with jumps to the next handler if any, or calls to | |
1486 _Unwind_Resume otherwise. */ | |
1487 | |
1488 static void | |
1489 connect_post_landing_pads (void) | |
1490 { | |
1491 int i; | |
1492 | |
1493 for (i = cfun->eh->last_region_number; i > 0; --i) | |
1494 { | |
1495 struct eh_region *region; | |
1496 struct eh_region *outer; | |
1497 rtx seq; | |
1498 rtx barrier; | |
1499 | |
1500 region = VEC_index (eh_region, cfun->eh->region_array, i); | |
1501 /* Mind we don't process a region more than once. */ | |
1502 if (!region || region->region_number != i) | |
1503 continue; | |
1504 | |
1505 /* If there is no RESX, or it has been deleted by flow, there's | |
1506 nothing to fix up. */ | |
1507 if (! region->resume || INSN_DELETED_P (region->resume)) | |
1508 continue; | |
1509 | |
1510 /* Search for another landing pad in this function. */ | |
1511 for (outer = region->outer; outer ; outer = outer->outer) | |
1512 if (outer->post_landing_pad) | |
1513 break; | |
1514 | |
1515 start_sequence (); | |
1516 | |
1517 if (outer) | |
1518 { | |
1519 edge e; | |
1520 basic_block src, dest; | |
1521 | |
1522 emit_jump (outer->post_landing_pad); | |
1523 src = BLOCK_FOR_INSN (region->resume); | |
1524 dest = BLOCK_FOR_INSN (outer->post_landing_pad); | |
1525 while (EDGE_COUNT (src->succs) > 0) | |
1526 remove_edge (EDGE_SUCC (src, 0)); | |
1527 e = make_edge (src, dest, 0); | |
1528 e->probability = REG_BR_PROB_BASE; | |
1529 e->count = src->count; | |
1530 } | |
1531 else | |
1532 { | |
1533 emit_library_call (unwind_resume_libfunc, LCT_THROW, | |
1534 VOIDmode, 1, crtl->eh.exc_ptr, ptr_mode); | |
1535 | |
1536 /* What we just emitted was a throwing libcall, so it got a | |
1537 barrier automatically added after it. If the last insn in | |
1538 the libcall sequence isn't the barrier, it's because the | |
1539 target emits multiple insns for a call, and there are insns | |
1540 after the actual call insn (which are redundant and would be | |
1541 optimized away). The barrier is inserted exactly after the | |
1542 call insn, so let's go get that and delete the insns after | |
1543 it, because below we need the barrier to be the last insn in | |
1544 the sequence. */ | |
1545 delete_insns_since (NEXT_INSN (last_call_insn ())); | |
1546 } | |
1547 | |
1548 seq = get_insns (); | |
1549 end_sequence (); | |
1550 barrier = emit_insn_before (seq, region->resume); | |
1551 /* Avoid duplicate barrier. */ | |
1552 gcc_assert (BARRIER_P (barrier)); | |
1553 delete_insn (barrier); | |
1554 delete_insn (region->resume); | |
1555 | |
1556 /* ??? From tree-ssa we can wind up with catch regions whose | |
1557 label is not instantiated, but whose resx is present. Now | |
1558 that we've dealt with the resx, kill the region. */ | |
1559 if (region->label == NULL && region->type == ERT_CLEANUP) | |
1560 remove_eh_handler (region); | |
1561 } | |
1562 } | |
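The RESX lowering performed by connect_post_landing_pads has a simple shape: jump to the nearest enclosing landing pad in this function if one exists, otherwise fall back to a call to _Unwind_Resume. A minimal sketch with invented names (`model_region`, `resolve_resx` are not the GCC data structures):

```c
#include <stddef.h>

/* Invented stand-in for struct eh_region: only the fields this
   sketch needs.  */
struct model_region {
    struct model_region *outer;
    int has_post_landing_pad;
};

/* Mirror of the decision above: return the enclosing region whose
   post-landing pad we jump to, or NULL meaning "emit a call to
   _Unwind_Resume".  */
struct model_region *resolve_resx(struct model_region *r)
{
    for (struct model_region *o = r->outer; o != NULL; o = o->outer)
        if (o->has_post_landing_pad)
            return o;
    return NULL;
}
```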
1563 | |
1564 | 954 |
955 /* Expand the extra code needed at landing pads for dwarf2 unwinding. */ | |
956 | |
1565 static void | 957 static void |
1566 dw2_build_landing_pads (void) | 958 dw2_build_landing_pads (void) |
1567 { | 959 { |
1568 int i; | 960 int i; |
1569 | 961 eh_landing_pad lp; |
1570 for (i = cfun->eh->last_region_number; i > 0; --i) | 962 |
1571 { | 963 for (i = 1; VEC_iterate (eh_landing_pad, cfun->eh->lp_array, i, lp); ++i) |
1572 struct eh_region *region; | 964 { |
965 eh_region region; | |
966 basic_block bb; | |
1573 rtx seq; | 967 rtx seq; |
1574 basic_block bb; | |
1575 edge e; | 968 edge e; |
1576 | 969 |
1577 region = VEC_index (eh_region, cfun->eh->region_array, i); | 970 if (lp == NULL || lp->post_landing_pad == NULL) |
1578 /* Mind we don't process a region more than once. */ | |
1579 if (!region || region->region_number != i) | |
1580 continue; | 971 continue; |
1581 | 972 |
1582 if (region->type != ERT_CLEANUP | |
1583 && region->type != ERT_TRY | |
1584 && region->type != ERT_ALLOWED_EXCEPTIONS) | |
1585 continue; | |
1586 | |
1587 start_sequence (); | 973 start_sequence (); |
1588 | 974 |
1589 region->landing_pad = gen_label_rtx (); | 975 lp->landing_pad = gen_label_rtx (); |
1590 emit_label (region->landing_pad); | 976 emit_label (lp->landing_pad); |
1591 | 977 |
1592 #ifdef HAVE_exception_receiver | 978 #ifdef HAVE_exception_receiver |
1593 if (HAVE_exception_receiver) | 979 if (HAVE_exception_receiver) |
1594 emit_insn (gen_exception_receiver ()); | 980 emit_insn (gen_exception_receiver ()); |
1595 else | 981 else |
1599 emit_insn (gen_nonlocal_goto_receiver ()); | 985 emit_insn (gen_nonlocal_goto_receiver ()); |
1600 else | 986 else |
1601 #endif | 987 #endif |
1602 { /* Nothing */ } | 988 { /* Nothing */ } |
1603 | 989 |
1604 emit_move_insn (crtl->eh.exc_ptr, | 990 region = lp->region; |
1605 gen_rtx_REG (ptr_mode, EH_RETURN_DATA_REGNO (0))); | 991 if (region->exc_ptr_reg) |
1606 emit_move_insn (crtl->eh.filter, | 992 emit_move_insn (region->exc_ptr_reg, |
1607 gen_rtx_REG (targetm.eh_return_filter_mode (), | 993 gen_rtx_REG (ptr_mode, EH_RETURN_DATA_REGNO (0))); |
1608 EH_RETURN_DATA_REGNO (1))); | 994 if (region->filter_reg) |
995 emit_move_insn (region->filter_reg, | |
996 gen_rtx_REG (targetm.eh_return_filter_mode (), | |
997 EH_RETURN_DATA_REGNO (1))); | |
1609 | 998 |
1610 seq = get_insns (); | 999 seq = get_insns (); |
1611 end_sequence (); | 1000 end_sequence (); |
1612 | 1001 |
1613 bb = emit_to_new_bb_before (seq, region->post_landing_pad); | 1002 bb = emit_to_new_bb_before (seq, label_rtx (lp->post_landing_pad)); |
1614 e = make_edge (bb, bb->next_bb, EDGE_FALLTHRU); | 1003 e = make_edge (bb, bb->next_bb, EDGE_FALLTHRU); |
1615 e->count = bb->count; | 1004 e->count = bb->count; |
1616 e->probability = REG_BR_PROB_BASE; | 1005 e->probability = REG_BR_PROB_BASE; |
1617 } | 1006 } |
1618 } | 1007 } |
1619 | 1008 |
1620 | 1009 |
1621 struct sjlj_lp_info | 1010 static VEC (int, heap) *sjlj_lp_call_site_index; |
1622 { | 1011 |
1623 int directly_reachable; | 1012 /* Process all active landing pads. Assign each one a compact dispatch |
1624 int action_index; | 1013 index, and a call-site index. */ |
1625 int dispatch_index; | 1014 |
1626 int call_site_index; | 1015 static int |
1627 }; | 1016 sjlj_assign_call_site_values (void) |
1628 | |
1629 static bool | |
1630 sjlj_find_directly_reachable_regions (struct sjlj_lp_info *lp_info) | |
1631 { | |
1632 rtx insn; | |
1633 bool found_one = false; | |
1634 | |
1635 for (insn = get_insns (); insn ; insn = NEXT_INSN (insn)) | |
1636 { | |
1637 struct eh_region *region; | |
1638 enum reachable_code rc; | |
1639 tree type_thrown; | |
1640 rtx note; | |
1641 | |
1642 if (! INSN_P (insn)) | |
1643 continue; | |
1644 | |
1645 note = find_reg_note (insn, REG_EH_REGION, NULL_RTX); | |
1646 if (!note || INTVAL (XEXP (note, 0)) <= 0) | |
1647 continue; | |
1648 | |
1649 region = VEC_index (eh_region, cfun->eh->region_array, INTVAL (XEXP (note, 0))); | |
1650 | |
1651 type_thrown = NULL_TREE; | |
1652 if (region->type == ERT_THROW) | |
1653 { | |
1654 type_thrown = region->u.eh_throw.type; | |
1655 region = region->outer; | |
1656 } | |
1657 | |
1658 /* Find the first containing region that might handle the exception. | |
1659 That's the landing pad to which we will transfer control. */ | |
1660 rc = RNL_NOT_CAUGHT; | |
1661 for (; region; region = region->outer) | |
1662 { | |
1663 rc = reachable_next_level (region, type_thrown, NULL); | |
1664 if (rc != RNL_NOT_CAUGHT) | |
1665 break; | |
1666 } | |
1667 if (rc == RNL_MAYBE_CAUGHT || rc == RNL_CAUGHT) | |
1668 { | |
1669 lp_info[region->region_number].directly_reachable = 1; | |
1670 found_one = true; | |
1671 } | |
1672 } | |
1673 | |
1674 return found_one; | |
1675 } | |
1676 | |
1677 static void | |
1678 sjlj_assign_call_site_values (rtx dispatch_label, struct sjlj_lp_info *lp_info) | |
1679 { | 1017 { |
1680 htab_t ar_hash; | 1018 htab_t ar_hash; |
1681 int i, index; | 1019 int i, disp_index; |
1682 | 1020 eh_landing_pad lp; |
1683 /* First task: build the action table. */ | 1021 |
1684 | 1022 crtl->eh.action_record_data = VEC_alloc (uchar, gc, 64); |
1685 VARRAY_UCHAR_INIT (crtl->eh.action_record_data, 64, "action_record_data"); | |
1686 ar_hash = htab_create (31, action_record_hash, action_record_eq, free); | 1023 ar_hash = htab_create (31, action_record_hash, action_record_eq, free); |
1687 | 1024 |
1688 for (i = cfun->eh->last_region_number; i > 0; --i) | 1025 disp_index = 0; |
1689 if (lp_info[i].directly_reachable) | 1026 call_site_base = 1; |
1027 for (i = 1; VEC_iterate (eh_landing_pad, cfun->eh->lp_array, i, lp); ++i) | |
1028 if (lp && lp->post_landing_pad) | |
1690 { | 1029 { |
1691 struct eh_region *r = VEC_index (eh_region, cfun->eh->region_array, i); | 1030 int action, call_site; |
1692 | 1031 |
1693 r->landing_pad = dispatch_label; | 1032 /* First: build the action table. */ |
1694 lp_info[i].action_index = collect_one_action_chain (ar_hash, r); | 1033 action = collect_one_action_chain (ar_hash, lp->region); |
1695 if (lp_info[i].action_index != -1) | 1034 if (action != -1) |
1696 crtl->uses_eh_lsda = 1; | 1035 crtl->uses_eh_lsda = 1; |
1697 } | 1036 |
1698 | 1037 /* Next: assign call-site values. In dwarf2 terms, this would be |
1699 htab_delete (ar_hash); | 1038 the region number assigned by convert_to_eh_region_ranges, but |
1700 | 1039 handles no-action and must-not-throw differently. */ |
1701 /* Next: assign dispatch values. In dwarf2 terms, this would be the | |
1702 landing pad label for the region. For sjlj though, there is one | |
1703 common landing pad from which we dispatch to the post-landing pads. | |
1704 | |
1705 A region receives a dispatch index if it is directly reachable | |
1706 and requires in-function processing. Regions that share post-landing | |
1707 pads may share dispatch indices. */ | |
1708 /* ??? Post-landing pad sharing doesn't actually happen at the moment | |
1709 (see build_post_landing_pads) so we don't bother checking for it. */ | |
1710 | |
1711 index = 0; | |
1712 for (i = cfun->eh->last_region_number; i > 0; --i) | |
1713 if (lp_info[i].directly_reachable) | |
1714 lp_info[i].dispatch_index = index++; | |
1715 | |
1716 /* Finally: assign call-site values. In dwarf2 terms, this would be | |
1717 the region number assigned by convert_to_eh_region_ranges, but | |
1718 handles no-action and must-not-throw differently. */ | |
1719 | |
1720 call_site_base = 1; | |
1721 for (i = cfun->eh->last_region_number; i > 0; --i) | |
1722 if (lp_info[i].directly_reachable) | |
1723 { | |
1724 int action = lp_info[i].action_index; | |
1725 | |
1726 /* Map must-not-throw to otherwise unused call-site index 0. */ | 1040 /* Map must-not-throw to otherwise unused call-site index 0. */ |
1727 if (action == -2) | 1041 if (action == -2) |
1728 index = 0; | 1042 call_site = 0; |
1729 /* Map no-action to otherwise unused call-site index -1. */ | 1043 /* Map no-action to otherwise unused call-site index -1. */ |
1730 else if (action == -1) | 1044 else if (action == -1) |
1731 index = -1; | 1045 call_site = -1; |
1732 /* Otherwise, look it up in the table. */ | 1046 /* Otherwise, look it up in the table. */ |
1733 else | 1047 else |
1734 index = add_call_site (GEN_INT (lp_info[i].dispatch_index), action); | 1048 call_site = add_call_site (GEN_INT (disp_index), action, 0); |
1735 | 1049 VEC_replace (int, sjlj_lp_call_site_index, i, call_site); |
1736 lp_info[i].call_site_index = index; | 1050 |
1051 disp_index++; | |
1737 } | 1052 } |
1738 } | 1053 |
1054 htab_delete (ar_hash); | |
1055 | |
1056 return disp_index; | |
1057 } | |
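The call-site encoding above reserves two out-of-band values. A small sketch of just that mapping, using an invented helper name; the real code additionally records the (dispatch index, action) pair in the call-site table through add_call_site:

```c
/* Invented helper mirroring the special cases above.  We only
   compute the value that ends up in the SJLJ function context.  */
int sjlj_call_site_value(int action, int disp_index)
{
    if (action == -2)   /* must-not-throw: reserved index 0 */
        return 0;
    if (action == -1)   /* no action: reserved index -1 */
        return -1;
    return disp_index;  /* normal case: compact dispatch index */
}
```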
1058 | |
1059 /* Emit code to record the current call-site index before every | |
1060 insn that can throw. */ | |
1739 | 1061 |
1740 static void | 1062 static void |
1741 sjlj_mark_call_sites (struct sjlj_lp_info *lp_info) | 1063 sjlj_mark_call_sites (void) |
1742 { | 1064 { |
1743 int last_call_site = -2; | 1065 int last_call_site = -2; |
1744 rtx insn, mem; | 1066 rtx insn, mem; |
1745 | 1067 |
1746 for (insn = get_insns (); insn ; insn = NEXT_INSN (insn)) | 1068 for (insn = get_insns (); insn ; insn = NEXT_INSN (insn)) |
1747 { | 1069 { |
1748 struct eh_region *region; | 1070 eh_landing_pad lp; |
1071 eh_region r; | |
1072 bool nothrow; | |
1749 int this_call_site; | 1073 int this_call_site; |
1750 rtx note, before, p; | 1074 rtx before, p; |
1751 | 1075 |
1752 /* Reset value tracking at extended basic block boundaries. */ | 1076 /* Reset value tracking at extended basic block boundaries. */ |
1753 if (LABEL_P (insn)) | 1077 if (LABEL_P (insn)) |
1754 last_call_site = -2; | 1078 last_call_site = -2; |
1755 | 1079 |
1756 if (! INSN_P (insn)) | 1080 if (! INSN_P (insn)) |
1757 continue; | 1081 continue; |
1758 | 1082 |
1759 note = find_reg_note (insn, REG_EH_REGION, NULL_RTX); | 1083 nothrow = get_eh_region_and_lp_from_rtx (insn, &r, &lp); |
1760 if (!note) | 1084 if (nothrow) |
1085 continue; | |
1086 if (lp) | |
1087 this_call_site = VEC_index (int, sjlj_lp_call_site_index, lp->index); | |
1088 else if (r == NULL) | |
1761 { | 1089 { |
1762 /* Calls (and trapping insns) without notes are outside any | 1090 /* Calls (and trapping insns) without notes are outside any |
1763 exception handling region in this function. Mark them as | 1091 exception handling region in this function. Mark them as |
1764 no action. */ | 1092 no action. */ |
1765 if (CALL_P (insn) | 1093 this_call_site = -1; |
1766 || (flag_non_call_exceptions | |
1767 && may_trap_p (PATTERN (insn)))) | |
1768 this_call_site = -1; | |
1769 else | |
1770 continue; | |
1771 } | 1094 } |
1772 else | 1095 else |
1773 { | 1096 { |
1774 /* Calls that are known to not throw need not be marked. */ | 1097 gcc_assert (r->type == ERT_MUST_NOT_THROW); |
1775 if (INTVAL (XEXP (note, 0)) <= 0) | 1098 this_call_site = 0; |
1776 continue; | |
1777 | |
1778 region = VEC_index (eh_region, cfun->eh->region_array, INTVAL (XEXP (note, 0))); | |
1779 this_call_site = lp_info[region->region_number].call_site_index; | |
1780 } | 1099 } |
1781 | 1100 |
1782 if (this_call_site == last_call_site) | 1101 if (this_call_site == last_call_site) |
1783 continue; | 1102 continue; |
1784 | 1103 |
1804 static void | 1123 static void |
1805 sjlj_emit_function_enter (rtx dispatch_label) | 1124 sjlj_emit_function_enter (rtx dispatch_label) |
1806 { | 1125 { |
1807 rtx fn_begin, fc, mem, seq; | 1126 rtx fn_begin, fc, mem, seq; |
1808 bool fn_begin_outside_block; | 1127 bool fn_begin_outside_block; |
1128 rtx personality = get_personality_function (current_function_decl); | |
1809 | 1129 |
1810 fc = crtl->eh.sjlj_fc; | 1130 fc = crtl->eh.sjlj_fc; |
1811 | 1131 |
1812 start_sequence (); | 1132 start_sequence (); |
1813 | 1133 |
1814 /* We're storing this libcall's address into memory instead of | 1134 /* We're storing this libcall's address into memory instead of |
1815 calling it directly. Thus, we must call assemble_external_libcall | 1135 calling it directly. Thus, we must call assemble_external_libcall |
1816 here, as we cannot depend on emit_library_call to do it for us. */ | 1136 here, as we cannot depend on emit_library_call to do it for us. */ |
1817 assemble_external_libcall (eh_personality_libfunc); | 1137 assemble_external_libcall (personality); |
1818 mem = adjust_address (fc, Pmode, sjlj_fc_personality_ofs); | 1138 mem = adjust_address (fc, Pmode, sjlj_fc_personality_ofs); |
1819 emit_move_insn (mem, eh_personality_libfunc); | 1139 emit_move_insn (mem, personality); |
1820 | 1140 |
1821 mem = adjust_address (fc, Pmode, sjlj_fc_lsda_ofs); | 1141 mem = adjust_address (fc, Pmode, sjlj_fc_lsda_ofs); |
1822 if (crtl->uses_eh_lsda) | 1142 if (crtl->uses_eh_lsda) |
1823 { | 1143 { |
1824 char buf[20]; | 1144 char buf[20]; |
1885 } | 1205 } |
1886 | 1206 |
1887 static void | 1207 static void |
1888 sjlj_emit_function_exit (void) | 1208 sjlj_emit_function_exit (void) |
1889 { | 1209 { |
1890 rtx seq; | 1210 rtx seq, insn; |
1891 edge e; | |
1892 edge_iterator ei; | |
1893 | 1211 |
1894 start_sequence (); | 1212 start_sequence (); |
1895 | 1213 |
1896 emit_library_call (unwind_sjlj_unregister_libfunc, LCT_NORMAL, VOIDmode, | 1214 emit_library_call (unwind_sjlj_unregister_libfunc, LCT_NORMAL, VOIDmode, |
1897 1, XEXP (crtl->eh.sjlj_fc, 0), Pmode); | 1215 1, XEXP (crtl->eh.sjlj_fc, 0), Pmode); |
1901 | 1219 |
1902 /* ??? Really this can be done in any block at loop level 0 that | 1220 /* ??? Really this can be done in any block at loop level 0 that |
1903 post-dominates all can_throw_internal instructions. This is | 1221 post-dominates all can_throw_internal instructions. This is |
1904 the last possible moment. */ | 1222 the last possible moment. */ |
1905 | 1223 |
1906 FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds) | 1224 insn = crtl->eh.sjlj_exit_after; |
1907 if (e->flags & EDGE_FALLTHRU) | 1225 if (LABEL_P (insn)) |
1908 break; | 1226 insn = NEXT_INSN (insn); |
1909 if (e) | 1227 |
1910 { | 1228 emit_insn_after (seq, insn); |
1911 rtx insn; | |
1912 | |
1913 /* Figure out whether the place we are supposed to insert libcall | |
1914 is inside the last basic block or after it. In the other case | |
1915 we need to emit to edge. */ | |
1916 gcc_assert (e->src->next_bb == EXIT_BLOCK_PTR); | |
1917 for (insn = BB_HEAD (e->src); ; insn = NEXT_INSN (insn)) | |
1918 { | |
1919 if (insn == crtl->eh.sjlj_exit_after) | |
1920 { | |
1921 if (LABEL_P (insn)) | |
1922 insn = NEXT_INSN (insn); | |
1923 emit_insn_after (seq, insn); | |
1924 return; | |
1925 } | |
1926 if (insn == BB_END (e->src)) | |
1927 break; | |
1928 } | |
1929 insert_insn_on_edge (seq, e); | |
1930 } | |
1931 } | 1229 } |
1932 | 1230 |
1933 static void | 1231 static void |
1934 sjlj_emit_dispatch_table (rtx dispatch_label, struct sjlj_lp_info *lp_info) | 1232 sjlj_emit_dispatch_table (rtx dispatch_label, int num_dispatch) |
1935 { | 1233 { |
1936 enum machine_mode unwind_word_mode = targetm.unwind_word_mode (); | 1234 enum machine_mode unwind_word_mode = targetm.unwind_word_mode (); |
1937 enum machine_mode filter_mode = targetm.eh_return_filter_mode (); | 1235 enum machine_mode filter_mode = targetm.eh_return_filter_mode (); |
1938 int i, first_reachable; | 1236 eh_landing_pad lp; |
1939 rtx mem, dispatch, seq, fc; | 1237 rtx mem, seq, fc, before, exc_ptr_reg, filter_reg; |
1940 rtx before; | 1238 rtx first_reachable_label; |
1941 basic_block bb; | 1239 basic_block bb; |
1240 eh_region r; | |
1942 edge e; | 1241 edge e; |
1242 int i, disp_index; | |
1243 gimple switch_stmt; | |
1943 | 1244 |
1944 fc = crtl->eh.sjlj_fc; | 1245 fc = crtl->eh.sjlj_fc; |
1945 | 1246 |
1946 start_sequence (); | 1247 start_sequence (); |
1947 | 1248 |
1948 emit_label (dispatch_label); | 1249 emit_label (dispatch_label); |
1949 | 1250 |
1950 #ifndef DONT_USE_BUILTIN_SETJMP | 1251 #ifndef DONT_USE_BUILTIN_SETJMP |
1951 expand_builtin_setjmp_receiver (dispatch_label); | 1252 expand_builtin_setjmp_receiver (dispatch_label); |
1952 #endif | 1253 |
1953 | 1254 /* The caller of expand_builtin_setjmp_receiver is responsible for |
1954 /* Load up dispatch index, exc_ptr and filter values from the | 1255 making sure that the label doesn't vanish. The only other caller |
1955 function context. */ | 1256 is the expander for __builtin_setjmp_receiver, which places this |
1956 mem = adjust_address (fc, TYPE_MODE (integer_type_node), | 1257 label on the nonlocal_goto_label list. Since we're modeling these |
1957 sjlj_fc_call_site_ofs); | 1258 CFG edges more exactly, we can use the forced_labels list instead. */ |
1958 dispatch = copy_to_reg (mem); | 1259 LABEL_PRESERVE_P (dispatch_label) = 1; |
1959 | 1260 forced_labels |
1261 = gen_rtx_EXPR_LIST (VOIDmode, dispatch_label, forced_labels); | |
1262 #endif | |
1263 | |
1264 /* Load up exc_ptr and filter values from the function context. */ | |
1960 mem = adjust_address (fc, unwind_word_mode, sjlj_fc_data_ofs); | 1265 mem = adjust_address (fc, unwind_word_mode, sjlj_fc_data_ofs); |
1961 if (unwind_word_mode != ptr_mode) | 1266 if (unwind_word_mode != ptr_mode) |
1962 { | 1267 { |
1963 #ifdef POINTERS_EXTEND_UNSIGNED | 1268 #ifdef POINTERS_EXTEND_UNSIGNED |
1964 mem = convert_memory_address (ptr_mode, mem); | 1269 mem = convert_memory_address (ptr_mode, mem); |
1965 #else | 1270 #else |
1966 mem = convert_to_mode (ptr_mode, mem, 0); | 1271 mem = convert_to_mode (ptr_mode, mem, 0); |
1967 #endif | 1272 #endif |
1968 } | 1273 } |
1969 emit_move_insn (crtl->eh.exc_ptr, mem); | 1274 exc_ptr_reg = force_reg (ptr_mode, mem); |
1970 | 1275 |
1971 mem = adjust_address (fc, unwind_word_mode, | 1276 mem = adjust_address (fc, unwind_word_mode, |
1972 sjlj_fc_data_ofs + GET_MODE_SIZE (unwind_word_mode)); | 1277 sjlj_fc_data_ofs + GET_MODE_SIZE (unwind_word_mode)); |
1973 if (unwind_word_mode != filter_mode) | 1278 if (unwind_word_mode != filter_mode) |
1974 mem = convert_to_mode (filter_mode, mem, 0); | 1279 mem = convert_to_mode (filter_mode, mem, 0); |
1975 emit_move_insn (crtl->eh.filter, mem); | 1280 filter_reg = force_reg (filter_mode, mem); |
1976 | 1281 |
1977 /* Jump to one of the directly reachable regions. */ | 1282 /* Jump to one of the directly reachable regions. */ |
1978 /* ??? This really ought to be using a switch statement. */ | 1283 |
1979 | 1284 disp_index = 0; |
1980 first_reachable = 0; | 1285 first_reachable_label = NULL; |
1981 for (i = cfun->eh->last_region_number; i > 0; --i) | 1286 |
1982 { | 1287 /* If there's exactly one call site in the function, don't bother |
1983 if (! lp_info[i].directly_reachable) | 1288 generating a switch statement. */ |
1984 continue; | 1289 switch_stmt = NULL; |
1985 | 1290 if (num_dispatch > 1) |
1986 if (! first_reachable) | 1291 { |
1987 { | 1292 tree disp; |
1988 first_reachable = i; | 1293 |
1989 continue; | 1294 mem = adjust_address (fc, TYPE_MODE (integer_type_node), |
1990 } | 1295 sjlj_fc_call_site_ofs); |
1991 | 1296 disp = make_tree (integer_type_node, mem); |
1992 emit_cmp_and_jump_insns (dispatch, GEN_INT (lp_info[i].dispatch_index), | 1297 |
1993 EQ, NULL_RTX, TYPE_MODE (integer_type_node), 0, | 1298 switch_stmt = gimple_build_switch_nlabels (num_dispatch, disp, NULL); |
1994 ((struct eh_region *)VEC_index (eh_region, cfun->eh->region_array, i)) | 1299 } |
1995 ->post_landing_pad); | 1300 |
1301 for (i = 1; VEC_iterate (eh_landing_pad, cfun->eh->lp_array, i, lp); ++i) | |
1302 if (lp && lp->post_landing_pad) | |
1303 { | |
1304 rtx seq2, label; | |
1305 | |
1306 start_sequence (); | |
1307 | |
1308 lp->landing_pad = dispatch_label; | |
1309 | |
1310 if (num_dispatch > 1) | |
1311 { | |
1312 tree t_label, case_elt; | |
1313 | |
1314 t_label = create_artificial_label (UNKNOWN_LOCATION); | |
1315 case_elt = build3 (CASE_LABEL_EXPR, void_type_node, | |
1316 build_int_cst (NULL, disp_index), | |
1317 NULL, t_label); | |
1318 gimple_switch_set_label (switch_stmt, disp_index, case_elt); | |
1319 | |
1320 label = label_rtx (t_label); | |
1321 } | |
1322 else | |
1323 label = gen_label_rtx (); | |
1324 | |
1325 if (disp_index == 0) | |
1326 first_reachable_label = label; | |
1327 emit_label (label); | |
1328 | |
1329 r = lp->region; | |
1330 if (r->exc_ptr_reg) | |
1331 emit_move_insn (r->exc_ptr_reg, exc_ptr_reg); | |
1332 if (r->filter_reg) | |
1333 emit_move_insn (r->filter_reg, filter_reg); | |
1334 | |
1335 seq2 = get_insns (); | |
1336 end_sequence (); | |
1337 | |
1338 before = label_rtx (lp->post_landing_pad); | |
1339 bb = emit_to_new_bb_before (seq2, before); | |
1340 e = make_edge (bb, bb->next_bb, EDGE_FALLTHRU); | |
1341 e->count = bb->count; | |
1342 e->probability = REG_BR_PROB_BASE; | |
1343 | |
1344 disp_index++; | |
1345 } | |
1346 gcc_assert (disp_index == num_dispatch); | |
1347 | |
1348 if (num_dispatch > 1) | |
1349 { | |
1350 expand_case (switch_stmt); | |
1351 expand_builtin_trap (); | |
1996 } | 1352 } |
1997 | 1353 |
1998 seq = get_insns (); | 1354 seq = get_insns (); |
1999 end_sequence (); | 1355 end_sequence (); |
2000 | 1356 |
2001 before = (((struct eh_region *)VEC_index (eh_region, cfun->eh->region_array, first_reachable)) | 1357 bb = emit_to_new_bb_before (seq, first_reachable_label); |
2002 ->post_landing_pad); | 1358 if (num_dispatch == 1) |
2003 | 1359 { |
2004 bb = emit_to_new_bb_before (seq, before); | 1360 e = make_edge (bb, bb->next_bb, EDGE_FALLTHRU); |
2005 e = make_edge (bb, bb->next_bb, EDGE_FALLTHRU); | 1361 e->count = bb->count; |
2006 e->count = bb->count; | 1362 e->probability = REG_BR_PROB_BASE; |
2007 e->probability = REG_BR_PROB_BASE; | 1363 } |
2008 } | 1364 } |
2009 | 1365 |
2010 static void | 1366 static void |
2011 sjlj_build_landing_pads (void) | 1367 sjlj_build_landing_pads (void) |
2012 { | 1368 { |
2013 struct sjlj_lp_info *lp_info; | 1369 int num_dispatch; |
2014 | 1370 |
2015 lp_info = XCNEWVEC (struct sjlj_lp_info, cfun->eh->last_region_number + 1); | 1371 num_dispatch = VEC_length (eh_landing_pad, cfun->eh->lp_array); |
2016 | 1372 if (num_dispatch == 0) |
2017 if (sjlj_find_directly_reachable_regions (lp_info)) | 1373 return; |
1374 VEC_safe_grow (int, heap, sjlj_lp_call_site_index, num_dispatch); | |
1375 | |
1376 num_dispatch = sjlj_assign_call_site_values (); | |
1377 if (num_dispatch > 0) | |
2018 { | 1378 { |
2019 rtx dispatch_label = gen_label_rtx (); | 1379 rtx dispatch_label = gen_label_rtx (); |
2020 int align = STACK_SLOT_ALIGNMENT (sjlj_fc_type_node, | 1380 int align = STACK_SLOT_ALIGNMENT (sjlj_fc_type_node, |
2021 TYPE_MODE (sjlj_fc_type_node), | 1381 TYPE_MODE (sjlj_fc_type_node), |
2022 TYPE_ALIGN (sjlj_fc_type_node)); | 1382 TYPE_ALIGN (sjlj_fc_type_node)); |
2023 crtl->eh.sjlj_fc | 1383 crtl->eh.sjlj_fc |
2024 = assign_stack_local (TYPE_MODE (sjlj_fc_type_node), | 1384 = assign_stack_local (TYPE_MODE (sjlj_fc_type_node), |
2025 int_size_in_bytes (sjlj_fc_type_node), | 1385 int_size_in_bytes (sjlj_fc_type_node), |
2026 align); | 1386 align); |
2027 | 1387 |
2028 sjlj_assign_call_site_values (dispatch_label, lp_info); | 1388 sjlj_mark_call_sites (); |
2029 sjlj_mark_call_sites (lp_info); | |
2030 | |
2031 sjlj_emit_function_enter (dispatch_label); | 1389 sjlj_emit_function_enter (dispatch_label); |
2032 sjlj_emit_dispatch_table (dispatch_label, lp_info); | 1390 sjlj_emit_dispatch_table (dispatch_label, num_dispatch); |
2033 sjlj_emit_function_exit (); | 1391 sjlj_emit_function_exit (); |
2034 } | 1392 } |
2035 | 1393 |
2036 free (lp_info); | 1394 VEC_free (int, heap, sjlj_lp_call_site_index); |
2037 } | 1395 } |
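For context, the SJLJ scheme that sjlj_emit_function_enter and sjlj_emit_function_exit set up registers a per-function context record with the runtime on entry and unregisters it on exit, maintaining a chain that the unwinder walks instead of parsing dwarf2 frame data. A conceptual sketch with invented names; the real records and entry points live in libgcc (_Unwind_SjLj_Register / _Unwind_SjLj_Unregister):

```c
#include <stddef.h>

/* Conceptual model of the function-context record; the offsets
   sjlj_fc_personality_ofs and sjlj_fc_lsda_ofs above address the
   corresponding fields of the real structure.  */
struct model_fc {
    struct model_fc *prev;
    void *personality;   /* written at sjlj_fc_personality_ofs */
    void *lsda;          /* written at sjlj_fc_lsda_ofs */
};

/* Stand-in for the per-thread context chain the unwinder walks. */
struct model_fc *model_chain = NULL;

/* Function prologue: push this frame's context. */
void model_register(struct model_fc *fc)
{
    fc->prev = model_chain;
    model_chain = fc;
}

/* Function epilogue: pop it again. */
void model_unregister(struct model_fc *fc)
{
    model_chain = fc->prev;
}
```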
2038 | 1396 |
2039 void | 1397 /* After initial rtl generation, call back to finish generating |
1398 exception support code. */ | |
1399 | |
1400 static void | |
2040 finish_eh_generation (void) | 1401 finish_eh_generation (void) |
2041 { | 1402 { |
2042 basic_block bb; | 1403 basic_block bb; |
2043 | 1404 |
2044 /* Nothing to do if no regions created. */ | |
2045 if (cfun->eh->region_tree == NULL) | |
2046 return; | |
2047 | |
2048 /* The object here is to provide find_basic_blocks with detailed | |
2049 information (via reachable_handlers) on how exception control | |
2050 flows within the function. In this first pass, we can include | |
2051 type information garnered from ERT_THROW and ERT_ALLOWED_EXCEPTIONS | |
2052 regions, and hope that it will be useful in deleting unreachable | |
2053 handlers. Subsequently, we will generate landing pads which will | |
2054 connect many of the handlers, and then type information will not | |
2055 be effective. Still, this is a win over previous implementations. */ | |
2056 | |
2057 /* These registers are used by the landing pads. Make sure they | |
2058 have been generated. */ | |
2059 get_exception_pointer (); | |
2060 get_exception_filter (); | |
2061 | |
2062 /* Construct the landing pads. */ | 1405 /* Construct the landing pads. */ |
2063 | |
2064 assign_filter_values (); | |
2065 build_post_landing_pads (); | |
2066 connect_post_landing_pads (); | |
2067 if (USING_SJLJ_EXCEPTIONS) | 1406 if (USING_SJLJ_EXCEPTIONS) |
2068 sjlj_build_landing_pads (); | 1407 sjlj_build_landing_pads (); |
2069 else | 1408 else |
2070 dw2_build_landing_pads (); | 1409 dw2_build_landing_pads (); |
2071 | |
2072 crtl->eh.built_landing_pads = 1; | |
2073 | |
2074 /* We've totally changed the CFG. Start over. */ | |
2075 find_exception_handler_labels (); | |
2076 break_superblocks (); | 1410 break_superblocks (); |
1411 | |
2077 if (USING_SJLJ_EXCEPTIONS | 1412 if (USING_SJLJ_EXCEPTIONS |
2078 /* Kludge for Alpha/Tru64 (see alpha_gp_save_rtx). */ | 1413 /* Kludge for Alpha/Tru64 (see alpha_gp_save_rtx). */ |
2079 || single_succ_edge (ENTRY_BLOCK_PTR)->insns.r) | 1414 || single_succ_edge (ENTRY_BLOCK_PTR)->insns.r) |
2080 commit_edge_insertions (); | 1415 commit_edge_insertions (); |
1416 | |
1417 /* Redirect all EH edges from the post_landing_pad to the landing pad. */ | |
2081 FOR_EACH_BB (bb) | 1418 FOR_EACH_BB (bb) |
2082 { | 1419 { |
1420 eh_landing_pad lp; | |
1421 edge_iterator ei; | |
2083 edge e; | 1422 edge e; |
2084 edge_iterator ei; | 1423 |
2085 bool eh = false; | 1424 lp = get_eh_landing_pad_from_rtx (BB_END (bb)); |
2086 for (ei = ei_start (bb->succs); (e = ei_safe_edge (ei)); ) | 1425 |
1426 FOR_EACH_EDGE (e, ei, bb->succs) | |
1427 if (e->flags & EDGE_EH) | |
1428 break; | |
1429 | |
1430 /* We should not have generated any new throwing insns during this | |
1431 pass, and we should not have lost any EH edges, so we only need | |
1432 to handle two cases here: | |
1433 (1) reachable handler and an existing edge to post-landing-pad, | |
1434 (2) no reachable handler and no edge. */ | |
1435 gcc_assert ((lp != NULL) == (e != NULL)); | |
1436 if (lp != NULL) | |
2087 { | 1437 { |
2088 if (e->flags & EDGE_EH) | 1438 gcc_assert (BB_HEAD (e->dest) == label_rtx (lp->post_landing_pad)); |
2089 { | 1439 |
2090 remove_edge (e); | 1440 redirect_edge_succ (e, BLOCK_FOR_INSN (lp->landing_pad)); |
2091 eh = true; | 1441 e->flags |= (CALL_P (BB_END (bb)) |
2092 } | 1442 ? EDGE_ABNORMAL | EDGE_ABNORMAL_CALL |
2093 else | 1443 : EDGE_ABNORMAL); |
2094 ei_next (&ei); | |
2095 } | 1444 } |
2096 if (eh) | 1445 } |
2097 rtl_make_eh_edge (NULL, bb, BB_END (bb)); | 1446 } |
2098 } | 1447 |
2099 } | 1448 static bool |
1449 gate_handle_eh (void) | |
1450 { | |
1451 /* Nothing to do if no regions created. */ | |
1452 return cfun->eh->region_tree != NULL; | |
1453 } | |
1454 | |
1455 /* Complete generation of exception handling code. */ | |
1456 static unsigned int | |
1457 rest_of_handle_eh (void) | |
1458 { | |
1459 finish_eh_generation (); | |
1460 cleanup_cfg (CLEANUP_NO_INSN_DEL); | |
1461 return 0; | |
1462 } | |
1463 | |
1464 struct rtl_opt_pass pass_rtl_eh = | |
1465 { | |
1466 { | |
1467 RTL_PASS, | |
1468 "rtl eh", /* name */ | |
1469 gate_handle_eh, /* gate */ | |
1470 rest_of_handle_eh, /* execute */ | |
1471 NULL, /* sub */ | |
1472 NULL, /* next */ | |
1473 0, /* static_pass_number */ | |
1474 TV_JUMP, /* tv_id */ | |
1475 0, /* properties_required */ | |
1476 0, /* properties_provided */ | |
1477 0, /* properties_destroyed */ | |
1478 0, /* todo_flags_start */ | |
1479 TODO_dump_func /* todo_flags_finish */ | |
1480 } | |
1481 }; | |
2100 | 1482 |
2101 static hashval_t | |
2102 ehl_hash (const void *pentry) | |
2103 { | |
2104 const struct ehl_map_entry *const entry | |
2105 = (const struct ehl_map_entry *) pentry; | |
2106 | |
2107 /* 2^32 * ((sqrt(5) - 1) / 2) */ | |
2108 const hashval_t scaled_golden_ratio = 0x9e3779b9; | |
2109 return CODE_LABEL_NUMBER (entry->label) * scaled_golden_ratio; | |
2110 } | |
2111 | |
2112 static int | |
2113 ehl_eq (const void *pentry, const void *pdata) | |
2114 { | |
2115 const struct ehl_map_entry *const entry | |
2116 = (const struct ehl_map_entry *) pentry; | |
2117 const struct ehl_map_entry *const data | |
2118 = (const struct ehl_map_entry *) pdata; | |
2119 | |
2120 return entry->label == data->label; | |
2121 } | |
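The `ehl_hash` routine removed in this revision uses Knuth's multiplicative hashing: the label number is multiplied by 2^32 * ((sqrt(5) - 1) / 2), truncated to 32 bits, so sequential label numbers scatter widely across the table. A minimal standalone sketch of that technique (the function name here is illustrative, not part of GCC's API):

```c
#include <assert.h>
#include <stdint.h>

/* Multiplicative hash with the scaled golden ratio, as in ehl_hash.
   Unsigned multiplication wraps mod 2^32, which is the intended
   truncation.  */
uint32_t
golden_ratio_hash (uint32_t label_number)
{
  /* 2^32 * ((sqrt(5) - 1) / 2) */
  const uint32_t scaled_golden_ratio = 0x9e3779b9u;
  return label_number * scaled_golden_ratio;
}
```

Even adjacent inputs (1, 2, 3, ...) produce hash values far apart in the 32-bit space, which keeps chains short when the values are reduced to a small table size.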
2122 | |
2123 /* This section handles removing dead code for flow. */ | 1483 /* This section handles removing dead code for flow. */ |
2124 | 1484 |
2125 /* Remove LABEL from exception_handler_label_map. */ | 1485 void |
2126 | 1486 remove_eh_landing_pad (eh_landing_pad lp) |
2127 static void | 1487 { |
2128 remove_exception_handler_label (rtx label) | 1488 eh_landing_pad *pp; |
2129 { | 1489 |
2130 struct ehl_map_entry **slot, tmp; | 1490 for (pp = &lp->region->landing_pads; *pp != lp; pp = &(*pp)->next_lp) |
2131 | 1491 continue; |
2132 /* If exception_handler_label_map has not been built yet, | 1492 *pp = lp->next_lp; 
2133 there is nothing to do. */ | 1493 |
2134 if (crtl->eh.exception_handler_label_map == NULL) | 1494 if (lp->post_landing_pad) |
2135 return; | 1495 EH_LANDING_PAD_NR (lp->post_landing_pad) = 0; |
2136 | 1496 VEC_replace (eh_landing_pad, cfun->eh->lp_array, lp->index, NULL); |
2137 tmp.label = label; | 1497 } |
2138 slot = (struct ehl_map_entry **) | 1498 |
2139 htab_find_slot (crtl->eh.exception_handler_label_map, &tmp, NO_INSERT); | 1499 /* Splice REGION from the region tree. */ |
2140 gcc_assert (slot); | 1500 |
2141 | 1501 void |
2142 htab_clear_slot (crtl->eh.exception_handler_label_map, (void **) slot); | 1502 remove_eh_handler (eh_region region) |
2143 } | 1503 { |
2144 | 1504 eh_region *pp, *pp_start, p, outer; |
2145 /* Splice REGION from the region tree etc. */ | 1505 eh_landing_pad lp; |
2146 | 1506 |
2147 static void | 1507 for (lp = region->landing_pads; lp ; lp = lp->next_lp) |
2148 remove_eh_handler (struct eh_region *region) | 1508 { |
2149 { | 1509 if (lp->post_landing_pad) |
2150 struct eh_region **pp, **pp_start, *p, *outer, *inner; | 1510 EH_LANDING_PAD_NR (lp->post_landing_pad) = 0; |
2151 rtx lab; | 1511 VEC_replace (eh_landing_pad, cfun->eh->lp_array, lp->index, NULL); |
2152 | 1512 } |
2153 /* For the benefit of efficiently handling REG_EH_REGION notes, | |
2154 replace this region in the region array with its containing | |
2155 region. Note that previous region deletions may result in | |
2156 multiple copies of this region in the array, so we have a | |
2157 list of alternate numbers by which we are known. */ | |
2158 | 1513 |
2159 outer = region->outer; | 1514 outer = region->outer; |
2160 VEC_replace (eh_region, cfun->eh->region_array, region->region_number, outer); | |
2161 if (region->aka) | |
2162 { | |
2163 unsigned i; | |
2164 bitmap_iterator bi; | |
2165 | |
2166 EXECUTE_IF_SET_IN_BITMAP (region->aka, 0, i, bi) | |
2167 { | |
2168 VEC_replace (eh_region, cfun->eh->region_array, i, outer); | |
2169 } | |
2170 } | |
2171 | |
2172 if (outer) | |
2173 { | |
2174 if (!outer->aka) | |
2175 outer->aka = BITMAP_GGC_ALLOC (); | |
2176 if (region->aka) | |
2177 bitmap_ior_into (outer->aka, region->aka); | |
2178 bitmap_set_bit (outer->aka, region->region_number); | |
2179 } | |
2180 | |
2181 if (crtl->eh.built_landing_pads) | |
2182 lab = region->landing_pad; | |
2183 else | |
2184 lab = region->label; | |
2185 if (lab) | |
2186 remove_exception_handler_label (lab); | |
2187 | |
2188 if (outer) | 1515 if (outer) |
2189 pp_start = &outer->inner; | 1516 pp_start = &outer->inner; |
2190 else | 1517 else |
2191 pp_start = &cfun->eh->region_tree; | 1518 pp_start = &cfun->eh->region_tree; |
2192 for (pp = pp_start, p = *pp; p != region; pp = &p->next_peer, p = *pp) | 1519 for (pp = pp_start, p = *pp; p != region; pp = &p->next_peer, p = *pp) |
2193 continue; | 1520 continue; |
1521 if (region->inner) | |
1522 { | |
1523 *pp = p = region->inner; | |
1524 do | |
1525 { | |
1526 p->outer = outer; | |
1527 pp = &p->next_peer; | |
1528 p = *pp; | |
1529 } | |
1530 while (p); | |
1531 } | |
2194 *pp = region->next_peer; | 1532 *pp = region->next_peer; |
2195 | 1533 |
2196 inner = region->inner; | 1534 VEC_replace (eh_region, cfun->eh->region_array, region->index, NULL); |
2197 if (inner) | 1535 } |
2198 { | 1536 |
2199 for (p = inner; p->next_peer ; p = p->next_peer) | 1537 /* Invokes CALLBACK for every exception handler landing pad label. |
2200 p->outer = outer; | 1538 Only used by reload hackery; should not be used by new code. */ |
2201 p->outer = outer; | |
2202 | |
2203 p->next_peer = *pp_start; | |
2204 *pp_start = inner; | |
2205 } | |
2206 | |
2207 if (region->type == ERT_CATCH) | |
2208 { | |
2209 struct eh_region *eh_try, *next, *prev; | |
2210 | |
2211 for (eh_try = region->next_peer; | |
2212 eh_try->type == ERT_CATCH; | |
2213 eh_try = eh_try->next_peer) | |
2214 continue; | |
2215 gcc_assert (eh_try->type == ERT_TRY); | |
2216 | |
2217 next = region->u.eh_catch.next_catch; | |
2218 prev = region->u.eh_catch.prev_catch; | |
2219 | |
2220 if (next) | |
2221 next->u.eh_catch.prev_catch = prev; | |
2222 else | |
2223 eh_try->u.eh_try.last_catch = prev; | |
2224 if (prev) | |
2225 prev->u.eh_catch.next_catch = next; | |
2226 else | |
2227 { | |
2228 eh_try->u.eh_try.eh_catch = next; | |
2229 if (! next) | |
2230 remove_eh_handler (eh_try); | |
2231 } | |
2232 } | |
2233 } | |
2234 | |
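The new `remove_eh_handler` splices a region out of the tree in one pass: unlink it from its parent's peer list, re-parent its children, and let the children take its place followed by its former siblings. The same pointer surgery can be modeled on a plain tree of structs; `toy_region` below is a stand-in with the same shape as `eh_region`'s links, not GCC's actual type:

```c
#include <assert.h>
#include <stddef.h>

/* Toy region tree: OUTER is the parent, INNER the first child,
   NEXT_PEER the sibling link, mirroring eh_region's layout.  */
struct toy_region
{
  struct toy_region *outer, *inner, *next_peer;
};

/* Unlink REGION from the tree rooted at *ROOT.  Its children are
   re-parented to REGION's parent and spliced into the peer list in
   REGION's position, followed by REGION's former siblings.  */
void
splice_toy_region (struct toy_region **root, struct toy_region *region)
{
  struct toy_region **pp, *p, *outer = region->outer;

  /* Find the link that points at REGION.  */
  pp = outer ? &outer->inner : root;
  for (p = *pp; p != region; pp = &p->next_peer, p = *pp)
    continue;

  /* Hoist the children, fixing their parent pointers.  */
  if (region->inner)
    {
      *pp = p = region->inner;
      do
        {
          p->outer = outer;
          pp = &p->next_peer;
          p = *pp;
        }
      while (p);
    }
  *pp = region->next_peer;
}
```

After the loop, `pp` points at the last hoisted child's `next_peer` slot (or at the original link when there were no children), so the final assignment closes the peer list over the gap in either case.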
2235 /* LABEL heads a basic block that is about to be deleted. If this | |
2236 label corresponds to an exception region, we may be able to | |
2237 delete the region. */ | |
2238 | |
2239 void | |
2240 maybe_remove_eh_handler (rtx label) | |
2241 { | |
2242 struct ehl_map_entry **slot, tmp; | |
2243 struct eh_region *region; | |
2244 | |
2245 /* ??? After generating landing pads, it's not so simple to determine | |
2246 if the region data is completely unused. One must examine the | |
2247 landing pad and the post landing pad, and whether an inner try block | |
2248 is referencing the catch handlers directly. */ | |
2249 if (crtl->eh.built_landing_pads) | |
2250 return; | |
2251 | |
2252 tmp.label = label; | |
2253 slot = (struct ehl_map_entry **) | |
2254 htab_find_slot (crtl->eh.exception_handler_label_map, &tmp, NO_INSERT); | |
2255 if (! slot) | |
2256 return; | |
2257 region = (*slot)->region; | |
2258 if (! region) | |
2259 return; | |
2260 | |
2261 /* Flow will want to remove MUST_NOT_THROW regions as unreachable | |
2262 because there is no path to the fallback call to terminate. | |
2263 But the region continues to affect call-site data until there | |
2264 are no more contained calls, which we don't see here. */ | |
2265 if (region->type == ERT_MUST_NOT_THROW) | |
2266 { | |
2267 htab_clear_slot (crtl->eh.exception_handler_label_map, (void **) slot); | |
2268 region->label = NULL_RTX; | |
2269 } | |
2270 else | |
2271 remove_eh_handler (region); | |
2272 } | |
2273 | |
2274 /* Invokes CALLBACK for every exception handler label. Only used by old | |
2275 loop hackery; should not be used by new code. */ | |
2276 | 1539 |
2277 void | 1540 void |
2278 for_each_eh_label (void (*callback) (rtx)) | 1541 for_each_eh_label (void (*callback) (rtx)) |
2279 { | 1542 { |
2280 htab_traverse (crtl->eh.exception_handler_label_map, for_each_eh_label_1, | 1543 eh_landing_pad lp; |
2281 (void *) &callback); | 1544 int i; |
2282 } | 1545 |
2283 | 1546 for (i = 1; VEC_iterate (eh_landing_pad, cfun->eh->lp_array, i, lp); ++i) |
2284 static int | 1547 { |
2285 for_each_eh_label_1 (void **pentry, void *data) | 1548 if (lp) |
2286 { | 1549 { |
2287 struct ehl_map_entry *entry = *(struct ehl_map_entry **)pentry; | 1550 rtx lab = lp->landing_pad; |
2288 void (*callback) (rtx) = *(void (**) (rtx)) data; | 1551 if (lab && LABEL_P (lab)) |
2289 | 1552 (*callback) (lab); |
2290 (*callback) (entry->label); | 1553 } |
2291 return 1; | 1554 } |
2292 } | 1555 } |
2293 | 1556 |
2294 /* Invoke CALLBACK for every exception region in the current function. */ | 1557 /* Create the REG_EH_REGION note for INSN, given its ECF_FLAGS for a |
1558 call insn. | |
1559 | |
1560 At the gimple level, we use LP_NR | |
1561 > 0 : The statement transfers to landing pad LP_NR | |
1562 = 0 : The statement is outside any EH region | |
1563 < 0 : The statement is within MUST_NOT_THROW region -LP_NR. | |
1564 | |
1565 At the rtl level, we use LP_NR | |
1566 > 0 : The insn transfers to landing pad LP_NR | |
1567 = 0 : The insn cannot throw | |
1568 < 0 : The insn is within MUST_NOT_THROW region -LP_NR | |
1569 = INT_MIN : The insn cannot throw or execute a nonlocal-goto. | |
1570 missing note: The insn is outside any EH region. | |
1571 | |
1572 ??? This difference probably ought to be avoided. We could stand | |
1573 to record nothrow for arbitrary gimple statements, and so avoid | |
1574 some moderately complex lookups in stmt_could_throw_p. Perhaps | |
1575 NOTHROW should be mapped on both sides to INT_MIN. Perhaps the | |
1576 no-nonlocal-goto property should be recorded elsewhere as a bit | |
1577 on the call_insn directly. Perhaps we should make more use of | |
1578 attaching the trees to call_insns (reachable via symbol_ref in | |
1579 direct call cases) and just pull the data out of the trees. */ | |
2295 | 1580 |
2296 void | 1581 void |
2297 for_each_eh_region (void (*callback) (struct eh_region *)) | 1582 make_reg_eh_region_note (rtx insn, int ecf_flags, int lp_nr) |
2298 { | 1583 { |
2299 int i, n = cfun->eh->last_region_number; | 1584 rtx value; |
2300 for (i = 1; i <= n; ++i) | 1585 if (ecf_flags & ECF_NOTHROW) |
2301 { | 1586 value = const0_rtx; |
2302 struct eh_region *region; | 1587 else if (lp_nr != 0) |
2303 | 1588 value = GEN_INT (lp_nr); |
2304 region = VEC_index (eh_region, cfun->eh->region_array, i); | |
2305 if (region) | |
2306 (*callback) (region); | |
2307 } | |
2308 } | |
2309 | |
2310 /* This section describes CFG exception edges for flow. */ | |
2311 | |
2312 /* For communicating between calls to reachable_next_level. */ | |
2313 struct reachable_info | |
2314 { | |
2315 tree types_caught; | |
2316 tree types_allowed; | |
2317 void (*callback) (struct eh_region *, void *); | |
2318 void *callback_data; | |
2319 bool saw_any_handlers; | |
2320 }; | |
2321 | |
2322 /* A subroutine of reachable_next_level. Return true if TYPE, or a | |
2323 base class of TYPE, is in HANDLED. */ | |
2324 | |
2325 static int | |
2326 check_handled (tree handled, tree type) | |
2327 { | |
2328 tree t; | |
2329 | |
2330 /* We can check for exact matches without front-end help. */ | |
2331 if (! lang_eh_type_covers) | |
2332 { | |
2333 for (t = handled; t ; t = TREE_CHAIN (t)) | |
2334 if (TREE_VALUE (t) == type) | |
2335 return 1; | |
2336 } | |
2337 else | 1589 else |
2338 { | |
2339 for (t = handled; t ; t = TREE_CHAIN (t)) | |
2340 if ((*lang_eh_type_covers) (TREE_VALUE (t), type)) | |
2341 return 1; | |
2342 } | |
2343 | |
2344 return 0; | |
2345 } | |
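`check_handled` has two modes: without front-end help it can only compare types for pointer equality, while with `lang_eh_type_covers` a handled base class may also cover a derived thrown type. A hedged sketch of that dispatch over an ordinary linked list (all names here are illustrative, not GCC's):

```c
#include <assert.h>
#include <stddef.h>

/* A toy singly linked list of handled type ids, standing in for the
   TREE_CHAIN list walked by check_handled.  */
struct type_node
{
  int type_id;
  struct type_node *next;
};

/* Return nonzero if TYPE_ID is handled by the list.  When COVERS is
   non-null it plays the role of lang_eh_type_covers; otherwise only
   exact matches can be detected.  */
int
toy_check_handled (struct type_node *handled, int type_id,
                   int (*covers) (int handled_id, int thrown_id))
{
  struct type_node *t;

  if (!covers)
    {
      for (t = handled; t; t = t->next)
        if (t->type_id == type_id)
          return 1;
    }
  else
    {
      for (t = handled; t; t = t->next)
        if (covers (t->type_id, type_id))
          return 1;
    }
  return 0;
}
```

This is why `reachable_next_level` below must fall back to "maybe caught" whenever `lang_eh_type_covers` is absent: an exact-match miss says nothing about base-class coverage.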
2346 | |
2347 /* A subroutine of reachable_next_level. If we are collecting a list | |
2348 of handlers, add one. After landing pad generation, reference | |
2349 it instead of the handlers themselves. Further, the handlers are | |
2350 all wired together, so by referencing one, we've got them all. | |
2351 Before landing pad generation we reference each handler individually. | |
2352 | |
2353 LP_REGION contains the landing pad; REGION is the handler. */ | |
2354 | |
2355 static void | |
2356 add_reachable_handler (struct reachable_info *info, | |
2357 struct eh_region *lp_region, struct eh_region *region) | |
2358 { | |
2359 if (! info) | |
2360 return; | 1590 return; |
2361 | 1591 add_reg_note (insn, REG_EH_REGION, value); |
2362 info->saw_any_handlers = true; | 1592 } |
2363 | 1593 |
2364 if (crtl->eh.built_landing_pads) | 1594 /* Create a REG_EH_REGION note for a CALL_INSN that cannot throw |
2365 info->callback (lp_region, info->callback_data); | 1595 nor perform a non-local goto. Replace the region note if it |
1596 already exists. */ | |
1597 | |
1598 void | |
1599 make_reg_eh_region_note_nothrow_nononlocal (rtx insn) | |
1600 { | |
1601 rtx note = find_reg_note (insn, REG_EH_REGION, NULL_RTX); | |
1602 rtx intmin = GEN_INT (INT_MIN); | |
1603 | |
1604 if (note != 0) | |
1605 XEXP (note, 0) = intmin; | |
2366 else | 1606 else |
2367 info->callback (region, info->callback_data); | 1607 add_reg_note (insn, REG_EH_REGION, intmin); |
2368 } | 1608 } |
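The new rtl-level encoding documented above packs four distinct cases into one REG_EH_REGION integer. A toy decoder makes the case analysis explicit (the function is illustrative only; GCC reads the note via `get_eh_region_and_lp_from_rtx`):

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

/* Decode an rtl-level REG_EH_REGION value per the comment above.
   INT_MIN must be tested before the sign check, since it is also
   negative.  A missing note (no value at all) is the remaining
   case: the insn is outside any EH region.  */
const char *
describe_lp_nr (int lp_nr)
{
  if (lp_nr == INT_MIN)
    return "cannot throw or execute a nonlocal goto";
  else if (lp_nr > 0)
    return "transfers to a landing pad";
  else if (lp_nr < 0)
    return "within a MUST_NOT_THROW region";
  else
    return "cannot throw";
}
```

Note how `can_nonlocal_goto` further down depends on exactly this ordering: it treats any note value other than INT_MIN on a call as permitting a non-local goto.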
2369 | 1609 |
2370 /* Process one level of exception regions for reachability. | 1610 /* Return true if INSN could throw, assuming no REG_EH_REGION note |
2371 If TYPE_THROWN is non-null, then it is the *exact* type being | 1611 to the contrary. */ |
2372 propagated. If INFO is non-null, then collect handler labels | 1612 |
2373 and caught/allowed type information between invocations. */ | 1613 bool |
2374 | 1614 insn_could_throw_p (const_rtx insn) |
2375 static enum reachable_code | 1615 { |
2376 reachable_next_level (struct eh_region *region, tree type_thrown, | 1616 if (CALL_P (insn)) |
2377 struct reachable_info *info) | 1617 return true; |
2378 { | 1618 if (INSN_P (insn) && flag_non_call_exceptions) |
2379 switch (region->type) | 1619 return may_trap_p (PATTERN (insn)); |
2380 { | 1620 return false; |
2381 case ERT_CLEANUP: | 1621 } |
2382 /* Before landing-pad generation, we model control flow | 1622 |
2383 directly to the individual handlers. In this way we can | 1623 /* Copy a REG_EH_REGION note to each insn that might throw beginning 
2384 see that catch handler types may shadow one another. */ | 1624 at FIRST and ending at LAST. NOTE_OR_INSN is either the source insn |
2385 add_reachable_handler (info, region, region); | 1625 to look for a note, or the note itself. */ |
2386 return RNL_MAYBE_CAUGHT; | |
2387 | |
2388 case ERT_TRY: | |
2389 { | |
2390 struct eh_region *c; | |
2391 enum reachable_code ret = RNL_NOT_CAUGHT; | |
2392 | |
2393 for (c = region->u.eh_try.eh_catch; c ; c = c->u.eh_catch.next_catch) | |
2394 { | |
2395 /* A catch-all handler ends the search. */ | |
2396 if (c->u.eh_catch.type_list == NULL) | |
2397 { | |
2398 add_reachable_handler (info, region, c); | |
2399 return RNL_CAUGHT; | |
2400 } | |
2401 | |
2402 if (type_thrown) | |
2403 { | |
2404 /* If we have at least one type match, end the search. */ | |
2405 tree tp_node = c->u.eh_catch.type_list; | |
2406 | |
2407 for (; tp_node; tp_node = TREE_CHAIN (tp_node)) | |
2408 { | |
2409 tree type = TREE_VALUE (tp_node); | |
2410 | |
2411 if (type == type_thrown | |
2412 || (lang_eh_type_covers | |
2413 && (*lang_eh_type_covers) (type, type_thrown))) | |
2414 { | |
2415 add_reachable_handler (info, region, c); | |
2416 return RNL_CAUGHT; | |
2417 } | |
2418 } | |
2419 | |
2420 /* If we have definitive information of a match failure, | |
2421 the catch won't trigger. */ | |
2422 if (lang_eh_type_covers) | |
2423 return RNL_NOT_CAUGHT; | |
2424 } | |
2425 | |
2426 /* At this point, we either don't know what type is thrown or | |
2427 don't have front-end assistance to help deciding if it is | |
2428 covered by one of the types in the list for this region. | |
2429 | |
2430 We'd then like to add this region to the list of reachable | |
2431 handlers since it is indeed potentially reachable based on the | |
2432 information we have. | |
2433 | |
2434 Actually, this handler is for sure not reachable if all the | |
2435 types it matches have already been caught. That is, it is only | |
2436 potentially reachable if at least one of the types it catches | |
2437 has not been previously caught. */ | |
2438 | |
2439 if (! info) | |
2440 ret = RNL_MAYBE_CAUGHT; | |
2441 else | |
2442 { | |
2443 tree tp_node = c->u.eh_catch.type_list; | |
2444 bool maybe_reachable = false; | |
2445 | |
2446 /* Compute the potential reachability of this handler and | |
2447 update the list of types caught at the same time. */ | |
2448 for (; tp_node; tp_node = TREE_CHAIN (tp_node)) | |
2449 { | |
2450 tree type = TREE_VALUE (tp_node); | |
2451 | |
2452 if (! check_handled (info->types_caught, type)) | |
2453 { | |
2454 info->types_caught | |
2455 = tree_cons (NULL, type, info->types_caught); | |
2456 | |
2457 maybe_reachable = true; | |
2458 } | |
2459 } | |
2460 | |
2461 if (maybe_reachable) | |
2462 { | |
2463 add_reachable_handler (info, region, c); | |
2464 | |
2465 /* ??? If the catch type is a base class of every allowed | |
2466 type, then we know we can stop the search. */ | |
2467 ret = RNL_MAYBE_CAUGHT; | |
2468 } | |
2469 } | |
2470 } | |
2471 | |
2472 return ret; | |
2473 } | |
2474 | |
2475 case ERT_ALLOWED_EXCEPTIONS: | |
2476 /* An empty list of types definitely ends the search. */ | |
2477 if (region->u.allowed.type_list == NULL_TREE) | |
2478 { | |
2479 add_reachable_handler (info, region, region); | |
2480 return RNL_CAUGHT; | |
2481 } | |
2482 | |
2483 /* Collect a list of lists of allowed types for use in detecting | |
2484 when a catch may be transformed into a catch-all. */ | |
2485 if (info) | |
2486 info->types_allowed = tree_cons (NULL_TREE, | |
2487 region->u.allowed.type_list, | |
2488 info->types_allowed); | |
2489 | |
2490 /* If we have definitive information about the type hierarchy, | |
2491 then we can tell if the thrown type will pass through the | |
2492 filter. */ | |
2493 if (type_thrown && lang_eh_type_covers) | |
2494 { | |
2495 if (check_handled (region->u.allowed.type_list, type_thrown)) | |
2496 return RNL_NOT_CAUGHT; | |
2497 else | |
2498 { | |
2499 add_reachable_handler (info, region, region); | |
2500 return RNL_CAUGHT; | |
2501 } | |
2502 } | |
2503 | |
2504 add_reachable_handler (info, region, region); | |
2505 return RNL_MAYBE_CAUGHT; | |
2506 | |
2507 case ERT_CATCH: | |
2508 /* Catch regions are handled by their controlling try region. */ | |
2509 return RNL_NOT_CAUGHT; | |
2510 | |
2511 case ERT_MUST_NOT_THROW: | |
2512 /* Here we end our search, since no exceptions may propagate. | |
2513 If we've touched down at some landing pad previously, then the | 
2514 explicit function call we generated may be used. Otherwise | |
2515 the call is made by the runtime. | |
2516 | |
2517 Before inlining, do not perform this optimization. We may | |
2518 inline a subroutine that contains handlers, and that will | |
2519 change the value of saw_any_handlers. */ | |
2520 | |
2521 if ((info && info->saw_any_handlers) || !cfun->after_inlining) | |
2522 { | |
2523 add_reachable_handler (info, region, region); | |
2524 return RNL_CAUGHT; | |
2525 } | |
2526 else | |
2527 return RNL_BLOCKED; | |
2528 | |
2529 case ERT_THROW: | |
2530 case ERT_UNKNOWN: | |
2531 /* Shouldn't see these here. */ | |
2532 gcc_unreachable (); | |
2533 break; | |
2534 default: | |
2535 gcc_unreachable (); | |
2536 } | |
2537 } | |
2538 | |
2539 /* Invoke CALLBACK on each region reachable from REGION_NUMBER. */ | |
2540 | 1626 |
2541 void | 1627 void |
2542 foreach_reachable_handler (int region_number, bool is_resx, | 1628 copy_reg_eh_region_note_forward (rtx note_or_insn, rtx first, rtx last) |
2543 void (*callback) (struct eh_region *, void *), | 1629 { |
2544 void *callback_data) | 1630 rtx insn, note = note_or_insn; |
2545 { | 1631 |
2546 struct reachable_info info; | 1632 if (INSN_P (note_or_insn)) |
2547 struct eh_region *region; | 1633 { |
2548 tree type_thrown; | 1634 note = find_reg_note (note_or_insn, REG_EH_REGION, NULL_RTX); |
2549 | 1635 if (note == NULL) |
2550 memset (&info, 0, sizeof (info)); | |
2551 info.callback = callback; | |
2552 info.callback_data = callback_data; | |
2553 | |
2554 region = VEC_index (eh_region, cfun->eh->region_array, region_number); | |
2555 | |
2556 type_thrown = NULL_TREE; | |
2557 if (is_resx) | |
2558 { | |
2559 /* A RESX leaves a region instead of entering it. Thus the | |
2560 region itself may have been deleted out from under us. */ | |
2561 if (region == NULL) | |
2562 return; | 1636 return; |
2563 region = region->outer; | 1637 } |
2564 } | 1638 note = XEXP (note, 0); |
2565 else if (region->type == ERT_THROW) | 1639 |
2566 { | 1640 for (insn = first; insn != last ; insn = NEXT_INSN (insn)) |
2567 type_thrown = region->u.eh_throw.type; | 1641 if (!find_reg_note (insn, REG_EH_REGION, NULL_RTX) |
2568 region = region->outer; | 1642 && insn_could_throw_p (insn)) |
2569 } | 1643 add_reg_note (insn, REG_EH_REGION, note); |
2570 | 1644 } |
2571 while (region) | 1645 |
2572 { | 1646 /* Likewise, but iterate backward. */ |
2573 if (reachable_next_level (region, type_thrown, &info) >= RNL_CAUGHT) | 1647 |
2574 break; | 1648 void |
2575 /* If we have processed one cleanup, there is no point in | 1649 copy_reg_eh_region_note_backward (rtx note_or_insn, rtx last, rtx first) |
2576 processing any more of them. Each cleanup will have an edge | 1650 { |
2577 to the next outer cleanup region, so the flow graph will be | 1651 rtx insn, note = note_or_insn; |
2578 accurate. */ | 1652 |
2579 if (region->type == ERT_CLEANUP) | 1653 if (INSN_P (note_or_insn)) |
2580 region = region->u.cleanup.prev_try; | 1654 { |
2581 else | 1655 note = find_reg_note (note_or_insn, REG_EH_REGION, NULL_RTX); |
2582 region = region->outer; | 1656 if (note == NULL) |
2583 } | 1657 return; |
2584 } | 1658 } |
2585 | 1659 note = XEXP (note, 0); |
2586 /* Retrieve a list of labels of exception handlers which can be | 1660 |
2587 reached by a given insn. */ | 1661 for (insn = last; insn != first; insn = PREV_INSN (insn)) |
2588 | 1662 if (insn_could_throw_p (insn)) |
2589 static void | 1663 add_reg_note (insn, REG_EH_REGION, note); |
2590 arh_to_landing_pad (struct eh_region *region, void *data) | 1664 } |
2591 { | 1665 |
2592 rtx *p_handlers = (rtx *) data; | 1666 |
2593 if (! *p_handlers) | 1667 /* Extract all EH information from INSN. Return true if the insn |
2594 *p_handlers = alloc_INSN_LIST (region->landing_pad, NULL_RTX); | 1668 was marked NOTHROW. */ |
2595 } | 1669 |
2596 | 1670 static bool |
2597 static void | 1671 get_eh_region_and_lp_from_rtx (const_rtx insn, eh_region *pr, |
2598 arh_to_label (struct eh_region *region, void *data) | 1672 eh_landing_pad *plp) |
2599 { | 1673 { |
2600 rtx *p_handlers = (rtx *) data; | 1674 eh_landing_pad lp = NULL; |
2601 *p_handlers = alloc_INSN_LIST (region->label, *p_handlers); | 1675 eh_region r = NULL; |
2602 } | 1676 bool ret = false; |
2603 | |
2604 rtx | |
2605 reachable_handlers (rtx insn) | |
2606 { | |
2607 bool is_resx = false; | |
2608 rtx handlers = NULL; | |
2609 int region_number; | |
2610 | |
2611 if (JUMP_P (insn) | |
2612 && GET_CODE (PATTERN (insn)) == RESX) | |
2613 { | |
2614 region_number = XINT (PATTERN (insn), 0); | |
2615 is_resx = true; | |
2616 } | |
2617 else | |
2618 { | |
2619 rtx note = find_reg_note (insn, REG_EH_REGION, NULL_RTX); | |
2620 if (!note || INTVAL (XEXP (note, 0)) <= 0) | |
2621 return NULL; | |
2622 region_number = INTVAL (XEXP (note, 0)); | |
2623 } | |
2624 | |
2625 foreach_reachable_handler (region_number, is_resx, | |
2626 (crtl->eh.built_landing_pads | |
2627 ? arh_to_landing_pad | |
2628 : arh_to_label), | |
2629 &handlers); | |
2630 | |
2631 return handlers; | |
2632 } | |
2633 | |
2634 /* Determine if the given INSN can throw an exception that is caught | |
2635 within the function. */ | |
2636 | |
2637 bool | |
2638 can_throw_internal_1 (int region_number, bool is_resx) | |
2639 { | |
2640 struct eh_region *region; | |
2641 tree type_thrown; | |
2642 | |
2643 region = VEC_index (eh_region, cfun->eh->region_array, region_number); | |
2644 | |
2645 type_thrown = NULL_TREE; | |
2646 if (is_resx) | |
2647 region = region->outer; | |
2648 else if (region->type == ERT_THROW) | |
2649 { | |
2650 type_thrown = region->u.eh_throw.type; | |
2651 region = region->outer; | |
2652 } | |
2653 | |
2654 /* If this exception is ignored by each and every containing region, | |
2655 then control passes straight out. The runtime may handle some | |
2656 regions, which also do not require processing internally. */ | |
2657 for (; region; region = region->outer) | |
2658 { | |
2659 enum reachable_code how = reachable_next_level (region, type_thrown, 0); | |
2660 if (how == RNL_BLOCKED) | |
2661 return false; | |
2662 if (how != RNL_NOT_CAUGHT) | |
2663 return true; | |
2664 } | |
2665 | |
2666 return false; | |
2667 } | |
2668 | |
2669 bool | |
2670 can_throw_internal (const_rtx insn) | |
2671 { | |
2672 rtx note; | 1677 rtx note; |
1678 int lp_nr; | |
2673 | 1679 |
2674 if (! INSN_P (insn)) | 1680 if (! INSN_P (insn)) |
2675 return false; | 1681 goto egress; |
2676 | |
2677 if (JUMP_P (insn) | |
2678 && GET_CODE (PATTERN (insn)) == RESX | |
2679 && XINT (PATTERN (insn), 0) > 0) | |
2680 return can_throw_internal_1 (XINT (PATTERN (insn), 0), true); | |
2681 | 1682 |
2682 if (NONJUMP_INSN_P (insn) | 1683 if (NONJUMP_INSN_P (insn) |
2683 && GET_CODE (PATTERN (insn)) == SEQUENCE) | 1684 && GET_CODE (PATTERN (insn)) == SEQUENCE) |
2684 insn = XVECEXP (PATTERN (insn), 0, 0); | 1685 insn = XVECEXP (PATTERN (insn), 0, 0); |
2685 | 1686 |
2686 /* Every insn that might throw has an EH_REGION note. */ | |
2687 note = find_reg_note (insn, REG_EH_REGION, NULL_RTX); | 1687 note = find_reg_note (insn, REG_EH_REGION, NULL_RTX); |
2688 if (!note || INTVAL (XEXP (note, 0)) <= 0) | 1688 if (!note) |
2689 return false; | 1689 { |
2690 | 1690 ret = !insn_could_throw_p (insn); |
2691 return can_throw_internal_1 (INTVAL (XEXP (note, 0)), false); | 1691 goto egress; |
2692 } | 1692 } |
2693 | 1693 |
2694 /* Determine if the given INSN can throw an exception that is | 1694 lp_nr = INTVAL (XEXP (note, 0)); |
2695 visible outside the function. */ | 1695 if (lp_nr == 0 || lp_nr == INT_MIN) |
1696 { | |
1697 ret = true; | |
1698 goto egress; | |
1699 } | |
1700 | |
1701 if (lp_nr < 0) | |
1702 r = VEC_index (eh_region, cfun->eh->region_array, -lp_nr); | |
1703 else | |
1704 { | |
1705 lp = VEC_index (eh_landing_pad, cfun->eh->lp_array, lp_nr); | |
1706 r = lp->region; | |
1707 } | |
1708 | |
1709 egress: | |
1710 *plp = lp; | |
1711 *pr = r; | |
1712 return ret; | |
1713 } | |
1714 | |
1715 /* Return the landing pad to which INSN may go, or NULL if it does not | |
1716 have a reachable landing pad within this function. */ | |
1717 | |
1718 eh_landing_pad | |
1719 get_eh_landing_pad_from_rtx (const_rtx insn) | |
1720 { | |
1721 eh_landing_pad lp; | |
1722 eh_region r; | |
1723 | |
1724 get_eh_region_and_lp_from_rtx (insn, &r, &lp); | |
1725 return lp; | |
1726 } | |
1727 | |
1728 /* Return the region to which INSN may go, or NULL if it does not | |
1729 have a reachable region within this function. */ | |
1730 | |
1731 eh_region | |
1732 get_eh_region_from_rtx (const_rtx insn) | |
1733 { | |
1734 eh_landing_pad lp; | |
1735 eh_region r; | |
1736 | |
1737 get_eh_region_and_lp_from_rtx (insn, &r, &lp); | |
1738 return r; | |
1739 } | |
1740 | |
1741 /* Return true if INSN throws and is caught by something in this function. */ | |
2696 | 1742 |
2697 bool | 1743 bool |
2698 can_throw_external_1 (int region_number, bool is_resx) | 1744 can_throw_internal (const_rtx insn) |
2699 { | 1745 { |
2700 struct eh_region *region; | 1746 return get_eh_landing_pad_from_rtx (insn) != NULL; |
2701 tree type_thrown; | 1747 } |
2702 | 1748 |
2703 region = VEC_index (eh_region, cfun->eh->region_array, region_number); | 1749 /* Return true if INSN throws and escapes from the current function. */ |
2704 | |
2705 type_thrown = NULL_TREE; | |
2706 if (is_resx) | |
2707 region = region->outer; | |
2708 else if (region->type == ERT_THROW) | |
2709 { | |
2710 type_thrown = region->u.eh_throw.type; | |
2711 region = region->outer; | |
2712 } | |
2713 | |
2714 /* If the exception is caught or blocked by any containing region, | |
2715 then it is not seen by any calling function. */ | |
2716 for (; region ; region = region->outer) | |
2717 if (reachable_next_level (region, type_thrown, NULL) >= RNL_CAUGHT) | |
2718 return false; | |
2719 | |
2720 return true; | |
2721 } | |
2722 | 1750 |
2723 bool | 1751 bool |
2724 can_throw_external (const_rtx insn) | 1752 can_throw_external (const_rtx insn) |
2725 { | 1753 { |
2726 rtx note; | 1754 eh_landing_pad lp; |
1755 eh_region r; | |
1756 bool nothrow; | |
2727 | 1757 |
2728 if (! INSN_P (insn)) | 1758 if (! INSN_P (insn)) |
2729 return false; | 1759 return false; |
2730 | 1760 |
2731 if (JUMP_P (insn) | |
2732 && GET_CODE (PATTERN (insn)) == RESX | |
2733 && XINT (PATTERN (insn), 0) > 0) | |
2734 return can_throw_external_1 (XINT (PATTERN (insn), 0), true); | |
2735 | |
2736 if (NONJUMP_INSN_P (insn) | 1761 if (NONJUMP_INSN_P (insn) |
2737 && GET_CODE (PATTERN (insn)) == SEQUENCE) | 1762 && GET_CODE (PATTERN (insn)) == SEQUENCE) |
2738 insn = XVECEXP (PATTERN (insn), 0, 0); | 1763 { |
2739 | 1764 rtx seq = PATTERN (insn); |
2740 note = find_reg_note (insn, REG_EH_REGION, NULL_RTX); | 1765 int i, n = XVECLEN (seq, 0); |
2741 if (!note) | 1766 |
2742 { | 1767 for (i = 0; i < n; i++) |
2743 /* Calls (and trapping insns) without notes are outside any | 1768 if (can_throw_external (XVECEXP (seq, 0, i))) |
2744 exception handling region in this function. We have to | 1769 return true; |
2745 assume it might throw. Given that the front end and middle | 1770 |
2746 ends mark known NOTHROW functions, this isn't so wildly | 1771 return false; |
2747 inaccurate. */ | 1772 } |
2748 return (CALL_P (insn) | 1773 |
2749 || (flag_non_call_exceptions | 1774 nothrow = get_eh_region_and_lp_from_rtx (insn, &r, &lp); |
2750 && may_trap_p (PATTERN (insn)))); | 1775 |
2751 } | 1776 /* If we can't throw, we obviously can't throw external. */ |
2752 if (INTVAL (XEXP (note, 0)) <= 0) | 1777 if (nothrow) |
2753 return false; | 1778 return false; |
2754 | 1779 |
2755 return can_throw_external_1 (INTVAL (XEXP (note, 0)), false); | 1780 /* If we have an internal landing pad, then we're not external. */ |
2756 } | 1781 if (lp != NULL) |
2757 | 1782 return false; |
1783 | |
1784 /* If we're not within an EH region, then we are external. */ | |
1785 if (r == NULL) | |
1786 return true; | |
1787 | |
1788 /* The only thing that ought to be left is MUST_NOT_THROW regions, | |
1789 which don't always have landing pads. */ | |
1790 gcc_assert (r->type == ERT_MUST_NOT_THROW); | |
1791 return false; | |
1792 } | |
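The rewritten predicate above is a four-step decision list: a nothrow insn can never throw, an internal landing pad keeps any throw inside the function, an insn outside every region throws externally, and the only case left is a MUST_NOT_THROW region without a landing pad. A minimal standalone sketch of that ordering (hypothetical helper name, not the GCC API):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model (not the GCC API) of the decision order in the
   new can_throw_external body above.  */
static bool
model_can_throw_external (bool nothrow, bool has_landing_pad, bool has_region)
{
  if (nothrow)
    return false;             /* cannot throw at all */
  if (has_landing_pad)
    return false;             /* the throw is handled inside this function */
  if (!has_region)
    return true;              /* no enclosing region: escapes to the caller */
  return false;               /* MUST_NOT_THROW region without a landing pad */
}
```

The order matters: the landing-pad test must come before the region test, since a region with an internal landing pad must not be reported as throwing externally.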
1793 | |
1794 /* Return true if INSN cannot throw at all. */ | |
1795 | |
1796 bool | |
1797 insn_nothrow_p (const_rtx insn) | |
1798 { | |
1799 eh_landing_pad lp; | |
1800 eh_region r; | |
1801 | |
1802 if (! INSN_P (insn)) | |
1803 return true; | |
1804 | |
1805 if (NONJUMP_INSN_P (insn) | |
1806 && GET_CODE (PATTERN (insn)) == SEQUENCE) | |
1807 { | |
1808 rtx seq = PATTERN (insn); | |
1809 int i, n = XVECLEN (seq, 0); | |
1810 | |
1811 for (i = 0; i < n; i++) | |
1812 if (!insn_nothrow_p (XVECEXP (seq, 0, i))) | |
1813 return false; | |
1814 | |
1815 return true; | |
1816 } | |
1817 | |
1818 return get_eh_region_and_lp_from_rtx (insn, &r, &lp); | |
1819 } | |
1820 | |
1821 /* Return true if INSN can perform a non-local goto. */ | |
1822 /* ??? This test is here in this file because it (ab)uses REG_EH_REGION. */ | |
1823 | |
1824 bool | |
1825 can_nonlocal_goto (const_rtx insn) | |
1826 { | |
1827 if (nonlocal_goto_handler_labels && CALL_P (insn)) | |
1828 { | |
1829 rtx note = find_reg_note (insn, REG_EH_REGION, NULL_RTX); | |
1830 if (!note || INTVAL (XEXP (note, 0)) != INT_MIN) | |
1831 return true; | |
1832 } | |
1833 return false; | |
1834 } | |
1835 | |
2758 /* Set TREE_NOTHROW and crtl->all_throwers_are_sibcalls. */ | 1836 /* Set TREE_NOTHROW and crtl->all_throwers_are_sibcalls. */ |
2759 | 1837 |
2760 unsigned int | 1838 static unsigned int |
2761 set_nothrow_function_flags (void) | 1839 set_nothrow_function_flags (void) |
2762 { | 1840 { |
2763 rtx insn; | 1841 rtx insn; |
2764 | 1842 |
2765 /* If we don't know that this implementation of the function will | 1843 crtl->nothrow = 1; |
2766 actually be used, then we must not set TREE_NOTHROW, since | |
2767 callers must not assume that this function does not throw. */ | |
2768 if (DECL_REPLACEABLE_P (current_function_decl)) | |
2769 return 0; | |
2770 | |
2771 TREE_NOTHROW (current_function_decl) = 1; | |
2772 | 1844 |
2773 /* Assume crtl->all_throwers_are_sibcalls until we encounter | 1845 /* Assume crtl->all_throwers_are_sibcalls until we encounter |
2774 something that can throw an exception. We specifically exempt | 1846 something that can throw an exception. We specifically exempt |
2775 CALL_INSNs that are SIBLING_CALL_P, as these are really jumps, | 1847 CALL_INSNs that are SIBLING_CALL_P, as these are really jumps, |
2776 and can't throw. Most CALL_INSNs are not SIBLING_CALL_P, so this | 1848 and can't throw. Most CALL_INSNs are not SIBLING_CALL_P, so this |
2777 is optimistic. */ | 1849 is optimistic. */ |
2778 | 1850 |
2779 crtl->all_throwers_are_sibcalls = 1; | 1851 crtl->all_throwers_are_sibcalls = 1; |
2780 | 1852 |
1853 /* If we don't know that this implementation of the function will | |
1854 actually be used, then we must not set TREE_NOTHROW, since | |
1855 callers must not assume that this function does not throw. */ | |
1856 if (TREE_NOTHROW (current_function_decl)) | |
1857 return 0; | |
1858 | |
2781 if (! flag_exceptions) | 1859 if (! flag_exceptions) |
2782 return 0; | 1860 return 0; |
2783 | 1861 |
2784 for (insn = get_insns (); insn; insn = NEXT_INSN (insn)) | 1862 for (insn = get_insns (); insn; insn = NEXT_INSN (insn)) |
2785 if (can_throw_external (insn)) | 1863 if (can_throw_external (insn)) |
2786 { | 1864 { |
2787 TREE_NOTHROW (current_function_decl) = 0; | 1865 crtl->nothrow = 0; |
2788 | 1866 |
2789 if (!CALL_P (insn) || !SIBLING_CALL_P (insn)) | 1867 if (!CALL_P (insn) || !SIBLING_CALL_P (insn)) |
2790 { | 1868 { |
2791 crtl->all_throwers_are_sibcalls = 0; | 1869 crtl->all_throwers_are_sibcalls = 0; |
2792 return 0; | 1870 return 0; |
2795 | 1873 |
2796 for (insn = crtl->epilogue_delay_list; insn; | 1874 for (insn = crtl->epilogue_delay_list; insn; |
2797 insn = XEXP (insn, 1)) | 1875 insn = XEXP (insn, 1)) |
2798 if (can_throw_external (insn)) | 1876 if (can_throw_external (insn)) |
2799 { | 1877 { |
2800 TREE_NOTHROW (current_function_decl) = 0; | 1878 crtl->nothrow = 0; |
2801 | 1879 |
2802 if (!CALL_P (insn) || !SIBLING_CALL_P (insn)) | 1880 if (!CALL_P (insn) || !SIBLING_CALL_P (insn)) |
2803 { | 1881 { |
2804 crtl->all_throwers_are_sibcalls = 0; | 1882 crtl->all_throwers_are_sibcalls = 0; |
2805 return 0; | 1883 return 0; |
2806 } | 1884 } |
2807 } | 1885 } |
1886 if (crtl->nothrow | |
1887 && (cgraph_function_body_availability (cgraph_node | |
1888 (current_function_decl)) | |
1889 >= AVAIL_AVAILABLE)) | |
1890 { | |
1891 struct cgraph_node *node = cgraph_node (current_function_decl); | |
1892 struct cgraph_edge *e; | |
1893 for (e = node->callers; e; e = e->next_caller) | |
1894 e->can_throw_external = false; | |
1895 cgraph_set_nothrow_flag (node, true); | |
1896 | |
1897 if (dump_file) | |
1898 fprintf (dump_file, "Marking function nothrow: %s\n\n", | |
1899 current_function_name ()); | |
1900 } | |
2808 return 0; | 1901 return 0; |
2809 } | 1902 } |
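The pass above maintains two flags while scanning: crtl->nothrow falls as soon as any insn can throw externally, while all_throwers_are_sibcalls survives only as long as every thrower is a sibling call, and the scan stops early at the first thrower that is not. A hedged sketch of that flag logic, with a hypothetical insn record in place of RTL:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the two flags computed by
   set_nothrow_function_flags above; model_insn stands in for RTL.  */
struct model_insn { bool throws; bool sibcall; };

static void
scan_throwers (const struct model_insn *insns, int n,
               bool *nothrow, bool *all_sib)
{
  *nothrow = true;
  *all_sib = true;
  for (int i = 0; i < n; i++)
    if (insns[i].throws)
      {
        *nothrow = false;
        if (!insns[i].sibcall)
          {
            *all_sib = false;
            return;             /* mirrors the early "return 0" above */
          }
      }
}
```

A throwing sibling call clears nothrow but keeps the scan going, exactly as in the pass, because a sibcall is really a jump and cannot itself throw from this frame.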
2810 | 1903 |
2811 struct rtl_opt_pass pass_set_nothrow_function_flags = | 1904 struct rtl_opt_pass pass_set_nothrow_function_flags = |
2812 { | 1905 { |
2813 { | 1906 { |
2814 RTL_PASS, | 1907 RTL_PASS, |
2815 NULL, /* name */ | 1908 "nothrow", /* name */ |
2816 NULL, /* gate */ | 1909 NULL, /* gate */ |
2817 set_nothrow_function_flags, /* execute */ | 1910 set_nothrow_function_flags, /* execute */ |
2818 NULL, /* sub */ | 1911 NULL, /* sub */ |
2819 NULL, /* next */ | 1912 NULL, /* next */ |
2820 0, /* static_pass_number */ | 1913 0, /* static_pass_number */ |
2821 0, /* tv_id */ | 1914 TV_NONE, /* tv_id */ |
2822 0, /* properties_required */ | 1915 0, /* properties_required */ |
2823 0, /* properties_provided */ | 1916 0, /* properties_provided */ |
2824 0, /* properties_destroyed */ | 1917 0, /* properties_destroyed */ |
2825 0, /* todo_flags_start */ | 1918 0, /* todo_flags_start */ |
2826 0, /* todo_flags_finish */ | 1919 TODO_dump_func, /* todo_flags_finish */ |
2827 } | 1920 } |
2828 }; | 1921 }; |
2829 | 1922 |
2830 | 1923 |
2831 /* Various hooks for unwind library. */ | 1924 /* Various hooks for unwind library. */ |
1925 | |
1926 /* Expand the EH support builtin functions: | |
1927 __builtin_eh_pointer and __builtin_eh_filter. */ | |
1928 | |
1929 static eh_region | |
1930 expand_builtin_eh_common (tree region_nr_t) | |
1931 { | |
1932 HOST_WIDE_INT region_nr; | |
1933 eh_region region; | |
1934 | |
1935 gcc_assert (host_integerp (region_nr_t, 0)); | |
1936 region_nr = tree_low_cst (region_nr_t, 0); | |
1937 | |
1938 region = VEC_index (eh_region, cfun->eh->region_array, region_nr); | |
1939 | |
1940 /* ??? We shouldn't have been able to delete an eh region without | |
1940 /* ??? We shouldn't have been able to delete an eh region without | |
1941 deleting all the code that depended on it. */ | |
1942 gcc_assert (region != NULL); | |
1943 | |
1944 return region; | |
1945 } | |
1946 | |
1947 /* Expand to the exc_ptr value from the given eh region. */ | |
1948 | |
1949 rtx | |
1950 expand_builtin_eh_pointer (tree exp) | |
1951 { | |
1952 eh_region region | |
1953 = expand_builtin_eh_common (CALL_EXPR_ARG (exp, 0)); | |
1954 if (region->exc_ptr_reg == NULL) | |
1955 region->exc_ptr_reg = gen_reg_rtx (ptr_mode); | |
1956 return region->exc_ptr_reg; | |
1957 } | |
1958 | |
1959 /* Expand to the filter value from the given eh region. */ | |
1960 | |
1961 rtx | |
1962 expand_builtin_eh_filter (tree exp) | |
1963 { | |
1964 eh_region region | |
1965 = expand_builtin_eh_common (CALL_EXPR_ARG (exp, 0)); | |
1966 if (region->filter_reg == NULL) | |
1967 region->filter_reg = gen_reg_rtx (targetm.eh_return_filter_mode ()); | |
1968 return region->filter_reg; | |
1969 } | |
1970 | |
1971 /* Copy the exc_ptr and filter values from one landing pad's registers | |
1972 to another. This is used to inline the resx statement. */ | |
1973 | |
1974 rtx | |
1975 expand_builtin_eh_copy_values (tree exp) | |
1976 { | |
1977 eh_region dst | |
1978 = expand_builtin_eh_common (CALL_EXPR_ARG (exp, 0)); | |
1979 eh_region src | |
1980 = expand_builtin_eh_common (CALL_EXPR_ARG (exp, 1)); | |
1981 enum machine_mode fmode = targetm.eh_return_filter_mode (); | |
1982 | |
1983 if (dst->exc_ptr_reg == NULL) | |
1984 dst->exc_ptr_reg = gen_reg_rtx (ptr_mode); | |
1985 if (src->exc_ptr_reg == NULL) | |
1986 src->exc_ptr_reg = gen_reg_rtx (ptr_mode); | |
1987 | |
1988 if (dst->filter_reg == NULL) | |
1989 dst->filter_reg = gen_reg_rtx (fmode); | |
1990 if (src->filter_reg == NULL) | |
1991 src->filter_reg = gen_reg_rtx (fmode); | |
1992 | |
1993 emit_move_insn (dst->exc_ptr_reg, src->exc_ptr_reg); | |
1994 emit_move_insn (dst->filter_reg, src->filter_reg); | |
1995 | |
1996 return const0_rtx; | |
1997 } | |
2832 | 1998 |
2833 /* Do any necessary initialization to access arbitrary stack frames. | 1999 /* Do any necessary initialization to access arbitrary stack frames. |
2834 On the SPARC, this means flushing the register windows. */ | 2000 On the SPARC, this means flushing the register windows. */ |
2835 | 2001 |
2836 void | 2002 void |
2842 | 2008 |
2843 #ifdef SETUP_FRAME_ADDRESSES | 2009 #ifdef SETUP_FRAME_ADDRESSES |
2844 SETUP_FRAME_ADDRESSES (); | 2010 SETUP_FRAME_ADDRESSES (); |
2845 #endif | 2011 #endif |
2846 } | 2012 } |
2013 | |
2014 /* Map a non-negative number to an eh return data register number; expands | |
2015 to -1 if no return data register is associated with the input number. | |
2016 At least the inputs 0 and 1 must be mapped; the target may provide more. */ | |
2847 | 2017 |
2848 rtx | 2018 rtx |
2849 expand_builtin_eh_return_data_regno (tree exp) | 2019 expand_builtin_eh_return_data_regno (tree exp) |
2850 { | 2020 { |
2851 tree which = CALL_EXPR_ARG (exp, 0); | 2021 tree which = CALL_EXPR_ARG (exp, 0); |
2951 if (!crtl->eh.ehr_label) | 2121 if (!crtl->eh.ehr_label) |
2952 crtl->eh.ehr_label = gen_label_rtx (); | 2122 crtl->eh.ehr_label = gen_label_rtx (); |
2953 emit_jump (crtl->eh.ehr_label); | 2123 emit_jump (crtl->eh.ehr_label); |
2954 } | 2124 } |
2955 | 2125 |
2126 /* Expand __builtin_eh_return. This exit path from the function loads up | |
2127 the eh return data registers, adjusts the stack, and branches to a | |
2128 given PC other than the normal return address. */ | |
2129 | |
2956 void | 2130 void |
2957 expand_eh_return (void) | 2131 expand_eh_return (void) |
2958 { | 2132 { |
2959 rtx around_label; | 2133 rtx around_label; |
2960 | 2134 |
3056 slot = (struct action_record **) htab_find_slot (ar_hash, &tmp, INSERT); | 2230 slot = (struct action_record **) htab_find_slot (ar_hash, &tmp, INSERT); |
3057 | 2231 |
3058 if ((new_ar = *slot) == NULL) | 2232 if ((new_ar = *slot) == NULL) |
3059 { | 2233 { |
3060 new_ar = XNEW (struct action_record); | 2234 new_ar = XNEW (struct action_record); |
3061 new_ar->offset = VARRAY_ACTIVE_SIZE (crtl->eh.action_record_data) + 1; | 2235 new_ar->offset = VEC_length (uchar, crtl->eh.action_record_data) + 1; |
3062 new_ar->filter = filter; | 2236 new_ar->filter = filter; |
3063 new_ar->next = next; | 2237 new_ar->next = next; |
3064 *slot = new_ar; | 2238 *slot = new_ar; |
3065 | 2239 |
3066 /* The filter value goes in untouched. The link to the next | 2240 /* The filter value goes in untouched. The link to the next |
3068 that there is no next record. So convert the absolute 1 based | 2242 that there is no next record. So convert the absolute 1 based |
3069 indices we've been carrying around into a displacement. */ | 2243 indices we've been carrying around into a displacement. */ |
3070 | 2244 |
3071 push_sleb128 (&crtl->eh.action_record_data, filter); | 2245 push_sleb128 (&crtl->eh.action_record_data, filter); |
3072 if (next) | 2246 if (next) |
3073 next -= VARRAY_ACTIVE_SIZE (crtl->eh.action_record_data) + 1; | 2247 next -= VEC_length (uchar, crtl->eh.action_record_data) + 1; |
3074 push_sleb128 (&crtl->eh.action_record_data, next); | 2248 push_sleb128 (&crtl->eh.action_record_data, next); |
3075 } | 2249 } |
3076 | 2250 |
3077 return new_ar->offset; | 2251 return new_ar->offset; |
3078 } | 2252 } |
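The comment above describes a small but easy-to-misread conversion: action-record offsets handed around are 1-based positions in the byte buffer, and the next-link is rebased into a self-relative displacement just before it is written. A hypothetical re-enactment, assuming every filter and displacement fits in a single sleb128 byte so each push is one buffer byte:

```c
#include <assert.h>

/* Hypothetical single-byte model of add_action_record's buffer layout.
   Offsets returned are 1-based; a next value of 0 means "no next".  */
static unsigned char action_buf[16];
static int action_len;

static int
emit_action_record (int filter, int next)   /* next: 1-based offset or 0 */
{
  int offset = action_len + 1;
  action_buf[action_len++] = (unsigned char) (filter & 0x7f);
  if (next)
    next -= action_len + 1;                 /* absolute index -> displacement */
  action_buf[action_len++] = (unsigned char) (next & 0x7f);
  return offset;
}
```

Emitting record 1 (filter 1, no next) and then record 2 linked to it stores the displacement -3 for record 2's next-field: three bytes back from just past the field, landing on record 1.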
3079 | 2253 |
3080 static int | 2254 static int |
3081 collect_one_action_chain (htab_t ar_hash, struct eh_region *region) | 2255 collect_one_action_chain (htab_t ar_hash, eh_region region) |
3082 { | 2256 { |
3083 struct eh_region *c; | |
3084 int next; | 2257 int next; |
3085 | 2258 |
3086 /* If we've reached the top of the region chain, then we have | 2259 /* If we've reached the top of the region chain, then we have |
3087 no actions, and require no landing pad. */ | 2260 no actions, and require no landing pad. */ |
3088 if (region == NULL) | 2261 if (region == NULL) |
3089 return -1; | 2262 return -1; |
3090 | 2263 |
3091 switch (region->type) | 2264 switch (region->type) |
3092 { | 2265 { |
3093 case ERT_CLEANUP: | 2266 case ERT_CLEANUP: |
3094 /* A cleanup adds a zero filter to the beginning of the chain, but | 2267 { |
3095 there are special cases to look out for. If there are *only* | 2268 eh_region r; |
3096 cleanups along a path, then it compresses to a zero action. | 2269 /* A cleanup adds a zero filter to the beginning of the chain, but |
3097 Further, if there are multiple cleanups along a path, we only | 2270 there are special cases to look out for. If there are *only* |
3098 need to represent one of them, as that is enough to trigger | 2271 cleanups along a path, then it compresses to a zero action. |
3099 entry to the landing pad at runtime. */ | 2272 Further, if there are multiple cleanups along a path, we only |
3100 next = collect_one_action_chain (ar_hash, region->outer); | 2273 need to represent one of them, as that is enough to trigger |
3101 if (next <= 0) | 2274 entry to the landing pad at runtime. */ |
3102 return 0; | 2275 next = collect_one_action_chain (ar_hash, region->outer); |
3103 for (c = region->outer; c ; c = c->outer) | 2276 if (next <= 0) |
3104 if (c->type == ERT_CLEANUP) | 2277 return 0; |
3105 return next; | 2278 for (r = region->outer; r ; r = r->outer) |
3106 return add_action_record (ar_hash, 0, next); | 2279 if (r->type == ERT_CLEANUP) |
2280 return next; | |
2281 return add_action_record (ar_hash, 0, next); | |
2282 } | |
3107 | 2283 |
3108 case ERT_TRY: | 2284 case ERT_TRY: |
3109 /* Process the associated catch regions in reverse order. | 2285 { |
3110 If there's a catch-all handler, then we don't need to | 2286 eh_catch c; |
3111 search outer regions. Use a magic -3 value to record | 2287 |
3112 that we haven't done the outer search. */ | 2288 /* Process the associated catch regions in reverse order. |
3113 next = -3; | 2289 If there's a catch-all handler, then we don't need to |
3114 for (c = region->u.eh_try.last_catch; c ; c = c->u.eh_catch.prev_catch) | 2290 search outer regions. Use a magic -3 value to record |
3115 { | 2291 that we haven't done the outer search. */ |
3116 if (c->u.eh_catch.type_list == NULL) | 2292 next = -3; |
3117 { | 2293 for (c = region->u.eh_try.last_catch; c ; c = c->prev_catch) |
3118 /* Retrieve the filter from the head of the filter list | 2294 { |
3119 where we have stored it (see assign_filter_values). */ | 2295 if (c->type_list == NULL) |
3120 int filter | 2296 { |
3121 = TREE_INT_CST_LOW (TREE_VALUE (c->u.eh_catch.filter_list)); | 2297 /* Retrieve the filter from the head of the filter list |
3122 | 2298 where we have stored it (see assign_filter_values). */ |
3123 next = add_action_record (ar_hash, filter, 0); | 2299 int filter = TREE_INT_CST_LOW (TREE_VALUE (c->filter_list)); |
3124 } | 2300 next = add_action_record (ar_hash, filter, 0); |
3125 else | 2301 } |
3126 { | 2302 else |
3127 /* Once the outer search is done, trigger an action record for | 2303 { |
3128 each filter we have. */ | 2304 /* Once the outer search is done, trigger an action record for |
3129 tree flt_node; | 2305 each filter we have. */ |
3130 | 2306 tree flt_node; |
3131 if (next == -3) | 2307 |
3132 { | 2308 if (next == -3) |
3133 next = collect_one_action_chain (ar_hash, region->outer); | 2309 { |
3134 | 2310 next = collect_one_action_chain (ar_hash, region->outer); |
3135 /* If there is no next action, terminate the chain. */ | 2311 |
3136 if (next == -1) | 2312 /* If there is no next action, terminate the chain. */ |
3137 next = 0; | 2313 if (next == -1) |
3138 /* If all outer actions are cleanups or must_not_throw, | 2314 next = 0; |
3139 we'll have no action record for it, since we had wanted | 2315 /* If all outer actions are cleanups or must_not_throw, |
3140 to encode these states in the call-site record directly. | 2316 we'll have no action record for it, since we had wanted |
3141 Add a cleanup action to the chain to catch these. */ | 2317 to encode these states in the call-site record directly. |
3142 else if (next <= 0) | 2318 Add a cleanup action to the chain to catch these. */ |
3143 next = add_action_record (ar_hash, 0, 0); | 2319 else if (next <= 0) |
3144 } | 2320 next = add_action_record (ar_hash, 0, 0); |
3145 | 2321 } |
3146 flt_node = c->u.eh_catch.filter_list; | 2322 |
3147 for (; flt_node; flt_node = TREE_CHAIN (flt_node)) | 2323 flt_node = c->filter_list; |
3148 { | 2324 for (; flt_node; flt_node = TREE_CHAIN (flt_node)) |
3149 int filter = TREE_INT_CST_LOW (TREE_VALUE (flt_node)); | 2325 { |
3150 next = add_action_record (ar_hash, filter, next); | 2326 int filter = TREE_INT_CST_LOW (TREE_VALUE (flt_node)); |
3151 } | 2327 next = add_action_record (ar_hash, filter, next); |
3152 } | 2328 } |
3153 } | 2329 } |
3154 return next; | 2330 } |
2331 return next; | |
2332 } | |
3155 | 2333 |
3156 case ERT_ALLOWED_EXCEPTIONS: | 2334 case ERT_ALLOWED_EXCEPTIONS: |
3157 /* An exception specification adds its filter to the | 2335 /* An exception specification adds its filter to the |
3158 beginning of the chain. */ | 2336 beginning of the chain. */ |
3159 next = collect_one_action_chain (ar_hash, region->outer); | 2337 next = collect_one_action_chain (ar_hash, region->outer); |
3174 /* A must-not-throw region with no inner handlers or cleanups | 2352 /* A must-not-throw region with no inner handlers or cleanups |
3175 requires no call-site entry. Note that this differs from | 2353 requires no call-site entry. Note that this differs from |
3176 the no handler or cleanup case in that we do require an lsda | 2354 the no handler or cleanup case in that we do require an lsda |
3177 to be generated. Return a magic -2 value to record this. */ | 2355 to be generated. Return a magic -2 value to record this. */ |
3178 return -2; | 2356 return -2; |
3179 | 2357 } |
3180 case ERT_CATCH: | 2358 |
3181 case ERT_THROW: | 2359 gcc_unreachable (); |
3182 /* CATCH regions are handled in TRY above. THROW regions are | |
3183 for optimization information only and produce no output. */ | |
3184 return collect_one_action_chain (ar_hash, region->outer); | |
3185 | |
3186 default: | |
3187 gcc_unreachable (); | |
3188 } | |
3189 } | 2360 } |
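The ERT_CLEANUP case above encodes two compression rules: a path containing only cleanups collapses to the zero action, and at most one cleanup record is emitted per path, since any outer cleanup is already enough to trigger entry to the landing pad. A much-reduced hypothetical model of just those rules (RT_HANDLER stands in for a region that already owns an action record; the +100 is a stand-in for add_action_record):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the cleanup-compression rules of
   collect_one_action_chain; not the real region representation.  */
enum model_rt { RT_CLEANUP, RT_HANDLER };
struct model_region
{
  enum model_rt type;
  int action;                   /* used by RT_HANDLER only */
  struct model_region *outer;
};

static int
model_collect (struct model_region *r)
{
  if (r == NULL)
    return -1;                  /* top of chain: no actions at all */
  if (r->type == RT_HANDLER)
    return r->action;

  /* RT_CLEANUP */
  int next = model_collect (r->outer);
  if (next <= 0)
    return 0;                   /* only cleanups along this path */
  for (struct model_region *o = r->outer; o; o = o->outer)
    if (o->type == RT_CLEANUP)
      return next;              /* an outer cleanup already represents us */
  return next + 100;            /* stand-in for add_action_record (.., 0, next) */
}
```

With a handler region inside the chain, one enclosing cleanup adds a record but a second enclosing cleanup adds nothing, which is exactly the "only need to represent one of them" remark in the source.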
3190 | 2361 |
3191 static int | 2362 static int |
3192 add_call_site (rtx landing_pad, int action) | 2363 add_call_site (rtx landing_pad, int action, int section) |
3193 { | 2364 { |
3194 call_site_record record; | 2365 call_site_record record; |
3195 | 2366 |
3196 record = GGC_NEW (struct call_site_record); | 2367 record = GGC_NEW (struct call_site_record_d); |
3197 record->landing_pad = landing_pad; | 2368 record->landing_pad = landing_pad; |
3198 record->action = action; | 2369 record->action = action; |
3199 | 2370 |
3200 VEC_safe_push (call_site_record, gc, crtl->eh.call_site_record, record); | 2371 VEC_safe_push (call_site_record, gc, |
3201 | 2372 crtl->eh.call_site_record[section], record); |
3202 return call_site_base + VEC_length (call_site_record, crtl->eh.call_site_record) - 1; | 2373 |
2374 return call_site_base + VEC_length (call_site_record, | |
2375 crtl->eh.call_site_record[section]) - 1; | |
3203 } | 2376 } |
3204 | 2377 |
3205 /* Turn REG_EH_REGION notes back into NOTE_INSN_EH_REGION notes. | 2378 /* Turn REG_EH_REGION notes back into NOTE_INSN_EH_REGION notes. |
3206 The new note numbers will not refer to region numbers, but | 2379 The new note numbers will not refer to region numbers, but |
3207 instead to call site entries. */ | 2380 instead to call site entries. */ |
3208 | 2381 |
3209 unsigned int | 2382 static unsigned int |
3210 convert_to_eh_region_ranges (void) | 2383 convert_to_eh_region_ranges (void) |
3211 { | 2384 { |
3212 rtx insn, iter, note; | 2385 rtx insn, iter, note; |
3213 htab_t ar_hash; | 2386 htab_t ar_hash; |
3214 int last_action = -3; | 2387 int last_action = -3; |
3215 rtx last_action_insn = NULL_RTX; | 2388 rtx last_action_insn = NULL_RTX; |
3216 rtx last_landing_pad = NULL_RTX; | 2389 rtx last_landing_pad = NULL_RTX; |
3217 rtx first_no_action_insn = NULL_RTX; | 2390 rtx first_no_action_insn = NULL_RTX; |
3218 int call_site = 0; | 2391 int call_site = 0; |
3219 | 2392 int cur_sec = 0; |
3220 if (USING_SJLJ_EXCEPTIONS || cfun->eh->region_tree == NULL) | 2393 rtx section_switch_note = NULL_RTX; |
3221 return 0; | 2394 rtx first_no_action_insn_before_switch = NULL_RTX; |
3222 | 2395 rtx last_no_action_insn_before_switch = NULL_RTX; |
3223 VARRAY_UCHAR_INIT (crtl->eh.action_record_data, 64, "action_record_data"); | 2396 rtx *pad_map = NULL; |
2397 sbitmap pad_loc = NULL; | |
2398 int min_labelno = 0, max_labelno = 0; | |
2399 int saved_call_site_base = call_site_base; | |
2400 | |
2401 crtl->eh.action_record_data = VEC_alloc (uchar, gc, 64); | |
3224 | 2402 |
3225 ar_hash = htab_create (31, action_record_hash, action_record_eq, free); | 2403 ar_hash = htab_create (31, action_record_hash, action_record_eq, free); |
3226 | 2404 |
3227 for (iter = get_insns (); iter ; iter = NEXT_INSN (iter)) | 2405 for (iter = get_insns (); iter ; iter = NEXT_INSN (iter)) |
3228 if (INSN_P (iter)) | 2406 if (INSN_P (iter)) |
3229 { | 2407 { |
3230 struct eh_region *region; | 2408 eh_landing_pad lp; |
2409 eh_region region; | |
2410 bool nothrow; | |
3231 int this_action; | 2411 int this_action; |
3232 rtx this_landing_pad; | 2412 rtx this_landing_pad; |
3233 | 2413 |
3234 insn = iter; | 2414 insn = iter; |
3235 if (NONJUMP_INSN_P (insn) | 2415 if (NONJUMP_INSN_P (insn) |
3236 && GET_CODE (PATTERN (insn)) == SEQUENCE) | 2416 && GET_CODE (PATTERN (insn)) == SEQUENCE) |
3237 insn = XVECEXP (PATTERN (insn), 0, 0); | 2417 insn = XVECEXP (PATTERN (insn), 0, 0); |
3238 | 2418 |
3239 note = find_reg_note (insn, REG_EH_REGION, NULL_RTX); | 2419 nothrow = get_eh_region_and_lp_from_rtx (insn, ®ion, &lp); |
3240 if (!note) | 2420 if (nothrow) |
3241 { | 2421 continue; |
3242 if (! (CALL_P (insn) | 2422 if (region) |
3243 || (flag_non_call_exceptions | 2423 this_action = collect_one_action_chain (ar_hash, region); |
3244 && may_trap_p (PATTERN (insn))))) | |
3245 continue; | |
3246 this_action = -1; | |
3247 region = NULL; | |
3248 } | |
3249 else | 2424 else |
3250 { | 2425 this_action = -1; |
3251 if (INTVAL (XEXP (note, 0)) <= 0) | |
3252 continue; | |
3253 region = VEC_index (eh_region, cfun->eh->region_array, INTVAL (XEXP (note, 0))); | |
3254 this_action = collect_one_action_chain (ar_hash, region); | |
3255 } | |
3256 | 2426 |
3257 /* Existence of catch handlers, or must-not-throw regions | 2427 /* Existence of catch handlers, or must-not-throw regions |
3258 implies that an lsda is needed (even if empty). */ | 2428 implies that an lsda is needed (even if empty). */ |
3259 if (this_action != -1) | 2429 if (this_action != -1) |
3260 crtl->uses_eh_lsda = 1; | 2430 crtl->uses_eh_lsda = 1; |
3265 { | 2435 { |
3266 first_no_action_insn = iter; | 2436 first_no_action_insn = iter; |
3267 last_action = -1; | 2437 last_action = -1; |
3268 } | 2438 } |
3269 | 2439 |
3270 /* Cleanups and handlers may share action chains but not | |
3271 landing pads. Collect the landing pad for this region. */ | |
3272 if (this_action >= 0) | 2440 if (this_action >= 0) |
3273 { | 2441 this_landing_pad = lp->landing_pad; |
3274 struct eh_region *o; | |
3275 for (o = region; ! o->landing_pad ; o = o->outer) | |
3276 continue; | |
3277 this_landing_pad = o->landing_pad; | |
3278 } | |
3279 else | 2442 else |
3280 this_landing_pad = NULL_RTX; | 2443 this_landing_pad = NULL_RTX; |
3281 | 2444 |
3282 /* Differing actions or landing pads implies a change in call-site | 2445 /* Differing actions or landing pads implies a change in call-site |
3283 info, which implies some EH_REGION note should be emitted. */ | 2446 info, which implies some EH_REGION note should be emitted. */ |
3288 action was must-not-throw (-2), then we do not need an | 2451 action was must-not-throw (-2), then we do not need an |
3289 end note. */ | 2452 end note. */ |
3290 if (last_action >= -1) | 2453 if (last_action >= -1) |
3291 { | 2454 { |
3292 /* If we delayed the creation of the begin, do it now. */ | 2455 /* If we delayed the creation of the begin, do it now. */ |
2456 if (first_no_action_insn_before_switch) | |
2457 { | |
2458 call_site = add_call_site (NULL_RTX, 0, 0); | |
2459 note | |
2460 = emit_note_before (NOTE_INSN_EH_REGION_BEG, | |
2461 first_no_action_insn_before_switch); | |
2462 NOTE_EH_HANDLER (note) = call_site; | |
2463 if (first_no_action_insn) | |
2464 { | |
2465 note | |
2466 = emit_note_after (NOTE_INSN_EH_REGION_END, | |
2467 last_no_action_insn_before_switch); | |
2468 NOTE_EH_HANDLER (note) = call_site; | |
2469 } | |
2470 else | |
2471 gcc_assert (last_action_insn | |
2472 == last_no_action_insn_before_switch); | |
2473 } | |
3293 if (first_no_action_insn) | 2474 if (first_no_action_insn) |
3294 { | 2475 { |
3295 call_site = add_call_site (NULL_RTX, 0); | 2476 call_site = add_call_site (NULL_RTX, 0, cur_sec); |
3296 note = emit_note_before (NOTE_INSN_EH_REGION_BEG, | 2477 note = emit_note_before (NOTE_INSN_EH_REGION_BEG, |
3297 first_no_action_insn); | 2478 first_no_action_insn); |
3298 NOTE_EH_HANDLER (note) = call_site; | 2479 NOTE_EH_HANDLER (note) = call_site; |
3299 first_no_action_insn = NULL_RTX; | 2480 first_no_action_insn = NULL_RTX; |
3300 } | 2481 } |
3307 /* If the new action is must-not-throw, then no region notes | 2488 /* If the new action is must-not-throw, then no region notes |
3308 are created. */ | 2489 are created. */ |
3309 if (this_action >= -1) | 2490 if (this_action >= -1) |
3310 { | 2491 { |
3311 call_site = add_call_site (this_landing_pad, | 2492 call_site = add_call_site (this_landing_pad, |
3312 this_action < 0 ? 0 : this_action); | 2493 this_action < 0 ? 0 : this_action, |
2494 cur_sec); | |
3313 note = emit_note_before (NOTE_INSN_EH_REGION_BEG, iter); | 2495 note = emit_note_before (NOTE_INSN_EH_REGION_BEG, iter); |
3314 NOTE_EH_HANDLER (note) = call_site; | 2496 NOTE_EH_HANDLER (note) = call_site; |
3315 } | 2497 } |
3316 | 2498 |
3317 last_action = this_action; | 2499 last_action = this_action; |
3318 last_landing_pad = this_landing_pad; | 2500 last_landing_pad = this_landing_pad; |
3319 } | 2501 } |
3320 last_action_insn = iter; | 2502 last_action_insn = iter; |
3321 } | 2503 } |
2504 else if (NOTE_P (iter) | |
2505 && NOTE_KIND (iter) == NOTE_INSN_SWITCH_TEXT_SECTIONS) | |
2506 { | |
2507 gcc_assert (section_switch_note == NULL_RTX); | |
2508 gcc_assert (flag_reorder_blocks_and_partition); | |
2509 section_switch_note = iter; | |
2510 if (first_no_action_insn) | |
2511 { | |
2512 first_no_action_insn_before_switch = first_no_action_insn; | |
2513 last_no_action_insn_before_switch = last_action_insn; | |
2514 first_no_action_insn = NULL_RTX; | |
2515 gcc_assert (last_action == -1); | |
2516 last_action = -3; | |
2517 } | |
2518 /* Force closing of current EH region before section switch and | |
2519 opening a new one afterwards. */ | |
2520 else if (last_action != -3) | |
2521 last_landing_pad = pc_rtx; | |
2522 call_site_base += VEC_length (call_site_record, | |
2523 crtl->eh.call_site_record[cur_sec]); | |
2524 cur_sec++; | |
2525 gcc_assert (crtl->eh.call_site_record[cur_sec] == NULL); | |
2526 crtl->eh.call_site_record[cur_sec] | |
2527 = VEC_alloc (call_site_record, gc, 10); | |
2528 max_labelno = max_label_num (); | |
2529 min_labelno = get_first_label_num (); | |
2530 pad_map = XCNEWVEC (rtx, max_labelno - min_labelno + 1); | |
2531 pad_loc = sbitmap_alloc (max_labelno - min_labelno + 1); | |
2532 } | |
2533 else if (LABEL_P (iter) && pad_map) | |
2534 SET_BIT (pad_loc, CODE_LABEL_NUMBER (iter) - min_labelno); | |
3322 | 2535 |
3323 if (last_action >= -1 && ! first_no_action_insn) | 2536 if (last_action >= -1 && ! first_no_action_insn) |
3324 { | 2537 { |
3325 note = emit_note_after (NOTE_INSN_EH_REGION_END, last_action_insn); | 2538 note = emit_note_after (NOTE_INSN_EH_REGION_END, last_action_insn); |
3326 NOTE_EH_HANDLER (note) = call_site; | 2539 NOTE_EH_HANDLER (note) = call_site; |
3327 } | 2540 } |
3328 | 2541 |
2542 call_site_base = saved_call_site_base; | |
2543 | |
2544 if (pad_map) | |
2545 { | |
2546 /* When doing hot/cold partitioning, ensure landing pads are | |
2547 always in the same section as the EH region; .gcc_except_table | |
2548 can't express it otherwise. */ | |
2549 for (cur_sec = 0; cur_sec < 2; cur_sec++) | |
2550 { | |
2551 int i, idx; | |
2552 int n = VEC_length (call_site_record, | |
2553 crtl->eh.call_site_record[cur_sec]); | |
2554 basic_block prev_bb = NULL, padbb; | |
2555 | |
2556 for (i = 0; i < n; ++i) | |
2557 { | |
2558 struct call_site_record_d *cs = | |
2559 VEC_index (call_site_record, | |
2560 crtl->eh.call_site_record[cur_sec], i); | |
2561 rtx jump, note; | |
2562 | |
2563 if (cs->landing_pad == NULL_RTX) | |
2564 continue; | |
2565 idx = CODE_LABEL_NUMBER (cs->landing_pad) - min_labelno; | |
2566 /* If the landing pad is in the correct section, nothing | |
2567 is needed. */ | |
2568 if (TEST_BIT (pad_loc, idx) ^ (cur_sec == 0)) | |
2569 continue; | |
2570 /* Otherwise, if we haven't seen this pad yet, we need to | |
2571 add a new label and jump to the correct section. */ | |
2572 if (pad_map[idx] == NULL_RTX) | |
2573 { | |
2574 pad_map[idx] = gen_label_rtx (); | |
2575 if (prev_bb == NULL) | |
2576 for (iter = section_switch_note; | |
2577 iter; iter = PREV_INSN (iter)) | |
2578 if (NOTE_INSN_BASIC_BLOCK_P (iter)) | |
2579 { | |
2580 prev_bb = NOTE_BASIC_BLOCK (iter); | |
2581 break; | |
2582 } | |
2583 if (cur_sec == 0) | |
2584 { | |
2585 note = emit_label_before (pad_map[idx], | |
2586 section_switch_note); | |
2587 jump = emit_jump_insn_before (gen_jump (cs->landing_pad), | |
2588 section_switch_note); | |
2589 } | |
2590 else | |
2591 { | |
2592 jump = emit_jump_insn_after (gen_jump (cs->landing_pad), | |
2593 section_switch_note); | |
2594 note = emit_label_after (pad_map[idx], | |
2595 section_switch_note); | |
2596 } | |
2597 JUMP_LABEL (jump) = cs->landing_pad; | |
2598 add_reg_note (jump, REG_CROSSING_JUMP, NULL_RTX); | |
2599 iter = NEXT_INSN (cs->landing_pad); | |
2600 if (iter && NOTE_INSN_BASIC_BLOCK_P (iter)) | |
2601 padbb = NOTE_BASIC_BLOCK (iter); | |
2602 else | |
2603 padbb = NULL; | |
2604 if (padbb && prev_bb | |
2605 && BB_PARTITION (padbb) != BB_UNPARTITIONED) | |
2606 { | |
2607 basic_block bb; | |
2608 int part | |
2609 = BB_PARTITION (padbb) == BB_COLD_PARTITION | |
2610 ? BB_HOT_PARTITION : BB_COLD_PARTITION; | |
2611 edge_iterator ei; | |
2612 edge e; | |
2613 | |
2614 bb = create_basic_block (note, jump, prev_bb); | |
2615 make_single_succ_edge (bb, padbb, EDGE_CROSSING); | |
2616 BB_SET_PARTITION (bb, part); | |
2617 for (ei = ei_start (padbb->preds); | |
2618 (e = ei_safe_edge (ei)); ) | |
2619 { | |
2620 if ((e->flags & (EDGE_EH|EDGE_CROSSING)) | |
2621 == (EDGE_EH|EDGE_CROSSING)) | |
2622 { | |
2623 redirect_edge_succ (e, bb); | |
2624 e->flags &= ~EDGE_CROSSING; | |
2625 } | |
2626 else | |
2627 ei_next (&ei); | |
2628 } | |
2629 if (cur_sec == 0) | |
2630 prev_bb = bb; | |
2631 } | |
2632 } | |
2633 cs->landing_pad = pad_map[idx]; | |
2634 } | |
2635 } | |
2636 | |
2637 sbitmap_free (pad_loc); | |
2638 XDELETEVEC (pad_map); | |
2639 } | |
2640 | |
3329 htab_delete (ar_hash); | 2641 htab_delete (ar_hash); |
3330 return 0; | 2642 return 0; |
2643 } | |
2644 | |
2645 static bool | |
2646 gate_convert_to_eh_region_ranges (void) | |
2647 { | |
2648 /* Nothing to do for SJLJ exceptions or if no regions created. */ | |
2649 return !(USING_SJLJ_EXCEPTIONS || cfun->eh->region_tree == NULL); | |
3331 } | 2650 } |
3332 | 2651 |
3333 struct rtl_opt_pass pass_convert_to_eh_region_ranges = | 2652 struct rtl_opt_pass pass_convert_to_eh_region_ranges = |
3334 { | 2653 { |
3335 { | 2654 { |
3336 RTL_PASS, | 2655 RTL_PASS, |
3337 "eh_ranges", /* name */ | 2656 "eh_ranges", /* name */ |
3338 NULL, /* gate */ | 2657 gate_convert_to_eh_region_ranges, /* gate */ |
3339 convert_to_eh_region_ranges, /* execute */ | 2658 convert_to_eh_region_ranges, /* execute */ |
3340 NULL, /* sub */ | 2659 NULL, /* sub */ |
3341 NULL, /* next */ | 2660 NULL, /* next */ |
3342 0, /* static_pass_number */ | 2661 0, /* static_pass_number */ |
3343 0, /* tv_id */ | 2662 TV_NONE, /* tv_id */ |
3344 0, /* properties_required */ | 2663 0, /* properties_required */ |
3345 0, /* properties_provided */ | 2664 0, /* properties_provided */ |
3346 0, /* properties_destroyed */ | 2665 0, /* properties_destroyed */ |
3347 0, /* todo_flags_start */ | 2666 0, /* todo_flags_start */ |
3348 TODO_dump_func, /* todo_flags_finish */ | 2667 TODO_dump_func, /* todo_flags_finish */ |
3349 } | 2668 } |
3350 }; | 2669 }; |
3351 | |
3352 | 2670 |
3353 static void | 2671 static void |
3354 push_uleb128 (varray_type *data_area, unsigned int value) | 2672 push_uleb128 (VEC (uchar, gc) **data_area, unsigned int value) |
3355 { | 2673 { |
3356 do | 2674 do |
3357 { | 2675 { |
3358 unsigned char byte = value & 0x7f; | 2676 unsigned char byte = value & 0x7f; |
3359 value >>= 7; | 2677 value >>= 7; |
3360 if (value) | 2678 if (value) |
3361 byte |= 0x80; | 2679 byte |= 0x80; |
3362 VARRAY_PUSH_UCHAR (*data_area, byte); | 2680 VEC_safe_push (uchar, gc, *data_area, byte); |
3363 } | 2681 } |
3364 while (value); | 2682 while (value); |
3365 } | 2683 } |
3366 | 2684 |
3367 static void | 2685 static void |
3368 push_sleb128 (varray_type *data_area, int value) | 2686 push_sleb128 (VEC (uchar, gc) **data_area, int value) |
3369 { | 2687 { |
3370 unsigned char byte; | 2688 unsigned char byte; |
3371 int more; | 2689 int more; |
3372 | 2690 |
3373 do | 2691 do |
3376 value >>= 7; | 2694 value >>= 7; |
3377 more = ! ((value == 0 && (byte & 0x40) == 0) | 2695 more = ! ((value == 0 && (byte & 0x40) == 0) |
3378 || (value == -1 && (byte & 0x40) != 0)); | 2696 || (value == -1 && (byte & 0x40) != 0)); |
3379 if (more) | 2697 if (more) |
3380 byte |= 0x80; | 2698 byte |= 0x80; |
3381 VARRAY_PUSH_UCHAR (*data_area, byte); | 2699 VEC_safe_push (uchar, gc, *data_area, byte); |
3382 } | 2700 } |
3383 while (more); | 2701 while (more); |
3384 } | 2702 } |
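The two helpers above implement the standard DWARF LEB128 encodings, appending one byte per 7-bit group with the high bit as a continuation flag. A standalone sketch of the same algorithms, using a plain byte buffer in place of GCC's varray/VEC (the buffer handling and function names here are illustrative, not GCC's):

```c
#include <assert.h>
#include <stddef.h>

/* Append VALUE to BUF in unsigned LEB128 form; return bytes written.
   Mirrors push_uleb128: emit 7 bits at a time, low-order group first,
   setting bit 7 on every byte except the last.  */
size_t
encode_uleb128 (unsigned char *buf, unsigned int value)
{
  size_t n = 0;
  do
    {
      unsigned char byte = value & 0x7f;
      value >>= 7;
      if (value)
	byte |= 0x80;
      buf[n++] = byte;
    }
  while (value);
  return n;
}

/* Signed variant, mirroring push_sleb128: stop once the remaining value
   is pure sign extension of the sign bit (bit 6) just emitted.  */
size_t
encode_sleb128 (unsigned char *buf, int value)
{
  size_t n = 0;
  int more;
  do
    {
      unsigned char byte = value & 0x7f;
      value >>= 7;	/* assumes arithmetic right shift of negatives,
			   as GCC's own code does */
      more = ! ((value == 0 && (byte & 0x40) == 0)
		|| (value == -1 && (byte & 0x40) != 0));
      if (more)
	byte |= 0x80;
      buf[n++] = byte;
    }
  while (more);
  return n;
}
```

The DWARF specification's worked examples make handy checks: 624485 encodes to `e5 8e 26`, and -123456 to `c0 bb 78`.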
3385 | 2703 |
3386 | 2704 |
3387 #ifndef HAVE_AS_LEB128 | 2705 #ifndef HAVE_AS_LEB128 |
3388 static int | 2706 static int |
3389 dw2_size_of_call_site_table (void) | 2707 dw2_size_of_call_site_table (int section) |
3390 { | 2708 { |
3391 int n = VEC_length (call_site_record, crtl->eh.call_site_record); | 2709 int n = VEC_length (call_site_record, crtl->eh.call_site_record[section]); |
3392 int size = n * (4 + 4 + 4); | 2710 int size = n * (4 + 4 + 4); |
3393 int i; | 2711 int i; |
3394 | 2712 |
3395 for (i = 0; i < n; ++i) | 2713 for (i = 0; i < n; ++i) |
3396 { | 2714 { |
3397 struct call_site_record *cs = VEC_index (call_site_record, crtl->eh.call_site_record, i); | 2715 struct call_site_record_d *cs = |
2716 VEC_index (call_site_record, crtl->eh.call_site_record[section], i); | |
3398 size += size_of_uleb128 (cs->action); | 2717 size += size_of_uleb128 (cs->action); |
3399 } | 2718 } |
3400 | 2719 |
3401 return size; | 2720 return size; |
3402 } | 2721 } |
3403 | 2722 |
3404 static int | 2723 static int |
3405 sjlj_size_of_call_site_table (void) | 2724 sjlj_size_of_call_site_table (void) |
3406 { | 2725 { |
3407 int n = VEC_length (call_site_record, crtl->eh.call_site_record); | 2726 int n = VEC_length (call_site_record, crtl->eh.call_site_record[0]); |
3408 int size = 0; | 2727 int size = 0; |
3409 int i; | 2728 int i; |
3410 | 2729 |
3411 for (i = 0; i < n; ++i) | 2730 for (i = 0; i < n; ++i) |
3412 { | 2731 { |
3413 struct call_site_record *cs = VEC_index (call_site_record, crtl->eh.call_site_record, i); | 2732 struct call_site_record_d *cs = |
2733 VEC_index (call_site_record, crtl->eh.call_site_record[0], i); | |
3414 size += size_of_uleb128 (INTVAL (cs->landing_pad)); | 2734 size += size_of_uleb128 (INTVAL (cs->landing_pad)); |
3415 size += size_of_uleb128 (cs->action); | 2735 size += size_of_uleb128 (cs->action); |
3416 } | 2736 } |
3417 | 2737 |
3418 return size; | 2738 return size; |
3419 } | 2739 } |
3420 #endif | 2740 #endif |
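Both size functions above lean on `size_of_uleb128` to account for the variable-width `action` (and, for SJLJ, landing-pad) fields on top of the fixed 4+4+4 bytes per DWARF2 call-site record. A minimal version of that helper, just counting 7-bit groups (the real one lives elsewhere in GCC; the name here is illustrative to avoid clashing with it):

```c
#include <assert.h>

/* Number of bytes the unsigned LEB128 encoding of VALUE occupies:
   one byte per 7 bits of significance, at least one byte for zero.  */
int
uleb128_size (unsigned int value)
{
  int size = 0;
  do
    {
      value >>= 7;
      size++;
    }
  while (value);
  return size;
}
```

So values up to 127 cost one byte, 128 through 16383 cost two, and so on; this is what makes small `action` indices nearly free in the table.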
3421 | 2741 |
3422 static void | 2742 static void |
3423 dw2_output_call_site_table (void) | 2743 dw2_output_call_site_table (int cs_format, int section) |
3424 { | 2744 { |
3425 int n = VEC_length (call_site_record, crtl->eh.call_site_record); | 2745 int n = VEC_length (call_site_record, crtl->eh.call_site_record[section]); |
3426 int i; | 2746 int i; |
2747 const char *begin; | |
2748 | |
2749 if (section == 0) | |
2750 begin = current_function_func_begin_label; | |
2751 else if (first_function_block_is_cold) | |
2752 begin = crtl->subsections.hot_section_label; | |
2753 else | |
2754 begin = crtl->subsections.cold_section_label; | |
3427 | 2755 |
3428 for (i = 0; i < n; ++i) | 2756 for (i = 0; i < n; ++i) |
3429 { | 2757 { |
3430 struct call_site_record *cs = VEC_index (call_site_record, crtl->eh.call_site_record, i); | 2758 struct call_site_record_d *cs = |
2759 VEC_index (call_site_record, crtl->eh.call_site_record[section], i); | |
3431 char reg_start_lab[32]; | 2760 char reg_start_lab[32]; |
3432 char reg_end_lab[32]; | 2761 char reg_end_lab[32]; |
3433 char landing_pad_lab[32]; | 2762 char landing_pad_lab[32]; |
3434 | 2763 |
3435 ASM_GENERATE_INTERNAL_LABEL (reg_start_lab, "LEHB", call_site_base + i); | 2764 ASM_GENERATE_INTERNAL_LABEL (reg_start_lab, "LEHB", call_site_base + i); |
3441 | 2770 |
3442 /* ??? Perhaps use insn length scaling if the assembler supports | 2771 /* ??? Perhaps use insn length scaling if the assembler supports |
3443 generic arithmetic. */ | 2772 generic arithmetic. */ |
3444 /* ??? Perhaps use attr_length to choose data1 or data2 instead of | 2773 /* ??? Perhaps use attr_length to choose data1 or data2 instead of |
3445 data4 if the function is small enough. */ | 2774 data4 if the function is small enough. */ |
3446 #ifdef HAVE_AS_LEB128 | 2775 if (cs_format == DW_EH_PE_uleb128) |
3447 dw2_asm_output_delta_uleb128 (reg_start_lab, | 2776 { |
3448 current_function_func_begin_label, | 2777 dw2_asm_output_delta_uleb128 (reg_start_lab, begin, |
3449 "region %d start", i); | 2778 "region %d start", i); |
3450 dw2_asm_output_delta_uleb128 (reg_end_lab, reg_start_lab, | 2779 dw2_asm_output_delta_uleb128 (reg_end_lab, reg_start_lab, |
3451 "length"); | 2780 "length"); |
3452 if (cs->landing_pad) | 2781 if (cs->landing_pad) |
3453 dw2_asm_output_delta_uleb128 (landing_pad_lab, | 2782 dw2_asm_output_delta_uleb128 (landing_pad_lab, begin, |
3454 current_function_func_begin_label, | 2783 "landing pad"); |
3455 "landing pad"); | 2784 else |
2785 dw2_asm_output_data_uleb128 (0, "landing pad"); | |
2786 } | |
3456 else | 2787 else |
3457 dw2_asm_output_data_uleb128 (0, "landing pad"); | 2788 { |
3458 #else | 2789 dw2_asm_output_delta (4, reg_start_lab, begin, |
3459 dw2_asm_output_delta (4, reg_start_lab, | 2790 "region %d start", i); |
3460 current_function_func_begin_label, | 2791 dw2_asm_output_delta (4, reg_end_lab, reg_start_lab, "length"); |
3461 "region %d start", i); | 2792 if (cs->landing_pad) |
3462 dw2_asm_output_delta (4, reg_end_lab, reg_start_lab, "length"); | 2793 dw2_asm_output_delta (4, landing_pad_lab, begin, |
3463 if (cs->landing_pad) | 2794 "landing pad"); |
3464 dw2_asm_output_delta (4, landing_pad_lab, | 2795 else |
3465 current_function_func_begin_label, | 2796 dw2_asm_output_data (4, 0, "landing pad"); |
3466 "landing pad"); | 2797 } |
3467 else | |
3468 dw2_asm_output_data (4, 0, "landing pad"); | |
3469 #endif | |
3470 dw2_asm_output_data_uleb128 (cs->action, "action"); | 2798 dw2_asm_output_data_uleb128 (cs->action, "action"); |
3471 } | 2799 } |
3472 | 2800 |
3473 call_site_base += n; | 2801 call_site_base += n; |
3474 } | 2802 } |
3475 | 2803 |
3476 static void | 2804 static void |
3477 sjlj_output_call_site_table (void) | 2805 sjlj_output_call_site_table (void) |
3478 { | 2806 { |
3479 int n = VEC_length (call_site_record, crtl->eh.call_site_record); | 2807 int n = VEC_length (call_site_record, crtl->eh.call_site_record[0]); |
3480 int i; | 2808 int i; |
3481 | 2809 |
3482 for (i = 0; i < n; ++i) | 2810 for (i = 0; i < n; ++i) |
3483 { | 2811 { |
3484 struct call_site_record *cs = VEC_index (call_site_record, crtl->eh.call_site_record, i); | 2812 struct call_site_record_d *cs = |
2813 VEC_index (call_site_record, crtl->eh.call_site_record[0], i); | |
3485 | 2814 |
3486 dw2_asm_output_data_uleb128 (INTVAL (cs->landing_pad), | 2815 dw2_asm_output_data_uleb128 (INTVAL (cs->landing_pad), |
3487 "region %d landing pad", i); | 2816 "region %d landing pad", i); |
3488 dw2_asm_output_data_uleb128 (cs->action, "action"); | 2817 dw2_asm_output_data_uleb128 (cs->action, "action"); |
3489 } | 2818 } |
3558 value = const0_rtx; | 2887 value = const0_rtx; |
3559 else | 2888 else |
3560 { | 2889 { |
3561 struct varpool_node *node; | 2890 struct varpool_node *node; |
3562 | 2891 |
3563 type = lookup_type_for_runtime (type); | 2892 /* FIXME lto. pass_ipa_free_lang_data changes all types to |
2893 runtime types so TYPE should already be a runtime type | |
2894 reference. When pass_ipa_free_lang_data is made a default | 
2895 pass, we can then remove the call to lookup_type_for_runtime | |
2896 below. */ | |
2897 if (TYPE_P (type)) | |
2898 type = lookup_type_for_runtime (type); | |
2899 | |
3564 value = expand_expr (type, NULL_RTX, VOIDmode, EXPAND_INITIALIZER); | 2900 value = expand_expr (type, NULL_RTX, VOIDmode, EXPAND_INITIALIZER); |
3565 | 2901 |
3566 /* Let cgraph know that the rtti decl is used. Not all of the | 2902 /* Let cgraph know that the rtti decl is used. Not all of the |
3567 paths below go through assemble_integer, which would take | 2903 paths below go through assemble_integer, which would take |
3568 care of this for us. */ | 2904 care of this for us. */ |
3591 tt_format_size * BITS_PER_UNIT, 1); | 2927 tt_format_size * BITS_PER_UNIT, 1); |
3592 else | 2928 else |
3593 dw2_asm_output_encoded_addr_rtx (tt_format, value, is_public, NULL); | 2929 dw2_asm_output_encoded_addr_rtx (tt_format, value, is_public, NULL); |
3594 } | 2930 } |
3595 | 2931 |
3596 void | 2932 static void |
3597 output_function_exception_table (const char * ARG_UNUSED (fnname)) | 2933 output_one_function_exception_table (const char * ARG_UNUSED (fnname), |
3598 { | 2934 int section, rtx ARG_UNUSED (personality)) |
3599 int tt_format, cs_format, lp_format, i, n; | 2935 { |
2936 int tt_format, cs_format, lp_format, i; | |
3600 #ifdef HAVE_AS_LEB128 | 2937 #ifdef HAVE_AS_LEB128 |
3601 char ttype_label[32]; | 2938 char ttype_label[32]; |
3602 char cs_after_size_label[32]; | 2939 char cs_after_size_label[32]; |
3603 char cs_end_label[32]; | 2940 char cs_end_label[32]; |
3604 #else | 2941 #else |
3605 int call_site_len; | 2942 int call_site_len; |
3606 #endif | 2943 #endif |
3607 int have_tt_data; | 2944 int have_tt_data; |
3608 int tt_format_size = 0; | 2945 int tt_format_size = 0; |
3609 | 2946 |
3610 /* Not all functions need anything. */ | |
3611 if (! crtl->uses_eh_lsda) | |
3612 return; | |
3613 | |
3614 if (eh_personality_libfunc) | |
3615 assemble_external_libcall (eh_personality_libfunc); | |
3616 | |
3617 #ifdef TARGET_UNWIND_INFO | 2947 #ifdef TARGET_UNWIND_INFO |
3618 /* TODO: Move this into target file. */ | 2948 /* TODO: Move this into target file. */ |
3619 fputs ("\t.personality\t", asm_out_file); | 2949 fputs ("\t.personality\t", asm_out_file); |
3620 output_addr_const (asm_out_file, eh_personality_libfunc); | 2950 output_addr_const (asm_out_file, personality); |
3621 fputs ("\n\t.handlerdata\n", asm_out_file); | 2951 fputs ("\n\t.handlerdata\n", asm_out_file); |
3622 /* Note that varasm still thinks we're in the function's code section. | 2952 /* Note that varasm still thinks we're in the function's code section. |
3623 The ".endp" directive that will immediately follow will take us back. */ | 2953 The ".endp" directive that will immediately follow will take us back. */ |
3624 #else | 2954 #else |
3625 switch_to_exception_section (fnname); | 2955 switch_to_exception_section (fnname); |
3626 #endif | 2956 #endif |
3627 | 2957 |
3628 /* If the target wants a label to begin the table, emit it here. */ | 2958 /* If the target wants a label to begin the table, emit it here. */ |
3629 targetm.asm_out.except_table_label (asm_out_file); | 2959 targetm.asm_out.except_table_label (asm_out_file); |
3630 | 2960 |
3631 have_tt_data = (VEC_length (tree, crtl->eh.ttype_data) > 0 | 2961 have_tt_data = (VEC_length (tree, cfun->eh->ttype_data) |
3632 || VARRAY_ACTIVE_SIZE (crtl->eh.ehspec_data) > 0); | 2962 || (targetm.arm_eabi_unwinder |
2963 ? VEC_length (tree, cfun->eh->ehspec_data.arm_eabi) | |
2964 : VEC_length (uchar, cfun->eh->ehspec_data.other))); | |
3633 | 2965 |
3634 /* Indicate the format of the @TType entries. */ | 2966 /* Indicate the format of the @TType entries. */ |
3635 if (! have_tt_data) | 2967 if (! have_tt_data) |
3636 tt_format = DW_EH_PE_omit; | 2968 tt_format = DW_EH_PE_omit; |
3637 else | 2969 else |
3638 { | 2970 { |
3639 tt_format = ASM_PREFERRED_EH_DATA_FORMAT (/*code=*/0, /*global=*/1); | 2971 tt_format = ASM_PREFERRED_EH_DATA_FORMAT (/*code=*/0, /*global=*/1); |
3640 #ifdef HAVE_AS_LEB128 | 2972 #ifdef HAVE_AS_LEB128 |
3641 ASM_GENERATE_INTERNAL_LABEL (ttype_label, "LLSDATT", | 2973 ASM_GENERATE_INTERNAL_LABEL (ttype_label, |
2974 section ? "LLSDATTC" : "LLSDATT", | |
3642 current_function_funcdef_no); | 2975 current_function_funcdef_no); |
3643 #endif | 2976 #endif |
3644 tt_format_size = size_of_encoded_value (tt_format); | 2977 tt_format_size = size_of_encoded_value (tt_format); |
3645 | 2978 |
3646 assemble_align (tt_format_size * BITS_PER_UNIT); | 2979 assemble_align (tt_format_size * BITS_PER_UNIT); |
3647 } | 2980 } |
3648 | 2981 |
3649 targetm.asm_out.internal_label (asm_out_file, "LLSDA", | 2982 targetm.asm_out.internal_label (asm_out_file, section ? "LLSDAC" : "LLSDA", |
3650 current_function_funcdef_no); | 2983 current_function_funcdef_no); |
3651 | 2984 |
3652 /* The LSDA header. */ | 2985 /* The LSDA header. */ |
3653 | 2986 |
3654 /* Indicate the format of the landing pad start pointer. An omitted | 2987 /* Indicate the format of the landing pad start pointer. An omitted |
3655 field implies @LPStart == @Start. */ | 2988 field implies @LPStart == @Start. */ |
3668 | 3001 |
3669 #ifndef HAVE_AS_LEB128 | 3002 #ifndef HAVE_AS_LEB128 |
3670 if (USING_SJLJ_EXCEPTIONS) | 3003 if (USING_SJLJ_EXCEPTIONS) |
3671 call_site_len = sjlj_size_of_call_site_table (); | 3004 call_site_len = sjlj_size_of_call_site_table (); |
3672 else | 3005 else |
3673 call_site_len = dw2_size_of_call_site_table (); | 3006 call_site_len = dw2_size_of_call_site_table (section); |
3674 #endif | 3007 #endif |
3675 | 3008 |
3676 /* A pc-relative 4-byte displacement to the @TType data. */ | 3009 /* A pc-relative 4-byte displacement to the @TType data. */ |
3677 if (have_tt_data) | 3010 if (have_tt_data) |
3678 { | 3011 { |
3679 #ifdef HAVE_AS_LEB128 | 3012 #ifdef HAVE_AS_LEB128 |
3680 char ttype_after_disp_label[32]; | 3013 char ttype_after_disp_label[32]; |
3681 ASM_GENERATE_INTERNAL_LABEL (ttype_after_disp_label, "LLSDATTD", | 3014 ASM_GENERATE_INTERNAL_LABEL (ttype_after_disp_label, |
3015 section ? "LLSDATTDC" : "LLSDATTD", | |
3682 current_function_funcdef_no); | 3016 current_function_funcdef_no); |
3683 dw2_asm_output_delta_uleb128 (ttype_label, ttype_after_disp_label, | 3017 dw2_asm_output_delta_uleb128 (ttype_label, ttype_after_disp_label, |
3684 "@TType base offset"); | 3018 "@TType base offset"); |
3685 ASM_OUTPUT_LABEL (asm_out_file, ttype_after_disp_label); | 3019 ASM_OUTPUT_LABEL (asm_out_file, ttype_after_disp_label); |
3686 #else | 3020 #else |
3688 unsigned int before_disp, after_disp, last_disp, disp; | 3022 unsigned int before_disp, after_disp, last_disp, disp; |
3689 | 3023 |
3690 before_disp = 1 + 1; | 3024 before_disp = 1 + 1; |
3691 after_disp = (1 + size_of_uleb128 (call_site_len) | 3025 after_disp = (1 + size_of_uleb128 (call_site_len) |
3692 + call_site_len | 3026 + call_site_len |
3693 + VARRAY_ACTIVE_SIZE (crtl->eh.action_record_data) | 3027 + VEC_length (uchar, crtl->eh.action_record_data) |
3694 + (VEC_length (tree, crtl->eh.ttype_data) | 3028 + (VEC_length (tree, cfun->eh->ttype_data) |
3695 * tt_format_size)); | 3029 * tt_format_size)); |
3696 | 3030 |
3697 disp = after_disp; | 3031 disp = after_disp; |
3698 do | 3032 do |
3699 { | 3033 { |
3722 #endif | 3056 #endif |
3723 dw2_asm_output_data (1, cs_format, "call-site format (%s)", | 3057 dw2_asm_output_data (1, cs_format, "call-site format (%s)", |
3724 eh_data_format_name (cs_format)); | 3058 eh_data_format_name (cs_format)); |
3725 | 3059 |
3726 #ifdef HAVE_AS_LEB128 | 3060 #ifdef HAVE_AS_LEB128 |
3727 ASM_GENERATE_INTERNAL_LABEL (cs_after_size_label, "LLSDACSB", | 3061 ASM_GENERATE_INTERNAL_LABEL (cs_after_size_label, |
3062 section ? "LLSDACSBC" : "LLSDACSB", | |
3728 current_function_funcdef_no); | 3063 current_function_funcdef_no); |
3729 ASM_GENERATE_INTERNAL_LABEL (cs_end_label, "LLSDACSE", | 3064 ASM_GENERATE_INTERNAL_LABEL (cs_end_label, |
3065 section ? "LLSDACSEC" : "LLSDACSE", | |
3730 current_function_funcdef_no); | 3066 current_function_funcdef_no); |
3731 dw2_asm_output_delta_uleb128 (cs_end_label, cs_after_size_label, | 3067 dw2_asm_output_delta_uleb128 (cs_end_label, cs_after_size_label, |
3732 "Call-site table length"); | 3068 "Call-site table length"); |
3733 ASM_OUTPUT_LABEL (asm_out_file, cs_after_size_label); | 3069 ASM_OUTPUT_LABEL (asm_out_file, cs_after_size_label); |
3734 if (USING_SJLJ_EXCEPTIONS) | 3070 if (USING_SJLJ_EXCEPTIONS) |
3735 sjlj_output_call_site_table (); | 3071 sjlj_output_call_site_table (); |
3736 else | 3072 else |
3737 dw2_output_call_site_table (); | 3073 dw2_output_call_site_table (cs_format, section); |
3738 ASM_OUTPUT_LABEL (asm_out_file, cs_end_label); | 3074 ASM_OUTPUT_LABEL (asm_out_file, cs_end_label); |
3739 #else | 3075 #else |
3740 dw2_asm_output_data_uleb128 (call_site_len,"Call-site table length"); | 3076 dw2_asm_output_data_uleb128 (call_site_len, "Call-site table length"); |
3741 if (USING_SJLJ_EXCEPTIONS) | 3077 if (USING_SJLJ_EXCEPTIONS) |
3742 sjlj_output_call_site_table (); | 3078 sjlj_output_call_site_table (); |
3743 else | 3079 else |
3744 dw2_output_call_site_table (); | 3080 dw2_output_call_site_table (cs_format, section); |
3745 #endif | 3081 #endif |
3746 | 3082 |
3747 /* ??? Decode and interpret the data for flag_debug_asm. */ | 3083 /* ??? Decode and interpret the data for flag_debug_asm. */ |
3748 n = VARRAY_ACTIVE_SIZE (crtl->eh.action_record_data); | 3084 { |
3749 for (i = 0; i < n; ++i) | 3085 uchar uc; |
3750 dw2_asm_output_data (1, VARRAY_UCHAR (crtl->eh.action_record_data, i), | 3086 for (i = 0; VEC_iterate (uchar, crtl->eh.action_record_data, i, uc); ++i) |
3751 (i ? NULL : "Action record table")); | 3087 dw2_asm_output_data (1, uc, i ? NULL : "Action record table"); |
3088 } | |
3752 | 3089 |
3753 if (have_tt_data) | 3090 if (have_tt_data) |
3754 assemble_align (tt_format_size * BITS_PER_UNIT); | 3091 assemble_align (tt_format_size * BITS_PER_UNIT); |
3755 | 3092 |
3756 i = VEC_length (tree, crtl->eh.ttype_data); | 3093 i = VEC_length (tree, cfun->eh->ttype_data); |
3757 while (i-- > 0) | 3094 while (i-- > 0) |
3758 { | 3095 { |
3759 tree type = VEC_index (tree, crtl->eh.ttype_data, i); | 3096 tree type = VEC_index (tree, cfun->eh->ttype_data, i); |
3760 output_ttype (type, tt_format, tt_format_size); | 3097 output_ttype (type, tt_format, tt_format_size); |
3761 } | 3098 } |
3762 | 3099 |
3763 #ifdef HAVE_AS_LEB128 | 3100 #ifdef HAVE_AS_LEB128 |
3764 if (have_tt_data) | 3101 if (have_tt_data) |
3765 ASM_OUTPUT_LABEL (asm_out_file, ttype_label); | 3102 ASM_OUTPUT_LABEL (asm_out_file, ttype_label); |
3766 #endif | 3103 #endif |
3767 | 3104 |
3768 /* ??? Decode and interpret the data for flag_debug_asm. */ | 3105 /* ??? Decode and interpret the data for flag_debug_asm. */ |
3769 n = VARRAY_ACTIVE_SIZE (crtl->eh.ehspec_data); | 3106 if (targetm.arm_eabi_unwinder) |
3770 for (i = 0; i < n; ++i) | 3107 { |
3771 { | 3108 tree type; |
3772 if (targetm.arm_eabi_unwinder) | 3109 for (i = 0; |
3773 { | 3110 VEC_iterate (tree, cfun->eh->ehspec_data.arm_eabi, i, type); ++i) |
3774 tree type = VARRAY_TREE (crtl->eh.ehspec_data, i); | 3111 output_ttype (type, tt_format, tt_format_size); |
3775 output_ttype (type, tt_format, tt_format_size); | 3112 } |
3776 } | 3113 else |
3777 else | 3114 { |
3778 dw2_asm_output_data (1, VARRAY_UCHAR (crtl->eh.ehspec_data, i), | 3115 uchar uc; |
3779 (i ? NULL : "Exception specification table")); | 3116 for (i = 0; |
3780 } | 3117 VEC_iterate (uchar, cfun->eh->ehspec_data.other, i, uc); ++i) |
3118 dw2_asm_output_data (1, uc, | |
3119 i ? NULL : "Exception specification table"); | |
3120 } | |
3121 } | |
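In the `#ifndef HAVE_AS_LEB128` branch above, the @TType base offset must be computed by hand, and there is a twist: the offset field is itself ULEB128-encoded, so its size depends on its own value. The code resolves this by iterating to a fixed point. A distilled sketch of that trick, here phrased as an offset that must include its own encoded size (the real loop's self-dependence comes through the type-table alignment padding instead, and all names here are illustrative):

```c
#include <assert.h>

/* Bytes occupied by the unsigned LEB128 encoding of VALUE.  */
int
leb_size (unsigned int value)
{
  int size = 0;
  do
    {
      value >>= 7;
      size++;
    }
  while (value);
  return size;
}

/* Find DISP such that DISP == BASE + leb_size (DISP): the offset must
   cover BASE fixed bytes plus its own encoding.  Since growing DISP can
   grow its encoded size, iterate until the value stops moving, exactly
   as output_one_function_exception_table does for the @TType offset.  */
unsigned int
self_inclusive_offset (unsigned int base)
{
  unsigned int disp = base, last;
  do
    {
      last = disp;
      disp = base + leb_size (last);
    }
  while (disp != last);
  return disp;
}
```

For instance, a base of 127 first yields 128, whose encoding needs two bytes rather than one, so the loop settles on 129.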
3122 | |
3123 void | |
3124 output_function_exception_table (const char * ARG_UNUSED (fnname)) | |
3125 { | |
3126 rtx personality = get_personality_function (current_function_decl); | |
3127 | |
3128 /* Not all functions need anything. */ | |
3129 if (! crtl->uses_eh_lsda) | |
3130 return; | |
3131 | |
3132 if (personality) | |
3133 assemble_external_libcall (personality); | |
3134 | |
3135 output_one_function_exception_table (fnname, 0, personality); | |
3136 if (crtl->eh.call_site_record[1] != NULL) | |
3137 output_one_function_exception_table (fnname, 1, personality); | |
3781 | 3138 |
3782 switch_to_section (current_function_section ()); | 3139 switch_to_section (current_function_section ()); |
3783 } | 3140 } |
3784 | 3141 |
3785 void | 3142 void |
3791 htab_t | 3148 htab_t |
3792 get_eh_throw_stmt_table (struct function *fun) | 3149 get_eh_throw_stmt_table (struct function *fun) |
3793 { | 3150 { |
3794 return fun->eh->throw_stmt_table; | 3151 return fun->eh->throw_stmt_table; |
3795 } | 3152 } |
3796 | 3153 |
3154 /* Determine if the function needs an EH personality function. */ | |
3155 | |
3156 enum eh_personality_kind | |
3157 function_needs_eh_personality (struct function *fn) | |
3158 { | |
3159 enum eh_personality_kind kind = eh_personality_none; | |
3160 eh_region i; | |
3161 | |
3162 FOR_ALL_EH_REGION_FN (i, fn) | |
3163 { | |
3164 switch (i->type) | |
3165 { | |
3166 case ERT_CLEANUP: | |
3167 /* Can do with any personality including the generic C one. */ | |
3168 kind = eh_personality_any; | |
3169 break; | |
3170 | |
3171 case ERT_TRY: | |
3172 case ERT_ALLOWED_EXCEPTIONS: | |
3173 /* Always needs an EH personality function. The generic C | 
3174 personality doesn't handle these even for empty type lists. */ | |
3175 return eh_personality_lang; | |
3176 | |
3177 case ERT_MUST_NOT_THROW: | |
3178 /* Always needs an EH personality function. The language may specify | 
3179 which abort routine must be used, e.g. std::terminate. */ | 
3180 return eh_personality_lang; | |
3181 } | |
3182 } | |
3183 | |
3184 return kind; | |
3185 } | |
3186 | |
3797 /* Dump EH information to OUT. */ | 3187 /* Dump EH information to OUT. */ |
3188 | |
3798 void | 3189 void |
3799 dump_eh_tree (FILE *out, struct function *fun) | 3190 dump_eh_tree (FILE * out, struct function *fun) |
3800 { | 3191 { |
3801 struct eh_region *i; | 3192 eh_region i; |
3802 int depth = 0; | 3193 int depth = 0; |
3803 static const char * const type_name[] = {"unknown", "cleanup", "try", "catch", | 3194 static const char *const type_name[] = { |
3804 "allowed_exceptions", "must_not_throw", | 3195 "cleanup", "try", "allowed_exceptions", "must_not_throw" |
3805 "throw"}; | 3196 }; |
3806 | 3197 |
3807 i = fun->eh->region_tree; | 3198 i = fun->eh->region_tree; |
3808 if (! i) | 3199 if (!i) |
3809 return; | 3200 return; |
3810 | 3201 |
3811 fprintf (out, "Eh tree:\n"); | 3202 fprintf (out, "Eh tree:\n"); |
3812 while (1) | 3203 while (1) |
3813 { | 3204 { |
3814 fprintf (out, " %*s %i %s", depth * 2, "", | 3205 fprintf (out, " %*s %i %s", depth * 2, "", |
3815 i->region_number, type_name [(int)i->type]); | 3206 i->index, type_name[(int) i->type]); |
3816 if (i->tree_label) | 3207 |
3208 if (i->landing_pads) | |
3817 { | 3209 { |
3818 fprintf (out, " tree_label:"); | 3210 eh_landing_pad lp; |
3819 print_generic_expr (out, i->tree_label, 0); | 3211 |
3212 fprintf (out, " land:"); | |
3213 if (current_ir_type () == IR_GIMPLE) | |
3214 { | |
3215 for (lp = i->landing_pads; lp ; lp = lp->next_lp) | |
3216 { | |
3217 fprintf (out, "{%i,", lp->index); | |
3218 print_generic_expr (out, lp->post_landing_pad, 0); | |
3219 fputc ('}', out); | |
3220 if (lp->next_lp) | |
3221 fputc (',', out); | |
3222 } | |
3223 } | |
3224 else | |
3225 { | |
3226 for (lp = i->landing_pads; lp ; lp = lp->next_lp) | 
3227 { | |
3228 fprintf (out, "{%i,", lp->index); | |
3229 if (lp->landing_pad) | |
3230 fprintf (out, "%i%s,", INSN_UID (lp->landing_pad), | |
3231 NOTE_P (lp->landing_pad) ? "(del)" : ""); | |
3232 else | |
3233 fprintf (out, "(nil),"); | |
3234 if (lp->post_landing_pad) | |
3235 { | |
3236 rtx lab = label_rtx (lp->post_landing_pad); | |
3237 fprintf (out, "%i%s}", INSN_UID (lab), | |
3238 NOTE_P (lab) ? "(del)" : ""); | |
3239 } | |
3240 else | |
3241 fprintf (out, "(nil)}"); | |
3242 if (lp->next_lp) | |
3243 fputc (',', out); | |
3244 } | |
3245 } | |
3820 } | 3246 } |
3821 fprintf (out, "\n"); | 3247 |
3248 switch (i->type) | |
3249 { | |
3250 case ERT_CLEANUP: | |
3251 case ERT_MUST_NOT_THROW: | |
3252 break; | |
3253 | |
3254 case ERT_TRY: | |
3255 { | |
3256 eh_catch c; | |
3257 fprintf (out, " catch:"); | |
3258 for (c = i->u.eh_try.first_catch; c; c = c->next_catch) | |
3259 { | |
3260 fputc ('{', out); | |
3261 if (c->label) | |
3262 { | |
3263 fprintf (out, "lab:"); | |
3264 print_generic_expr (out, c->label, 0); | |
3265 fputc (';', out); | |
3266 } | |
3267 print_generic_expr (out, c->type_list, 0); | |
3268 fputc ('}', out); | |
3269 if (c->next_catch) | |
3270 fputc (',', out); | |
3271 } | |
3272 } | |
3273 break; | |
3274 | |
3275 case ERT_ALLOWED_EXCEPTIONS: | |
3276 fprintf (out, " filter :%i types:", i->u.allowed.filter); | |
3277 print_generic_expr (out, i->u.allowed.type_list, 0); | |
3278 break; | |
3279 } | |
3280 fputc ('\n', out); | |
3281 | |
3822 /* If there are sub-regions, process them. */ | 3282 /* If there are sub-regions, process them. */ |
3823 if (i->inner) | 3283 if (i->inner) |
3824 i = i->inner, depth++; | 3284 i = i->inner, depth++; |
3825 /* If there are peers, process them. */ | 3285 /* If there are peers, process them. */ |
3826 else if (i->next_peer) | 3286 else if (i->next_peer) |
3827 i = i->next_peer; | 3287 i = i->next_peer; |
3828 /* Otherwise, step back up the tree to the next peer. */ | 3288 /* Otherwise, step back up the tree to the next peer. */ |
3829 else | 3289 else |
3830 { | 3290 { |
3831 do { | 3291 do |
3832 i = i->outer; | 3292 { |
3833 depth--; | 3293 i = i->outer; |
3834 if (i == NULL) | 3294 depth--; |
3835 return; | 3295 if (i == NULL) |
3836 } while (i->next_peer == NULL); | 3296 return; |
3297 } | |
3298 while (i->next_peer == NULL); | |
3837 i = i->next_peer; | 3299 i = i->next_peer; |
3838 } | 3300 } |
3839 } | 3301 } |
3840 } | 3302 } |
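dump_eh_tree above (and verify_eh_tree below it) walk the region tree without recursion, using the first-child (`inner`), next-sibling (`next_peer`), and parent (`outer`) links: descend to a child if there is one, else move to the next sibling, else climb toward the root until a sibling is found. The pattern is worth isolating; a minimal sketch with a stand-in node type (names are illustrative, not GCC's):

```c
#include <assert.h>
#include <stddef.h>

struct node
{
  struct node *inner;		/* first child  */
  struct node *next_peer;	/* next sibling */
  struct node *outer;		/* parent       */
};

/* Visit every node of the tree rooted at ROOT in preorder, returning
   the number of nodes seen.  This mirrors the loop structure of
   dump_eh_tree, with the visit reduced to a counter.  */
int
count_regions (struct node *root)
{
  struct node *i = root;
  int n = 0;

  if (!i)
    return 0;
  while (1)
    {
      n++;
      /* If there are sub-regions, process them.  */
      if (i->inner)
	i = i->inner;
      /* If there are peers, process them.  */
      else if (i->next_peer)
	i = i->next_peer;
      /* Otherwise, step back up the tree to the next peer.  */
      else
	{
	  do
	    {
	      i = i->outer;
	      if (i == NULL)
		return n;
	    }
	  while (i->next_peer == NULL);
	  i = i->next_peer;
	}
    }
}
```

The same skeleton carries verify_eh_tree's extra bookkeeping (depth tracking and the `region_done` exit) without ever needing a stack.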
3841 | 3303 |
3842 /* Verify some basic invariants on EH datastructures. Could be extended to | 3304 /* Dump the EH tree for FN on stderr. */ |
3843 catch more. */ | 3305 |
3306 void | |
3307 debug_eh_tree (struct function *fn) | |
3308 { | |
3309 dump_eh_tree (stderr, fn); | |
3310 } | |
3311 | |
3312 /* Verify invariants on EH datastructures. */ | |
3313 | |
3844 void | 3314 void |
3845 verify_eh_tree (struct function *fun) | 3315 verify_eh_tree (struct function *fun) |
3846 { | 3316 { |
3847 struct eh_region *i, *outer = NULL; | 3317 eh_region r, outer; |
3318 int nvisited_lp, nvisited_r; | |
3319 int count_lp, count_r, depth, i; | |
3320 eh_landing_pad lp; | |
3848 bool err = false; | 3321 bool err = false; |
3849 int nvisited = 0; | 3322 |
3850 int count = 0; | 3323 if (!fun->eh->region_tree) |
3851 int j; | |
3852 int depth = 0; | |
3853 | |
3854 i = fun->eh->region_tree; | |
3855 if (! i) | |
3856 return; | 3324 return; |
3857 for (j = fun->eh->last_region_number; j > 0; --j) | 3325 |
3858 if ((i = VEC_index (eh_region, cfun->eh->region_array, j))) | 3326 count_r = 0; |
3327 for (i = 1; VEC_iterate (eh_region, fun->eh->region_array, i, r); ++i) | |
3328 if (r) | |
3859 { | 3329 { |
3860 count++; | 3330 if (r->index == i) |
3861 if (i->region_number != j) | 3331 count_r++; |
3332 else | |
3862 { | 3333 { |
3863 error ("region_array is corrupted for region %i", i->region_number); | 3334 error ("region_array is corrupted for region %i", r->index); |
3864 err = true; | 3335 err = true; |
3865 } | 3336 } |
3866 } | 3337 } |
3867 | 3338 |
3339 count_lp = 0; | |
3340 for (i = 1; VEC_iterate (eh_landing_pad, fun->eh->lp_array, i, lp); ++i) | |
3341 if (lp) | |
3342 { | |
3343 if (lp->index == i) | |
3344 count_lp++; | |
3345 else | |
3346 { | |
3347 error ("lp_array is corrupted for lp %i", lp->index); | |
3348 err = true; | |
3349 } | |
3350 } | |
3351 | |
3352 depth = nvisited_lp = nvisited_r = 0; | |
3353 outer = NULL; | |
3354 r = fun->eh->region_tree; | |
3868 while (1) | 3355 while (1) |
3869 { | 3356 { |
3870 if (VEC_index (eh_region, cfun->eh->region_array, i->region_number) != i) | 3357 if (VEC_index (eh_region, fun->eh->region_array, r->index) != r) |
3871 { | 3358 { |
3872 error ("region_array is corrupted for region %i", i->region_number); | 3359 error ("region_array is corrupted for region %i", r->index); |
3873 err = true; | 3360 err = true; |
3874 } | 3361 } |
-      if (i->outer != outer)
-	{
-	  error ("outer block of region %i is wrong", i->region_number);
-	  err = true;
-	}
-      if (i->may_contain_throw && outer && !outer->may_contain_throw)
-	{
-	  error ("region %i may contain throw and is contained in region that may not",
-		 i->region_number);
-	  err = true;
-	}
-      if (depth < 0)
-	{
-	  error ("negative nesting depth of region %i", i->region_number);
-	  err = true;
-	}
-      nvisited ++;
-      /* If there are sub-regions, process them.  */
-      if (i->inner)
-	outer = i, i = i->inner, depth++;
-      /* If there are peers, process them.  */
-      else if (i->next_peer)
-	i = i->next_peer;
-      /* Otherwise, step back up the tree to the next peer.  */
+      if (r->outer != outer)
+	{
+	  error ("outer block of region %i is wrong", r->index);
+	  err = true;
+	}
+      if (depth < 0)
+	{
+	  error ("negative nesting depth of region %i", r->index);
+	  err = true;
+	}
+      nvisited_r++;
+
+      for (lp = r->landing_pads; lp ; lp = lp->next_lp)
+	{
+	  if (VEC_index (eh_landing_pad, fun->eh->lp_array, lp->index) != lp)
+	    {
+	      error ("lp_array is corrupted for lp %i", lp->index);
+	      err = true;
+	    }
+	  if (lp->region != r)
+	    {
+	      error ("region of lp %i is wrong", lp->index);
+	      err = true;
+	    }
+	  nvisited_lp++;
+	}
+
+      if (r->inner)
+	outer = r, r = r->inner, depth++;
+      else if (r->next_peer)
+	r = r->next_peer;
       else
 	{
-	  do {
-	    i = i->outer;
-	    depth--;
-	    if (i == NULL)
-	      {
-		if (depth != -1)
-		  {
-		    error ("tree list ends on depth %i", depth + 1);
-		    err = true;
-		  }
-		if (count != nvisited)
-		  {
-		    error ("array does not match the region tree");
-		    err = true;
-		  }
-		if (err)
-		  {
-		    dump_eh_tree (stderr, fun);
-		    internal_error ("verify_eh_tree failed");
-		  }
-		return;
-	      }
-	    outer = i->outer;
-	  } while (i->next_peer == NULL);
-	  i = i->next_peer;
+	  do
+	    {
+	      r = r->outer;
+	      if (r == NULL)
+		goto region_done;
+	      depth--;
+	      outer = r->outer;
+	    }
+	  while (r->next_peer == NULL);
+	  r = r->next_peer;
 	}
     }
-}
-
-/* Initialize unwind_resume_libfunc.  */
-
-void
-default_init_unwind_resume_libfunc (void)
-{
-  /* The default c++ routines aren't actually c++ specific, so use those.  */
-  unwind_resume_libfunc =
-    init_one_libfunc ( USING_SJLJ_EXCEPTIONS ? "_Unwind_SjLj_Resume"
-		       : "_Unwind_Resume");
-}
+ region_done:
+  if (depth != 0)
+    {
+      error ("tree list ends on depth %i", depth);
+      err = true;
+    }
+  if (count_r != nvisited_r)
+    {
+      error ("region_array does not match region_tree");
+      err = true;
+    }
+  if (count_lp != nvisited_lp)
+    {
+      error ("lp_array does not match region_tree");
+      err = true;
+    }
+
+  if (err)
+    {
+      dump_eh_tree (stderr, fun);
+      internal_error ("verify_eh_tree failed");
+    }
+}
 
-static bool
-gate_handle_eh (void)
-{
-  return doing_eh (0);
-}
-
-/* Complete generation of exception handling code.  */
-static unsigned int
-rest_of_handle_eh (void)
-{
-  cleanup_cfg (CLEANUP_NO_INSN_DEL);
-  finish_eh_generation ();
-  cleanup_cfg (CLEANUP_NO_INSN_DEL);
-  return 0;
-}
-
-struct rtl_opt_pass pass_rtl_eh =
-{
- {
-  RTL_PASS,
-  "eh",					/* name */
-  gate_handle_eh,			/* gate */
-  rest_of_handle_eh,			/* execute */
-  NULL,					/* sub */
-  NULL,					/* next */
-  0,					/* static_pass_number */
-  TV_JUMP,				/* tv_id */
-  0,					/* properties_required */
-  0,					/* properties_provided */
-  0,					/* properties_destroyed */
-  0,					/* todo_flags_start */
-  TODO_dump_func			/* todo_flags_finish */
- }
-};
-
 #include "gt-except.h"